mirror of https://git.FreeBSD.org/src.git synced 2024-12-19 10:53:58 +00:00
Commit Graph

3190 Commits

Author SHA1 Message Date
Konstantin Belousov
7253a5ec63 Initialize paddr to handle the case of zero size.
Reported and reviewed by:	Conrad Meyer <cemeyer@uw.edu>
MFC after:	1 week
2014-03-12 16:38:55 +00:00
Konstantin Belousov
2309fa9b92 Do not vdrop() the tmpfs vnode until it is unlocked. The hold
reference might be the last, and then vdrop() would free the vnode.

Reported and tested by:	bdrewery
MFC after:	1 week
2014-03-12 15:13:57 +00:00
Dimitry Andric
2367b4ddc4 After r251709, avoid a clang 3.4 warning about an unused static const
variable (uma_max_ipers), when asserts are disabled.

Reviewed by:	glebius
MFC after:	3 days
2014-02-14 17:47:18 +00:00
Attilio Rao
14a5dc1780 Fix-up r254141: in the process of handling a failing vm_page_rename(),
a call to swap_pager_freespace() was moved around, now leading to freeing
the incorrect page because of the pindex changes after vm_page_rename().

Go back to using the correct pindex when destroying the swap space.

Sponsored by:	EMC / Isilon storage division
Reported by:	avg
Tested by:	pho
MFC after:	7 days
2014-02-14 03:34:12 +00:00
Gleb Smirnoff
5f3563b0a5 Fix function name in KASSERT().
Submitted by:	hiren
2014-02-12 20:11:20 +00:00
John Baldwin
8add0ced70 Correct an assertion to assert that the existing device VM object uses
the same type, rather than asserting in the case where we just created a
new VM object.

Reviewed by:	kib
2014-02-11 22:05:21 +00:00
Gleb Smirnoff
49fef6a202 Create two public UMA_ZONE_PCPU zones: one 64-bit sized and one pointer sized.
Sponsored by:	Nginx, Inc.
2014-02-10 19:59:46 +00:00
Gleb Smirnoff
f947570e35 Style. 2014-02-10 19:51:15 +00:00
Gleb Smirnoff
48343a2f34 Make M_ZERO flag work correctly on UMA_ZONE_PCPU zones.
Sponsored by:	Nginx, Inc.
2014-02-10 19:48:26 +00:00
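
A minimal sketch of how the two public PCPU zones combine with M_ZERO; pcpu_zone_64 comes from the commit above and zpcpu_get() from sys/pcpu.h, while the function and variable names are illustrative:

    static void
    pcpu_counter_example(void)
    {
            uint64_t *counter;

            /* M_ZERO now zeroes every CPU's slot, not just the first. */
            counter = uma_zalloc(pcpu_zone_64, M_WAITOK | M_ZERO);
            critical_enter();
            (*zpcpu_get(counter))++;        /* bump this CPU's slot */
            critical_exit();
            uma_zfree(pcpu_zone_64, counter);
    }
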
Alan Cox
7b9b301c6b Don't call vm_fault_prefault() on zero-fill faults. It's a waste of time.
Successful prefaults after a zero-fill fault are extremely rare.
2014-02-09 01:59:52 +00:00
Gleb Smirnoff
0a5a3ccb81 Provide macros that allow easily exporting uma(9) zone limits and
current usage via sysctl(9):

  SYSCTL_UMA_MAX()
  SYSCTL_ADD_UMA_MAX()
  SYSCTL_UMA_CUR()
  SYSCTL_ADD_UMA_CUR()

Sponsored by:	Nginx, Inc.
2014-02-07 14:29:03 +00:00
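
A hedged sketch of wiring a zone to the new macros, assuming they follow the usual SYSCTL_ADD_*(ctx, parent, nbr, name, access, arg, descr) shape; the zone and oid names here are made up:

    static uma_zone_t example_zone;         /* hypothetical zone */

    static void
    example_attach_sysctls(struct sysctl_ctx_list *ctx,
        struct sysctl_oid_list *children)
    {
            /* Export the zone's limit (writable) and usage (read-only). */
            SYSCTL_ADD_UMA_MAX(ctx, children, OID_AUTO, "example_max",
                CTLFLAG_RW, example_zone, "Limit on example_zone items");
            SYSCTL_ADD_UMA_CUR(ctx, children, OID_AUTO, "example_cur",
                CTLFLAG_RD, example_zone, "Current example_zone items");
    }
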
Alan Cox
63281952f0 Make prefaulting more aggressive on hard faults. Previously, we would only
map a fraction of the pages that were fetched by vm_pager_get_pages() from
secondary storage.  Now, we map them all in order to avoid future soft
faults.  This effect is most evident when a memory-mapped file is accessed
sequentially.  Previously, there were 6 soft faults for every hard fault.
Now, these soft faults are eliminated.

Sponsored by:	EMC / Isilon Storage Division
2014-02-02 20:21:53 +00:00
Alan Cox
793d14076a In an effort to diagnose possible corruption of struct vm_page on some
sparc64 machines, make the page queue assert in vm_page_dequeue() more
precise.  While I'm here, switch the page lock assert to the newer style.
2014-01-24 19:08:42 +00:00
John Baldwin
ab46f63e8f Fix a couple of typos. 2014-01-21 03:27:47 +00:00
Gleb Smirnoff
7ebba1f8ff ANSIfy declarations.
Ok'ed by:	alc
2014-01-20 18:47:56 +00:00
Alan Cox
86fa24710e Style changes in vm_pageout_scan():
1. Be consistent in the style of "act_delta" manipulations between the
   inactive and active queue scans.

2. Explicitly compare to zero.

3. The deactivation of a page is based on its recent history
   and not just the current call to vm_pageout_scan().  The variable
   "act_delta" represents the current state of the page, and not its
   history.  Avoid possible confusion by not (ab)using "act_delta" for
   making the deactivation decision.

Submitted by:	kib [1]
Reviewed by:	kib [2,3]
2014-01-18 20:02:59 +00:00
Alan Cox
9099545af1 Correctly update the count of stuck pages, "addl_page_shortage", in
vm_pageout_scan().  There were missing increments in two less common cases.

Don't conflate the count of stuck pages and the pageout deficit provided by
vm_page_alloc{,_contig}().  (A proposed fix to the OOM code depends on this.)

Handle held pages consistently in the inactive queue scan.  In the more
common case, we did not move the page to the tail of the queue, whereas in
the less common case we did.  There's no particular reason to move the page
in the less common case, so remove that move.

Perform the calculation of the page shortage for the active queue scan a
little earlier, before the active queue lock is acquired.  The correctness
of this calculation doesn't depend on the active queue lock being held.

Eliminate a redundant variable, "pcount".  Use the more descriptive
variable, "maxscan", in its place.

Apply a few nearby style fixes, e.g., eliminate stray whitespace and excess
parentheses.

Reviewed by:	kib
Sponsored by:	EMC / Isilon Storage Division
2014-01-12 19:04:20 +00:00
Alan Cox
000fb817d8 Since the introduction of the popmap to reservations in r259999, there is
no longer any need for the page's PG_CACHED and PG_FREE flags to be set and
cleared while the free page queues lock is held.  Thus, vm_page_alloc(),
vm_page_alloc_contig(), and vm_page_alloc_freelist() can wait until after
the free page queues lock is released to clear the page's flags.  Moreover,
the PG_FREE flag can be retired.  Now that the reservation system no longer
uses it, its only uses are in a few assertions.  Eliminating these
assertions is no real loss.  Other assertions catch the same types of
misbehavior, like doubly freeing a page (see r260032) or dirtying a free
page (free pages are invalid and only valid pages can be dirtied).

Eliminate an unneeded variable from vm_page_alloc_contig().

Sponsored by:	EMC / Isilon Storage Division
2013-12-31 18:25:15 +00:00
Alan Cox
a08c151546 Add "popmap" assertions: The page being freed isn't already free, and the
page being allocated isn't already allocated.

Sponsored by:	EMC / Isilon Storage Division
2013-12-29 04:54:52 +00:00
Alan Cox
ec17932242 MFp4 alc_popmap
Change the way that reservations keep track of which pages are in use.
  Instead of using the page's PG_CACHED and PG_FREE flags, maintain a bit
  vector within the reservation.  This approach has a couple of benefits.
  First, it makes breaking reservations much cheaper because there are
  fewer cache misses to identify the unused pages.  Second, it is a
  prerequisite for supporting two or more reservation sizes.
2013-12-28 04:28:35 +00:00
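
A hedged sketch of the bit-vector bookkeeping; the constant and field names are illustrative rather than the literal ones in struct vm_reserv:

    #define NBPOPMAP        (NBBY * sizeof(u_long)) /* bits per word */

    /* One bit per page in the reservation; set means "in use". */
    u_long  popmap[howmany(VM_LEVEL_0_NPAGES, NBPOPMAP)];

    /* Mark page i in use: */
    popmap[i / NBPOPMAP] |= 1UL << (i % NBPOPMAP);

    /* Breaking the reservation is now a scan for clear bits instead
       of a walk over every page's flags. */
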
Konstantin Belousov
b61a53d43d Do not coalesce the stack entry; vm_map_stack() asserts that the requested
region is claimed by a new entry.

Pass the MAP_STACK_GROWS_DOWN and MAP_STACK_GROWS_UP flags to
vm_map_insert() from vm_map_stack(), to really turn off the coalescing
code and the call to vm_map_simplify_entry() [1].

Reported by:	avg, peter, many
Tested by:	avg, peter
Noted by:	avg [1]
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
2013-12-27 16:59:47 +00:00
Marcel Moolenaar
938b0f5b75 For ia64, use pmap_remove_pages() and not pmap_remove(). The problem is
that we don't have a good way (yet) to iterate over the mapped pages by
virtual address, so pmap_remove() simply tries each page within the range.
Given that we call pmap_remove() over the entire 2^63 bytes of address
space, it takes a while for it to have tried all 2^50 pages.
By using pmap_remove_pages() we use the PV list to find all mappings.

Change derived from a patch by: alc
2013-12-26 05:46:10 +00:00
Dimitry Andric
d395270d06 In sys/vm/vm_pageout.c, since vm_pageout_worker() takes a void * as
argument, cast the incoming 0 argument to void *, to silence a warning
from clang 3.4 ("expression which evaluates to zero treated as a null
pointer constant of type 'void *' [-Wnon-literal-null-conversion]").

MFC after:	3 days
2013-12-25 22:32:34 +00:00
Alan Cox
703b304f33 Eliminate a redundant parameter to vm_radix_replace().
Improve the wording of the comment describing vm_radix_replace().

Reviewed by:	attilio
MFC after:	6 weeks
Sponsored by:	EMC / Isilon Storage Division
2013-12-08 20:07:02 +00:00
Craig Rodrigues
a3845534a0 In keg_dtor(), print out the keg name in the "Freed UMA keg was not empty"
message printed to the console.  This makes it easier to track down
the source of certain memory leaks.

Suggested by: adrian
2013-11-29 08:04:45 +00:00
Alexander Motin
03175483c2 - Add a bucket size column to the `show uma` DDB command.
- Add a `show umacache` command to show similar stats for cache-only UMA zones.
2013-11-28 19:20:49 +00:00
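
From the in-kernel debugger, the new output can be inspected with:

    db> show uma
    db> show umacache
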
Alexander Motin
cec48e002a Make UMA not blindly force offpage slab header allocation for large
(> PAGE_SIZE) zones.  If the zone size is not a multiple of PAGE_SIZE, there
may be enough space for the header on the last page, so we may avoid the
extra header memory allocation and hash table update/lookup.

ZFS creates a bunch of odd-sized UMA zones (5120, 6144, 7168, 10240, 14336).
This change puts at least some of the otherwise lost memory there to good use.

Reviewed by:	avg
2013-11-27 20:56:10 +00:00
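
A hedged sketch of the space check, using the keg field names from uma_int.h; the exact computation in the keg setup code may differ:

    /* If the items leave room for the slab header on the last page,
       keep the header inline instead of allocating it offpage. */
    if (keg->uk_ipers * keg->uk_rsize + sizeof(struct uma_slab) <=
        keg->uk_ppera * PAGE_SIZE)
            keg->uk_flags &= ~UMA_ZONE_OFFPAGE;
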
Alexander Motin
f7104ccd94 Don't count bucket allocation failures for UMA zones as their own failures.
There are good reasons for these to happen, such as recursion prevention, and
they are not fatal since buckets are just an optimization mechanism.
Real bucket allocation failures are counted by the bucket zones
themselves anyway, and we don't need double accounting there.
2013-11-27 20:16:18 +00:00
Alexander Motin
e8a720fe60 Fix a bug introduced in r252226, where the udata argument passed to
bucket_alloc() was used without first making sure that it was really
intended for us.

On some of my systems this bug made the user argument passed by ZFS code to
uma_zalloc_arg() unexpectedly block UMA per-CPU caches for those zones.
2013-11-27 19:55:42 +00:00
Alexander Motin
8a8d9d1475 When purging per-CPU UMA caches, do not return empty buckets to the global
full-bucket cache, to avoid triggering an assertion if an allocation happens
before that global cache gets purged.
2013-11-23 13:42:56 +00:00
Konstantin Belousov
79e9451f07 The vm map code performs clipping when a map entry covers a region
larger than the operational region.  If the op region size is zero,
clipping would create a zero-sized map entry.  The result is that the
vm map splay starts behaving inconsistently, sometimes returning the
zero-sized entry and sometimes the next (or previous) entry.

One step further, it could result in, e.g., vm_map_wire() setting
MAP_ENTRY_IN_TRANSITION on the zero-sized entry but failing to clear
it in the done part.  vm_map_delete() then hangs forever waiting
for the flag removal.

Check for zero-length requests and act as if they are always successful,
without performing any action on the address space.

Diagnosed by:	pho
Tested by:	pho (previous version)
Reviewed by:	alc (previous version)
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
2013-11-20 09:03:48 +00:00
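
A minimal sketch of the resulting guard at the top of the affected entry points (vm_map_wire() and friends), assuming the usual KERN_SUCCESS convention:

    if (start == end)
            return (KERN_SUCCESS);  /* zero length, trivially done */
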
Konstantin Belousov
ff3ae454c0 Add assertions to cover all places in the wiring and unwiring code
where MAP_ENTRY_IN_TRANSITION is set or cleared.

Tested by:	pho
Reviewed by:	alc
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
2013-11-20 08:47:54 +00:00
Konstantin Belousov
7e14088d93 Revert to using int for the page counts.  In vn_io_fault(), the i/o
is chunked into pieces limited by the integer io_hold_cnt tunable, while
vm_fault_quick_hold_pages() takes an integer max_count as the upper bound.

Rearrange the checks to correctly handle overflowing address arithmetic.

Submitted by:	bde
Tested by:	pho
Discussed with:	alc
MFC after:	1 week
2013-11-20 08:45:26 +00:00
Alexander Motin
a2de44abf5 Implement a mechanism to safely but slowly purge UMA per-CPU caches.
This is a last resort for very low memory conditions, in case other measures
to free memory were ineffective.  Sequentially cycle through all CPUs and
extract per-CPU cache buckets into the zone cache, from where they can be freed.
2013-11-19 10:51:46 +00:00
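
A hedged sketch of the cycle described above; sched_bind(9) is the real interface, while the drain helper name is illustrative:

    int cpu;

    CPU_FOREACH(cpu) {
            thread_lock(curthread);
            sched_bind(curthread, cpu);     /* migrate to the target CPU */
            thread_unlock(curthread);
            /* Push this CPU's buckets into the zone cache for freeing. */
            cache_drain_cpu(zone);
    }
    thread_lock(curthread);
    sched_unbind(curthread);
    thread_unlock(curthread);
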
Alexander Motin
4d104ba024 Grow the UMA zone bucket size also on lock congestion during item free.
Lock congestion is the same whether it happens on alloc or free, so
handle it equally.  Now that we have back pressure, there is no problem
with growing buckets a bit faster.  In any case, growth is much slower
than in 9.x.
2013-11-19 10:17:10 +00:00
Alexander Motin
f3932e9025 Add two new UMA bucket zones to store 3 and 9 items per bucket.
These new buckets make bucket-size self-tuning softer and more precise.
Without them there are buckets for 1, 5, 13, 29, ... items.  While a
difference of about 2x is fine at the bigger sizes, at the smallest ones
it is 5x and 2.6x, respectively.  The new buckets make that series look
like 1, 3, 5, 9, 13, 29, reducing the jumps between steps, making the
algorithm work more gently, and allocating and freeing memory in
better-fitting chunks.  Otherwise there is quite a big gap between
allocating 128K and 5x128K of RAM at once.
2013-11-19 10:10:44 +00:00
Alexander Motin
ace66b568e Implement soft pressure on UMA cache bucket sizes.
Every time the system detects a low memory condition, decrease the bucket
size for each zone by one item.  As a result, higher memory pressure will
push toward smaller bucket sizes, and so toward smaller per-CPU caches and
more efficient memory use.

Before this change there was no force to oppose bucket growth resulting
from practically inevitable zone lock conflicts, and after some run time
the per-CPU caches could consume enough RAM to kill the system.
2013-11-19 10:05:53 +00:00
Konstantin Belousov
d005ed537c Avoid overflow for the page counts.
Reported and tested by:	pho
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
2013-11-12 08:47:58 +00:00
Konstantin Belousov
1bd7d0b7db If a filesystem declares that it supports shared locking for writes, use
a shared vnode lock for VOP_PUTPAGES() as well.  The only such
filesystem in the tree is ZFS, and it uses
vnode_pager_generic_putpages(), which performs the pageout with
VOP_WRITE().

Reviewed by:	alc
Discussed with:	avg
Tested by:	pho
Sponsored by:	The FreeBSD Foundation
MFC after:	2 weeks
2013-11-09 20:36:29 +00:00
Konstantin Belousov
9ded9474d3 Do not coalesce if the swap object belongs to a tmpfs vnode.  The
coalesce would extend the object to keep pages for the anonymous
mapping created by the process.  Those pages have no relation to the
tmpfs file content which could be written into the corresponding
range, causing aliasing between the anonymous mapping and the file
content, and subsequent corruption.

Another, lesser problem created by coalescing is over-accounting on
tmpfs node destruction, since the object size is subtracted from the
total count of the pages owned by the tmpfs mount.

Reported and tested by:	bdrewery
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
2013-11-05 06:18:50 +00:00
Alan Cox
eb2f42fbb0 Tidy up the output of "sysctl vm.phys_free".
Approved by:	re (glebius)
Sponsored by:	EMC / Isilon Storage Division
2013-10-10 16:11:45 +00:00
Alan Cox
f872f6eaf5 Both the vm_map and vmspace zones are defined as "no free", so there is no
point in defining a fini function for these zones.

Reviewed by:	kib
Approved by:	re (glebius)
Sponsored by:	EMC / Isilon Storage Division
2013-09-22 17:48:10 +00:00
Neel Natu
74d1d2b7cc Merge the following changes from projects/bhyve_npt_pmap:
- add fields to 'struct pmap' that are required to manage nested page tables.
- add a parameter to 'vmspace_alloc()' that can be used to override the
  default pmap initialization routine 'pmap_pinit()'.

These changes are pushed ahead of the remaining changes in 'bhyve_npt_pmap'
in anticipation of the upcoming KBI freeze for 10.0.

Reviewed by:	kib@, alc@
Approved by:	re (glebius)
2013-09-20 17:06:49 +00:00
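
A hedged sketch of the new vmspace_alloc() parameter; passing NULL is assumed to keep the default pmap_pinit(), and bhyve_npt_pinit is a hypothetical override:

    struct vmspace *vs;

    /* Default pmap initialization, as before: */
    vs = vmspace_alloc(VM_MIN_ADDRESS, VM_MAXUSER_ADDRESS, NULL);

    /* Guest address space with a nested-page-table pmap (hypothetical): */
    vs = vmspace_alloc(0, VM_MAXUSER_ADDRESS, bhyve_npt_pinit);
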
Alan Cox
deb179bb4c The pmap function pmap_clear_reference() is no longer used. Remove it.
pmap_clear_reference() has had exactly one caller in the kernel for
several years, more precisely, since FreeBSD 8.  Now, that call no
longer exists.

Approved by:	re (kib)
Sponsored by:	EMC / Isilon Storage Division
2013-09-20 04:30:18 +00:00
John Baldwin
55648840de Extend the support for exempting processes from being killed when swap is
exhausted.
- Add a new protect(1) command that can be used to set or revoke protection
  from arbitrary processes.  Similar to ktrace, it can apply a change to all
  existing descendants of a process as well as future descendants.
- Add a new procctl(2) system call that provides a generic interface for
  control operations on processes (as opposed to the debugger-specific
  operations provided by ptrace(2)).  procctl(2) uses a combination of
  idtype_t and an id to identify the set of processes on which to operate,
  similar to wait6().
- Add a PROC_SPROTECT control operation to manage the protection status
  of a set of processes.  MADV_PROTECT still works for backwards
  compatibility.
- Add a p_flag2 to struct proc (and a corresponding ki_flag2 to kinfo_proc)
  the first bit of which is used to track if P_PROTECT should be inherited
  by new child processes.

Reviewed by:	kib, jilles (earlier version)
Approved by:	re (delphij)
MFC after:	1 month
2013-09-19 18:53:42 +00:00
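
A minimal userland sketch of PROC_SPROTECT using the PPROT_* flags described above:

    #include <sys/procctl.h>
    #include <sys/wait.h>
    #include <err.h>
    #include <unistd.h>

    int
    main(void)
    {
            /* Protect this process; new children inherit the flag. */
            int flags = PPROT_SET | PPROT_INHERIT;

            if (procctl(P_PID, getpid(), PROC_SPROTECT, &flags) == -1)
                    err(1, "procctl(PROC_SPROTECT)");
            return (0);
    }
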
Konstantin Belousov
9eab548476 PG_SLAB no longer serves a useful purpose, since m->object is no
longer abused to store a pointer to the slab.  Remove it.

Reviewed by:    alc
Sponsored by:   The FreeBSD Foundation
Approved by:	re (hrs)
2013-09-17 07:35:26 +00:00
Konstantin Belousov
3846a82284 Remove the zero-copy sockets code.  It only worked for anonymous memory,
and the equivalent functionality is now provided by sendfile(2) over a
POSIX shared memory file descriptor.

Remove the cow member of struct vm_page, and rearrange the remaining
members.  While there, make hold_count unsigned.

Requested and reviewed by:	alc
Tested by:	pho
Sponsored by:	The FreeBSD Foundation
Approved by:	re (delphij)
2013-09-16 06:25:54 +00:00
Konstantin Belousov
196beb5359 If the last page of the file is partially full and the whole valid
portion is invalidated, invalidate the whole page.  Otherwise, a
partially valid page appears on a page queue, which is wrong.  This
could only happen for the last page, because only then could the buffer
which triggered the invalidation fail to cover the whole page.

Reported and tested by:	pho (previous version)
Reviewed by:	alc
Sponsored by:	The FreeBSD Foundation
Approved by:	re (delphij)
MFC after:	2 weeks
2013-09-14 10:11:38 +00:00
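
A hedged sketch of the rule, not the literal diff; vm_page_bits() is the existing helper that maps a byte range to valid bits:

    /* If the range being invalidated covers every currently valid bit,
       widen the invalidation to the whole page so that no partially
       valid page lands on a page queue. */
    if ((m->valid & ~vm_page_bits(base, size)) == 0) {
            base = 0;
            size = PAGE_SIZE;
    }
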
John Baldwin
6a87d217e2 Fix an off-by-one error when populating mincore(2) entries for
skipped entries.  lastvecindex references the last valid byte,
so the new bytes should come after it.

Approved by:	re (kib)
MFC after:	1 week
2013-09-12 20:46:32 +00:00
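
A sketch of the corrected gap-filling loop, modeled on the mincore(2) code; subyte(9) stores a single byte to user space:

    /* lastvecindex is the last byte already written, so zero-fill for
       skipped entries starts at lastvecindex + 1. */
    while (lastvecindex + 1 < vecindex) {
            ++lastvecindex;
            if (subyte(vec + lastvecindex, 0) != 0)
                    return (EFAULT);
    }
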
John Baldwin
edb572a38c Add an mmap flag (MAP_32BIT) on 64-bit platforms to request that a mapping use
an address in the first 2GB of the process's address space.  This flag should
have the same semantics as the same flag on Linux.

To facilitate this, add a new parameter to vm_map_find() that specifies an
optional maximum virtual address.  While here, fix several callers of
vm_map_find() to use a VMFS_* constant for the findspace argument instead of
TRUE and FALSE.

Reviewed by:	alc
Approved by:	re (kib)
2013-09-09 18:11:59 +00:00
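
A minimal userland sketch of the new flag; on amd64 the mapping lands in the low 2GB or the call fails:

    #include <sys/mman.h>
    #include <err.h>
    #include <stdio.h>

    int
    main(void)
    {
            void *p;

            p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                MAP_ANON | MAP_PRIVATE | MAP_32BIT, -1, 0);
            if (p == MAP_FAILED)
                    err(1, "mmap(MAP_32BIT)");
            printf("mapped at %p\n", p);    /* fits in 32 bits */
            return (0);
    }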