mirror of https://git.FreeBSD.org/src.git synced 2024-12-24 11:29:10 +00:00
Commit Graph

110 Commits

Author SHA1 Message Date
Alan Cox
d9dd6bfb56 Remove an unused field from the vmspace structure. 2004-06-26 19:16:35 +00:00
Alan Cox
4da4d293df In cases where a file was resident in memory, mmap(..., PROT_NONE, ...)
would actually map the file with read access enabled.  According to
http://www.opengroup.org/onlinepubs/007904975/functions/mmap.html this is
an error.  Similarly, an madvise(..., MADV_WILLNEED) would enable read
access on a virtual address range that was PROT_NONE.

The solution implemented herein is (1) to pass a vm_prot_t to
vm_map_pmap_enter() describing the allowed access and (2) to make
vm_map_pmap_enter() responsible for understanding the limitations of
pmap_enter_quick().

Submitted by:	"Mark W. Krentel" <krentel@dreamscape.com>
PR:		kern/64573
2004-04-24 03:46:44 +00:00
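
In outline, the fix might look like this (a minimal sketch, not the committed diff, assuming the vm_map_pmap_enter() interface described above):

    /*
     * Sketch: the caller's vm_prot_t gates pre-faulting.  A PROT_NONE
     * mapping installs no page-table entries, and because
     * pmap_enter_quick() can only create readable mappings, read
     * access must be permitted before anything is entered.
     */
    void
    vm_map_pmap_enter(vm_map_t map, vm_offset_t addr, vm_prot_t prot,
        vm_object_t object, vm_pindex_t pindex, vm_size_t size, int flags)
    {
        if ((prot & VM_PROT_READ) == 0)
            return;     /* PROT_NONE: nothing may be pre-faulted */
        /* ... pre-fault resident pages with pmap_enter_quick() ... */
    }
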
Warner Losh
05eb3785e7 Remove advertising clause from University of California Regent's license,
per letter dated July 22, 1999.

Approved by: core
2004-04-06 20:15:37 +00:00
Peter Wemm
2965c04576 Part 2 of rev 1.68. Update comment to match reality now that vm_endcopy
exists and we no longer copy to the end of the struct.

Forgotten by:  alfred and green
2004-03-12 00:16:48 +00:00
Alan Cox
950f8459d4 - Rename vm_map_clean() to vm_map_sync(). This better reflects the fact
that msync(2) is its only caller.
 - Migrate the parts of the old vm_map_clean() that examined the internals
   of a vm object to a new function vm_object_sync() that is implemented in
vm_object.c.  At the same time, introduce the necessary vm object locking so
   that vm_map_sync() and vm_object_sync() can be called without Giant.

Reviewed by:	tegge
2003-11-09 05:25:35 +00:00
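
Schematically, the split looks like this (a sketch with details elided, assuming the vm_map_sync()/vm_object_sync() division described above):

    /*
     * Sketch: vm_map_sync() walks the map entries for msync(2) and
     * delegates the object-level flush to vm_object_sync(), which
     * takes the vm object lock itself, so neither needs Giant.
     */
    int
    vm_map_sync(vm_map_t map, vm_offset_t start, vm_offset_t end,
        boolean_t syncio, boolean_t invalidate)
    {
        vm_map_lock_read(map);
        /* For each map entry overlapping [start, end):
         *     vm_object_sync(entry->object.vm_object, offset, size,
         *         syncio, invalidate);
         */
        vm_map_unlock_read(map);
        return (KERN_SUCCESS);
    }
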
Dag-Erling Smørgrav
a86fa82659 Whitespace cleanup. 2003-11-03 16:14:45 +00:00
Bruce M Simpson
2bc7dd5661 Move pmap_resident_count() from the MD pmap.h to the MI pmap.h.
Add a definition of pmap_wired_count().
Add a definition of vmspace_wired_count().

Reviewed by:	truckman
Discussed with:	peter
2003-10-06 01:47:12 +00:00
Marcel Moolenaar
fd75d71049 Part 2 of implementing rstacks: add the ability to create rstacks and
use the ability on ia64 to map the register stack. The orientation of
the stack (i.e. its grow direction) is passed to vm_map_stack() in the
overloaded cow argument. Since the grow direction is represented by
bits, it is possible and allowed to create bi-directional stacks.
This is not an advertised feature, more of a side-effect.

Fix a bug in vm_map_growstack() that's specific to rstacks and which
we could only find by having the ability to create rstacks: when
the mapped stack ends at the faulting address, we have not actually
mapped the faulting address. We need to include or cover the faulting
address.

Note that at this time mmap(2) has not been extended to allow the
creation of rstacks by processes. If such a need arises, this can
be done.

Tested on: alpha, i386, ia64, sparc64
2003-09-27 22:28:14 +00:00
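
The overloading works roughly as follows (a sketch; the flag values shown are illustrative):

    /*
     * Sketch: the grow direction rides in otherwise-unused bits of
     * the cow argument; setting both bits yields the unadvertised
     * bi-directional stack.
     */
    #define MAP_STACK_GROWS_DOWN    0x1000  /* illustrative values */
    #define MAP_STACK_GROWS_UP      0x2000

    /* A conventional downward-growing stack: */
    vm_map_stack(map, addrbos, max_ssize, prot, max_prot,
        cow | MAP_STACK_GROWS_DOWN);

    /* An ia64 register stack (rstack) grows upward: */
    vm_map_stack(map, addrbos, max_ssize, prot, max_prot,
        cow | MAP_STACK_GROWS_UP);
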
Marcel Moolenaar
b21a0008ba Introduce MAP_ENTRY_GROWS_DOWN and MAP_ENTRY_GROWS_UP to allow for
growable (stack) entries that not only grow down, but also grow up.
Have vm_map_growstack() take these flags into account when growing
an entry.

This is the first step in adding support for upward growable stacks.
It is a required feature on ia64 to support the register stack (or
rstack as I like to call it -- it also means reverse stack). We do
not currently create rstacks, so the upward growing is not exercised
and the change should be a functional no-op.

Reviewed by: alc
2003-08-30 21:25:23 +00:00
Alan Cox
46add12552 Reduce the size of the vm map (and by inclusion the vm space) on 64-bit
architectures by moving a field within the structure.
2003-08-13 03:13:22 +00:00
Bruce M Simpson
abd498aa71 Add the mlockall() and munlockall() system calls.
- All those diffs to syscalls.master for each architecture *are*
   necessary. This needed clarification; the stub code generation for
   mlockall() was disabled, which would prevent applications from
   linking to this API (suggested by mux)
 - Giant has been quashed. It is no longer held by the code, as
   the required locking has been pushed down within vm_map.c.
 - Callers must specify VM_MAP_WIRE_HOLESOK or VM_MAP_WIRE_NOHOLES
   to express their intention explicitly.
 - Inspected at the vmstat, top and vm pager sysctl stats level.
   Paging-in activity is occurring correctly, using a test harness.
 - The RES size for a process may appear to be greater than its SIZE.
   This is believed to be due to mappings of the same shared library
   page being wired twice. Further exploration is needed.
 - Believed to back out of allocations and locks correctly
   (tested with WITNESS, MUTEX_PROFILING, INVARIANTS and DIAGNOSTIC).

PR:             kern/43426, standards/54223
Reviewed by:    jake, alc
Approved by:    jake (mentor)
MFC after:	2 weeks
2003-08-11 07:14:08 +00:00
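
For reference, minimal userland use of the new system calls (standard POSIX usage):

    #include <sys/mman.h>
    #include <err.h>

    int
    main(void)
    {
        /* Wire every current and future mapping of this process. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1)
            err(1, "mlockall");

        /* ... latency-sensitive work, with no page-ins ... */

        /* Unwire everything again. */
        if (munlockall() == -1)
            err(1, "munlockall");
        return (0);
    }
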
Alan Cox
0551c08dee Introduce vm_map_pmap_enter(). Presently, this is a stub calling the MD
pmap_object_init_pt().
2003-06-29 23:32:55 +00:00
Alan Cox
bf5f21b622 Remove an unnecessary forward declaration. 2003-06-15 07:28:33 +00:00
David Schultz
72d97679ff - When the VM daemon is out of swap space and looking for a
process to kill, don't block on a map lock while holding the
  process lock.  Instead, skip processes whose map locks are held
  and find something else to kill.
- Add vm_map_trylock_read() to support the above.

Reviewed by:	alc, mike (mentor)
2003-03-12 23:13:16 +00:00
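
The resulting pattern in the kill path looks roughly like this (a sketch, not the committed code):

    /*
     * Sketch: never block on a map lock while the process lock is
     * held; if the trylock fails, skip this process and consider
     * another victim.
     */
    LIST_FOREACH(p, &allproc, p_list) {
        PROC_LOCK(p);
        if (!vm_map_trylock_read(&p->p_vmspace->vm_map)) {
            PROC_UNLOCK(p);
            continue;       /* map busy: look elsewhere */
        }
        /* ... size up p as a kill candidate ... */
        vm_map_unlock_read(&p->p_vmspace->vm_map);
        PROC_UNLOCK(p);
    }
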
Alan Cox
09c80124a3 Remove ENABLE_VFS_IOOPT. It is a long unfinished work-in-progress.
Discussed on:	arch@
2003-03-06 03:41:02 +00:00
Alan Cox
ea0081b61e Add a needed #include.
Reported by:	ia64 tinderbox
2003-01-01 00:13:01 +00:00
Alan Cox
36daaecd04 Implement a variant locking scheme for vm maps: Access to system maps
is now synchronized by a mutex, whereas access to user maps is still
synchronized by a lockmgr()-based lock.  Why?  No single type of lock,
including sx locks, meets the requirements of both types of vm map.
Sometimes we sleep while holding the lock on a user map.  Thus, a
mutex isn't appropriate.  On the other hand, both lockmgr()-based
and sx locks release Giant when a thread/process blocks during
contention for a lock.  This could lead to a race condition in a legacy
driver (that relies on Giant for synchronization) if it attempts to
kmem_malloc() and fails to immediately obtain the lock.  Fortunately,
we never sleep while holding a system map lock.
2002-12-31 19:38:04 +00:00
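
Schematically (a sketch of the dispatch, not the exact code):

    /*
     * Sketch: one locking interface, two implementations, selected
     * by map->system_map.  System maps never sleep while locked, so
     * a mutex, which does not drop Giant on contention, is safe for
     * them; user maps may sleep, so they keep a lockmgr()-based lock.
     */
    void
    _vm_map_lock(vm_map_t map, const char *file, int line)
    {
        if (map->system_map)
            _mtx_lock_flags(&map->system_mtx, 0, file, line);
        else
            lockmgr(&map->lock, LK_EXCLUSIVE, NULL, curthread);
    }
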
Matthew Dillon
389d2b6e21 Fix a refcount race with the vmspace structure. In order to prevent
resource starvation, we clean up as much of the vmspace structure as we
can when the last process using it exits.  The rest of the structure
is cleaned up when it is reaped.  But since exit1() decrements the ref
count it is possible for a double-free to occur if someone else, such as
the process swapout code, references and then dereferences the structure.
Additionally, the final cleanup of the structure should not occur until
the last process referencing it is reaped.

This commit solves the problem by introducing a secondary reference count,
called 'vm_exitingcnt'.  The normal reference count is decremented on exit
and vm_exitingcnt is incremented.  vm_exitingcnt is decremented when the
process is reaped.  When both vm_exitingcnt and vm_refcnt are 0, the
structure is freed for real.

MFC after:	3 weeks
2002-12-15 18:50:04 +00:00
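
In outline, the two-counter scheme works like this (a sketch; the function and helper names here are hypothetical):

    /* Sketch: run from exit1() when a process drops its vmspace. */
    void
    vmspace_exitfree_hint(struct vmspace *vm)
    {
        vm->vm_exitingcnt++;        /* held for the reaper */
        if (--vm->vm_refcnt == 0)
            vmspace_partial_cleanup(vm);    /* hypothetical helper */
    }

    /* Sketch: run when the process is finally reaped. */
    void
    vmspace_reap(struct vmspace *vm)
    {
        if (--vm->vm_exitingcnt == 0 && vm->vm_refcnt == 0)
            vmspace_dofree(vm);     /* hypothetical: free for real */
    }
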
Alan Cox
e94ce82689 o Update some comments. 2002-09-22 04:33:43 +00:00
Alfred Perlstein
8209f090f1 Change struct vmspace->vm_shm from void * to struct shmmap_state *; this
removes the need for casts in several cases.
2002-07-22 16:22:27 +00:00
Alfred Perlstein
2cc593fd8e Remove caddr_t. 2002-07-22 16:12:55 +00:00
Alan Cox
9688f93163 o Add a "needs wakeup" flag to the vm_map for use by kmem_alloc_wait()
and kmem_free_wakeup().  Previously, kmem_free_wakeup() always
   called wakeup().  In general, no one was sleeping.
 o Export vm_map_unlock_and_wait() and vm_map_wakeup() from vm_map.c
   for use in vm_kern.c.
2002-07-11 02:39:24 +00:00
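
In outline (a sketch of the flag's use with simplified signatures, not the committed code):

    /*
     * Sketch: kmem_alloc_wait() records that it is sleeping, so
     * kmem_free_wakeup() can skip the wakeup() in the common case
     * where no one is waiting for address space.
     */
    void
    vm_map_unlock_and_wait(vm_map_t map)
    {
        map->needs_wakeup = TRUE;
        /* ... unlock the map and tsleep() on the map ... */
    }

    void
    vm_map_wakeup(vm_map_t map)
    {
        if (map->needs_wakeup) {    /* someone is really asleep */
            map->needs_wakeup = FALSE;
            wakeup(map);
        }
    }
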
Alan Cox
366838ddfe o Eliminate vmspace::vm_minsaddr. It's initialized but never used.
o Replace stale comments in vmspace by "const until freed" annotations
   on some fields.
2002-06-25 18:14:38 +00:00
Alan Cox
1d7cf06c8c o Use vm_map_wire() and vm_map_unwire() in place of vm_map_pageable() and
vm_map_user_pageable().
 o Remove vm_map_pageable() and vm_map_user_pageable().
 o Remove vm_map_clear_recursive() and vm_map_set_recursive().  (They were
   only used by vm_map_pageable() and vm_map_user_pageable().)

Reviewed by:	tegge
2002-06-14 18:21:01 +00:00
Alan Cox
e27e17b711 o Remove an unnecessary call to vm_map_wakeup() from vm_map_unwire().
o Add a stub for vm_map_wire().

Note: the description of the previous commit had an error.  The in-
transition flag actually blocks the deallocation of a vm_map_entry by
vm_map_delete() and vm_map_simplify_entry().
2002-06-08 07:32:38 +00:00
Alan Cox
acd9a301ec o Add vm_map_unwire() for unwiring contiguous regions of either kernel
or user vm_maps.  In accordance with the standards for munlock(2),
   and in contrast to vm_map_user_pageable(), this implementation does not
   allow holes in the specified region.  This implementation uses the
   "in transition" flag described below.
 o Introduce a new flag, "in transition," to the vm_map_entry.
   Eventually, vm_map_delete() and vm_map_simplify_entry() will respect
   this flag by deallocating in-transition vm_map_entrys, allowing
   the vm_map lock to be safely released in vm_map_unwire() and (the
   forthcoming) vm_map_wire().
 o Modify vm_map_simplify_entry() to respect the in-transition flag.

In collaboration with:	tegge
2002-06-07 18:34:23 +00:00
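
The protocol, in outline (a sketch with the unwiring work elided):

    /* Sketch: vm_map_unwire()'s use of the in-transition flag. */
    int
    vm_map_unwire(vm_map_t map, vm_offset_t start, vm_offset_t end)
    {
        vm_map_entry_t entry, first_entry;

        vm_map_lock(map);
        (void)vm_map_lookup_entry(map, start, &first_entry);
        for (entry = first_entry; entry != &map->header &&
            entry->start < end; entry = entry->next)
            entry->eflags |= MAP_ENTRY_IN_TRANSITION;
        vm_map_unlock(map);     /* marked entries cannot be deleted */

        /* ... unwire the pages; this may sleep ... */

        vm_map_lock(map);
        for (entry = first_entry; entry != &map->header &&
            entry->start < end; entry = entry->next) {
            entry->eflags &= ~MAP_ENTRY_IN_TRANSITION;
            if (entry->eflags & MAP_ENTRY_NEEDS_WAKEUP) {
                entry->eflags &= ~MAP_ENTRY_NEEDS_WAKEUP;
                vm_map_wakeup(map);
            }
        }
        vm_map_unlock(map);
        return (KERN_SUCCESS);
    }
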
Alan Cox
61c075b67f o Remove GIANT_REQUIRED from vm_map_zfini(), vm_map_zinit(),
vm_map_create(), and vm_map_submap().
 o Make further use of a local variable in vm_map_entry_splay()
   that caches a reference to one of a vm_map_entry's children.
   (This reduces code size somewhat.)
 o Revert a part of revision 1.66, deinlining vmspace_pmap().
   (This function is MPSAFE.)
2002-06-01 22:41:43 +00:00
Alan Cox
794316a866 o Revert a part of revision 1.66: contrary to what that commit message says,
deinlining vm_map_entry_behavior() and vm_map_entry_set_behavior()
   actually increases the kernel's size.
 o Make vm_map_entry_set_behavior() static and add a comment describing
   its purpose.
 o Remove an unnecessary initialization statement from vm_map_entry_splay().
2002-06-01 16:59:30 +00:00
Alan Cox
4e94f40222 o Replace the vm_map's hint by the root of a splay tree. By design,
the last accessed datum is moved to the root of the splay tree.
   Therefore, on lookups in which the hint resulted in O(1) access,
   the splay tree still achieves O(1) access.  In contrast, on lookups
   in which the hint failed miserably, the splay tree achieves amortized
   logarithmic complexity, resulting in dramatic improvements on vm_maps
   with a large number of entries.  For example, the execution time
   for replaying an access log from www.cs.rice.edu against the thttpd
   web server was reduced by 23.5% due to the large number of files
   simultaneously mmap()ed by this server.  (The machine in question has
   enough memory to cache most of this workload.)

   Nothing comes for free: At present, I see a 0.2% slowdown on "buildworld"
   due to the overhead of maintaining the splay tree.  I believe that
   some or all of this can be eliminated through optimizations
   to the code.

Developed in collaboration with: Juan E Navarro <jnavarro@cs.rice.edu>
Reviewed by:	jeff
2002-05-24 01:33:24 +00:00
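
The lookup pattern, in outline (a sketch; vm_map_entry_splay() is the splay routine this commit introduces):

    /* Sketch: splay the queried address to the root, then test it. */
    boolean_t
    vm_map_lookup_entry(vm_map_t map, vm_offset_t address,
        vm_map_entry_t *entry)
    {
        vm_map_entry_t cur;

        cur = vm_map_entry_splay(address, map->root);
        map->root = cur;        /* the nearest entry is now the root */

        if (cur != NULL && address >= cur->start && address < cur->end) {
            *entry = cur;       /* repeated lookups hit in O(1) */
            return (TRUE);
        }
        /* ... otherwise set *entry to the predecessor ... */
        return (FALSE);
    }
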
Alan Cox
b3a882e936 o Header files shouldn't depend on options: Provide prototypes
for uiomoveco(), uioread(), and vm_uiomove() regardless
   of whether ENABLE_VFS_IOOPT is defined or not.

Submitted by:	bde
2002-05-06 06:20:04 +00:00
Alan Cox
e86256c1f4 o Move vm_freeze_copyopts() from vm_map.{c,h} to vm_object.{c,h}. It's plainly
an operation on a vm_object and belongs in the latter place.
2002-05-06 00:12:47 +00:00
Alan Cox
c50fe92b8d o Condition the compilation of uiomoveco() and vm_uiomove()
on ENABLE_VFS_IOOPT.
 o Add a comment to the effect that this code is experimental
   support for zero-copy I/O.
2002-05-05 22:42:40 +00:00
Alan Cox
569687d02f o Remove dead and lockmgr()-specific debugging code. 2002-05-02 02:32:09 +00:00
Alan Cox
780b1c0997 Pass the caller's file name and line number to the vm_map locking functions. 2002-04-28 23:12:52 +00:00
Alan Cox
d974f03c69 o Introduce and use vm_map_trylock() to replace several direct uses
of lockmgr().
 o Add missing synchronization to vmspace_swap_count(): Obtain a read lock
   on the vm_map before traversing it.
2002-04-28 06:07:54 +00:00
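
A plausible shape for the new primitive (a sketch; the lockmgr()-era call is illustrative):

    /* Sketch: a non-blocking acquire replacing raw lockmgr() calls. */
    int
    vm_map_trylock(vm_map_t map)
    {
        return (lockmgr(&map->lock, LK_EXCLUSIVE | LK_NOWAIT,
            NULL, curthread) == 0);
    }
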
Alan Cox
089b073345 o Begin documenting the (existing) locking protocol on the vm_map
in the same style as sys/proc.h.
 o Undo the de-inlining of several trivial, MPSAFE methods on the vm_map.
   (Contrary to the commit message for vm_map.h revision 1.66 and vm_map.c
   revision 1.206, de-inlining these methods increased the kernel's size.)
2002-04-27 22:01:37 +00:00
Alan Cox
b208d0633f Remove an unused option, VM_FAULT_HOLD, to vm_fault(). 2002-04-17 02:23:57 +00:00
Jeff Roberson
8355f576a9 This is the first part of the new kernel memory allocator. This replaces
malloc(9) and vm_zone with a slab like allocator.

Reviewed by:	arch@
2002-03-19 09:11:49 +00:00
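
For context, the shape of the zone interface this introduces (usage per uma(9); struct foo and foo_zone are hypothetical):

    #include <vm/uma.h>

    static uma_zone_t foo_zone;     /* hypothetical zone */

    /* At initialization: create a zone of fixed-size items. */
    static void
    foo_init(void)
    {
        foo_zone = uma_zcreate("foo", sizeof(struct foo),
            NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0);
    }

    /* Allocate and free items through the per-CPU caches. */
    static struct foo *
    foo_alloc(void)
    {
        return (uma_zalloc(foo_zone, M_WAITOK));
    }

    static void
    foo_free(struct foo *fp)
    {
        uma_zfree(foo_zone, fp);
    }
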
Brian Feldman
25adb370be Back out the modification of vm_map locks from lockmgr to sx locks. The
best path forward now is likely to change the lockmgr locks to simple
sleep mutexes, then see if any extra contention it generates is greater
than the removed overhead of managing local locking state information,
cost of extra calls into lockmgr, etc.

Additionally, making the vm_map lock a mutex and respecting it properly
will put us much closer to not needing Giant magic in vm.
2002-03-18 15:08:09 +00:00
Brian Feldman
0e0af8ecda Rename SI_SUB_MUTEX to SI_SUB_MTX_POOL to make the name at all accurate.
While doing this, move it earlier in the sysinit boot process so that the
VM system can use it.

After that, the system is now able to use sx locks instead of lockmgr
locks in the VM system.  To accomplish this, some of the more
questionable uses of the locks (such as testing whether they are
owned or not, as well as allowing shared+exclusive recursion) are
removed, and simpler logic throughout is used so locks should also be
easier to understand.

This has been tested on my laptop for months, and has not shown any
problems on SMP systems either, so it appears quite safe.  One more
user of lockmgr down, many more to go :)
2002-03-13 23:48:08 +00:00
Eivind Eklund
a128794977 - Remove a number of extra newlines that do not belong here according to
style(9)
- Minor space adjustment in cases where we have "( ", " )", if(), return(),
  while(), for(), etc.
- Add /* SYMBOL */ after a few #endifs.

Reviewed by:	alc
2002-03-10 21:52:48 +00:00
Alfred Perlstein
582ec34cd8 Fix a race with freeing vmspaces at process exit when vmspaces are
shared.

Also introduce vm_endcopy instead of using pointer tricks when
initializing new vmspaces.

The race occurred because of how the reference was used:
  test vmspace reference,
  possibly block,
  decrement reference

When sharing a vmspace between multiple processes it was possible
for two processes exiting at the same time to test the reference
count, possibly block, and then neither one would free it because
neither saw the other's update.

Submitted by: green
2002-02-05 21:23:05 +00:00
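
The race and the fix, in outline (a sketch; the helpers named here are hypothetical):

    /* Racy pattern this commit removes: with a shared vmspace, two
     * exiting processes can both see vm_refcnt > 1, block, and then
     * neither performs the free. */
    if (vm->vm_refcnt == 1)
        last_user_cleanup(vm);      /* hypothetical */
    /* ... possibly block here ... */
    vm->vm_refcnt--;

    /* Fixed pattern: decrement first, then act on the result, so
     * exactly one exiting sharer sees the count reach zero. */
    if (--vm->vm_refcnt == 0)
        vmspace_dofree(vm);         /* hypothetical */
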
Matthew Dillon
e302698320 Don't let pmap_object_init_pt() exhaust all available free pages
(allocating pv entries w/ zalloci) when called in a loop due to
an madvise().  It is possible to completely exhaust the free page list and
cause a system panic when an expected allocation fails.
2001-10-31 03:06:33 +00:00
Julian Elischer
b40ce4165d KSE Milestone 2
Note ALL MODULES MUST BE RECOMPILED
Make the kernel aware that there are smaller units of scheduling than the
process (but only allow one thread per process at this time).
This is functionally equivalent to the previous -current except
that there is a thread associated with each process.

Sorry john! (your next MFC will be a doozy!)

Reviewed by: peter@freebsd.org, dillon@freebsd.org

X-MFC after:    ha ha ha ha
2001-09-12 08:38:13 +00:00
Matthew Dillon
1b40f8c036 Change inlines back into mainline code in preparation for mutexing. Also,
most of these inlines had been bloated in -current far beyond their
original intent.  Normalize prototypes and function declarations to be ANSI
only (half already were).  And do some general cleanup.

(kernel size also reduced by 50-100K, but that isn't the prime intent)
2001-07-04 20:15:18 +00:00
Matthew Dillon
0cddd8f023 With Alfred's permission, remove vm_mtx in favor of a fine-grained approach
(this commit is just the first stage).  Also add various GIANT_ macros to
formalize the removal of Giant, making it easy to test in a more piecemeal
fashion. These macros will allow us to test fine-grained locks to a degree
before removing Giant, and also after, and to remove Giant in a piecemeal
fashion via sysctl's on those subsystems which the authors believe can
operate without Giant.
2001-07-04 16:20:28 +00:00
Matthew Dillon
ff2b5645b5 Two fixes to the out-of-swap process termination code. First, start killing
processes a little earlier to avoid a deadlock.  Second, when calculating
the 'largest process' do not just count RSS.  Instead count the RSS + SWAP
used by the process.  Without this the code tended to kill small
inconsequential processes like, oh, sshd, rather than one of the many
'eatmem 200MB' I run on a whim :-).  This fix has been extensively tested on
-stable and somewhat tested on -current and will be MFCd in a few days.

Shamed into fixing this by: ps
2001-06-09 18:06:58 +00:00
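
The new victim metric, in outline (a sketch; vmspace_resident_count() and vmspace_swap_count() are the obvious helpers for the two terms):

    /*
     * Sketch, inside the candidate scan: rank by total footprint
     * (resident plus swapped), so a mostly swapped-out memory hog
     * still outranks a small resident process such as sshd.
     */
    size = vmspace_resident_count(p->p_vmspace) +
        vmspace_swap_count(p->p_vmspace);
    if (size > bigsize) {
        bigproc = p;
        bigsize = size;
    }
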
Alfred Perlstein
2395531439 Introduce a global lock for the vm subsystem (vm_mtx).
vm_mtx does not recurse and is required for most low level
vm operations.

Faults cannot be taken without holding Giant.

Memory subsystems can now call the base page allocators safely.

Almost all atomic ops were removed as they are covered under the
vm mutex.

Alpha and ia64 now need to catch up to i386's trap handlers.

FFS and NFS have been tested, other filesystems will need minor
changes (grabbing the vm lock when twiddling page properties).

Reviewed (partially) by: jake, jhb
2001-05-19 01:28:09 +00:00
Mark Murray
559034b748 Putting sys/lockmgr.h in here allows us to depollute userland includes
a bit.
OK'ed by:	bde
2001-05-03 11:33:51 +00:00
Alfred Perlstein
305dd591ee Fix the botched rev 1.59 where I made it such that without INVARIANTS
the map is never locked.

Submitted by: tegge
2001-04-18 05:30:24 +00:00