mirror of https://git.FreeBSD.org/src.git synced 2024-12-24 11:29:10 +00:00
Commit Graph

1475 Commits

Author SHA1 Message Date
Alan Cox
dc907f6632 - Hold the page queues lock around vm_page_wakeup(). 2002-12-24 04:24:58 +00:00
Alan Cox
6e14fce9d9 - Hold the kernel_object's lock around vm_page_insert(..., kernel_object,
...).
2002-12-23 20:39:15 +00:00
Alan Cox
7af7dd3c6f Eliminate some dead code. (Any possible use for this code died with
vm/vm_page.c revision 1.220.)

Submitted by:	bde
2002-12-23 04:35:38 +00:00
Matthew Dillon
9991ea7178 The UP -current was not properly counting the per-cpu VM stats in the
sysctl code.  This makes 'systat -vm 1's syscall count work again.

Submitted by:	Michal Mertl <mime@traveller.cz>
Note:		also slated for 5.0
2002-12-22 05:04:30 +00:00
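The fix amounts to summing the per-CPU counters when the sysctl is read, instead of reporting only one CPU's value; a rough sketch of that idea (the array layout and names here are purely illustrative, not the kernel's):

    /*
     * Illustrative: fold per-CPU syscall counts into one total for the
     * sysctl handler instead of returning a single CPU's counter.
     */
    static u_int
    total_syscalls(const u_int *percpu_counts, int ncpus)
    {
            u_int total = 0;
            int i;

            for (i = 0; i < ncpus; i++)
                    total += percpu_counts[i];
            return (total);
    }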
Alan Cox
671e427ce9 Increase the scope of the kmem_object locking in kmem_malloc(). 2002-12-20 18:59:23 +00:00
Alan Cox
4b420d501f Add a mutex to struct vm_object. Initialize and destroy that mutex
at appropriate times.  For the moment, the mutex is only used on
the kmem_object.
2002-12-20 05:10:32 +00:00
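A minimal sketch of initializing and tearing down such a per-object mutex (the structure and lock name are illustrative, not taken from the commit):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    struct vm_object_sketch {
            struct mtx mtx;         /* per-object lock (illustrative) */
            /* other object fields */
    };

    static void
    object_init(struct vm_object_sketch *object)
    {
            /* Set up the mutex once, when the object is created. */
            mtx_init(&object->mtx, "vm object", NULL, MTX_DEF);
    }

    static void
    object_destroy(struct vm_object_sketch *object)
    {
            /* Tear it down when the object is freed. */
            mtx_destroy(&object->mtx);
    }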
Alan Cox
cf3e6e4837 Remove the hash_rand field from struct vm_object. As of revision 1.215 of
vm/vm_page.c, it is unused.
2002-12-19 20:01:22 +00:00
Alan Cox
24c9ad6bed - Remove vm_page_sleep_busy(). The transition to vm_page_sleep_if_busy(),
which incorporates page queue and field locking, is complete.
 - Assert that the page queue lock rather than Giant is held in
   vm_page_flag_set().
2002-12-19 07:23:46 +00:00
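A minimal sketch of the calling pattern the new interface implies, assuming a vm_page_t m already in scope and the era's vm_page_sleep_if_busy(m, also_m_busy, wmesg) signature:

    for (;;) {
            vm_page_lock_queues();
            if (!vm_page_sleep_if_busy(m, FALSE, "pgwait"))
                    break;
            /*
             * Assumed behaviour: the page was busy, the call slept and
             * dropped the page queues lock before returning TRUE, so
             * loop and take the lock again before re-checking.
             */
    }
    vm_page_busy(m);                /* page queues lock is held here */
    vm_page_unlock_queues();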
Alan Cox
9a96b6382a - Hold the page queues lock when performing vm_page_busy() or
vm_page_flag_set().
 - Replace vm_page_sleep_busy() with proper page queues locking
   and vm_page_sleep_if_busy().
2002-12-19 01:20:24 +00:00
Alan Cox
bd82dc7460 - Hold the page queues lock when performing vm_page_busy().
- Replace vm_page_sleep_busy() with proper page queues locking
   and vm_page_sleep_if_busy().
2002-12-18 04:39:15 +00:00
Alan Cox
b365ea9e30 Hold the page queues lock when performing vm_page_flag_set(). 2002-12-18 04:02:02 +00:00
Alan Cox
d8e7c54e1e Hold the page queues lock when performing vm_page_flag_set(). 2002-12-17 19:55:28 +00:00
Matthew Dillon
fa7dd9c5bc Change the way ELF coredumps are handled. Instead of unconditionally
skipping read-only pages, which can result in valuable non-text-related
data not getting dumped, the ELF loader and the dynamic loader now mark
read-only text pages NOCORE and the coredump code only checks (primarily) for
complete inaccessibility of the page or NOCORE being set.

Certain applications which map large amounts of read-only data will
produce much larger cores.  A new sysctl has been added,
debug.elf_legacy_coredump, which will revert to the old behavior.

This commit represents collaborative work by all parties involved.
The PR contains a program demonstrating the problem.

PR:		kern/45994
Submitted by:	"Peter Edwards" <pmedwards@eircom.net>, Archie Cobbs <archie@dellroad.org>
Reviewed by:	jdp, dillon
MFC after:	7 days
2002-12-16 19:24:43 +00:00
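The per-mapping decision described above might look roughly like the following sketch; the flag names are placeholders except for debug.elf_legacy_coredump, which the commit names explicitly:

    static int elf_legacy_coredump = 0;     /* debug.elf_legacy_coredump */

    /*
     * Illustrative predicate: should this mapping go into the core file?
     * The real test lives in the ELF coredump code and uses the kernel's
     * own flag names; "nocore" stands in for the NOCORE marking applied
     * by the ELF loader and the dynamic loader.
     */
    static int
    dump_mapping(vm_prot_t prot, int nocore)
    {
            if (elf_legacy_coredump)
                    return ((prot & VM_PROT_WRITE) != 0);   /* old: skip read-only */
            if (prot == VM_PROT_NONE)
                    return (0);             /* completely inaccessible */
            if (nocore)
                    return (0);             /* marked NOCORE (read-only text) */
            return (1);
    }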
Alan Cox
4b36fe0cbd Perform vm_object_lock() and vm_object_unlock() on kmem_object
around vm_page_lookup() and vm_page_free().
2002-12-15 21:09:09 +00:00
Matthew Dillon
92da00bb24 This is David Schultz's swapoff code which I am finally able to commit.
This should be considered highly experimental for the moment.

Submitted by:	David Schultz <dschultz@uclink.Berkeley.EDU>
MFC after:	3 weeks
2002-12-15 19:17:57 +00:00
Matthew Dillon
389d2b6e21 Fix a refcount race with the vmspace structure. In order to prevent
resource starvation we clean up as much of the vmspace structure as we
can when the last process using it exits.  The rest of the structure
is cleaned up when it is reaped.  But since exit1() decrements the ref
count it is possible for a double-free to occur if someone else, such as
the process swapout code, references and then dereferences the structure.
Additionally, the final cleanup of the structure should not occur until
the last process referencing it is reaped.

This commit solves the problem by introducing a secondary reference count,
called 'vm_exitingcnt'.  The normal reference count is decremented on exit
and vm_exitingcnt is incremented.  vm_exitingcnt is decremented when the
process is reaped.  When both vm_exitingcnt and vm_refcnt are 0, the
structure is freed for real.

MFC after:	3 weeks
2002-12-15 18:50:04 +00:00
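A compressed sketch of the two-counter scheme described above; locking is elided and the helper names (vmspace_release_resources, vmspace_free_real) are illustrative, not from the commit:

    struct vmspace_sketch {
            int vm_refcnt;          /* normal references */
            int vm_exitingcnt;      /* exited but not yet reaped */
    };

    static void vmspace_release_resources(struct vmspace_sketch *); /* partial cleanup */
    static void vmspace_free_real(struct vmspace_sketch *);         /* final free */

    /*
     * exit1(): swap the normal reference for an "exiting" reference, so a
     * racing user (e.g. the process swapout code) cannot trigger a free.
     */
    static void
    vmspace_exit(struct vmspace_sketch *vm)
    {
            vm->vm_exitingcnt++;
            if (--vm->vm_refcnt == 0)
                    vmspace_release_resources(vm);
    }

    /* Reaping: the structure is freed only when both counts reach zero. */
    static void
    vmspace_reap(struct vmspace_sketch *vm)
    {
            if (--vm->vm_exitingcnt == 0 && vm->vm_refcnt == 0)
                    vmspace_free_real(vm);
    }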
Alan Cox
2840cabe6a As per the comments, vm_object_page_remove() now expects its caller to lock
the object (i.e., acquire Giant).
2002-12-15 07:30:51 +00:00
Alan Cox
5e83956af5 Perform vm_object_lock() and vm_object_unlock() around
vm_object_page_remove().
2002-12-15 07:16:51 +00:00
Alan Cox
475e8011ab Perform vm_object_lock() and vm_object_unlock() around
vm_object_page_remove().
2002-12-15 05:41:56 +00:00
Alan Cox
495bedfbd0 Assert that the page queues lock is held in vm_page_unhold(),
vm_page_remove(), and vm_page_free_toq().
2002-12-15 00:06:02 +00:00
Alan Cox
bc105a6797 Hold the page queues lock when calling pmap_protect(); it updates fields
of the vm_page structure.  Make the style of the pmap_protect() calls
consistent.

Approved by:	re (blanket)
2002-12-01 18:57:56 +00:00
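The locking pattern these pmap_protect() commits converge on, as a short sketch (map, start, and end are assumed to be in scope):

    /*
     * pmap_protect() may update dirty/referenced fields in the vm_page,
     * so the page queues lock must bracket the call.
     */
    vm_page_lock_queues();
    pmap_protect(map->pmap, start, end, VM_PROT_READ);
    vm_page_unlock_queues();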
Alan Cox
38857e7f73 Hold the page queues lock when calling pmap_protect(); it updates fields
of the vm_page structure.  Nearby, remove an unnecessary semicolon and
return statement.

Approved by:	re (blanket)
2002-12-01 05:40:18 +00:00
Alan Cox
78f7187d01 Increase the scope of the page queue lock in vm_pageout_scan().
Approved by:	re (blanket)
2002-12-01 00:02:39 +00:00
Alan Cox
e80b7b691e Lock page field accesses in mincore().
Approved by:	re (blanket)
2002-11-28 08:01:39 +00:00
Alan Cox
85e0124324 Hold the page queues lock when performing pmap_clear_modify().
Approved by:	re (blanket)
2002-11-27 19:51:48 +00:00
Alan Cox
3a199de3d9 Hold the page queues lock while performing pmap_page_protect().
Approved by:	re (blanket)
2002-11-27 08:03:24 +00:00
Alan Cox
85e03a7e1e Acquire and release the page queues lock around calls to pmap_protect()
because it updates flags within the vm page.

Approved by:	re (blanket)
2002-11-25 22:00:31 +00:00
Alan Cox
13dc71ed40 Extend the scope of the page queues/fields locking in vm_freeze_copyopts()
to cover pmap_remove_all().

Approved by:	re
2002-11-24 06:13:38 +00:00
Alan Cox
178949e021 Hold the page queues/flags lock when calling vm_page_set_validclean().
Approved by:	re
2002-11-23 19:10:31 +00:00
Alan Cox
ba0208b945 Assert that the page queues lock rather than Giant is held in
vm_pageout_page_free().

Approved by:	re
2002-11-23 08:08:54 +00:00
Alan Cox
e8a27959f6 Add page queue and flag locking in vnode_pager_setsize().
Approved by:	re
2002-11-23 03:58:35 +00:00
Jeff Roberson
855a310fcb - Add an event that is triggered when the system is low on memory. This is
intended to be used by significant memory consumers so that they may drain
   some of their caches.

Inspired by:	phk
Approved by:	re
Tested on:	x86, alpha
2002-11-21 09:17:56 +00:00
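A sketch of how a memory consumer might hook the new low-memory event through the kernel's event-handler mechanism; the handler body and names are illustrative, and the exact vm_lowmem handler signature should be checked against eventhandler.h:

    #include <sys/param.h>
    #include <sys/eventhandler.h>

    static void
    mycache_lowmem(void *arg, int flags)
    {
            /* Drop whatever cached data can be rebuilt later. */
            mycache_drain();                /* illustrative helper */
    }

    static void
    mycache_init(void)
    {
            EVENTHANDLER_REGISTER(vm_lowmem, mycache_lowmem, NULL,
                EVENTHANDLER_PRI_FIRST);
    }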
Jeff Roberson
74c924b553 - Wakeup the correct address when a zone is no longer full.
Spotted by:	jake
2002-11-18 08:27:14 +00:00
Alan Cox
a12cc0e489 Remove vm_page_protect(). Instead, use pmap_page_protect() directly. 2002-11-18 04:05:22 +00:00
Jeff Roberson
f3da1873bc - Don't forget the flags value when using boot pages.
Reported by:	grehan
2002-11-16 20:57:41 +00:00
Alan Cox
4fec79bef8 Now that pmap_remove_all() is exported by our pmap implementations
use it directly.
2002-11-16 07:44:25 +00:00
Alan Cox
81b9ee99e7 Remove dead code that hasn't been needed since the demise of share maps
in various revisions of vm/vm_map.c between 1.148 and 1.153.
2002-11-13 19:50:06 +00:00
Alan Cox
eea85e9bb6 Move pmap_collect() out of the machine-dependent code, rename it
to reflect its new location, and add page queue and flag locking.

Notes: (1) alpha, i386, and ia64 had identical implementations
of pmap_collect() in terms of machine-independent interfaces;
(2) sparc64 doesn't require it; (3) powerpc had it as a TODO.
2002-11-13 05:39:58 +00:00
Olivier Houchard
f64e99baa2 Remove extra #include<sys/vmmeter.h>. 2002-11-11 13:57:50 +00:00
Matt Jacob
81f71edaec atomic_set_8 isn't MI. Instead, follow Jake's suggestions about
ZONE_LOCK.
2002-11-11 11:50:03 +00:00
Alan Cox
6372d61e3e - Clear the page's PG_WRITEABLE flag in the i386's pmap_changebit()
if we're removing write access from the page's PTEs.
 - Export pmap_remove_all() on alpha, i386, and ia64.  (It's already
   exported on sparc64.)
2002-11-11 05:17:34 +00:00
Matt Jacob
7ca05a39c7 Use atomic_set_8 on the us_freelist maps as they are not otherwise
protected. Furthermore, on some RISC architectures with no native
byte operations, the surrounding 3 bytes are also affected by the
read-modify-write that has to occur.
2002-11-10 16:16:44 +00:00
Alan Cox
d154fb4fe6 When prot is VM_PROT_NONE, call pmap_page_protect() directly rather than
indirectly through vm_page_protect().  The one remaining page flag that
is updated by vm_page_protect() is already being updated by our various
pmap implementations.

Note: A later commit will similarly change the VM_PROT_READ case and
eliminate vm_page_protect().
2002-11-10 07:12:04 +00:00
Alan Cox
f6116791a2 Fix an error case in vm_map_wire(): unwiring of an entry during cleanup
after a user wire error fails when the entry is already system wired.

Reported by:	tegge
2002-11-09 21:26:49 +00:00
Alan Cox
1f7c5f98d7 In vm_page_remove(), avoid calling vm_page_splay() if the object's memq
is empty.
2002-11-09 08:27:42 +00:00
Thomas Moestl
0fca57b8b8 Move the definitions of the hw.physmem, hw.usermem and hw.availpages
sysctls to MI code; this reduces code duplication and makes all of them
available on sparc64, and the latter two on powerpc.
The semantics of the i386 and pc98 hw.availpages are slightly changed:
previously, holes between ranges of available pages would be included,
while they are excluded now. The new behaviour should be more correct
and brings i386 in line with the other architectures.

Move physmem to vm/vm_init.c, where this variable is used in MI code.
2002-11-07 23:57:17 +00:00
Maxime Henrion
bf1001fa0f Better printf() formats. 2002-11-07 23:16:22 +00:00
Maxime Henrion
e47cd172e0 Some more printf() format fixes. 2002-11-07 23:03:04 +00:00
Maxime Henrion
cd034a5be9 Correctly print vm_offset_t types. 2002-11-07 22:49:07 +00:00
Alan Cox
ada2a050be Export the function vm_page_splay(). 2002-11-04 19:21:39 +00:00
Alan Cox
c71f01affe - Remove the memory allocation for the object/offset hash table
because it's no longer used.  (See revision 1.215.)
 - Fix a harmless bug: the number of vm_page structures allocated wasn't
   properly adjusted when uma_bootstrap() was introduced.  Consequently,
   we were allocating 30 unused vm_page structures.
 - Wrap a long line.
2002-11-03 22:20:42 +00:00
Alan Cox
02af9de6fc Remove the vm page buckets mutex. As of revision 1.215 of vm/vm_page.c,
it is unused.
2002-11-02 22:39:30 +00:00
Jeff Roberson
48eea37508 - Add support for machine dependent page allocation routines. MD code
may define UMA_MD_SMALL_ALLOC to make use of this feature.

Reviewed by:	peter, jake
2002-11-01 01:01:27 +00:00
Jeff Roberson
026aa839a4 - Add a new flag to vm_page_alloc, VM_ALLOC_NOOBJ. This tells
vm_page_alloc not to insert this page into an object.  The pindex is
   still used for colorization.
 - Rework vm_page_select_* to accept a color instead of an object and
pindex to work with VM_ALLOC_NOOBJ.
 - Document other VM_ALLOC_ flags.

Reviewed by:	peter, jake
2002-11-01 00:59:03 +00:00
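A sketch of the new allocation style; the flag combination and error handling are illustrative:

    vm_page_t m;
    vm_pindex_t color = 0;          /* used only to pick a page color */

    /*
     * With VM_ALLOC_NOOBJ the page is not inserted into any object,
     * so no object is passed; the pindex argument only drives
     * colorization.
     */
    m = vm_page_alloc(NULL, color, VM_ALLOC_NOOBJ | VM_ALLOC_SYSTEM);
    if (m == NULL)
            return (ENOMEM);        /* or wait and retry, as the caller prefers */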
Robert Watson
03ce2c0c9b Merge from MAC tree: rename mac_check_vnode_swapon() to
mac_check_system_swapon(), to reflect the fact that the primary
object of this change is the running kernel as a whole, rather
than just the vnode.  We'll drop additional checks of this
class into the same check namespace, including reboot(),
sysctl(), et al.

Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, Network Associates Laboratories
2002-10-27 06:54:06 +00:00
Jeff Roberson
bbee39c629 - Now that uma_zalloc_internal is not the fast path, don't be so fussy about
extra function calls.  Refactor uma_zalloc_internal into separate functions
   for finding the most appropriate slab, filling buckets, allocating single
   items, and pulling items off of slabs.  This makes the code significantly
   cleaner.
 - This also fixes the "Returning an empty bucket." panic that a few people
   have seen.

Tested On:	alpha, x86
2002-10-24 07:59:03 +00:00
Jeff Roberson
bba739abf9 - Move the destructor calls so that they are not called with the zone lock
held.  This avoids a lock order reversal when destroying zones.
   Unfortunately, this also means that the free checks are not done before
   the destructor is called.

Reported by:	phk
2002-10-24 06:17:30 +00:00
Robert Watson
3e732e7d7d Invoke mac_check_vnode_mmap() during mmap operations on vnodes,
permitting policies to restrict access to memory mapping based on
the credential requesting the mapping, the target vnode, the
requested rights, or other policy considerations.

Approved by:	re
Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, Network Associates Laboratories
2002-10-22 15:56:44 +00:00
Robert Watson
1cbfd977fd Introduce MAC_CHECK_VNODE_SWAPON, which permits MAC policies to
perform authorization checks during swapon() events; policies
might choose to enforce protections based on the credential
requesting the swap configuration, the target of the swap operation,
or other factors such as internal policy state.

Approved by:	re
Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, Network Associates Laboratories
2002-10-22 15:53:43 +00:00
John Baldwin
1c865ac70e - Check that a process isn't a new process (p_state == PRS_NEW) before
trying to acquire its proc lock since the proc lock may not have been
  constructed yet.
- Split up the one big comment at the top of the loop and put the pieces
  in the right order above the various checks.

Reported by:	kris (1)
2002-10-22 14:31:32 +00:00
Sheldon Hearn
29b4d52653 Fix typo in comments (misspelled "necessary"). 2002-10-22 12:10:27 +00:00
Alan Cox
f3b676f0ad o Reinline vm_page_undirty(), reducing the kernel size. (This reverts
a part of vm_page.h revision 1.87 and vm_page.c revision 1.167.)
2002-10-20 19:57:55 +00:00
Alan Cox
f4ecdf056e Complete the page queues locking needed for the page-based copy-
on-write (COW) mechanism.  (This mechanism is used by the zero-copy
TCP/IP implementation.)
 - Extend the scope of the page queues lock in vm_fault()
   to cover vm_page_cowfault().
 - Modify vm_page_cowfault() to release the page queues lock
   if it sleeps.
2002-10-19 18:34:39 +00:00
Matthew Dillon
b86ec922be Replace the vm_page hash table with a per-vmobject splay tree. There should
be no major change in performance from this change at this time but this
will allow other work to progress:  Giant lock removal around VM system
in favor of per-object mutexes, ranged fsyncs, more optimal COMMIT rpc's for
NFS, partial filesystem syncs by the syncer, more optimal object flushing,
etc.  Note that the buffer cache is already using a similar splay tree
mechanism.

Note that a good chunk of the old hash table code is still in the tree.
Alan or I will remove it prior to the release if the new code does not
introduce unsolvable bugs, else we can revert more easily.

Submitted by:	alc	(this is Alan's code)
Approved by:	re
2002-10-18 17:24:30 +00:00
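Conceptually, a lookup against the per-object splay tree works like the sketch below: vm_page_splay() rotates the closest page to the root, so a hit is simply the new root. The names follow the commit, but the body is an illustration rather than the committed code.

    vm_page_t
    lookup_sketch(vm_object_t object, vm_pindex_t pindex)
    {
            vm_page_t m;

            if ((m = object->root) != NULL && m->pindex != pindex) {
                    m = vm_page_splay(pindex, m);
                    if ((object->root = m)->pindex != pindex)
                            m = NULL;       /* no page at this index */
            }
            return (m);
    }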
Poul-Henning Kamp
af045176d1 Properly put macro args in ().
Spotted by:	FlexeLint.
2002-10-16 10:52:15 +00:00
Julian Elischer
d524d69b16 Remove old useless debugging code 2002-10-14 20:31:54 +00:00
Jeff Roberson
b43179fbe8 - Create a new scheduler api that is defined in sys/sched.h
- Begin moving scheduler specific functionality into sched_4bsd.c
 - Replace direct manipulation of scheduler data with hooks provided by the
   new api.
 - Remove KSE specific state modifications and single runq assumptions from
   kern_switch.c

Reviewed by:	-arch
2002-10-12 05:32:24 +00:00
John Baldwin
551cf4e150 Rename the mutex thread and process states to use a more generic 'LOCK'
name instead.  (e.g., SLOCK instead of SMTX, TD_ON_LOCK() instead of
TD_ON_MUTEX())  Eventually a turnstile abstraction will be added that
will be shared with mutexes and other types of locks.  SLOCK/TDI_LOCK will
be used internally by the turnstile code and will not be specific to
mutexes.  Making the change now ensures that turnstiles can be dropped
in at a later date without affecting the ABI of userland applications.
2002-10-02 20:31:47 +00:00
Scott Long
316ec49abd Some kernel threads try to do significant work, and the default KSTACK_PAGES
doesn't give them enough stack to do much before blowing away the pcb.
This adds MI and MD code to allow the allocation of an alternate kstack
whose size can be specified when calling kthread_create.  Passing the
value 0 prevents the alternate kstack from being created.  Note that the
ia64 MD code is missing for now, and PowerPC was only partially written
due to the pmap.c being incomplete there.
Though this patch does not modify anything to make use of the alternate
kstack, acpi and usb are good candidates.

Reviewed by:	jake, peter, jhb
2002-10-02 07:44:29 +00:00
Poul-Henning Kamp
37c841831f Be consistent about "static" functions: if the function is marked
static in its prototype, mark it static at the definition too.

Inspired by:    FlexeLint warning #512
2002-09-28 17:15:38 +00:00
Jeff Roberson
3ef3e7c42b - Get rid of the unused LK_NOOBJ. 2002-09-25 01:24:58 +00:00
Jeff Roberson
6a2eac8acc - Lock access to numoutput on the swap devices. 2002-09-25 01:24:17 +00:00
Jeff Roberson
63e7e60dba - Add a ASSERT_VOP_LOCKED in vnode_pager_alloc.
- Lock access to v_iflags.
2002-09-25 01:23:43 +00:00
Matthew N. Dodd
4a2eca23ca Modify vm_map_clean() (and thus the msync(2) system call) to support
invalidation of cached pages for objects of type OBJT_DEVICE.

Submitted by:	Christian Zander <zander@minion.de>
Approved by:	alc
2002-09-22 08:22:32 +00:00
Alan Cox
e94ce82689 o Update some comments. 2002-09-22 04:33:43 +00:00
Jake Burkholder
05ba50f522 Use the fields in the sysentvec and in the vm map header in place of the
constants VM_MIN_ADDRESS, VM_MAXUSER_ADDRESS, USRSTACK and PS_STRINGS.
This is mainly so that they can be variable even for the native abi, based
on different machine types.  Get stack protections from the sysentvec too.
This makes it trivial to map the stack non-executable for certain abis, on
machines that support it.
2002-09-21 22:07:17 +00:00
Alan Cox
8aadcc5368 Reduce namespace pollution.
Submitted by:	bde
2002-09-21 07:51:44 +00:00
Jeff Roberson
f461cf2297 - Use my freebsd email alias in the copyright.
- Remove redundant instances of my email alias in the file summary.
2002-09-19 06:05:32 +00:00
Jeff Roberson
99571dc345 - Split UMA_ZFLAG_OFFPAGE into UMA_ZFLAG_OFFPAGE and UMA_ZFLAG_HASH.
- Remove all instances of the mallochash.
 - Stash the slab pointer in the vm page's object pointer when allocating from
   the kmem_obj.
 - Use the overloaded object pointer to find slabs for malloced memory.
2002-09-18 08:26:30 +00:00
Nate Lawson
06be2aaa83 Remove all use of vnode->v_tag, replacing with appropriate substitutes.
v_tag is now const char * and should only be used for debugging.

Additionally:
1. All users of VT_NTS now check vfsconf->vf_type for VFCF_NETWORK
2. The user of VT_PROCFS now checks for the new flag VV_PROCDEP, which
is propagated by pseudofs to all child vnodes if the fs sets PFS_PROCDEP.

Suggested by:   phk
Reviewed by:    bde, rwatson (earlier version)
2002-09-14 09:02:28 +00:00
Julian Elischer
71fad9fdee Completely redo thread states.
Reviewed by:	davidxu@freebsd.org
2002-09-11 08:13:56 +00:00
Seigo Tanimura
b1f99ebe2b - Do not swap out a process if it is in creation. The process may have no
address space yet.

- Check whether a process is a system process prior to dereferencing
  its p_vmspace.  Aio assumes that only the curthread switches the address
  space of a system process.
2002-09-09 09:05:06 +00:00
Julian Elischer
1faf202ea9 Use UMA as a complex object allocator.
The process allocator now caches and hands out complete process structures
*including substructures*.

i.e. it gets the process structure with the first thread (and soon KSE)
already allocated and attached, all in one hit.

For the average non-threaded program (non-KSE, that is) the allocated thread and
its stack remain attached to the process, even when the process is unused and
in the process cache.  This saves having to allocate and attach it later,
effectively bringing us (hopefully) close to the efficiency of pre-KSE systems
where these were a single structure.

Reviewed by:	davidxu@freebsd.org, peter@freebsd.org
2002-09-06 07:00:37 +00:00
Bruce Evans
6af7f1e511 Use `struct uma_zone *' instead of uma_zone_t, so that <sys/uma.h> isn't
a prerequisite.
2002-09-05 14:04:34 +00:00
David Xu
1279572a92 s/SGNL/SIG/
s/SNGL/SINGLE/
s/SNGLE/SINGLE/

Fix abbreviations for the P_STOPPED_* etc. flags; in the original code they were
inconsistent and difficult to distinguish.

Approved by: julian (mentor)
2002-09-05 07:30:18 +00:00
Alan Cox
8a59b15cd4 o Synchronize updates to struct vm_page::cow with the page queues lock. 2002-09-02 04:04:12 +00:00
Matthew Dillon
ec61f55d42 Reduce the maximum KVA reserved for swap meta structures from 70 to 32 MB.
Reduce the swap meta calculation by a factor of 2; it's still massive overkill.

X-MFC after: immediately
2002-08-31 21:15:29 +00:00
Peter Wemm
447b3772dc Change hw.physmem and hw.usermem to unsigned long like they used to be
in the original hardwired sysctl implementation.

The buf size calculator still overflows an integer on machines with large
KVA (eg: ia64) where the number of pages does not fit into an int.  Use
'long' there.

Change Maxmem and physmem and related variables to 'long', mostly for
completeness.  Machines are not likely to overflow 'int' pages in the
near term, but then again, 640K ought to be enough for anybody.  This
comes for free on 32 bit machines, so why not?
2002-08-30 04:04:37 +00:00
Alan Cox
6508a194aa o Retire pmap_pageable(). It's an advisory routine that none
of our platforms implements.
2002-08-25 04:20:05 +00:00
Alan Cox
fff6062ab6 o Retire vm_page_zero_fill() and vm_page_zero_fill_area(). Ever since
pmap_zero_page() and pmap_zero_page_area() were modified to accept
   a struct vm_page * instead of a physical address, vm_page_zero_fill()
   and vm_page_zero_fill_area() have served no purpose.
2002-08-25 00:22:31 +00:00
Alan Cox
15c176c119 o Use vm_object_lock() in place of directly locking Giant.
Reviewed by:	md5
2002-08-24 18:44:52 +00:00
Alan Cox
4eaa117956 o Use vm_object_lock() in place of Giant when manipulating a vm object
in vm_map_insert().
2002-08-24 17:52:08 +00:00
Alan Cox
d52bc3438c o Resurrect vm_object_lock() and vm_object_unlock() from revision 1.19.
(For now, they simply acquire and release Giant.)
2002-08-24 07:15:14 +00:00
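At this stage the resurrected interface is only a thin veneer over Giant, roughly:

    /*
     * For now these simply take and drop Giant; a later pass can swap in
     * a per-object mutex without touching the callers.
     */
    void
    vm_object_lock(vm_object_t object __unused)
    {
            mtx_lock(&Giant);
    }

    void
    vm_object_unlock(vm_object_t object __unused)
    {
            mtx_unlock(&Giant);
    }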
Archie Cobbs
55f7c614fd Don't use "NULL" when "0" is really meant. 2002-08-21 23:39:52 +00:00
Alan Cox
60582cbe6d o Assert that the page queues lock is held in vm_page_activate(). 2002-08-11 00:21:40 +00:00
Alan Cox
99cb3c4c0f o Lock page queue accesses by vm_page_activate(). 2002-08-11 00:14:10 +00:00
Alan Cox
67ef391e00 o Lock page queue accesses by vm_page_activate(). 2002-08-10 23:53:59 +00:00
Alan Cox
a9911f9a0f o Move a call to vm_page_wakeup() inside the scope of the page queues lock. 2002-08-10 23:27:06 +00:00
Alan Cox
38f612e053 o Remove the setting and clearing of the PG_MAPPED flag from the alpha and
ia64 pmap.
 o Remove the PG_MAPPED flag's declaration.
2002-08-10 18:01:39 +00:00
Alan Cox
db44450b11 o Remove the setting and clearing of the PG_MAPPED flag. (This flag is
obsolete.)
2002-08-10 07:11:16 +00:00