mirror of https://git.FreeBSD.org/src.git synced 2024-12-23 11:18:54 +00:00
Commit Graph

94832 Commits

Author SHA1 Message Date
Bruce M Simpson
dcf59a59fc Fix a security problem in sysctl() the long way round.
Use pre-emption detection to avoid the need for wiring a userland buffer
when copying opaque data structures.

sysctl_wire_old_buffer() is now a no-op. Other consumers of this
API should use pre-emption detection to notice update collisions.

vslock() and vsunlock() should no longer be called by any code
and should be retired in subsequent commits.

Discussed with:	pete, phk
MFC after:	1 week
2003-10-05 09:37:47 +00:00
Bruce M Simpson
0c9601bc6b Add a pre-emption counter, td_generation, so that threads can notice
when they have been pre-empted by other threads. This is bumped from
within mi_switch() every time a context switch takes place.

Discussed with:	pete
2003-10-05 09:35:08 +00:00
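
Together, the two entries above describe a pattern: instead of wiring the userland buffer, a reader snapshots td_generation, copies the data out, and retries if mi_switch() bumped the counter in the meantime. A minimal sketch of that idea follows; the helper name and retry policy are assumptions for illustration, not the committed sysctl code.

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/proc.h>

/*
 * Illustrative sketch only, not the committed sysctl code: copy an
 * opaque kernel structure out to userland and retry if the thread was
 * pre-empted mid-copy, using the td_generation counter that
 * mi_switch() bumps on every context switch.
 */
static int
copyout_opaque(struct thread *td, const void *kbuf, void *ubuf, size_t len)
{
	int error, gen;

	do {
		gen = td->td_generation;	/* snapshot before copying */
		error = copyout(kbuf, ubuf, len);
		if (error != 0)
			return (error);
		/*
		 * If the counter moved, another thread ran and may have
		 * updated the source structure; copy it out again.
		 */
	} while (gen != td->td_generation);
	return (0);
}
```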
Hiroki Sato
fa94ec29ff Add a sentence forgotten in the previous commit. 2003-10-05 09:17:25 +00:00
Yoshihiro Takahashi
98120869b4 MFi386: revisions 1.572, 1.573 and 1.574. 2003-10-05 09:05:45 +00:00
Yoshihiro Takahashi
d4b3b85f35 MFi386: revision 1.205 2003-10-05 08:56:49 +00:00
Bruce M Simpson
51830edcc5 Fold the vslock() and vsunlock() calls in this file with #if 0's; they will
go away in due course. Involuntary pre-emption means that we can't count
on wiring of pages alone for consistency when performing a SYSCTL_OUT()
bigger than PAGE_SIZE.

Discussed with:	pete, phk
2003-10-05 08:38:22 +00:00
Hiroki Sato
fc8d724c56 New release note: SA-03:18. 2003-10-05 08:17:53 +00:00
Hiroki Sato
a7d73c0e2d New errata: SA-03:14, SA-03:17, SA-03:18. 2003-10-05 08:15:54 +00:00
Bruce Evans
201e0377ca Include <sys/mutex.h>. Don't depend on namespace pollution in <sys/vnode.h>.
Fixed a nearby style bug.  The include of vcoda.h used angle brackets and
was not used.
2003-10-05 07:44:45 +00:00
Jeff Roberson
2f05568aa8 - Check the XLOCK before inspecting v_data.
- Slightly rewrite the fsync loop to be more lock friendly.  We must
   acquire the vnode interlock before dropping the mnt lock.  We must
   also check XLOCK to prevent vclean() races.
 - Use LK_INTERLOCK in the vget() in ffs_sync to further prevent vclean()
   races.
 - Use a local variable to store the results of the nvp == TAILQ_NEXT
   test so that we do not access the vp after we've vrele()d it.
 - Add an XXX comment about UFS_UPDATE() not being protected by any lock
   here.  I suspect that it should need the VOP lock.
2003-10-05 07:16:45 +00:00
Jeff Roberson
98d7d155c1 - Apply a big giant lock around the namecache. This has been sitting in
my tree since BSDcon.
2003-10-05 07:13:50 +00:00
Jeff Roberson
bdcfcdecea - Fix an XXX. Check the error of vn_lock() in vflush(). Don't specify
LK_RETRY either; we don't want this vnode if it turns into another.
 - Remove the code that checks the mount point after acquiring the lock;
   we are guaranteed to either fail or get the vnode that we wanted.
2003-10-05 07:12:38 +00:00
Alan Cox
5a3970febf Assert that the containing vm object's lock is held in
vm_page_set_invalid().
2003-10-05 06:58:07 +00:00
Jeff Roberson
53938b4a86 - Skip over xvp if XLOCK is set. 2003-10-05 06:48:37 +00:00
Jeff Roberson
1b01fc6ef7 - Remove an incorrect XXX comment. This code does respect the XLOCK since
it uses vget() which will fail if the identity changes.
2003-10-05 06:47:56 +00:00
Jeff Roberson
b21ad23fff - Check the XLOCK before we inspect the vnode. 2003-10-05 06:46:45 +00:00
Jeff Roberson
4b1a52f639 - We don't need to cache_purge() in nfs_reclaim(), vclean() does it for us. 2003-10-05 06:46:02 +00:00
Jeff Roberson
4ab2c8bd52 - Check the XLOCK prior to inspecting v_data. 2003-10-05 06:44:53 +00:00
Jeff Roberson
055cfed702 - Check XLOCK prior to accessing v_data. 2003-10-05 06:43:30 +00:00
Jeff Roberson
6acdfd69be - File systems that wish to inspect the vnode contents or their private
v_data field before calling vget/vn_lock must check VI_XLOCK manually to
   be sure that v_data is still valid.  Implement this check in two places
   here.
2003-10-05 06:43:03 +00:00
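
A rough illustration of the rule in the entry above, as a hypothetical loop-body fragment (not the committed diff): take the vnode interlock, check VI_XLOCK before trusting v_data, then hand the interlock to vget() with LK_INTERLOCK so the identity cannot change underneath us.

```c
/*
 * Illustrative fragment, not the committed diff: body of a loop that
 * walks vnodes (vp and td come from the surrounding code) and wants
 * to look at the private v_data before locking the vnode properly.
 */
void *priv;

VI_LOCK(vp);
if ((vp->v_iflag & VI_XLOCK) != 0 || vp->v_data == NULL) {
	/* vclean() is reclaiming this vnode; v_data cannot be trusted. */
	VI_UNLOCK(vp);
	continue;
}
priv = vp->v_data;	/* safe: interlock held and VI_XLOCK clear */
if (vget(vp, LK_EXCLUSIVE | LK_INTERLOCK, td) != 0)
	continue;	/* identity changed while we waited; skip it */
```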
Bruce M Simpson
ea5c2ab712 Correct a typo on line 552 of revision 1.92 which was breaking GENERIC:
_FreeBSD_version should be __FreeBSD_version.
2003-10-05 06:06:09 +00:00
Bruce M Simpson
5be99846fc Remove magic numbers surrounding locking state in the sysctl module, and
replace them with more meaningful defines.
2003-10-05 05:38:30 +00:00
Jeff Roberson
45503a37dd - Rename vcanrecycle() to vtryrecycle() to reflect its new role.
- In vtryrecycle() try to vgonel the vnode if all of the previous checks
   passed.  We won't vgonel if someone has either acquired a hold or usecount
   or started the vgone process elsewhere.  This is because we may have been
   removed from the free list while we were inspecting the vnode for
   recycling.
 - The VI_TRYLOCK stops two threads from entering getnewvnode() and recycling
   the same vnode.  To further reduce the likelihood of this event, requeue
   the vnode on the tail of the list prior to calling vtryrecycle().  We can
   not actually remove the vnode from the list until we know that it's
   going to be recycled because other interlock holders may see the VI_FREE
   flag and try to remove it from the free list.
 - Kill a bogus XXX comment.  If XLOCK is set we shouldn't wait for it
   regardless of MNT_WAIT because the vnode does not actually belong to
   this filesystem.
2003-10-05 05:35:41 +00:00
Jeff Roberson
85311d4b59 - Don't cache_purge() in getnewvnode(). It's done in vclean(). With this
purge, the purge in vclean(), and the filesystem's purge, we had 3 purges
   per vnode.
 - Move the insmntque(vp, 0) to vclean() so that we may remove it from the
   two vgone() functions and reduce the number of lock operations required.
2003-10-05 02:48:04 +00:00
Jeff Roberson
7bfaa956e8 - Don't cache_purge() in cd9660_reclaim. vclean() does it for us so
this is redundant.
2003-10-05 02:45:36 +00:00
Jeff Roberson
5c014b9d6d - Don't cache_purge() in ufs_reclaim. vclean() does it for us so
this is redundant.
2003-10-05 02:45:00 +00:00
Jeff Roberson
fcde834c1f - Don't cache_purge() in ext2_reclaim. vclean() does it for us so
this is redundant.
2003-10-05 02:44:22 +00:00
Jeff Roberson
9c695a2697 - Don't cache_purge() in *_reclaim routines. vclean() does it for us so
this is redundant.
2003-10-05 02:43:30 +00:00
Bruce M Simpson
6bcda17f48 Update the page_req classes VM_ALLOC_NOOBJ and VM_ALLOC_ZERO.
Suggested by:	alc
2003-10-05 01:31:51 +00:00
Jeff Roberson
ce13b187e7 - Solve a LOR with the sync_mtx by using the VI_ONWORKLST flag to determine
whether or not the sync failed.  This could potentially get set between
   the time that we VOP_UNLOCK() and VI_LOCK(), but the race would harmlessly
   lead to the sync being delayed by an extra 30 seconds.  If we do not move
   the vnode it could cause an endless loop if it continues to fail to sync.
 - Use vhold and vdrop to stop the vnode from changing identities while we
   have it unlocked.  Other internal vfs lists are likely to follow this
   scheme.
2003-10-05 00:35:41 +00:00
Alan Cox
ab87e2fb83 Don't bother setting a page table page's valid field. It is unused and
not setting it is consistent with other uses of VM_ALLOC_NOOBJ pages.
2003-10-05 00:12:16 +00:00
Jeff Roberson
894fbf9769 - Move the xlock 'locking' code into vx_lock() and vx_unlock().
- Create a new function, vgonechrl(), which performs vgone for an in-use
   character device.  Move the code from vflush() that did this into
   vgonechrl().
 - Hold the xlock across the entirety of vgonel() and vgonechrl() so that
   at no point will an invalid vnode exist on any list without XLOCK set.
 - Move the xlock code out of vclean() now that it is in the vgone*()
   functions.
2003-10-05 00:02:41 +00:00
Alan Cox
6caf7e9fa4 Synchronize access to a vm page's valid field using the containing
vm object's lock.
2003-10-04 23:37:38 +00:00
Alan Cox
6ec2fca505 Eliminate some unnecessary uses of the vm page queues lock around the
vm page's valid field.  This field is being synchronized using the
containing vm object's lock.
2003-10-04 22:47:20 +00:00
Josef Karthauser
272487f507 Make it easier to run this code on RELENG_4.
Submitted by:	luoqi
2003-10-04 22:13:21 +00:00
Peter Wemm
b63331a498 Fix the apm problem for real. We leave the first 4K page for the bios to
work in, but we had it mapped read-only.  While this has always been the
case, the PG_PS enable hack hid it and the apm bios code ended up taking
advantage of it.
2003-10-04 22:04:54 +00:00
Alan Cox
874f526de6 Assert that the containing vm object's lock is held in
vm_page_zero_invalid().
2003-10-04 21:56:27 +00:00
Josef Karthauser
cd719b55bd Make it easier to run this code on RELENG_4.
Submitted by:	luoqi
2003-10-04 21:41:01 +00:00
Alan Cox
cbfbaad8be Synchronize access to a vm page's valid field using the containing
vm object's lock.
2003-10-04 21:35:48 +00:00
Alan Cox
ccf78b6895 Synchronize access to a vm page's valid field using the containing
vm object's lock.
2003-10-04 20:38:32 +00:00
Alan Cox
bf0da100d6 - Extend the scope the vm object lock to cover calls to
vm_page_is_valid().
 - Assert that the lock on the containing vm object is held in
   vm_page_is_valid().
2003-10-04 19:23:29 +00:00
Alan Cox
49c06616ae Synchronize access to a vm page's valid field using the containing
vm object's lock.
2003-10-04 19:13:27 +00:00
Ruslan Ermilov
829340b175 Retired the "most" and "installmost" targets -- they just
do not have a chance to work nowadays as we have a lot of
internal libraries in lib/.

Discussed with:	marcel, wollman
2003-10-04 18:53:38 +00:00
Warner Losh
e2b40c9599 any -> ? for new entry (to allow time for people to upgrade their pccardd) 2003-10-04 18:44:29 +00:00
Warner Losh
7c9b63d96a Ooops. Committed sin number 1: updating the code w/o updating the comments.
Update the comments too.
2003-10-04 18:43:21 +00:00
Warner Losh
9cd2bd7564 I've been burned about half a dozen times by the old PAO syntax for
'any' interrupt.  There's no reason not to be liberal here and accept
the PAO syntax.

MFC after:	2 weeks
2003-10-04 18:40:36 +00:00
Jeff Roberson
6f4b0863e0 - In sched_sync() test our preconditions prior to dropping the sync_mtx.
This is so that we may grab the interlock while still holding the
   sync_mtx.  We have to VI_TRYLOCK() because in all other cases the lock
   order runs the other way.
 - If any of the preconditions is not met, reinsert the vp into the
   list for the next second.
 - We don't need to panic if we fail to sync here because each FSYNC
   function handles this case.  Removing this redundant code also
   simplifies locking.
2003-10-04 18:03:53 +00:00
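
The lock-ordering point above can be pictured with a hypothetical fragment (not the committed sched_sync() code): because the established order elsewhere is vnode interlock before sync_mtx, only a try-lock of the interlock is safe while sync_mtx is held.

```c
/*
 * Illustrative fragment, not the committed sched_sync() code.
 * sync_mtx is held; the usual order is "vnode interlock, then
 * sync_mtx", so a blocking VI_LOCK() here would risk a LOR.
 */
mtx_assert(&sync_mtx, MA_OWNED);
if (VI_TRYLOCK(vp) == 0) {
	/*
	 * Interlock unavailable: leave the vnode queued for the next
	 * second's pass instead of sleeping for the lock.
	 */
	continue;
}
/* Preconditions checked and interlock held; safe to hand off for sync. */
```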
Jeff Roberson
b2b64a90d2 - Consistently set sopt_dir.
Pointed out by:		pete@isilon.com
2003-10-04 17:41:59 +00:00
Jeff Roberson
8ec82641d8 - Change a lame iterative algorithm to a constant time algorithm. Remove
the XXX that complains about it as well.

Submitted by:	ThomasWuerfl@gmx.de
2003-10-04 17:41:13 +00:00
Jeff Roberson
3c92540393 - Set the sopt_dir member of the sockopt structure, otherwise, this parameter
will not actually be set even though we're calling sosetopt.  sosetopt
   calls down to a single ctloutput function if the name or level is
   implemented by a specific protocol.

Submitted by:	pete@isilon.com
2003-10-04 17:37:51 +00:00
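
The sopt_dir issue in this entry (and in b2b64a90d2 above) can be pictured with a small hedged sketch; the option and helper chosen here are arbitrary illustrations, not what the commits touched. With a zeroed sockopt, sopt_dir defaults to SOPT_GET, so a protocol's ctloutput routine never applies the value.

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/socket.h>
#include <sys/socketvar.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/*
 * Illustrative sketch only: build a struct sockopt by hand for an
 * in-kernel sosetopt() call.  A zeroed sopt_dir means SOPT_GET, so
 * SOPT_SET has to be filled in explicitly for the set to take effect.
 */
static int
example_set_nodelay(struct socket *so)
{
	struct sockopt sopt;
	int one = 1;

	bzero(&sopt, sizeof(sopt));
	sopt.sopt_dir = SOPT_SET;	/* the field the commits above fix */
	sopt.sopt_level = IPPROTO_TCP;
	sopt.sopt_name = TCP_NODELAY;
	sopt.sopt_val = &one;
	sopt.sopt_valsize = sizeof(one);
	sopt.sopt_td = NULL;		/* NULL: sopt_val is a kernel address */
	return (sosetopt(so, &sopt));
}
```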