Tell vop_strategy_pre() to use this instead.
- Ignore B_CLUSTER bufs. Their component bufs are locked, but the cluster buf
itself doesn't really exist, so it doesn't have to be. This isn't ideal, but it
is safe.
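A minimal sketch of the shape this takes, assuming vop_strategy_pre() receives
the strategy VOP's argument structure; the diagnostic is illustrative, not the
real code:

    void
    vop_strategy_pre(struct vop_strategy_args *ap)
    {
            struct buf *bp = ap->a_bp;

            /* Cluster bufs are synthetic; their component bufs hold the locks. */
            if ((bp->b_flags & B_CLUSTER) != 0)
                    return;

            /* Otherwise the buf itself must be locked for the I/O. */
            if (BUF_REFCNT(bp) < 1)
                    printf("VOP_STRATEGY: buf %p is not locked\n", bp);
    }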
vm_mmap() as well as the GETATTR etc.
- If the handle is a vnode in vm_mmap(), assert that it is locked.
- Wiggle Giant around a little to account for the extra vnode operation.
- Cache a pointer to the vnode's object in the buf (see the sketch after this
list).
- Hold a reference to that object in addition to the vnode's reference, just
to be consistent.
- Clean up code that got the object indirectly through the vp and VOP calls.
This fixes at least one case where we were calling GETVOBJECT without a lock.
It also avoids an expensive layered call at the cost of another pointer in
struct buf.
- Grab the vnode object early in exec when we still have the vnode lock.
- Cache the object in the image_params.
- Make use of the cached object in imgact_*.c
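Hedged sketches of the pieces above; b_object and imgp->object follow the
names in this log, while the surrounding code is illustrative:

    /* In vm_mmap(), when the handle is a vnode: */
    ASSERT_VOP_LOCKED(vp, "vm_mmap");

    /* When a buf is associated with a vnode, cache the vnode's VM
     * object in the buf and take an extra reference on it: */
    bp->b_object = vp->v_object;
    if (bp->b_object != NULL)
            vm_object_reference(bp->b_object);

    /* Early in exec, while the vnode lock is still held, fetch the
     * object once and stash it for the imgact_*.c consumers: */
    if (VOP_GETVOBJECT(imgp->vp, &imgp->object) == 0)
            vm_object_reference(imgp->object);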
- Disable original vop_strategy lock specification.
- Switch to the new vop_strategy_pre for lock validation.
VOP_STRATEGY requires only that the buf is locked UNLESS the block numbers need
to be translated. There may be other reasons, but as long as the underlying
layer uses a VOP to perform the operations they will be caught later.
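In vnode_if.src terms this is roughly the following entry, with the old '#%'
lock specification dropped and the new '#!' line naming the validation
function (layout schematic):

    #
    #! strategy     pre     vop_strategy_pre
    #
    vop_strategy {
            IN struct vnode *vp;
            IN struct buf *bp;
    };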
in the VOP inlines. This is intended to replace the simple locking
specifications for calls that have more complicated behavior such as rename and
lookup.
The syntax of the new entries is:
#! name pre/post function
If the function is marked 'pre' it is executed prior to calling the VOP and
takes a pointer to a struct vop_{name}_args as its only parameter.
If the function is marked 'post' it is executed after the VOP call and takes
a pointer to a struct vop_{name}_args as its first parameter and the integer
return value from the VOP as the second parameter.
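For a hypothetical VOP named 'foo', the entries and the functions they name
would therefore look like:

    #! foo  pre     vop_foo_pre
    #! foo  post    vop_foo_post

    void vop_foo_pre(struct vop_foo_args *ap);
    void vop_foo_post(struct vop_foo_args *ap, int rc);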
now it should support all the instructions of the old ipfw.
Fix some bugs in the user interface, /sbin/ipfw.
Please check this code against your rulesets, so I can fix the
remaining bugs (if any; I think they will be mostly in /sbin/ipfw).
Once we have done a bit of testing, this code is ready to be MFC'ed,
together with a bunch of other changes (glue to ipfw, and also the
removal of some global variables) which have been in -current for
a couple of weeks now.
MFC after: 7 days
internal PHY on the 3Com 3C905B and 3C905C parts; however, I've rigged it so
that xlphy (aka exphy) takes precedence for the time being.
If people try this with their xl cards and decide that it's a better choice,
we can switch this later.
This is the PHY used in various iMacs and possibly other GMAC-equipped
Macintoshes with 10/100 PHYs (the ones with 10/100/1000 appear to use brgphy).
Obtained from: NetBSD
- Tell IS_LOCKING_VFS to ignore block and character devices. specfs vnodes
aren't locked for I/O, and they just generate lots of false positives (see the
sketch after this list).
- Add newlines to the badlock prints.
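A sketch of the check this implies, assuming IS_LOCKING_VFS expands to a
predicate over the vnode (the body shown is illustrative):

    /* Skip specfs vnodes: they aren't locked for I/O and would
     * only generate false positives. */
    if (vp->v_type == VCHR || vp->v_type == VBLK)
            return (0);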
we just have to deal with the kstack when told to. We do not have a
UMA-managed cache for the proc struct and its associated upage yet. So,
go back to the old lazy mechanism. Note that if UMA destroys pages that
used to contain proc structures, we'll lose the corresponding upage
forever. (zones never did this - once a page was allocated, it stayed
attached to the proc zone forever)
driver. I tried a few obvious experiments, but was unable to make
the 3c996B-T generate correct UDP checksums for transmitted fragmented
packets. I'm not so sure the device is even capable of it.
This fixes NFS over UDP (see the sketch below).
MFC after: 1 day
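The fix presumably amounts to not advertising CSUM_UDP in the interface's
hardware-assist flags, along these lines (a sketch; the macro name is an
assumption):

    /* Let the hardware checksum IP headers and TCP, but leave UDP
     * checksums to software. */
    #define BGE_CSUM_FEATURES       (CSUM_IP | CSUM_TCP)    /* no CSUM_UDP */

    ifp->if_hwassist = BGE_CSUM_FEATURES;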
queue lock (revision 1.33 of vm/vm_page.c removed them).
o Make the free queue lock a spin lock because it's sometimes acquired
inside of a critical section.
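Sketch of the initialization this implies, assuming the four-argument
mtx_init() of this era:

    /* A spin mutex may be acquired inside a critical section. */
    mtx_init(&vm_page_queue_free_mtx, "vm page free queue", NULL, MTX_SPIN);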
These functions are always called on new memory, so it cannot already be set
up; don't bother testing for that.
(This was left over from before we used UMA (which is cool))
of the KVA space's size in addition to the amount of physical memory
and reduce it by a factor of two.
Under the old formula, our reservation amounted to one kernel map entry
per virtual page in the KVA space on a 4GB i386.
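A hedged illustration of the shape of the new formula (identifiers
hypothetical):

    /* Bound the kernel map entry reservation by both physical memory
     * and KVA size, then halve it; the old bound tracked physical
     * memory alone, i.e. roughly one entry per KVA page on a 4GB i386. */
    max_kmapent = min(phys_pages, kva_pages) / 2;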