and bump all of the taskqueue swi's to 6. This gives callouts higher
priority than taskqueue tasks and gives all taskqueue tasks the same
priority.
Discussed with: bde
than a stack-limited list. This removes the artificial limit on s/g list
size.
Add Intel Pro100Lan56 card.
Also integrate changes from Carlos Velasco. Only attach if we're a
network device (to filter out the serial devices). Also, increment
vpmatch if we match to conform to the pccard match function API.
Use bus space rather than direct inb/outb. Minor style changes while
I'm here. Extremely preliminary support for siliconix ethernet cards
(but more work is required).
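A rough sketch of the bus space conversion mentioned above; the softc
layout, register offsets, and function names here are made up for
illustration and are not taken from the driver:

    #include <sys/param.h>
    #include <sys/bus.h>
    #include <machine/bus.h>

    struct foo_softc {
        bus_space_tag_t     sc_bst;     /* saved at resource allocation time */
        bus_space_handle_t  sc_bsh;
    };

    static uint8_t
    foo_read_status(struct foo_softc *sc)
    {
        /* Replaces a raw inb(iobase + offset). */
        return (bus_space_read_1(sc->sc_bst, sc->sc_bsh, 0x01));
    }

    static void
    foo_write_cmd(struct foo_softc *sc, uint8_t val)
    {
        /* Replaces a raw outb(iobase + offset, val). */
        bus_space_write_1(sc->sc_bst, sc->sc_bsh, 0x00, val);
    }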
The hack for setting the bus has been moved down into the cbb driver.
I've been running without this hack in my tree for so long I had
forgotten that I'd removed it :-). Please let me know if this causes
difficulty for your laptop.
starting value. This is more pedantically correct (since the handle
isn't always identical to the start of the resource) and also doesn't
access the innards of struct resource directly (which I forbid in my
tree). We need to do this for all resource types, not just ioport.
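As a small sketch of the accessor-based approach, assuming a driver
that has already allocated its resource with bus_alloc_resource(); the
function name and return type here are era-dependent assumptions, only
rman_get_start() itself is the real accessor:

    #include <sys/param.h>
    #include <sys/bus.h>
    #include <sys/rman.h>

    static u_long
    foo_port_base(struct resource *res)
    {
        /*
         * Use the accessor for the starting value instead of reaching
         * into struct resource; the handle is not guaranteed to equal
         * the start of the resource.
         */
        return (rman_get_start(res));
    }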
Reviewed by: njl
Now, when trying to mount a file system in read-only mode, it tries to
open a device for writing, to be able to update to read-write mode
later. Ehh.
Discussed with: phk
so_gencnt, numopensockets, and the per-socket field so_gencnt. Annotate
that this might be better done with atomic operations.
Annotate what accept_mtx protects.
to warn about attempts to sleep in the I/O path. This change pushes the
definition and use of 'mymutex' behind #ifdef WITNESS to avoid the cost
in non-debugging cases. This results in a clear 0.22% performance win for
512 byte and 1k I/O tests on my SMP test box. Not much, but every bit
counts.
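Roughly the shape of the change, as a hedged sketch; 'mymutex' is the
lock named above, but the surrounding function and the initialization
details are assumptions:

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    #ifdef WITNESS
    static struct mtx mymutex;
    MTX_SYSINIT(mymutex, &mymutex, "no sleeping in I/O path", MTX_DEF);
    #endif

    static void
    io_start(void)
    {
    #ifdef WITNESS
        /* Held only so WITNESS can flag sleeps in the I/O path. */
        mtx_lock(&mymutex);
    #endif
        /* ... issue the I/O ... */
    #ifdef WITNESS
        mtx_unlock(&mymutex);
    #endif
    }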
associated with performing a wakeup on the socket buffer:
- When performing an sbappend*() followed by a so[rw]wakeup(), explicitly
acquire the socket buffer lock and use the _locked() variants of both
calls. Note that the _locked() sowakeup() versions unlock the mutex on
return. This is done in uipc_send(), divert_packet(), mroute
socket_send(), raw_append(), tcp_reass(), tcp_input(), and udp_append().
- When the socket buffer lock is dropped before a sowakeup(), remove the
explicit unlock and use the _locked() sowakeup() variant. This is done
in soisdisconnecting(), soisdisconnected() when setting the can't send/
receive flags and dropping data, and in uipc_rcvd(), which adjusts
back-pressure on the sockets.
For UNIX domain sockets running mpsafe with a contention-intensive SMP
mysql benchmark, this results in a 1.6% query rate improvement due to
reduced mutex costs.
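As a rough illustration of the first pattern above (a sketch only: the
delivery function is hypothetical, and the exact sbappend*_locked()
signature differs between branches):

    #include <sys/param.h>
    #include <sys/mbuf.h>
    #include <sys/socket.h>
    #include <sys/socketvar.h>

    static void
    foo_deliver(struct socket *so, struct mbuf *m)
    {
        SOCKBUF_LOCK(&so->so_rcv);
        sbappend_locked(&so->so_rcv, m);    /* append under the lock */
        sorwakeup_locked(so);               /* wakes readers, drops so_rcv lock */
    }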
The overhead of unconditionally allocating TIDs (and likewise,
unconditionally deallocating them), is amortized across multiple
thread creations by the way UMA makes it possible to have type-stable
storage.
Previously the cost was kept down by having threads created as part
of a fork operation use the process' PID as the TID. While this had
some nice properties, it also introduced complexity in the way TIDs
were allocated. Most importantly, by using the type-stable storage
that UMA gives us this was also unnecessary.
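To sketch the mechanism: UMA runs init/fini only when an item enters or
leaves the zone, not on every thread allocation, so the TID travels with
the type-stable storage and its allocation cost is amortized. Here
tid_alloc()/tid_free() stand in for whatever allocator backs the TID
space, and the uma_zcreate() arguments are abbreviated:

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <vm/uma.h>

    static lwpid_t  tid_alloc(void);
    static void     tid_free(lwpid_t tid);

    static int
    thread_init(void *mem, int size, int flags)
    {
        struct thread *td = mem;

        /* Paid once per zone item, reused across many thread creations. */
        td->td_tid = tid_alloc();
        return (0);
    }

    static void
    thread_fini(void *mem, int size)
    {
        struct thread *td = mem;

        tid_free(td->td_tid);
    }

    /* thread_zone = uma_zcreate("THREAD", sizeof(struct thread),
       thread_ctor, thread_dtor, thread_init, thread_fini, ...); */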
This change affects how core dumps are created and in particular how
the PRSTATUS notes are dumped. Since we don't have a thread with a
TID equalling the PID, we now need a different way to preserve the
previous behavior. We do this by having the given thread (i.e.
the thread passed to the core dump code in td) dump its state first
and fill in pr_pid with the actual PID. All other threads will have
pr_pid contain their TIDs. The upshot of all this is that the debugger
will now likely select the right LWP (=TID) as the initial thread.
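Schematically (a sketch only: note_prstatus() is a hypothetical helper
for emitting a single PRSTATUS note, not the real elfcore interface):

    #include <sys/param.h>
    #include <sys/proc.h>

    static void note_prstatus(struct thread *td, pid_t pid);

    static void
    dump_prstatus_notes(struct thread *td)
    {
        struct proc *p = td->td_proc;
        struct thread *td2;

        /* The requesting thread goes first and carries the real PID... */
        note_prstatus(td, p->p_pid);

        /* ...all other threads are keyed by their TIDs. */
        FOREACH_THREAD_IN_PROC(p, td2) {
            if (td2 != td)
                note_prstatus(td2, td2->td_tid);
        }
    }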
Credits to: julian@ for spotting how we can utilize UMA.
Thanks to: all who provided julian@ with test results.