2.3.0 -> 2.3.1 changes, but I seem to recall there are certain
"issues" with 2.3.1 (I'm not sure whether it's just pppd or the whole
lot; I haven't dug that far yet). The present pppd seems to work with
it just fine for the time being.
Among the changes: zlib (aka deflate, the LZ77-based scheme used by
gzip) compression is now implemented in addition to the original
compress(1)-style LZW.
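For the curious, deflate is the same algorithm userland reaches through
the zlib library; a minimal sketch of the compression primitive involved
(illustrative only, not pppd's actual CCP code):

    /*
     * Minimal sketch of the deflate primitive via zlib's one-shot API.
     * This only illustrates the algorithm pppd now supports; it is not
     * the CCP/deflate code in pppd or the kernel.
     */
    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int
    main(void)
    {
        const char *src = "a string with redundancy, redundancy, redundancy";
        Bytef dst[256];
        uLongf dstlen = sizeof(dst);

        /* compress() wraps deflateInit()/deflate()/deflateEnd(). */
        if (compress(dst, &dstlen, (const Bytef *)src, strlen(src)) != Z_OK) {
            fprintf(stderr, "compress failed\n");
            return (1);
        }
        printf("%lu -> %lu bytes\n", (u_long)strlen(src), (u_long)dstlen);
        return (0);
    }

A streaming consumer such as a PPP compressor would drive the
deflateInit()/deflate()/deflateEnd() sequence directly rather than the
one-shot wrapper.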
same syscall number as NetBSD/OpenBSD. The getpgid() implementation
originally came from NetBSD (I think), but it's basically a
cut/paste/edit of the other simple get*() syscalls.
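For reference, such a syscall is only a handful of lines; a sketch in
the 4.4BSD style (the retval convention and the args struct are
era-typical assumptions, not the committed diff):

    /*
     * Sketch of a simple get*() syscall in the 4.4BSD style.  The
     * argument/return conventions varied between releases, so treat
     * this as illustrative only.
     */
    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/proc.h>

    struct getpgid_args {
        pid_t   pid;
    };

    int
    getpgid(struct proc *curp, struct getpgid_args *uap, register_t *retval)
    {
        struct proc *p;

        if (uap->pid == 0)
            p = curp;                   /* pid 0 means the caller itself */
        else if ((p = pfind(uap->pid)) == NULL)
            return (ESRCH);
        *retval = p->p_pgrp->pg_id;
        return (0);
    }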
VM system's usage of the kernel lock (lockmgr) code. This is a
first-pass implementation and is expected to evolve as needed. The API
for the lock manager code has not changed, but the underlying
implementation has changed significantly. This change should not
materially affect our current SMP or UP code unless non-standard
parameters are used.
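Since the API is unchanged, existing consumers keep the familiar
acquire/release pattern; a minimal sketch of lockmgr usage under the
4.4BSD-derived interface (the lock name and priority here are
illustrative):

    /*
     * Sketch of the unchanged lockmgr consumer API.  The underlying
     * implementation is what this commit reworks; callers still look
     * roughly like this.
     */
    #include <sys/param.h>
    #include <sys/proc.h>
    #include <sys/lock.h>

    static struct lock example_lock;

    void
    example_init(void)
    {
        /* PVM priority and the "explck" wmesg are illustrative. */
        lockinit(&example_lock, PVM, "explck", 0, 0);
    }

    void
    example_use(struct proc *p)
    {
        lockmgr(&example_lock, LK_EXCLUSIVE, NULL, p);
        /* ... touch the structure the lock protects ... */
        lockmgr(&example_lock, LK_RELEASE, NULL, p);
    }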
This version:
1/ avoids the potential page fault Garrett's change introduced (I hit one)
2/ removes compiler warnings
Also fix the tunable scheduling quantum to return a better error code when
fed a bad argument.
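The error-code fix amounts to validating the new value in the sysctl
handler before installing it; a hedged sketch (the handler shape is the
standard SYSCTL_PROC idiom; the handler name, the global, and the bound
are assumptions):

    /*
     * Sketch of a sysctl handler that rejects a bad quantum with
     * EINVAL instead of silently accepting it.  Names and the exact
     * validity check are illustrative.
     */
    #include <sys/param.h>
    #include <sys/sysctl.h>

    extern int quantum;     /* the tunable, assumed to be a global */

    static int
    sysctl_kern_quantum(SYSCTL_HANDLER_ARGS)
    {
        int error, new_val = quantum;

        error = sysctl_handle_int(oidp, &new_val, 0, req);
        if (error != 0 || req->newptr == NULL)
            return (error);
        if (new_val < 1)            /* reject a nonsensical quantum */
            return (EINVAL);
        quantum = new_val;
        return (0);
    }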
Could somebody please update the other drivers so that SCSI_RSVD (0x18)
is handled just like SCSI_BUSY (0x08)?
There's no need for extra state, so we use XS_BUSY for SCSI_RSVD too.
PR: 4257
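In a driver's status decoding the request is one more case label; a
sketch of the shape of the change (the surrounding function and the
default mapping are hypothetical driver context):

    /*
     * Hypothetical driver context: decode the SCSI status byte into
     * the generic xs->error codes.  The point of the change: treat a
     * reservation conflict exactly like busy.
     */
    #include <scsi/scsi_all.h>      /* old pre-CAM include paths */
    #include <scsi/scsiconf.h>

    static void
    decode_status(struct scsi_xfer *xs, u_char status)
    {
        switch (status) {
        case SCSI_OK:
            xs->error = XS_NOERROR;
            break;
        case SCSI_BUSY:             /* 0x08 */
        case SCSI_RSVD:             /* 0x18: reservation conflict */
            xs->error = XS_BUSY;    /* no extra state needed */
            break;
        default:
            xs->error = XS_DRIVER_STUFFUP;
            break;
        }
    }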
socket addresses in mbufs. (Socket buffers are the one exception.) A
number of kernel APIs needed to be fixed in order to make this happen.
Also, fix three protocol families which kept PCBs in mbufs to malloc
them instead. Delete some old compatibility cruft while we're at it,
and add some new routines in the in_cksum family.
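Under the new convention a sockaddr lives in malloc'ed storage of type
M_SONAME rather than in an mbuf; roughly (a sketch of the idiom, not a
specific hunk from the commit):

    /*
     * Sketch: copy a sockaddr into malloc'ed storage (type M_SONAME)
     * instead of wrapping it in an mbuf.  Illustrative only.
     */
    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/malloc.h>
    #include <sys/socket.h>

    struct sockaddr *
    copy_sockaddr(struct sockaddr *sa)
    {
        struct sockaddr *sa2;

        MALLOC(sa2, struct sockaddr *, sa->sa_len, M_SONAME, M_WAITOK);
        bcopy(sa, sa2, sa->sa_len);
        return (sa2);
    }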
- interrupt-driven printing now works (nlpt)
- Rearrangement of bus-related functions into ppb_base/ppbconf
- Addition of ieee1284 interface functions, preliminary parallel-port
PnP support
Submitted by: Nicolas Souchu <Nicolas.Souchu@prism.uvsq.fr>
This mod makes sure that the Natoma chipset is set into the correct
mode. On my P6DNF, when booting a UP kernel, I see a substantial
improvement in the latency of certain operations. Curiously, the cache
hit latency appears to improve the most, per lmbench's lat_mem_rd.
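Chipset mode fixups like this are typically a read-modify-write of a
PCI configuration register on the host bridge. Purely as a hypothetical
sketch (the register offset and bit are invented, and the
pci_conf_read()/pci_conf_write() helpers follow the old pci code as
best I recall; this is NOT the actual Natoma programming):

    /*
     * Hypothetical sketch of a chipset mode fixup: read-modify-write a
     * config register on the host bridge.  Offset and bit are invented
     * for illustration.
     */
    #include <pci/pcivar.h>         /* old pci(4) include path */

    #define NATOMA_MODE_REG 0x50    /* invented offset */
    #define NATOMA_MODE_BIT 0x01    /* invented bit */

    static void
    fixup_natoma(pcici_t tag)
    {
        u_long val;

        val = pci_conf_read(tag, NATOMA_MODE_REG);
        if ((val & NATOMA_MODE_BIT) == 0)
            pci_conf_write(tag, NATOMA_MODE_REG, val | NATOMA_MODE_BIT);
    }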
We now tsleep() in kthread_init() between start_init()
and prepare_usermode() while waiting for ALL the idle_loop()
processes to come online.
Debugged & tested by: "Thomas D. Dean" <tomdean@ix.netcom.com>
Reviewed by: David Greenman <dg@root.com>
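The handshake is the classic tsleep()/wakeup() pairing: kthread_init()
sleeps on a channel until the last idle_loop() instance checks in. A
sketch (the counter, wait channel, and CPU-count names are
illustrative, not the committed code):

    /* Illustrative names; not the committed code. */
    static int idle_ready;          /* bumped by each idle_loop() */
    extern int ncpus;               /* illustrative CPU count */

    /* In kthread_init(), between start_init() and prepare_usermode(): */
    while (idle_ready < ncpus)
        tsleep(&idle_ready, PWAIT, "idlewt", 0);

    /* In idle_loop(), once this instance is up: */
    idle_ready++;
    wakeup(&idle_ready);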