machine will result in approximately a 4.2% loss of performance (buildworld)
and approximately a 5% reduction in power consumption (when idle). Add XXX
note on how to really make hlt work (send an IPI to wakeup HLTed cpus on
a thread-schedule event? Generate an interrupt somehow?).
in the .h file. Make it static __inline to make sure that it doesn't
wind up defined in any object files.
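A minimal sketch of the idiom (the function name and body here are invented
for illustration, not taken from this commit):

    /*
     * In the header: static __inline lets every file that includes this
     * inline the body, without the compiler emitting an out-of-line
     * definition of the function in any of the resulting object files.
     */
    static __inline int
    foo_flag_isset(int flags, int bit)
    {
            return ((flags & bit) != 0);
    }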
Also, fix a typo that said null_do_attach instead of null_do_probe.
1.93; henning; MA401RA wi card
1.92; millert; elsa XI-325 wi card
1.91; fgsch; gemplus cpr400 smartcard reader
1.90; mickey; Nokia c110/c111 is prism2 card
1.89-1.86 (similar to what we do already)
The value we use is still questionable for 440BX chipsets.
- When flushing the TLB just toggle the bit in question instead of writing
a magic value that could trash other unrelated bits.
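As an illustration of the "toggle, don't overwrite" approach: one common way
to flush global TLB entries on i386 is to briefly clear only CR4.PGE. This is
a sketch assuming the rcr4()/load_cr4() helpers from <machine/cpufunc.h> and
CR4_PGE from <machine/specialreg.h>, not necessarily the code touched here:

    static __inline void
    tlb_flush_global(void)
    {
            u_int cr4;

            cr4 = rcr4();                   /* read the current CR4 */
            load_cr4(cr4 & ~CR4_PGE);       /* clearing PGE flushes the TLB */
            load_cr4(cr4);                  /* restore; other bits untouched */
    }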
the filelist_lock and check nfiles. This closes a race where we had to
unlock the filedesc to re-lock the filelist_lock.
Reported by: David Xu
Reviewed by: bde (mostly)
after a panic which is not an interrupt thread, or the thread which
caused the panic. Also, remove panicstr checks from msleep() and from
cv_wait() in order to allow threads to go to sleep and yield the cpu
to the panicking thread, or to an interrupt thread which might
be doing the crashdump.
Reviewed by: jhb (and it was mostly his idea too)
support creation times such as UFS2) to the value of the
modification time if the value of the modification time is older
than the current creation time. See utimes(2) for further details.
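For illustration, a plain utimes(2) call such as the one below (timestamp
arbitrary) would now also pull an existing UFS2 creation time back when the
new modification time predates it:

    #include <sys/time.h>

    struct timeval tv[2];

    tv[0].tv_sec = tv[1].tv_sec = 946684800;    /* 2000-01-01 UTC */
    tv[0].tv_usec = tv[1].tv_usec = 0;
    (void)utimes("/some/file", tv);             /* tv[0]=access, tv[1]=mod */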
Sponsored by: DARPA & NAI Labs.
administrator to define certain properties of new devfs nodes before
they become visible to the userland. Both static (e.g., /dev/speaker)
and dynamic (e.g., /dev/bpf*, some removable devices) nodes are
supported. Each DEVFS mount may have a different ruleset assigned to
it, permitting different policies to be implemented for things like
jails.
Approved by: phk
system lockups (infinite loops) when a zero-length RPC is received.
Linux clients will sometimes send zero-length RPC requests.
Reorganize the use of recm in the loop.
Cc: security@freebsd.org
Submitted by: Mike Junk <junk@isilon.com>
MFC after: 3 days
vm_page_zero_idle() instead of partially duplicated implementations.
In particular, this change guarantees that the number of free pages
in the free queue(s) matches the global free page count when Giant
is released.
Submitted by: peter (via his p4 "pmap" branch)
of them, and couple them by always performing all operations on all
present IOMMUs. This is required because with the current API there
is no way to determine on which bus a busdma operation is performed.
While here, clean up the iommu code a bit.
This should be a step in the direction of allowing some of the larger machines
to work; tests have shown that there still seem to be problems left.
obtain the send lock, we would bogusly try to unlock the send lock before
returning, resulting in a panic. Instead, only unlock the send lock if
nfs_sndlock() succeeds and nfs_reconnect() fails.
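In other words, the error path should look roughly like this (a sketch of the
control flow described above, not the literal diff):

    error = nfs_sndlock(rep);
    if (error == 0) {
            error = nfs_reconnect(rep);
            if (error != 0)
                    nfs_sndunlock(rep);     /* only drop a lock we hold */
    }
    /* if nfs_sndlock() itself failed there is nothing to unlock */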
MFC after: 3 days
Sponsored by: The Weather Channel
running with interrupts disabled, other cpus locked down, and only
making a temporary local mapping that we immediately back out again.
Tested by: gallatin
using a udp6 socket without bind(2)ing.
- fbsd4/430 reported from the FreeBSD team.
- this fix is different from the fix reported in the above PR. I think
this is better, but we need some testing.
Obtained from: KAME
MFC after: 3 weeks
semicolons from the end of macros:
#define FOO() bar(a,b,c);
becomes
#define FOO() bar(a,b,c)
Thus requiring the semicolon in the invocation of FOO. This is much
cleaner syntax and more consistent with expectations when writing
function-like things in source.
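A call site now reads like any other statement:

    FOO();          /* the caller supplies the terminating semicolon */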
With both peril-sensitive sunglasses and flame-proof undies on, tighten
up some types, and work around some warnings generated by this. There
are some _horrible_ const/non-const issues in this code.
This represents the original standardization of the following functions
and headers:
popen()
<regex.h>: regcomp(), regexec(), regerror(), regfree()
<fnmatch.h>: fnmatch()
getopt(), optarg, optind, opterr, optopt
<glob.h>: glob()
<wordexp.h>: wordexp(), wordfree()
confstr()
o Introduce m_getcl(), which allocates an mbuf and a cluster in one shot.
o Introduce MBP_PERSIST and MBP_PERSISTENT control bits to mb_alloc();
MBP_PERSIST means "if you can allocate, then keep the cache lock
held on exit," and MBP_PERSISTENT means "a cache lock is already held
on entry, so allocate from the specified (already locked) cache."
They may be used in combination.
o m_getcl() uses the MBP_PERSIST/MBP_PERSISTENT interface so that it
doesn't drop the cache lock in between the mbuf and cluster allocations.
o m_getm(), which takes a size and allocates an mbuf + cluster "best fit"
chain, has been moved from uipc_mbuf.c to subr_mbuf.c and shown how to
use MBP_PERSIST/MBP_PERSISTENT to attempt to do a grouped allocation
without dropping the cache lock in between.
Why this is good: many fewer bus-locked lock acquires/drops when they're
not needed. Also, prototype for m_getcl():
struct mbuf * m_getcl(int how, short type, int flags);
"how" and "type" are self-explanatory. "flags" may be M_PKTHDR, in
which case m_getcl() will make the mbuf a pkthdr-mbuf.
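Typical usage would then be something like (an illustrative sketch, not code
from this commit):

    struct mbuf *m;

    /* One call gets an mbuf with a cluster attached and, because of
     * M_PKTHDR, initializes it as a packet header mbuf. */
    m = m_getcl(M_DONTWAIT, MT_DATA, M_PKTHDR);
    if (m == NULL)
            return (ENOBUFS);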
While I'm in subr_mbuf.c:
o Every exported routine now has a nice comment with a description of
the expected arguments. Eventually, mbuf(9) needs to be re-vamped
but there's still more code to write/finalize before I get to that.
o internal macros have been changed a bit.
o consistently use 'short' for "type." This somehow slipped through
before (that 'type' was sometimes declared as int).
Alfred has been pushing for the MBP_PERSIST{,ENT} thing for almost a
year now. Luigi asked for m_getcl(), and will probably MFC that
part of this commit.
TODO [Related]: teach mb_free() about MBP_PERSIST{,ENT}.
because the previous interface handle gets freed when the config
number is set. This fixes a problem where memory could be accessed
after it was freed when the interface was ifconfig'd up.
Reviewed by: n_hibma
one out of a block cipher. This has 2 advantages:
1) The code is _much_ simpler
2) We aren't committing our security to one algorithm (much as we
may think we trust AES).
While I'm here, make an explicit reseed do a slow reseed instead
of a fast; this is in line with what the original paper suggested.
just because you leave your session idle.
Also, put in a fix for 64-bit architectures (to be revised).
In detail:
ip_fw.h
* Reorder fields in struct ip_fw to avoid alignment problems on
64-bit machines. This only masks the problem; I am still not
sure whether I am doing something wrong in the code or there
is a problem elsewhere (e.g. different alignment of structures
between userland and kernel because of pragmas etc.)
* added fields in dyn_rule to store ack numbers, so we can
generate keepalives when the dynamic rule is about to expire
ip_fw2.c
* use a local function, send_pkt(), to generate TCP RST for Reset rules;
* save about 250 bytes by cleaning up the various snprintf()
in ipfw_log() ...
* ... and use twice as many bytes to implement keepalives
(this seems to be working, but I have not tested it extensively).
Keepalives are generated once every 5 seconds for the last 20 seconds
of the lifetime of a dynamic rule for an established TCP flow. The
packets are sent to both sides, so if at least one of the endpoints
is responding, the timeout is refreshed and the rule will not expire.
You can disable this feature with
sysctl net.inet.ip.fw.dyn_keepalive=0
(the default is 1, to have them enabled).
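The timing policy boils down to something like the following sketch (the
dyn_keepalive sysctl variable and time_second are real; the q->expire and
q->last_ka fields and the send_keepalive() helper are made-up names for
illustration):

    #define DYN_KEEPALIVE_INTERVAL  5       /* seconds between probes */
    #define DYN_KEEPALIVE_PERIOD    20      /* probe window before expiry */

    if (dyn_keepalive != 0 &&
        q->expire - time_second < DYN_KEEPALIVE_PERIOD &&
        time_second >= q->last_ka + DYN_KEEPALIVE_INTERVAL) {
            send_keepalive(q, 0);           /* probe the originator */
            send_keepalive(q, 1);           /* probe the responder */
            q->last_ka = time_second;
    }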
MFC after: 1 day
(just kidding... I will supply an updated version of ipfw2 for
RELENG_4 tomorrow).
- always reinitialize the rx descriptors, even if the mbuf is kept.
This should fix the hangs on ifconfig that were observed
- on an rx overflow, reinitialize the descriptor so that the interface
will not hang
- correct some bus_dmamap_sync() calls
- correct some debug messages
- some minor nits
1/ don't need to set td_state to TDS_RUNNING in fork_return.
it's already set in choosethread().
2/ Set a child process state to "normal" as opposed to "new"
when we allow it to be put on the run queue.
Allows the child to receive signals from the parent if the parent
runs first and tries to immediately signal the child.
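Roughly, in the fork path (a sketch; p2/td2 are the conventional names for
the new process and its thread, PRS_NEW/PRS_NORMAL the proc states):

    p2->p_state = PRS_NORMAL;       /* was left as PRS_NEW before this fix */
    setrunqueue(td2);               /* child may now run and, being "normal",
                                     * can have signals delivered to it */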
Submitted by: (part 2) Thomas Moestl <tmoestl@gmx.net>
that's binary compatible for -stable. While binary compatibility doesn't
matter much in -current, it is critical for -stable. This change requires
pccardd/pccardc to be recompiled.
formulated. The correct states should be:
IDLE: On the idle KSE list for that KSEG
RUNQ: Linked onto the system run queue.
THREAD: Attached to a thread and slaved to whatever state the thread is in.
This means that most places where we were adjusting KSE state can go away,
as the KSE is just moving around because its thread is.
The only places we need to adjust the KSE state is in transition to and from
the idle and run queues.
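A sketch of those three states as an enum (the KES_* spellings are my
assumption, not necessarily the names used in the tree):

    enum kse_state {
            KES_IDLE,       /* on the owning KSEG's idle KSE list */
            KES_ONRUNQ,     /* linked onto the system run queue */
            KES_THREAD      /* attached to a thread; follows whatever
                             * state that thread is in */
    };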
Reviewed by: jhb@freebsd.org
-finstrument-functions instead of -mprofiler-epilogue. The former
works essentially the same as the latter but has a higher overhead
(about 22 more bytes per function for passing unused args to the
profiling functions).
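For reference, -finstrument-functions makes gcc call a pair of hooks on every
function entry and exit, passing the function's address and the call site;
that argument passing is where the extra bytes per function go:

    void __cyg_profile_func_enter(void *this_fn, void *call_site);
    void __cyg_profile_func_exit(void *this_fn, void *call_site);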
Removed all traces of the IDENT Makefile variable, which had been
reduced to just a place for holding profiling's contribution to CFLAGS
(the IDENT that gives the kernel identity was renamed to KERN_IDENT).
o Assert that the page queues lock is held in vm_page_unwire().
o Make vm_page_lock_queues() and vm_page_unlock_queues() visible
to kernel loadable modules.
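A module can now bracket page queue manipulation itself, e.g. (illustrative
only):

    vm_page_lock_queues();
    vm_page_unwire(m, 0);           /* 0: queue the page as inactive
                                     * once its wire count drops to 0 */
    vm_page_unlock_queues();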
(PROFLEVEL) to kern.pre.mk so that it is easier to manage. Bumped config
version to match.
Moved the check for cputype being configured to a less bogus place in
mkmakefile.c.
filedesc is already locked rather than having chroot() unlock the
filedesc so chroot_refuse_vdir_fds() can immediately relock it.
- Reorder chroot() a bit so that we do the namei lookup before checking
the process's struct filedesc. This closes at least one potential race
and allows us to only acquire the filedesc lock once in chroot().
- Push down Giant slightly into chroot().
_vm_map_lock_read(), and _vm_map_trylock(). Submitted by: tegge
o Remove GIANT_REQUIRED from kmem_alloc_wait() and kmem_free_wakeup().
(This clears the way for exec_map accesses to move outside of Giant.
The exec_map is not a system map.)
o Remove some premature MPSAFE comments.
Reviewed by: tegge
This was always broken in HEAD (the offending statement was introduced
in rev. 1.123 for HEAD), while RELENG_4 included this fix (in rev.
1.99.2.12 for RELENG_4) and I inadvertently deleted it in 1.99.2.30.
So I am also restoring these two lines in RELENG_4 now.
We might need another few things from 1.99.2.30.
page-zeroing code as well as from the general page-zeroing code and use a
lazy tlb page invalidation scheme based on a callback made at the end
of mi_switch.
A number of people came up with this idea at the same time so credit
belongs to Peter, John, and Jake as well.
Two-way SMP buildworld -j 5 tests (second run, after stabilization)
2282.76 real 2515.17 user 704.22 sys before peter's IPI commit
2266.69 real 2467.50 user 633.77 sys after peter's commit
2232.80 real 2468.99 user 615.89 sys after this commit
Reviewed by: peter, jhb
Approved by: peter
choosethread() in MI C code instead of doing it in assembly in all the
various cpu_switch() functions. This fixes problems on ia64 and sparc64.
Reviewed by: julian, peter, benno
Tested on: i386, alpha, sparc64
itself; this causes undefined behaviour on UltraSPARCs. In particular,
the interrupt packet data words will not necessarily be delivered
correctly, which would result in a crash.
This bug also caused the cache-flushing work to be done twice on the
triggering CPU (when it did not cause crashes).
Reviewed by: jake
- It actually works this time, honest!
- Fine grained TLB shootdowns for SMP on i386. IPI's are very expensive,
so try and optimize things where possible.
- Introduce ranged shootdowns that can be done as a single IPI (a sketch
follows this list).
- PG_G support for i386
- Specific-cpu targeted shootdowns. For example, there is no sense in
globally purging the TLB cache when we are stealing a page from
the local unshared process on the local cpu. Use pm_active to track
this.
- Add some instrumentation for the tlb shootdown code.
- Rip out SMP code from <machine/cpufunc.h>
- Try and fix some very bogus PG_G and PG_PS interactions that were bad
enough to cause vm86 bios calls to break. vm86 depended on our existing
bugs and this was the cause of the VESA panics last time.
- Fix the silly one-line error that caused the 'panic: bad pte' last time.
- Fix a couple of other silly one-line errors that should have caused more
pain than they did.
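To make the ranged-shootdown item above concrete, here is a sketch of the
shape of such a routine (the smp_invlpg_range() helper and the pm_active
bookkeeping are written from memory and may not match the tree exactly):

    void
    pmap_invalidate_range(pmap_t pmap, vm_offset_t sva, vm_offset_t eva)
    {
            vm_offset_t va;

            if (pmap->pm_active & PCPU_GET(cpumask))
                    for (va = sva; va < eva; va += PAGE_SIZE)
                            invlpg(va);             /* local TLB only */
            if (pmap->pm_active & PCPU_GET(other_cpus))
                    smp_invlpg_range(sva, eva);     /* one IPI, whole range */
    }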
Some more work is needed:
- pmap_{zero,copy}_page[_idle]. These can be done without IPI's if we
have a hook in cpu_switch.
- The IPI handlers need some cleanup. I have a bogus %ds load that can
be avoided.
- APTD handling is rather bogus and appears to be a large source of
global TLB IPI shootdowns for no really good reason.
I see speedups of between 1.5% and ~4% on buildworlds in a while 1 loop.
I expect to see a bigger difference when there is significant pageout
activity or the system otherwise has memory shortages.
I have backed out a few optimizations that I had been using over the last
few days in order to be a little more conservative. I'll revisit these
again over the next few days as the dust settles.
New option: DISABLE_PG_G - In case I missed something.