and the irq are different for pc98, and are not very well handled (we
use a historical mess of hard-coded values, values from header files
and values from hints).
- 1.58 (2000/09/01; author: kato)
Fixed FPU_ERROR_BROKEN code. It had old-ISA code.
- 1.33 (1998/03/09; author: kato)
Make FPU_ERROR_BROKEN a new-style option.
- 1.7 (1996/10/09; author: asami)
Make sure FPU is recognized for non-Intel CPUs.
The log for rev.1.7 should have said something like:
Added FPU_ERROR_BROKEN option. This forces a successful probe for
exception 16, so that hardware with a broken FPU error signal can sort
of work.
Use the normal interrupt handler (npx_intr()) instead of a special
probe-time interrupt handler, although this causes problems because
bus_teardown_intr() does not actually tear down the interrupt
(these problems were previously avoided by attaching the special
interrupt handler directly). Fixed minor bitrot in comments.
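For reference, a minimal sketch of such a probe-time attachment, using
the newbus interrupt API of the time (the function and its body are
illustrative, not the committed code; "irq" is assumed to be a resource
allocated elsewhere in the probe):

    #include <sys/param.h>
    #include <sys/bus.h>

    static driver_intr_t npx_intr;          /* the normal handler */

    static int
    npx_probe_irq(device_t dev, struct resource *irq)
    {
        void *cookie;
        int error;

        error = bus_setup_intr(dev, irq, INTR_TYPE_MISC, npx_intr,
            NULL, &cookie);
        if (error != 0)
            return (error);
        /* ...provoke an npx error and wait for npx_intr() to run... */
        bus_teardown_intr(dev, irq, cookie);    /* see caveat above */
        return (0);
    }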
The reason for the npxprobe()/npxprobe1() split mostly went away at
about the same time it was made (in 1992 or 1993 just before the
beginning of history). 386BSD ran all probes with interrupts completely
masked, and I didn't want to disturb this when I added an irq probe
to npxprobe(). An irq (not necessarily npx) must be acked for at least
external npx's to take the cpu out of the wait state that it enters
when an npx error occurs, so the probe must be done with a suitable
irq unmasked. npxprobe() went to great lengths to unmask precisely
the npx irq.
Running probes with all interrupts masked was never really needed in
FreeBSD, since FreeBSD always masked interrupts well enough using
splhigh(), but it wasn't until rev.1.48 (1995/12/12) of autoconf.c
that all probes were run with CPU interrupts enabled. This permits
npxprobe() to probe its irq using normal interrupt resources. Note
that most drivers still can't depend on this. It depends on the
interrupt handler being fast and the irq not being shared.
lost when the buggy code goes away completely:
- don't assume that the npx irq number is >= 8. Rev.1.73 only reversed
part of the hard-coding of it to 13 in rev.1.66.
- backed out the part of rev.1.84 that added a highly confused comment
about an enable_intr() being "highly bogus". The whole reason for
existence of npxprobe() (separate from the main probe, npxprobe1())
is to handle the complications to make this enable_intr() safe.
- backed out the part of rev.1.94 that modified npxprobe(). It mainly
broke things by changing the enable_intr() to restore_intr(). Restoring
the interrupt state in a nested way is precisely what is not wanted
here; a sketch of the distinction follows this list. It was harmless
in practice because npxprobe() is called with interrupts enabled, so
restoring the interrupt state enables interrupts. Most of npxprobe()
is a no-op for the same reason...
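A sketch of the distinction, using the i386 primitives (illustrative
only, not the committed code):

    #include <machine/cpufunc.h>

    static void
    intr_state_example(void)
    {
        u_int ef;

        /* Nested save/restore, as after rev.1.94: */
        ef = read_eflags();     /* caller's interrupt state */
        write_eflags(ef);       /* enables only if caller had PSL_I set */

        /* What npxprobe() actually needs: */
        enable_intr();          /* unconditionally unmask CPU interrupts */
    }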
Note ALL MODULES MUST BE RECOMPILED
make the kernel aware that there are smaller units of scheduling than the
process (but only allow one thread per process at this time).
This is functionally equivalent to the previous -current except
that there is a thread associated with each process.
Sorry john! (your next MFC will be a doozy!)
Reviewed by: peter@freebsd.org, dillon@freebsd.org
X-MFC after: ha ha ha ha
the process of exiting the kernel. The ast() function now loops as long
as the PS_ASTPENDING or PS_NEEDRESCHED flags are set. It returns with
preemption disabled so that any further AST's that arrive via an
interrupt will be delayed until the low-level MD code returns to user
mode.
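A condensed sketch of the loop shape (the real ast() also handles
profiling ticks, signals, etc.; the flag and lock names are those
described in this commit):

    void
    ast(struct trapframe *framep)
    {
        struct proc *p = curproc;

        while ((p->p_sflag & (PS_ASTPENDING | PS_NEEDRESCHED)) != 0) {
            mtx_lock_spin(&sched_lock);
            p->p_sflag &= ~PS_ASTPENDING;
            if ((p->p_sflag & PS_NEEDRESCHED) != 0) {
                p->p_sflag &= ~PS_NEEDRESCHED;
                /* ...mi_switch() with sched_lock held... */
            }
            mtx_unlock_spin(&sched_lock);
        }
        /* return with preemption disabled; MD code re-enables it */
    }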
- Use u_int's to store the tick counts for profiling purposes so that we
do not need sched_lock just to read p_sticks. This also closes a
problem where the call to addupc_task() could screw up the arithmetic
due to non-atomic reads of p_sticks.
- Axe need_proftick(), aston(), astoff(), astpending(), need_resched(),
clear_resched(), and resched_wanted() in favor of direct bit operations
on p_sflag.
- Fix up locking with sched_lock some. In addupc_intr(), use sched_lock
to ensure pr_addr and pr_ticks are updated atomically with setting
PS_OWEUPC. In ast() we clear pr_ticks atomically with clearing
PS_OWEUPC. We also do not grab the lock just to test a flag.
- Simplify the handling of Giant in ast() slightly.
Reviewed by: bde (mostly)
we are required to do if we let user processes use the extra 128-bit
registers etc.
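(For reference, the sort of setup this refers to, sketched with the
constants from <machine/specialreg.h>; not the exact committed code:)

    #include <machine/cpufunc.h>
    #include <machine/specialreg.h>

    static void
    enable_sse(void)
    {
        /*
         * Announce fxsave/fxrstor support (CR4_FXSR) and unmask SIMD
         * FP exceptions (CR4_XMM) before letting user processes touch
         * the XMM registers.
         */
        load_cr4(rcr4() | CR4_FXSR | CR4_XMM);
    }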
This is the base part of the diff I got from:
http://www.issei.org/issei/FreeBSD/sse.html
I believe this is by: Mr. SUZUKI Issei <issei@issei.org>
SMP support apparently by: Takekazu KATO <kato@chino.it.okayama-u.ac.jp>
Test code by: NAKAMURA Kazushi <kaz@kobe1995.net>, see
http://kobe1995.net/~kaz/FreeBSD/SSE.en.html
I have fixed a couple of style(9) deviations. I have some followup
commits to fix a couple of non-style things.
simpler for npx exceptions that start as traps (no assembly required...)
and works better for npx exceptions that start as interrupts (there is
no longer a problem for nested interrupts).
Submitted by: original (pre-SMPng) version by luoqi
npxsave() went to great lengths to execute fnsave with interrupts
enabled in case executing it froze the CPU. This case can't happen,
at least for Intel CPU/NPX's. Spurious IRQ13's don't imply spurious
freezes. Anyway, the complications were usually no-ops because IRQ13
is not used on i486's and newer CPUs, and because SMPng broke them in
rev.1.84. Forcible enabling of interrupts was changed to
write_eflags(old_eflags), but since SMPng usually calls npxsave() from
cpu_switch() with interrupts disabled, write_eflags() usually just
kept interrupts disabled.
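The removed complication looked roughly like this (a sketch; fnsave()
is the driver's inline-asm wrapper):

    #define fnsave(addr)    __asm __volatile("fnsave %0" : "=m" (*(addr)))

    static void
    npxsave_old(union savefpu *addr)
    {
        u_int ef;

        ef = read_eflags();
        enable_intr();          /* in case fnsave froze the CPU */
        fnsave(addr);
        write_eflags(ef);       /* post-SMPng: usually stays disabled */
    }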
npxinit() didn't have the usual race because it doesn't save to curpcb,
but it may have had a worse form of it since it uses the npx when it
doesn't "own" it. I'm not sure if locking prevented this. npxinit()
is normally called with the proc lock but not sched_lock.
Use a critical region to protect pushing of curproc's npx state to
curpcb in npxexit(). Not doing so was harmless since it at worst
saved a wrong state to a dying pcb.
handling, SMPng always switches the npx context away from curproc
before calling the handler, so the handler always paniced. When using
exception 16 exception handling, SMPng sometimes switches the npx
context away from curproc before calling the handler, so the handler
sometimes paniced. Also, we didn't lock the context while using it,
so we sometimes didn't detect the switch and then paniced in a less
controlled way.
Just lock the context while using it, and return without doing anything
except clearing the busy latch if the context is not for curproc. This
fixes the exception 16 case and makes the IRQ13 case harmless. In both
cases, the instruction that caused the exception is restarted and the
exception repeats. In the exception 16 case, we soon get an exception
that can be handled without doing anything special. In the IRQ13 case,
we get an easy-to-kill hung process.
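Schematically (the port 0xf0 write is the standard AT latch-clearing
mechanism; the locking below is only a placeholder for whatever
protects the npx context):

    static void
    npx_intr_sketch(void *dummy)
    {
        mtx_lock_spin(&sched_lock);     /* placeholder context lock */
        if (PCPU_GET(npxproc) != curproc) {
            outb(0xf0, 0);              /* clear the AT busy latch */
            mtx_unlock_spin(&sched_lock);
            return;                     /* faulting instruction restarts */
        }
        /* ...normal exception handling for curproc's context... */
        mtx_unlock_spin(&sched_lock);
    }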
other "system" header files.
Also help the deprecation of lockmgr.h by making it a sub-include of
sys/lock.h and removing sys/lockmgr.h from kernel .c files.
Sort sys/*.h includes where possible in affected files.
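So an affected .c file now starts along these lines (illustrative):

    #include <sys/param.h>
    #include <sys/lock.h>       /* now pulls in the lockmgr interface */
    #include <sys/mutex.h>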
OK'ed by: bde (with reservations)
of long and int64_t; and print the result as an unsigned long. This should
make the output from the bzero() test more readable, and avoid printing a
negative bandwidth. Note that this doesn't change the decision process,
since that is based on time elapsed, not on computed bandwidth.
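A contrived illustration of the printing pitfall (assuming 32-bit
longs; the value is made up):

    #include <stdio.h>

    int
    main(void)
    {
        long count = (long)0x90000000UL;        /* large byte count */

        printf("%ld\n", count);                 /* negative on 32-bit long */
        printf("%lu\n", (unsigned long)count);  /* the actual magnitude */
        return (0);
    }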
mtx_enter(lock, type) becomes:
mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)
Similarly, for releasing a lock, we now have:
mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.
The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.
Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:
MTX_QUIET and MTX_NOSWITCH
The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:
mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.
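In code, the conversion is mechanical; for an invented MTX_DEF mutex
foo_mtx and MTX_SPIN mutex bar_mtx (fragment):

    /* old */
    mtx_enter(&foo_mtx, MTX_DEF);
    mtx_exit(&foo_mtx, MTX_DEF);
    mtx_enter(&bar_mtx, MTX_SPIN);
    mtx_exit(&bar_mtx, MTX_SPIN);

    /* new */
    mtx_lock(&foo_mtx);
    mtx_unlock(&foo_mtx);
    mtx_lock_spin(&bar_mtx);
    mtx_unlock_spin(&bar_mtx);

    /* flags now go through the explicit wrappers */
    mtx_lock_flags(&foo_mtx, MTX_QUIET);
    mtx_unlock_flags(&foo_mtx, MTX_QUIET);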
Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
only make a function call if we actually need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that the spin locks we do have and use heavily (e.g. sched_lock)
do recurse; therefore, in an effort to reduce function call overhead
on some architectures (such as alpha), we inline recursion for this
case.
Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.
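I.e. roughly (the description strings here are illustrative):

    #ifdef WITNESS
    static MALLOC_DEFINE(M_WITNESS, "witness", "witness structures");
    #endif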
Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.
Finally, caught up to the interface changes in all sys code.
Contributors: jake, jhb, jasone (in no particular order)
- If possible, context switch to the thread directly in sched_ithd(),
rather than triggering a delayed ast reschedule.
- Disable interrupts while restoring fpu state in the trap handler,
in order to ensure that we are not preempted in the middle, which
could cause migration to another CPU (see the sketch after this list).
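A sketch of the second item, assuming i386-style primitives
(restore_fpu_state() is a hypothetical name for the MD restore
routine):

    static void
    trap_fpu_restore(struct pcb *pcb)
    {
        disable_intr();             /* no preemption, hence no migration, */
        restore_fpu_state(pcb);     /* while the FPU state is reloaded */
        enable_intr();
    }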
Reviewed by: peter
Tested by: peter (alpha)
- Make softinterrupts (SWI's) almost completely MI, and divorce them
completely from the x86 hardware interrupt code.
- The ihandlers array is now gone. Instead, there is a MI shandlers array
that just contains SWI handlers.
- Most of the former machine/ipl.h files have moved to a new sys/ipl.h.
- Stub out all the spl*() functions on all architectures.
Submitted by: dfr
include:
* Mutual exclusion is used instead of spl*(). See mutex(9) and the
sketch after this list. (Note: The alpha port is still in transition
and currently uses both.)
* Per-CPU idle processes.
* Interrupts are run in their own separate kernel threads and can be
preempted (i386 only).
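An illustrative spl*()-to-mutex(9) conversion (foo_mtx and the
functions are invented examples):

    static struct mtx foo_mtx;      /* MTX_DEF-initialized elsewhere */

    static void
    foo_update_old(void)
    {
        int s;

        s = splhigh();
        /* touch protected data */
        splx(s);
    }

    static void
    foo_update_new(void)
    {
        mtx_lock(&foo_mtx);
        /* touch protected data */
        mtx_unlock(&foo_mtx);
    }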
Partially contributed by: BSDi (BSD/OS)
Submissions by (at least): cp, dfr, dillon, grog, jake, jhb, sheldonh
macros) to the signal handler, for old-style BSD signal handlers as
the second (int) argument, for SA_SIGINFO signal handlers as
siginfo_t->si_code. This is source-compatible with Solaris, except
that we have no <siginfo.h> (which isn't even mentioned in POSIX
1003.1b).
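A minimal userland illustration, in the spirit of the example program
below (printf(3) in a signal handler is for demonstration only):

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void
    handler(int sig, siginfo_t *si, void *ucp)
    {
        /* For SIGFPE, expect FPE_INTDIV, FPE_FLTDIV, FPE_FLTOVF, ... */
        printf("signal %d, si_code %d\n", sig, si->si_code);
        _exit(0);           /* returning would restart the fault */
    }

    int
    main(void)
    {
        struct sigaction sa;
        volatile int zero = 0;

        sa.sa_sigaction = handler;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGFPE, &sa, NULL);
        return (1 / zero);  /* FPE_INTDIV expected on i386 */
    }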
A rather complete example program is at
http://www3.cons.org/cracauer/freebsd-signal.c
This will be added to the regression tests in src/.
This commit also adds code to disable the (hardware) FPU from
userconfig, so that you can use a software FP emulator on a machine
that has hardware floating point. See LINT.
though, on systems (386 mostly) that still have a separate FPU, but it
might be possible to find systems where the FPU coprocessor is wired to
a different IRQ pin.