PCI bus object. This should deal both with already-routed interrupts
and with devices that need an interrupt routed.
Note that it *doesn't* deal with interlocked interrupt dependencies, nor
does it select between interrupt options in a smart way. These are
optimisations that need further work.
is a parallel adjunct to active cooling, not a lesser evil. The _ACx
levels are ordered with 0 the hottest, not the coolest.
Sanity check the returned temperature values, since we are having
trouble reading them on some systems.
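As an illustration of the kind of check meant here (the bounds and
names below are ours, not the committed code), assuming temperatures
in tenths of a degree Kelvin as ACPI reports them:

    #define TZ_ZEROC	2732	/* 0C, in tenths of a degree Kelvin */

    /* Reject readings outside a plausible range before acting on them. */
    if (temp < TZ_ZEROC - 500 || temp > TZ_ZEROC + 1500) {
        device_printf(dev, "nonsensical temperature %d, ignoring\n", temp);
        return;
    }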
Rearrange sysctl nodes a bit; this is probably close to the final layout.
Pass the softc, not the device_t to the Notify handler.
Don't invoke the Interpreter from callout context, as it may sleep.
Use AcpiOsQueueForExecution, which is called from taskqueue_swi.
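A minimal sketch of the pattern, assuming the OSL interface of the
time (AcpiOsQueueForExecution's exact signature and the OSD_PRIORITY_*
constants may differ between ACPICA versions; _TMP is just an example
method):

    static void
    acpi_notify_task(void *context)
    {
        /* Runs from taskqueue_swi, where sleeping is permitted. */
        AcpiEvaluateObject((ACPI_HANDLE)context, "_TMP", NULL, NULL);
    }

    static void
    acpi_callout(void *arg)
    {
        /* Never run the interpreter here; defer to the taskqueue. */
        AcpiOsQueueForExecution(OSD_PRIORITY_LO, acpi_notify_task, arg);
    }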
Add an ACPI subsystem mutex, and macros for handling it. Because it's
not possible to differentiate between ACPI CA acquiring mutexes for
internal use and for use by AML, and because AML in the field doesn't
handle mutexes correctly, we can't use the ACPI subsystem's internal
locking. In addition, we have other private data of our own to lock.
Add initial locking to the ACPI driver code and the thermal module.
These locks are currently inoperative.
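A sketch of what such a lock and its wrapper macros might look like
(names are illustrative, not the committed ones, and mtx_init() is
shown in its later four-argument form):

    static struct mtx	acpi_mutex;

    #define ACPI_LOCK()		mtx_lock(&acpi_mutex)
    #define ACPI_UNLOCK()	mtx_unlock(&acpi_mutex)
    #define ACPI_ASSERTLOCK()	mtx_assert(&acpi_mutex, MA_OWNED)

    /* One-time setup, e.g. from the subsystem attach path. */
    mtx_init(&acpi_mutex, "acpi subsystem", NULL, MTX_DEF);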
Pull some errant style back into line.
- Reorder the acpi_* functions in a sensible fashion
- Add acpi_ForeachPackageObject and acpi_GetHandleInScope (a usage
sketch follows this list)
- Use the new debugging layer/level names
- Implement most of the guts of the acpi_thermal module; passive cooling
isn't there yet, but active cooling should work.
- Implement power resource handling (acpi_powerres.c)
This compiles and mostly works, but my test coverage is small, so feedback
is welcome.
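For the package-walking helper, a hedged usage sketch (the callback
shape is our reading of the interface; treat details as illustrative):

    /* Called once per element of the package. */
    static void
    acpi_pkg_walker(ACPI_OBJECT *comp, void *arg)
    {
        if (comp->Type == ACPI_TYPE_STRING)
            printf("element: %s\n", comp->String.Pointer);
    }

    /* pkg is an ACPI_OBJECT of type ACPI_TYPE_PACKAGE. */
    acpi_ForeachPackageObject(pkg, acpi_pkg_walker, NULL);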
- Use __func__ instead of __FUNCTION__.
- Support power-off to S3 or S5 (takawata)
- Enable ACPI debugging earlier (with a sysinit)
- Fix a deadlock in the EC code (takawata)
- Improve arithmetic and reduce the risk of spurious wakeup in
AcpiOsSleep (see the sketch after this list).
- Add AcpiOsGetThreadId.
- Simplify mutex code (still disabled).
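For the AcpiOsSleep item, a sketch of the sort of arithmetic intended
(signature per the ACPICA OSL of the era; the rounded-up conversion
and the re-sleep loop are the point):

    ACPI_STATUS
    AcpiOsSleep(UINT32 Seconds, UINT32 Milliseconds)
    {
        int	start, wanted;

        /* Round the tick conversion up so we never sleep too short. */
        wanted = Seconds * hz + howmany(Milliseconds * hz, 1000);
        start = ticks;

        /* A spurious wakeup just puts us back to sleep for the rest. */
        while (ticks - start < wanted)
            tsleep(&wanted, PZERO, "acpislp", wanted - (ticks - start));
        return (AE_OK);
    }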
other "system" header files.
Also help the deprecation of lockmgr.h by making it a sub-include of
sys/lock.h and removing sys/lockmgr.h from kernel .c files.
Sort sys/*.h includes where possible in affected files.
OK'ed by: bde (with reservations)
sys/contrib/dev/acpica/Subsystem/Hardware/Attic/hwxface.c to the proper
location after AcpiEnterSleepState().
- Wait for the WAK_STS bit
- Evaluate the _WAK method and check result code
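The second step, sketched with current ACPICA naming (the WAK_STS poll
is elided; \_WAK takes the sleep state just left as its argument):

    ACPI_OBJECT_LIST	args;
    ACPI_OBJECT		arg;
    ACPI_STATUS		status;

    arg.Type = ACPI_TYPE_INTEGER;
    arg.Integer.Value = state;	/* the S-state we are waking from */
    args.Count = 1;
    args.Pointer = &arg;
    status = AcpiEvaluateObject(NULL, "\\_WAK", &args, NULL);
    if (ACPI_FAILURE(status))
        printf("_WAK failed: %s\n", AcpiFormatException(status));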
handle read and write requests for widths of multiple bytes. This
can be used to read 16-bit battery status registers, for example.
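The idea, sketched: iterate the one-byte EC transfer to assemble wider
values. EcRead() stands in for the driver's single-byte primitive and
is not necessarily its real name:

    static ACPI_STATUS
    EcReadWide(UINT8 Address, UINT32 Width, UINT32 *Value)
    {
        UINT8		b;
        UINT32		i;
        ACPI_STATUS	status;

        *Value = 0;
        for (i = 0; i < Width; i += 8) {	/* Width is in bits */
            status = EcRead(Address + i / 8, &b);
            if (ACPI_FAILURE(status))
                return (status);
            *Value |= (UINT32)b << i;		/* little-endian assembly */
        }
        return (AE_OK);
    }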
- Remove some unused variables and #if 0'd debugging cruft.
- Don't complain about a GPE query that fails due to AE_NOT_FOUND if the
query method was _Q00.
from a BIF, use the size of the destination buffer, not the length of the
string, to determine where to put the NUL character. As a side effect, the
old code would truncate the string by one character while also possibly
overflowing the buffer.
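The fix in miniature (function and parameter names are ours):

    static void
    acpi_copy_bif_string(char *dst, size_t dstlen, const char *src)
    {
        /* Bound both the copy and the terminator by the destination
         * size; the old code wrote the NUL at strlen(src), which
         * could land past the end of dst. */
        strncpy(dst, src, dstlen - 1);
        dst[dstlen - 1] = '\0';
    }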
- All processes go into the same array of queues, with different
scheduling classes using different portions of the array. This
allows user processes to have their priorities propagated up into
interrupt thread range if need be.
- I chose 64 run queues as an arbitrary number that is greater than
32. We used to have 4 separate arrays of 32 queues each, so this
may not be optimal. The new run queue code was written with this
in mind; changing the number of run queues only requires changing
constants in runq.h and adjusting the priority levels.
- The new run queue code takes the run queue as a parameter. This
is intended to be used to create per-cpu run queues. Implement
wrappers for compatibility with the old interface which pass in
the global run queue structure (see the sketch after this list).
- Group the priority level, user priority, native priority (before
propagation) and the scheduling class into a struct priority.
- Change any hard coded priority levels that I found to use
symbolic constants (TTIPRI and TTOPRI).
- Remove the curpriority global variable and use that of curproc.
This was used to detect when a process' priority had lowered and
it should yield. We now effectively yield on every interrupt.
- Activate propagate_priority(). It should now have the desired
effect without needing to also propagate the scheduling class.
- Temporarily comment out the call to vm_page_zero_idle() in the
idle loop. It interfered with propagate_priority() because
the idle process needed to do a non-blocking acquire of Giant
and then other processes would try to propagate their priority
onto it. The idle process should not do anything except idle.
vm_page_zero_idle() will return in the form of an idle priority
kernel thread which is woken up at appropriate times by the vm
system.
- Update struct kinfo_proc to the new priority interface. Deliberately
change its size by adjusting the spare fields: without this it would
have stayed the same size with a different layout, and userland
processes that use it would silently parse the data incorrectly. The
size constraint should really be changed to an arbitrary version
number. Also add a debug.sizeof sysctl node for struct kinfo_proc.
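The compatibility wrappers mentioned above might look roughly like
this (a sketch; the actual names and types in runq.h may differ):

    static struct runq runq_global;	/* the one global run queue */

    /* Old interface, preserved as wrappers around the new one. */
    void
    setrunqueue(struct proc *p)
    {
        runq_add(&runq_global, p);
    }

    struct proc *
    chooseproc(void)
    {
        return (runq_choose(&runq_global));
    }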
mtx_enter(lock, type) becomes:
mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)
similarly, for releasing a lock, we now have:
mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.
The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.
Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:
MTX_QUIET and MTX_NOSWITCH
The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:
mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.
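In before/after form (sched_lock is a spin lock, Giant a sleep lock):

    /* Before: one entry point, the type argument selects semantics. */
    mtx_enter(&Giant, MTX_DEF);
    mtx_exit(&Giant, MTX_DEF);
    mtx_enter(&sched_lock, MTX_SPIN);
    mtx_exit(&sched_lock, MTX_SPIN);

    /* After: the lock's flavor is explicit at the call site. */
    mtx_lock(&Giant);
    mtx_unlock(&Giant);
    mtx_lock_spin(&sched_lock);
    mtx_unlock_spin(&sched_lock);

    /* Flags go through the _flags wrappers. */
    mtx_lock_flags(&Giant, MTX_QUIET);
    mtx_unlock_flags(&Giant, MTX_QUIET);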
Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
was made on the theory that we generally avoid spin locks, and that
the heavily-used spin locks we do have (i.e. sched_lock) do recurse;
inlining recursion therefore reduces function call overhead on some
architectures (such as alpha).
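The sleep-lock fast path being inlined is essentially one atomic
compare-and-set (a sketch; the real macros also carry file/line
debugging arguments):

    /* Try to take an unowned lock in a single atomic operation... */
    #define _obtain_lock(mp, tid) \
        atomic_cmpset_acq_ptr(&(mp)->mtx_lock, (void *)MTX_UNOWNED, (tid))

    /* ...and fall into the hard path only on contention. */
    #define mtx_lock(m) do { \
        if (!_obtain_lock((m), curproc)) \
            _mtx_lock_sleep((m), 0, NULL, 0); \
    } while (0)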
Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.
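Which amounts to the following (the description strings are
illustrative):

    #ifdef WITNESS
    MALLOC_DEFINE(M_WITNESS, "witness", "witness structures");
    #endif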
Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.
Finally, caught up to the interface changes in all sys code.
Contributors: jake, jhb, jasone (in no particular order)
Turn off semaphores. Nobody else implements them, and there is lots of
AML out there which does totally absurd things with them, meaning that
if we try to do the right thing we are guaranteed to fail.
Use acpi_EvaluateInteger where possible.
Use FuncName rather than &FuncName when passing function addresses.
Don't evaluate the _REG method when we attach to an address space -
AcpiInstallAddressSpaceHandler does it for us.
acpi_EvaluateInteger.
Use acpi_EvaluateInteger instead of doing things the hard way where
possible.
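The typical call shape, as a usage sketch (the out-parameter's exact
type has varied):

    int	temp;

    /* One call replaces the evaluate/extract/free dance for methods
     * that return a single integer. */
    if (ACPI_SUCCESS(acpi_EvaluateInteger(handle, "_TMP", &temp)))
        printf("temperature: %d\n", temp);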
AcpiSetSystemSleepState (unofficial) becomes AcpiEnterSleepState.
Use the AcpiGbl_FADT pointer rather than searching for the FADT.
This should make the system power off correctly when howto has
more bits set than RB_POWEROFF, e.g. RB_NOSYNC.
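That is, the test becomes a bit test rather than an equality test,
sketched:

    /* old, broken form: if (howto == RB_POWEROFF) */
    if (howto & RB_POWEROFF) {
        /* ...transition to S5... */
    }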
Submitted by: Peter Pentchev <roam@orbitel.bg>