Mirror of https://git.FreeBSD.org/src.git (synced 2024-12-26 11:47:31 +00:00)
f49fde7faf

Fix a race in tick_ticker() when reading the cached values of 'counter_upper' and 'counter_lower_last'. The race exists because interrupts are enabled even though tick_ticker() executes in a critical section.

Fix a bug in clock_intr() in how it updates the cached values of 'counter_upper' and 'counter_lower_last': they are updated only when the COUNT register rolls over, and, more interestingly, they will *never* be updated if 'counter_lower_last' happens to be zero.

Get rid of the superfluous critical section in clock_intr(); it is not needed because clock_intr() executes in hard interrupt context.

Switch back to using tick_ticker() as the cpu ticker for Sibyte.

Reviewed by: jmallett, mav
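The message describes the common pattern of extending a 32-bit, free-running COUNT register into a 64-bit tick counter by caching the number of observed rollovers. The sketch below is illustrative only and is not the actual FreeBSD sys/mips/mips/tick.c code; the variable names follow the commit message, and read_count32() is a hypothetical stand-in for reading the MIPS COUNT register. It shows why an update gated solely on a detected rollover never fires while 'counter_lower_last' is zero, and how the reader can tolerate the interrupt handler updating the cached values concurrently.

```c
#include <stdint.h>

/* Cached state, normally maintained by the periodic clock interrupt. */
static volatile uint32_t counter_upper;       /* number of COUNT rollovers  */
static volatile uint32_t counter_lower_last;  /* COUNT value at last update */

/* Hypothetical stand-in for reading the 32-bit MIPS COUNT register. */
extern uint32_t read_count32(void);

/*
 * Buggy shape described in the message: the cached values are touched only
 * when a rollover is detected.  While counter_lower_last == 0, the unsigned
 * comparison can never be true, so the cache is never primed and rollovers
 * are silently missed.
 */
static void
clock_intr_buggy(void)
{
	uint32_t count = read_count32();

	if (count < counter_lower_last) {
		counter_upper++;
		counter_lower_last = count;
	}
}

/*
 * Fixed shape: always refresh the cached low word and bump the high word
 * only on a detected wraparound.  This runs in hard interrupt context, so
 * no critical section is needed here.
 */
static void
clock_intr_fixed(void)
{
	uint32_t count = read_count32();

	if (count < counter_lower_last)
		counter_upper++;
	counter_lower_last = count;
}

/*
 * Reader side: build a 64-bit tick value.  Interrupts remain enabled even
 * inside a critical section, so re-read until a consistent snapshot of the
 * cached high word is observed, then account for a wrap that happened after
 * the last clock interrupt.
 */
static uint64_t
tick_ticker(void)
{
	uint32_t upper, lower_last, count;

	do {
		upper = counter_upper;
		lower_last = counter_lower_last;
	} while (upper != counter_upper);

	count = read_count32();
	if (count < lower_last)
		upper++;

	return ((uint64_t)upper << 32) | count;
}
```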
Directories:

adm5120
alchemy
atheros
cavium
compile
conf
idt
include
malta
mips
rmi
sentry5
sibyte