mirror of https://git.FreeBSD.org/src.git synced 2024-12-16 10:20:30 +00:00
Commit Graph

26 Commits

John Baldwin
0c0b25ae91 Implement preemption of kernel threads natively in the scheduler rather
than as one-off hacks in various other parts of the kernel:
- Add a function maybe_preempt() that is called from sched_add() to
  determine whether the scheduler should preempt directly to a thread
  that is about to be added to a run queue.  If it is not safe to
  preempt or if the new thread does not have a high enough priority,
  then the function returns false and sched_add() adds the thread to
  the run queue.  If the new thread should be switched to but the
  current thread is in a nested critical section, then the flag
  TDF_OWEPREEMPT is set and the thread is added to the run queue.
  Otherwise, mi_switch() is called immediately and the thread is never
  added to the run queue since it is switched to directly.  When the
  outermost critical section is exited, if TDF_OWEPREEMPT is set, then
  it is cleared and mi_switch() is called to perform the deferred
  preemption.
- Remove explicit preemption from ithread_schedule() as calling
  setrunqueue() now does all the correct work.  This also removes the
  do_switch argument from ithread_schedule().
- Do not use the manual preemption code in mtx_unlock if the architecture
  supports native preemption.
- Don't call mi_switch() in a loop during shutdown to give ithreads a
  chance to run if the architecture supports native preemption since
  the ithreads will just preempt DELAY().
- Don't call mi_switch() from the page zeroing idle thread for
  architectures that support native preemption as it is unnecessary.
- Native preemption is enabled on the same archs that supported ithread
  preemption, namely alpha, i386, and amd64.

This change should largely be a NOP for the default case as committed
except that we will do fewer context switches in a few cases and will
avoid the run queues completely when preempting.

Approved by:	scottl (with his re@ hat)
2004-07-02 20:21:44 +00:00
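
A minimal sketch of the preemption path described in the commit above.
Only maybe_preempt(), sched_add(), mi_switch(), and TDF_OWEPREEMPT are
taken from the message; the thread field names, the priority comparison,
and the SW_INVOL argument are assumptions, not the committed code.

/*
 * Sketch: decide whether to preempt directly to td, which sched_add()
 * is about to place on a run queue.  Returning 0 tells sched_add() to
 * fall back to queueing the thread as before.
 */
static int
maybe_preempt(struct thread *td)
{
        struct thread *ctd = curthread;

        /* Lower numeric priority is more important; not enough? queue it. */
        if (td->td_priority >= ctd->td_priority)
                return (0);

        if (ctd->td_critnest > 1) {
                /* Nested critical section: defer, let it be queued. */
                ctd->td_flags |= TDF_OWEPREEMPT;
                return (0);
        }

        /* Safe to preempt: switch now; td never touches a run queue. */
        mi_switch(SW_INVOL, td);
        return (1);
}
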
John Baldwin
bf0acc273a - Change mi_switch() and sched_switch() to accept an optional thread to
switch to.  If a non-NULL thread pointer is passed in, then the CPU will
  switch to that thread directly rather than calling choosethread() to pick
  a thread to switch to.
- Make sched_switch() aware of idle threads and have it do
  TD_SET_CAN_RUN() instead of sticking them on the run queue, rather
  than requiring every caller of mi_switch() that can be reached from an
  idle thread to know to do this itself.
- Move constants for arguments to mi_switch() and thread_single() out of
  the middle of the function prototypes and up above into their own
  section.
2004-07-02 19:09:50 +00:00
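
A short sketch of how the new argument and the idle-thread handling fit
together.  SW_VOL/SW_INVOL, choosethread(), TD_SET_CAN_RUN(), and
setrunqueue() appear in these commit messages; the TD_IS_IDLETHREAD()
and TD_IS_RUNNING() tests and the exact prototypes are assumptions.

        /* Caller side: either hand mi_switch() the thread to run next... */
        mi_switch(SW_INVOL, newtd);
        /* ...or pass NULL and let choosethread() pick one. */
        mi_switch(SW_VOL, NULL);

        /* Inside sched_switch(): idle threads are never queued. */
        if (TD_IS_IDLETHREAD(td))
                TD_SET_CAN_RUN(td);     /* just mark it runnable again */
        else if (TD_IS_RUNNING(td))
                setrunqueue(td);        /* normal threads rejoin a run queue */
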
Bruce Evans
61ecb14af6 Record exactly where this file was copied from. It wasn't repo-copied so
this is not very obvious.

Fixed some style bugs (mainly missing parentheses around return values).
2004-03-04 10:18:17 +00:00
Jeff Roberson
7b09539ce2 - Use a separate startup function for the zeroidle kthread. Use this to
set P_NOLOAD prior to running the thread.
2004-02-02 07:51:03 +00:00
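
A sketch of the arrangement described above.  P_NOLOAD and the zeroidle
kthread are from the commits in this log; the handle, function names,
and locking shown here are assumptions about how such a startup hook
might look.

static struct proc *pagezero_proc;      /* assumed handle for the kthread */

/* Sketch: a dedicated startup function marks the process before the
 * zeroing loop ever runs, so it is never sampled into the load average. */
static void
pagezero_start(void *arg)
{
        PROC_LOCK(pagezero_proc);
        pagezero_proc->p_flag |= P_NOLOAD;      /* see 16e12eab5a below */
        PROC_UNLOCK(pagezero_proc);

        /* ... hand control to the page-zeroing loop ... */
}
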
Jeff Roberson
29bcc4514f - Add a flags parameter to mi_switch. The value of flags may be SW_VOL or
SW_INVOL.  Assert that one of these is set in mi_switch() and properly
   adjust the rusage statistics.  This is to simplify the large number of
   users of this interface which were previously all required to adjust the
   proper counter prior to calling mi_switch().  This also facilitates more
   switch and locking optimizations.
 - Change all callers of mi_switch() to pass the appropriate parameter and
   remove direct references to the process statistics.
2004-01-25 03:54:52 +00:00
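
A sketch of the bookkeeping this centralizes.  SW_VOL, SW_INVOL, and
mi_switch() are from the commit message; the KASSERT text and the rusage
field access are assumptions about the surrounding code.

void
mi_switch(int flags, struct thread *newtd)
{
        struct proc *p = curthread->td_proc;

        KASSERT((flags & (SW_VOL | SW_INVOL)) != 0,
            ("mi_switch: flags must include SW_VOL or SW_INVOL"));

        /* The switch type now drives the rusage counters here, instead
         * of at every call site. */
        if (flags & SW_VOL)
                p->p_stats->p_ru.ru_nvcsw++;    /* voluntary switch */
        else
                p->p_stats->p_ru.ru_nivcsw++;   /* involuntary switch */

        /* ... pick newtd (or choosethread()) and do the context switch ... */
}
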
Warner Losh
06b4bf3e55 Expand inline the relevant parts of src/COPYRIGHT for Matt Dillon's
copyrighted files.

Approved by: Matt Dillon
2003-08-12 23:24:05 +00:00
David E. O'Brien
874651b13c Use __FBSDID(). 2003-06-11 23:50:51 +00:00
Dag-Erling Smørgrav
e8c7f48855 Rename a static variable to avoid future conflicts. 2003-04-04 12:08:42 +00:00
Jeff Roberson
b43179fbe8 - Create a new scheduler api that is defined in sys/sched.h
- Begin moving scheduler specific functionality into sched_4bsd.c
 - Replace direct manipulation of scheduler data with hooks provided by the
   new api.
 - Remove KSE specific state modifications and single runq assumptions from
   kern_switch.c

Reviewed by:	-arch
2002-10-12 05:32:24 +00:00
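
A sketch of the shape of such an interface.  Only sched_add() and
sched_switch() are named elsewhere in this log; the other prototypes are
illustrative assumptions, not the actual contents of sys/sched.h.

/* Hooks a scheduler implementation (e.g. sched_4bsd.c) would provide. */
struct thread;

int     sched_runnable(void);                   /* anything ready to run? */
void    sched_add(struct thread *td);           /* place td on a run queue */
void    sched_rem(struct thread *td);           /* take td back off */
void    sched_switch(struct thread *td);        /* context-switch handoff */
void    sched_clock(struct thread *td);         /* statistics clock tick */
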
Peter Wemm
16e12eab5a Set P_NOLOAD on the pagezero kthread so that it doesn't artificially skew
the loadav.  This is not real load.  If you have a nice process running in
the background, pagezero may sit in the run queue for ages and add one to
the loadav, thereby affecting other scheduling decisions.
2002-07-19 21:06:01 +00:00
Alan Cox
f23050633f o Remove the acquisition and release of Giant from the idle priority thread
that pre-zeroes free pages.
 o Remove GIANT_REQUIRED from some low-level page queue functions.  (Instead
   assertions on the page queue lock are being added to the higher-level
   functions, like vm_page_wire(), etc.)

In collaboration with:	peter
2002-07-18 17:40:07 +00:00
Alan Cox
072e9cbb50 o Use vm_pageq_remove_nowakeup() and vm_pageq_enqueue() in
vm_page_zero_idle() instead of partially duplicated implementations.
   In particular, this change guarantees that the number of free pages
   in the free queue(s) matches the global free page count when Giant
   is released.

Submitted by:	peter (via his p4 "pmap" branch)
2002-07-16 19:39:40 +00:00
Matthew Dillon
fbcf77c2ea Re-enable the idle page-zeroing code. Remove all IPIs from the idle
page-zeroing code as well as from the general page-zeroing code and use a
lazy tlb page invalidation scheme based on a callback made at the end
of mi_switch.

A number of people came up with this idea at the same time so credit
belongs to Peter, John, and Jake as well.

Two-way SMP buildworld -j 5 tests (second run, after stabilization)
    2282.76 real  2515.17 user  704.22 sys	before peter's IPI commit
    2266.69 real  2467.50 user  633.77 sys	after peter's commit
    2232.80 real  2468.99 user  615.89 sys	after this commit

Reviewed by:	peter, jhb
Approved by:	peter
2002-07-12 20:17:06 +00:00
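
A sketch of the lazy scheme described above.  The only things taken from
the commit are mi_switch() and the idea of a callback at switch time;
the flag, the function names, and the invltlb() call are illustrative
assumptions.

/* Remember that this CPU's mapping of the zeroing window is stale
 * instead of broadcasting an IPI, and clean up lazily. */
static int zeroidle_tlb_stale;          /* would be per-CPU in practice */

static void
zeroidle_mark_stale(void)
{
        zeroidle_tlb_stale = 1;         /* defer the invalidation */
}

/* Assumed to be invoked from the tail of mi_switch(). */
static void
zeroidle_switch_callback(void)
{
        if (zeroidle_tlb_stale) {
                zeroidle_tlb_stale = 0;
                invltlb();              /* flush only the local TLB */
        }
}
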
Peter Wemm
5e13bcd6c4 vm_page_queue_free_mtx is a spin mutex, not a normal sleep mutex.
I do not know why this didn't panic my box, but I have most certainly
been using it:
peter@overcee[3:14pm]~src/sys/i386/i386-110> sysctl -a | grep zero
vm.stats.misc.zero_page_count: 2235
vm.stats.misc.cnt_prezero: 638951
vm.idlezero_enable: 1
vm.idlezero_maxrun: 16

Submitted by:	Tor.Egge@cvsup.no.freebsd.org
Approved by:	Tor's patches are never wrong. :-)
2002-07-08 23:12:37 +00:00
Peter Wemm
b428c5fd23 Turn the zeroidle process off for SMP systems, there is still a possible
TLB problem when bouncing from one cpu to another (the original cpu will
not have purged its TLB if it simply went idle).

Pointed out by:	 Tor.Egge@cvsup.no.freebsd.org
Approved by:	Tor is never wrong. :-)
2002-07-08 23:09:11 +00:00
Peter Wemm
a58b3a6878 Add a special page zero entry point intended to be called via the single
threaded VM pagezero kthread outside of Giant.  For some platforms, this
is really easy since it can just use the direct mapped region.  For others,
IPI sending is involved or there are other issues, so grab Giant when
needed.

We still have preemption issues to deal with, but Alan Cox has an
interesting suggestion on how to minimize the problem on x86.

Use Luigi's hack for preserving the (lack of) priority.

Turn the idle zeroing back on since it can now actually do something useful
outside of Giant in many cases.
2002-07-08 04:24:26 +00:00
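
A sketch of the platform split described above.  The entry-point name,
the direct-map macro, and the Giant fallback are placeholders standing
in for whatever each platform's pmap actually does.

/* Zero a free page for the pagezero kthread.  With a direct map there is
 * no temporary mapping to manage, so no Giant is needed; otherwise take
 * Giant and use the ordinary path. */
void
pmap_zero_page_idle(vm_page_t m)        /* name is an assumption */
{
#ifdef HAVE_DIRECT_MAP                  /* placeholder platform test */
        bzero((void *)DIRECT_MAP_VA(VM_PAGE_TO_PHYS(m)), PAGE_SIZE);
#else
        mtx_lock(&Giant);               /* IPIs / temporary mappings */
        pmap_zero_page(m);
        mtx_unlock(&Giant);
#endif
}
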
Alan Cox
25524d3eba o Lock accesses to the free queue(s) in vm_page_zero_idle(). 2002-07-07 19:27:57 +00:00
Julian Elischer
e602ba25fd Part 1 of KSE-III
The ability to schedule multiple threads per process
(on one CPU) by making ALL system calls optionally asynchronous.
To come: ia64 and power-pc patches, patches for gdb, test program (in tools)

Reviewed by:	Almost everyone who counts
	(at various times, peter, jhb, matt, alfred, mini, bernd,
	and a cast of thousands)

	NOTE: this is still Beta code, and contains lots of debugging stuff.
	expect slight instability in signals..
2002-06-29 17:26:22 +00:00
Peter Wemm
1a87a0da66 Pass vm_page_t instead of physical addresses to pmap_zero_page[_area]()
and pmap_copy_page().  This gets rid of a couple more physical addresses
in upper layers, with the eventual aim of supporting PAE and dealing with
the physical addressing mostly within pmap.  (We will need either 64 bit
physical addresses or page indexes, possibly both depending on the
circumstances.  Leaving this to pmap itself gives more flexibility.)

Reviewed by:	jake
Tested on:	i386, ia64 and (I believe) sparc64. (my alpha was hosed)
2002-04-15 16:00:03 +00:00
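
A sketch of the interface change described above; the "before" types are
what pre-PAE code used, and the body comment is an assumption about how
a pmap recovers the address when it needs one.

/* Before: MI code handed pmap raw physical addresses. */
void    pmap_zero_page(vm_offset_t pa);
void    pmap_copy_page(vm_offset_t src, vm_offset_t dst);

/* After: MI code passes the page itself; only pmap converts, e.g. via
 * VM_PAGE_TO_PHYS(m), so physical addresses can later grow to 64 bits
 * (PAE) without touching the callers. */
void    pmap_zero_page(vm_page_t m);
void    pmap_copy_page(vm_page_t src, vm_page_t dst);
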
Julian Elischer
2c1007663f In a threaded world, different priorities become properties of
different entities.  Make it so.

Reviewed by:	jhb@freebsd.org (john baldwin)
2002-02-11 20:37:54 +00:00
Julian Elischer
b40ce4165d KSE Milestone 2
Note ALL MODULES MUST BE RECOMPILED
make the kernel aware that there are smaller units of scheduling than the
process. (but only allow one thread per process at this time).
This is functionally equivalent to the previous -current except
that there is a thread associated with each process.

Sorry john! (your next MFC will be a doozy!)

Reviewed by: peter@freebsd.org, dillon@freebsd.org

X-MFC after:    ha ha ha ha
2001-09-12 08:38:13 +00:00
John Baldwin
29fdb744d1 Process priority is locked by the sched_lock, not the proc lock. 2001-09-01 20:16:30 +00:00
Peter Wemm
3516c025ff Implement idle zeroing of pages. I've been tinkering with this
on and off since John Dyson left his work-in-progress.

It is off by default for now.  sysctl vm.zeroidle_enable=1 to turn it on.

There are some hacks here to deal with the present lack of preemption - we
yield after doing a small number of pages since we won't preempt otherwise.

This is basically Matt's algorithm [with hysteresis] with an idle process
to call it in a similar way it used to be called from the idle loop.

I cleaned up the includes a fair bit here too.
2001-08-25 05:00:44 +00:00
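
A sketch of the loop shape described above (hysteresis plus an explicit
yield).  The sysctl knobs are quoted elsewhere in this log; the helper
names, counters, and the yield call are assumptions.

static void
vm_pagezero_loop(void)
{
        int zeroed;

        for (;;) {
                zeroed = 0;
                /* Hysteresis: start zeroing when the count of
                 * pre-zeroed pages drops below the low mark, stop once
                 * it climbs past the high mark. */
                while (idlezero_enable && zero_count_below_target()) {
                        zero_one_free_page();           /* assumed helper */
                        if (++zeroed >= idlezero_maxrun)
                                break;                  /* small batch only */
                }
                /* Nothing preempts an idle-priority kthread yet, so
                 * give the CPU back explicitly between batches. */
                yield_the_cpu();                        /* assumed helper */
        }
}
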
Benno Rice
1f246456a5 The i386-specific includes in this file were "fixed" by bracketing them with
#ifndef __alpha__.  Fix this for the rest of the world by turning it into
#ifdef __i386__.

Reviewed by:	obrien
2001-07-15 04:11:51 +00:00
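
A sketch of the fix described above; the header name is a placeholder,
only the two preprocessor conditions come from the commit message.

/* Before: every architecture except Alpha got the i386-only includes. */
#ifndef __alpha__
#include <machine/i386_only_header.h>  /* placeholder name */
#endif

/* After: only i386 gets them, which is what was meant all along. */
#ifdef __i386__
#include <machine/i386_only_header.h>  /* placeholder name */
#endif
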
Matt Jacob
f343cf2135 Apply field bandages to the includes so compiles happen on alpha. 2001-07-05 06:13:44 +00:00
Matthew Dillon
7197571105 Move vm_page_zero_idle() from machine-dependent sections to a
machine-independent source file, vm/vm_zeroidle.c.  It was exactly the
same for all platforms and updating them all was getting annoying.
2001-07-05 01:32:42 +00:00