Mirror of https://git.FreeBSD.org/src.git
Commit Graph

68 Commits

Author SHA1 Message Date
Jeff Roberson
1aca9909e5 - Only change the run queue in sched_prio() if the kse is non-null.  Threads
can be in the TD_ON_RUNQ state and not have an associated kse.
 - Remove the PRI_IDLE special case from sched_clock(), it was not actually
   necessary.
2003-10-28 03:28:48 +00:00
Jeff Roberson
3f741ca117 - Use a better algorithm in sched_pctcpu_update()
Contributed by:	Thomaswuerfl@gmx.de

 - In sched_prio(), adjust the run queue for threads which may need to move
   to the current queue due to priority propagation.
 - In sched_switch(), fix style bug introduced when the KSE support went in.
   Columns are 80 chars wide, not 90.
 - In sched_switch(), fix the comparison in the idle case and explicitly
   re-initialize the runq in the not propagated case.
 - Remove dead code in sched_clock().
 - In sched_clock(), if we're an IDLE-class td, set NEEDRESCHED so that threads
   that have become runnable will get a chance to run.
 - In sched_runnable(), if we're not the IDLETD, we should not consider
   curthread when examining the load.  This mimics the 4BSD behavior of
   returning 0 when the only runnable thread is running.
 - In sched_userret(), remove the code for setting NEEDRESCHED entirely.
   This is not necessary and is not implemented in 4BSD.
 - Use the correct comparison in sched_add() when checking to see if an idle
   prio task has had its priority temporarily elevated.
2003-10-27 06:47:05 +00:00
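The %CPU bookkeeping touched by the entry above keeps a per-kse tick count over a bounded window of recent time. Below is a minimal user-space model of that idea, a sketch only: the constants and field names (ticks, ftick, ltick) are illustrative stand-ins for the scheduler's counters, not the kernel code itself.

    #include <stdint.h>

    #define HZ          100                 /* illustrative clock rate        */
    #define CPU_WINDOW  (10 * HZ)           /* history window, in ticks       */

    struct cpu_hist {
            int64_t ticks;   /* ticks charged to the thread in the window     */
            int64_t ftick;   /* first tick covered by the history             */
            int64_t ltick;   /* last tick on which the thread was charged     */
    };

    /*
     * Once the recorded history outgrows the window, scale the tick count
     * down so it describes only CPU_WINDOW worth of time, then slide the
     * window forward.  The shift keeps precision through the integer divide.
     */
    static void
    pctcpu_update(struct cpu_hist *h, int64_t now)
    {
            int64_t span = h->ltick - h->ftick;

            if (span <= CPU_WINDOW)         /* history still fits */
                    return;
            h->ticks <<= 10;
            h->ticks = (h->ticks / span) * CPU_WINDOW;
            h->ticks >>= 10;
            h->ltick = now;
            h->ftick = h->ltick - CPU_WINDOW;
    }

    /* Percentage of the window during which the thread ran. */
    static int64_t
    pctcpu(const struct cpu_hist *h)
    {
            int64_t span = h->ltick - h->ftick;

            return (span > 0 ? (h->ticks * 100) / span : 0);
    }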
Jeff Roberson
484288de56 - If a thread is not bound to a kse return 0 from sched_pctcpu().
Reported by:	 pawel.worach@nordea.com
2003-10-20 19:55:21 +00:00
Jeff Roberson
0e0f626628 - Only kse_reassign() in the !running case.
Reported by:	kris
2003-10-16 20:32:57 +00:00
Jeff Roberson
0c7da3a43d - Call sched_add() with the correct argument on SMP.
Reported by:	Valentin Chopov <valentin@valcho.net>
2003-10-16 20:06:19 +00:00
Jeff Roberson
b72f347bdb - Fix a minor problem with my last commit: we don't want to return from
sched_switch() if the thread is running; we want to fall through and pick
   a new thread because we have been preempted.
2003-10-16 10:04:54 +00:00
Jeff Roberson
ae53b483cc - Collapse sched_switchin() and sched_switchout() into sched_switch(). Now
mi_switch() calls sched_switch() which calls cpu_switch().  This is
   actually one less function call than it had been.
2003-10-16 08:53:46 +00:00
Jeff Roberson
7cf90fb376 - Update the sched API. sched_{add,rem,clock,pctcpu} now all accept a td
argument rather than a kse.
2003-10-16 08:39:15 +00:00
Jeff Roberson
4c9612c622 - The non-iterative algorithm for interact_update was broken due to
rounding errors.  This was the source of the majority of the
   interactivity problems.  Reintroduce the old algorithm and its XXX.
 - Up the interactivity threshold to 30.  It really could stand to be even
   a tiny bit higher.
 - Let the sleep and run time accumulate up to 5 seconds of history rather
   than two.  This helps stop XFree86 from becoming non-interactive during
   bursts of activity.
2003-10-16 08:17:43 +00:00
Jeff Roberson
08fd6713b2 - If our user_pri doesn't match our actual priority, our priority has been
elevated either due to priority propagation or because we're in the
   kernel.  In either case, put us on the current queue so that we don't
   stop others from using important resources.  At some point the priority
   elevations from sleeping in the kernel should go away.
 - Remove an optimization in sched_userret().  Before we would only set
   NEEDRESCHED if there was something of a higher priority available.  This
   is a trivial optimization and it breaks priority propagation because it
   doesn't take threads which we may be blocking into account.  Notice that
   the thread which is blocking others gets up to one tick of cpu time before
   we honor this NEEDRESCHED in sched_clock().
2003-10-15 07:47:06 +00:00
Jeff Roberson
736c97c7b3 - In SCHED_CURR() add holding Giant to the list of criteria that will keep
you on the current queue.  In the future, it would be nice if priority
   propagation could deterministically pluck a thread off of the next queue
   and put it on the current queue.  Until then this hack stops us from
   holding up our entire current queue, including interrupt handlers, while
   a thread on the next queue is blocked while holding Giant.
 - Inherit our pctcpu information from our parent.
2003-10-12 21:07:31 +00:00
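Taken together with the previous entry, the criteria for landing on the current (rather than next) timeshare queue can be summarized as below. This is only an illustrative model: the struct and helper names are invented for the example, and the real check lives in the SCHED_CURR() macro named above.

    #include <stdbool.h>

    /* Minimal stand-in for the per-thread state the real check consults. */
    struct thread_state {
            int  td_priority;     /* priority the thread is running at now   */
            int  kg_user_pri;     /* priority it would have in user mode     */
            bool interactive;     /* interactivity score below the threshold */
            bool holds_giant;     /* currently holds the Giant lock          */
    };

    /*
     * A thread stays on the current queue if it is interactive, if its
     * priority has been elevated (propagation or a kernel sleep, detected
     * by td_priority != kg_user_pri), or -- the stopgap described above --
     * if it holds Giant, so a Giant holder parked on the next queue cannot
     * stall everything on the current one.
     */
    static bool
    goes_on_current_queue(const struct thread_state *ts)
    {
            return (ts->interactive ||
                ts->td_priority != ts->kg_user_pri ||
                ts->holds_giant);
    }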
Jeff Roberson
8ec82641d8 - Change a lame iterative algorithm to a constant time algorithm. Remove
the XXX that complains about it as well.

Submitted by:	ThomasWuerfl@gmx.de
2003-10-04 17:41:13 +00:00
Jeff Roberson
81de51bf1d - Somewhere along the line I stupidly removed critical logic from
sched_pctcpu_update().  This caused erroneous cpu times in TOP for
   processes that were asleep.  Replace the code that was removed.
2003-09-20 02:05:58 +00:00
David Xu
ab2baa7254 Let SA processes work under the ULE scheduler; previously it would panic the kernel.
Reviewed by: jeff
2003-08-26 11:33:15 +00:00
Sam Leffler
c06eb4e293 Change instances of callout_init that specify MPSAFE behaviour to
use CALLOUT_MPSAFE instead of "1" for the second parameter.  This
does not change the behaviour; it just makes the intent more clear.
2003-08-19 17:51:11 +00:00
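For instance, a driver callout that was previously initialized with a bare "1" would now spell the intent out. The softc and field names below are hypothetical; callout_init() and CALLOUT_MPSAFE are the real interfaces the commit refers to.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/callout.h>

    struct mydrv_softc {
            struct callout ch;      /* hypothetical per-device callout */
    };

    static void
    mydrv_init_timer(struct mydrv_softc *sc)
    {
            /* Before: callout_init(&sc->ch, 1);  "1" said nothing about intent. */
            callout_init(&sc->ch, CALLOUT_MPSAFE);
    }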
Jeff Roberson
0c0a98b231 - When stealing a kse in kseq_move() ignore the current kseq's min nice
value.  We want to steal any thread, even one that is not given a slice
   on its current queue.
2003-07-08 06:19:40 +00:00
Jeff Roberson
0ec896fd28 - Clean up an unused variable.
Submitted by:	Steve Kargl <sgk@troutmask.apl.washington.edu>
2003-07-07 21:08:28 +00:00
Jeff Roberson
749d01b011 - Parse the cpu topology map in sched_setup().
- Associate logical CPUs on the same physical core with the same kseq.
 - Adjust code that assumed there would only be one running thread in any
   kseq.
 - Wrap the HTT code with a ULE_HTT_EXPERIMENTAL ifdef.  This is a start
   towards HyperThreading support but it isn't quite there yet.
2003-07-04 19:59:00 +00:00
Jeff Roberson
7a20304f84 - Don't migrate to stopped cpus. 2003-06-28 09:09:33 +00:00
Jeff Roberson
86f8ae9663 - If SMP is not started yet, don't try to load balance, or we'll put threads
on CPUs that aren't running yet.
2003-06-28 08:24:42 +00:00
Jeff Roberson
a91172ade1 - Throttle the inherited sleep and run time in sched_fork_kseg(). This
allows us to learn the behavior of a thread much more quickly after it
   starts up.
2003-06-28 06:19:56 +00:00
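A sketch of the throttling idea described above, with an invented factor; the kernel's constant and field names differ.

    #include <stdint.h>

    #define FORK_THROTTLE 4          /* illustrative; not the kernel's value */

    struct interact_hist {
            uint64_t runtime;        /* accumulated run time   */
            uint64_t slptime;        /* accumulated sleep time */
    };

    /*
     * On fork, hand the child only a fraction of the parent's accumulated
     * history.  With less inherited history, the child's own behavior
     * dominates its interactivity score much sooner.
     */
    static void
    fork_inherit_history(const struct interact_hist *parent,
        struct interact_hist *child)
    {
            child->runtime = parent->runtime / FORK_THROTTLE;
            child->slptime = parent->slptime / FORK_THROTTLE;
    }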
Jeff Roberson
e493a5d90c - Adjust the default maximum slice value to ~140ms. This has improved the
nice distribution without significantly impacting interactive response.
   As a side effect it should also allow batch processes to run for a
   slightly longer period which will positively impact their performance.
2003-06-28 06:04:47 +00:00
Jeff Roberson
1a7a9d0ec2 - lticks was erroneously being updated in sched_pctcpu(). This was causing
us to skip the pctcpu_update() call, which led to inaccurate cpu usage
   statistics for processes that didn't run often.
2003-06-21 02:31:49 +00:00
Jeff Roberson
665cb285a8 - Don't allow nice to have such a large effect on priority. This was
causing poor interactive performance while unnice processes were running.
   The new scheme still allows nice to have an effect on priority but it is
   not as dramatic as the effect of the interactivity score.
2003-06-21 02:22:47 +00:00
Jeff Roberson
d07ac847ef - Use a more robust mechanism for determining whether or not a kse is on a
kseq.
2003-06-17 19:49:18 +00:00
Jeff Roberson
7cd0f83355 - Temporarily patch a problem where the interact score could be negative
because the run time exceeds the largest value a signed int can hold.
   The real solution involves calculating how far we are over the limit.
   To quickly solve this problem we loop removing 1/5th of the current value
   until it falls below the limit.  The common case requires no passes.
2003-06-17 10:21:34 +00:00
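The stopgap described above amounts to the loop below. The limit value is illustrative (the kernel's SLP_RUN_MAX is expressed in its own units), and in the scheduler the counters live in the ksegrp rather than a standalone struct.

    #include <stdint.h>

    #define SLP_RUN_MAX (2 * 1024 * 1024)   /* illustrative limit */

    struct interact_hist {
            uint64_t runtime;        /* accumulated run time   */
            uint64_t slptime;        /* accumulated sleep time */
    };

    /*
     * While the accumulated history exceeds the limit, shave a fifth off
     * both counters.  The common case needs no pass at all; pathological
     * values converge in a handful of passes.
     */
    static void
    interact_throttle(struct interact_hist *h)
    {
            while (h->runtime + h->slptime > SLP_RUN_MAX) {
                    h->runtime -= h->runtime / 5;
                    h->slptime -= h->slptime / 5;
            }
    }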
Jeff Roberson
4b60e3242e - Add a new function "sched_interact_update()" that scales back the sleep
and run time.
 - Scale the sleep and run time back via sched_interact_update() in more
   places.  This is to keep the statistic more accurate.
 - Charge a parent one tick for forking a child.
 - Add only the run time and not the sleep time to the parent's kg when a
   thread exits.  This allows us to give a penalty for having an expensive
   thread exit but does not give a bonus for having an interactive thread
   exit.
 - Change the SLP_RUN_THROTTLE to limit us to 4/5th and not 1/2.
 - Change the SLP_RUN_MAX to two seconds.  This keeps bursty interactive
   applications like mozilla and openoffice in the interactive range even
   through expensive tasks.
 - Recalculate the slice after every sleep.  This ensures that once a task
   has been marked interactive it only has a slice of 1, at the risk of
   giving tasks that sleep for a very brief period a longer time slice.
2003-06-17 06:39:51 +00:00
Jeff Roberson
3c12473229 - Increase the ksegrp's cpu time history buffer to 250ms.
- Decrease the history buffer divisor to 2 so that we remember more of the
   old behavior.
2003-06-15 04:14:25 +00:00
Jeff Roberson
b41f3d22cc - Cap the growth of sleep and run time in sched_exit_kse(). 2003-06-15 02:52:29 +00:00
Jeff Roberson
210491d3d9 - Fix the maximum slice value. I accidentally checked in a value of '2'
which meant no process would run for longer than 20ms.
 - Slightly redo the interactivity scorer.  It follows the same algorithm but
   in a slightly more correct way.  Previously values above half were
   incorrect.
 - Lower the interactivity threshold to 20.  It seems that in testing non-
   interactive tasks are hardly ever near there and expensive interactive
   tasks can sometimes surpass it.  This area needs more testing.
 - Remove an unnecessary KTR.
 - Fix a case where an idle thread that had an elevated priority due to
   priority prop. would be placed back on the idle queue.
 - Delay setting NEEDRESCHED until userret() for threads that had their
   priority elevated while in kernel.  This gives us the same context switch
   optimization as SCHED_4BSD.
 - Limit the child's slice to 1 in sched_fork_kse() so we detect its behavior
   more quickly.
 - Inherit some of the run/slp time from the child in sched_exit_ksegrp().
 - Redo some of the priority comparisons so they are more clear.
 - Throttle the frequency of sched_pctcpu_update() so that rounding errors
   do not make it invalid.
2003-06-15 02:18:29 +00:00
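A user-space model of the scorer described above; the constants and rounding details are illustrative, not the kernel's exact values. Sleeping more than running yields a score in roughly the lower half of the range, running more than sleeping yields roughly the upper half, and the entry above lowers the "interactive" cutoff to 20.

    #include <stdint.h>

    #define INTERACT_MAX   100
    #define INTERACT_HALF  (INTERACT_MAX / 2)

    /* Lower scores are more interactive; with realistic (large) inputs the
     * integer rounding error is negligible. */
    static int
    interact_score(uint64_t runtime, uint64_t slptime)
    {
            uint64_t div;

            if (runtime > slptime) {
                    div = runtime / INTERACT_HALF;
                    if (div == 0)
                            div = 1;
                    return (int)(INTERACT_MAX - slptime / div);
            }
            if (slptime > runtime) {
                    div = slptime / INTERACT_HALF;
                    if (div == 0)
                            div = 1;
                    return (int)(runtime / div);
            }
            return (runtime != 0 ? INTERACT_HALF : 0);
    }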
David Xu
0e2a4d3aeb Rename P_THREADED to P_SA. P_SA means a process is using scheduler
activations.
2003-06-15 00:31:24 +00:00
David E. O'Brien
677b542ea2 Use __FBSDID(). 2003-06-11 00:56:59 +00:00
Jeff Roberson
356500a306 - Add a simple CPU load balancing algorithm. This works by executing once a
second and equalizing the load between the two most imbalanced CPUs.  This
   is intended to clear up long term load imbalances that would not be handled
   by the 'pull' method in sched_choose().
 - Pull out some bits of sched_choose() into a kseq_move() function that moves
   an arbitrary thread from one kseq to another.
2003-06-09 00:39:09 +00:00
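A minimal model of the balancer described above: find the most and least loaded CPUs and migrate threads until their loads are within one of each other. The types and helper are invented for the example; in the scheduler the per-thread move is the kseq_move() helper, and the pass runs roughly once per second.

    #include <stddef.h>

    struct cpu_queue {
            int load;               /* number of runnable threads */
    };

    /* Stand-in for handing one thread from queue 'from' to queue 'to'. */
    static void
    move_one_thread(struct cpu_queue *from, struct cpu_queue *to)
    {
            from->load--;
            to->load++;
    }

    static void
    balance(struct cpu_queue q[], size_t ncpu)
    {
            size_t i, high = 0, low = 0;

            for (i = 1; i < ncpu; i++) {
                    if (q[i].load > q[high].load)
                            high = i;
                    if (q[i].load < q[low].load)
                            low = i;
            }
            while (q[high].load > q[low].load + 1)
                    move_one_thread(&q[high], &q[low]);
    }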
Jeff Roberson
b90816f188 - When a new thread is added to a kseq the load is incremented prior to
adding it to the nice tables.  Therefore, in kseq_add_nice, we should
   keep in mind that the load will be 1 if we are the only thread, and not
   0.
 - Assert that the sched lock is held in all the appropriate places.
 - Increase the scope of the sched lock in sched_pctcpu_update().
 - Hold the sched lock in sched_runnable().  It is not held by the caller.
2003-06-08 00:47:33 +00:00
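The assertion pattern added throughout the file looks like the fragment below; sched_example() is a placeholder name, while sched_lock and mtx_assert() are the kernel symbols of that era.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    static void
    sched_example(void)
    {
            /* Enforced in INVARIANTS kernels, documentation otherwise. */
            mtx_assert(&sched_lock, MA_OWNED);
            /* ... manipulate run queues, loads, and nice tables ... */
    }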
Julian Elischer
43fdafb1e1 Fix typo in last commit 2003-05-02 06:18:55 +00:00
Julian Elischer
b1ac98d8b2 Move the flag that indicates an idle thread from the KSE to the thread.
It was always referenced via the thread anyhow.

Reviewed by:	jhb (a LOOOOONG time ago)
2003-05-02 00:33:12 +00:00
John Baldwin
2056d0a168 Add lock assertions for various proc/thread/kse/ksegroup fields to the
scheduler functions.
2003-04-23 18:51:05 +00:00
John Baldwin
0b5318c81a - Assert that the proc lock and sched_lock are held in sched_nice().
- For the 4BSD scheduler, this means that all callers of the static
  function resetpriority() now always hold sched_lock, so don't lock
  sched_lock explicitly in that function.
2003-04-22 20:50:38 +00:00
John Baldwin
828e7683bf Protect p_swtime with the sched_lock. 2003-04-22 19:48:25 +00:00
Jeff Roberson
7cd650a972 - Set the ke_cpu field in sched_add() for interrupt and realtime threads
since they are going on the current cpu and not their previously assigned
   cpu.
 - sched_runnable() should only return true in the SMP case if the other
   processor has more than one thread that is runnable.  We can not steal
   curthread.
 - Change kseq_print() to accept the cpuid instead of a kseq pointer.  This
   makes use of this function in ddb much easier.
2003-04-18 05:24:10 +00:00
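A model of the SMP check described above: our own queue counts if it has anything runnable, but a remote queue only matters if it holds more than the single thread already running there, since curthread cannot be stolen. The struct and function names are invented for the example.

    #include <stdbool.h>
    #include <stddef.h>

    struct cpu_queue {
            int load;       /* runnable threads, including the one running */
    };

    static bool
    sched_runnable_model(const struct cpu_queue q[], size_t ncpu, size_t self)
    {
            size_t i;

            for (i = 0; i < ncpu; i++) {
                    if (i == self) {
                            if (q[i].load > 0)
                                    return (true);
                    } else if (q[i].load > 1) {
                            return (true);
                    }
            }
            return (false);
    }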
Jeff Roberson
a5f099d0c4 - Unbreak priority prop. for timeshare threads. Always place something on
the current queue if its priority is really elevated.  This needs more work
   as there are cases where a next queue kse could be holding up what would
   be a curr queue kse, and thus hurting interactivity.  Also, when a thread
   with an elevated priority has its priority lowered it should be placed
   back on the next queue.
2003-04-12 22:33:24 +00:00
Jeff Roberson
9bca28a703 - Clean up some debug code left over from my earlier megacommit. 2003-04-12 07:28:36 +00:00
Jeff Roberson
b5c4c4a7e5 - We only care about the base priority. Ignore the SCHED_FIFO_BIT so that
we don't get confused.

Reported and debugged by:	Steve Kargl <sgk@troutmask.apl.washington.edu>
2003-04-12 07:00:16 +00:00
Jeff Roberson
141ad61c78 - Add sched_exit_*
- Call sched_exit_kse() from sched_exit() instead of implementing it here.
2003-04-11 19:24:00 +00:00
Jeff Roberson
58177de2de - Only select kseqs with more than one kse to steal. The running kse
is reflected in the load now and you can't very well migrate that.
2003-04-11 18:40:34 +00:00
Jeff Roberson
c36ccfa22b - When migrating a kse from one kseq to the next actually insert it onto
the second kseq's run queue so that it is referenced by the kse when
   it is switched out.
 - Spell ksq_rslices properly.

Reported by:	Ian Freislich <ianf@za.uu.net>
2003-04-11 18:37:34 +00:00
Jeff Roberson
15dc847e52 - Add a SYSCTL node for the ule scheduler.
- Allow user adjustable min and max time slices (suggested by hiten).
 - Change the SLP_RUN_MAX to 100ms from 2 seconds so that we learn whether a
   process is interactive or not much more quickly.
 - Place a process on the current run queue if it is interactive or if it is
   running at an interrupt thread priority due to priority prop.
 - Use the 'current' timeshare queue for interrupt threads, realtime threads,
   and idle threads that are running at higher priority due to priority prop.
   This fixes problems where priorities would have been elevated but we would
   not check the timeshare run queue until other lower priority tasks were
   no longer runnable.
 - Keep an array of loads indexed by the priority class as well as a global
   load.
 - Keep a bucket of nice values with a count of the number of kses currently
   runnable with that nice value.
 - Keep track of the minimum nice value of any running thread.
 - Remove the unused short term sleep accounting.  I was attempting to use
   this for load balancing but it didn't work out.
 - Define a kseq_print() for use with debugging.
 - Add KTR debugging at useful places so we can easily debug slice and
   priority assignment.
 - Decouple the runq assignment from the kseq assignment.  kseq_add now keeps
   track of statistics.  This is done so that the nice and load are still
   tracked for the currently running process.  Previously, if a niced process
   was added while a non-niced process was running, the niced process would
   still get a slice since it was not aware of the non-niced process.
 - Make adjustments for the sched api changes.
2003-04-11 03:47:14 +00:00
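The sysctl knobs mentioned in the first two items of the entry above might be declared roughly as follows; the node matches the description ("a SYSCTL node for the ule scheduler"), but the variable names and default values here are illustrative.

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    /* Illustrative tunables; the actual names in sched_ule.c may differ. */
    static int slice_min = 10;      /* shortest time slice handed out */
    static int slice_max = 140;     /* longest time slice handed out  */

    SYSCTL_NODE(_kern, OID_AUTO, sched, CTLFLAG_RW, 0, "ULE scheduler");
    SYSCTL_INT(_kern_sched, OID_AUTO, slice_min, CTLFLAG_RW,
        &slice_min, 0, "Minimum time slice");
    SYSCTL_INT(_kern_sched, OID_AUTO, slice_max, CTLFLAG_RW,
        &slice_max, 0, "Maximum time slice");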
Julian Elischer
060563ec50 Move the _oncpu entry from the KSE to the thread.
The entry in the KSE still exists but its purpose will change a bit
when we add the ability to lock a KSE to a cpu.
2003-04-10 17:35:44 +00:00
Jeff Roberson
a8949de20e - Keep separate statistics and run queues for different scheduling classes.
- Treat each class specially in kseq_{choose,add,rem}.  Let the rest of the
   code be less aware of scheduling classes.
 - Skip the interactivity calculation for non-TIMESHARE ksegrps.
 - Move slice and runq selection into kseq_add().  Uninline it now that it's
   big.
2003-04-03 00:29:28 +00:00
Jeff Roberson
5053d272c2 - Make the interactivity calculator decay faster.
- Make the pcpu estimator update faster.
2003-04-02 08:22:33 +00:00