mirror of https://git.FreeBSD.org/src.git synced 2024-12-30 12:04:07 +00:00
Commit Graph

5956 Commits

Author SHA1 Message Date
Julian Elischer
4a338afd7a Move a bunch of flags from the KSE to the thread.
I was in two minds as to where to put them in the first place.
I should have listened to the other mind.

Submitted by:	 parts by davidxu@
Reviewed by:	jeff@ mini@
2003-02-17 09:55:10 +00:00
Jeff Roberson
5215b1872f - Split the struct kse into struct upcall and struct kse. struct kse will
soon be visible only to schedulers.  This greatly simplifies much of the
   KSE code.

Submitted by:	davidxu
2003-02-17 05:14:26 +00:00
Jeff Roberson
e4625663c9 - Move ke_sticks, ke_iticks, ke_uticks, ke_uu, ke_su, and ke_iu back into
the proc.  These counters are only examined through calcru.

Submitted by:	davidxu
Tested on:	x86, alpha, UP/SMP
2003-02-17 02:19:58 +00:00
Alfred Perlstein
9d4156aed3 Fix logic in loop so it actually executes.
Pointed out by: fjoe
2003-02-16 16:12:10 +00:00
Poul-Henning Kamp
f341ca9891 Remove #include <sys/dkstat.h> 2003-02-16 14:13:23 +00:00
Poul-Henning Kamp
3abd4ccf87 Move the tty related statistics counters to live with the tty code. 2003-02-16 13:22:15 +00:00
Jeff Roberson
71146186a1 - Introduce a new function bremfreel() that does a bremfree with the buf
queue lock already held.
 - In getblk() and flushbufqueues() use bremfreel() while we still have the
   buf queue lock held to keep the lists consistent.
 - Add LK_NOWAIT to two cases where we're essentially asserting that the bufs
   are not locked while acquiring the locks.  This will make sure that we get
   the appropriate panic() and not another one for sleeping with a lock held.
2003-02-16 10:43:06 +00:00
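
The bremfreel() change above follows the usual locked-variant convention: the plain function takes the queue lock itself, while the "l"-suffixed variant assumes the caller already holds it. A minimal userland sketch of that pattern, assuming pthreads and invented types rather than the real buf-queue code:

#include <pthread.h>
#include <stddef.h>

/* Illustrative stand-ins, not the real kernel buf queue. */
struct buf { struct buf *b_next; };
static struct buf *bufqueue;
static pthread_mutex_t bqlock = PTHREAD_MUTEX_INITIALIZER;

/* Locked variant: caller must already hold bqlock. */
static void
bremfreel(struct buf *bp)
{
        struct buf **pp;

        for (pp = &bufqueue; *pp != NULL; pp = &(*pp)->b_next) {
                if (*pp == bp) {
                        *pp = bp->b_next;       /* unlink from the queue */
                        break;
                }
        }
}

/* Unlocked wrapper: acquires the queue lock around the locked variant. */
static void
bremfree(struct buf *bp)
{
        pthread_mutex_lock(&bqlock);
        bremfreel(bp);
        pthread_mutex_unlock(&bqlock);
}

int
main(void)
{
        struct buf b = { NULL };

        bufqueue = &b;
        bremfree(&b);   /* a caller already holding bqlock would use bremfreel() */
        return (0);
}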
Jeff Roberson
5e8feb5bed - Add a WITNESS_SLEEP() for the appropriate cases in lockmgr(). 2003-02-16 10:39:49 +00:00
Alfred Perlstein
5015c68a3c prevent overflow in shminfo.shmmax 2003-02-16 06:08:55 +00:00
Jeffrey Hsu
a44009e07d Remove extraneous FILEDESC_LOCK around atomic read. 2003-02-16 02:15:15 +00:00
Andrew R. Reiter
1f5a94d5f6 - Update a couple of comments to make sense with what today's code is
doing (stale comments make arr something something ;)).
2003-02-15 23:25:12 +00:00
Tor Egge
218a01e062 Avoid file lock leakage when linuxthreads port or rfork is used:
- Mark the process leader as having an advisory lock
  - Check if process leader is marked as having advisory lock when
    closing file
  - Check that file is still open after lock has been obtained
  - Don't allow file descriptor table sharing between processes
    with different leaders

PR:		10265
Reviewed by:	alfred
2003-02-15 22:43:05 +00:00
Andrew R. Reiter
da8f0c8429 - Remove the old comment for PURGE(), which no longer exists; its
placement implied it described cache_zap().
- Add a comment to quickly state what cache_zap() does.

Reviewed by:	phk, mux
2003-02-15 18:58:06 +00:00
Tim J. Robbins
4444375710 Acquire Giant around calls to kern_sigaction() in sigaction(),
freebsd4_sigaction() and osigaction() instead of around the whole
body of those functions. They now no longer hold Giant around calls
to copyin() and copyout(), and it is slightly more obvious what
Giant is protecting.
2003-02-15 09:56:09 +00:00
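
The change above narrows the lock scope so the copies to and from user space happen outside Giant. A rough userland analogue of that shape, with hypothetical names (giant_lock, kern_setaction, sys_setaction) standing in for the kernel code:

#include <pthread.h>
#include <string.h>

struct setaction_args { int sig; int handler; };

static pthread_mutex_t giant_lock = PTHREAD_MUTEX_INITIALIZER;
static int handlers[64];        /* stand-in for per-process signal state */

/* Core operation (the kern_sigaction() analogue): runs with the lock held. */
static void
kern_setaction(int sig, int handler)
{
        if (sig >= 0 && sig < 64)
                handlers[sig] = handler;
}

/* Syscall-style wrapper: the copy in happens before the lock is taken. */
static int
sys_setaction(const struct setaction_args *uap)
{
        struct setaction_args a;

        memcpy(&a, uap, sizeof(a));     /* "copyin" outside the lock */
        pthread_mutex_lock(&giant_lock);
        kern_setaction(a.sig, a.handler);
        pthread_mutex_unlock(&giant_lock);
        return (0);                     /* any "copyout" would also run unlocked */
}

int
main(void)
{
        struct setaction_args a = { 2, 1 };

        return (sys_setaction(&a));
}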
Tim J. Robbins
c41c566c4a osigpending() no longer needs Giant, for the same reason sigpending()
does not.
2003-02-15 09:15:30 +00:00
Tim J. Robbins
48e8f774cb All uses of p_siglist are protected by the proc lock now, so there's
no need to acquire Giant in sigpending() anymore.
2003-02-15 08:42:02 +00:00
Alfred Perlstein
e7d6662f1b Do not allow kqueues to be passed via unix domain sockets. 2003-02-15 06:04:55 +00:00
Alfred Perlstein
edf6699ae6 Fix LOR with PROC/filedesc. Introduce fdesc_mtx that will be used as a
barrier around freeing filedesc structures.  Basically, if you want to
access another process's filedesc, you want to hold this mutex over the
entire operation.
2003-02-15 05:52:56 +00:00
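
A sketch of the barrier idea: the same mutex is held both while peeking at another process's filedesc and on the teardown path that frees it, so the structure cannot go away mid-access. The pthread locking and the peek_refcnt()/drop_filedesc() bodies below are illustrative only, not the kernel code:

#include <pthread.h>
#include <stdlib.h>

struct filedesc { int fd_refcnt; };
struct proc { struct filedesc *p_fd; };

/* One mutex serializing cross-process filedesc access against teardown. */
static pthread_mutex_t fdesc_mtx = PTHREAD_MUTEX_INITIALIZER;

/* Look at another process's filedesc: hold fdesc_mtx over the whole operation. */
static int
peek_refcnt(struct proc *p)
{
        int refs = -1;

        pthread_mutex_lock(&fdesc_mtx);
        if (p->p_fd != NULL)            /* may have been torn down already */
                refs = p->p_fd->fd_refcnt;
        pthread_mutex_unlock(&fdesc_mtx);
        return (refs);
}

/* Teardown path: the same mutex acts as the barrier before freeing. */
static void
drop_filedesc(struct proc *p)
{
        struct filedesc *fdp;

        pthread_mutex_lock(&fdesc_mtx);
        fdp = p->p_fd;
        p->p_fd = NULL;
        pthread_mutex_unlock(&fdesc_mtx);
        free(fdp);
}

int
main(void)
{
        struct proc p = { calloc(1, sizeof(struct filedesc)) };

        (void)peek_refcnt(&p);
        drop_filedesc(&p);
        return (0);
}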
Bosko Milekic
9e7225808e Make m_getm() always return the top of the newly allocated chain, as
opposed to returning the top of the old chain when there was one and
the top of the newly allocated chain if there was no old chain.

Actually, it should be noted that prior to this fix, although the
comment above m_getm() advertised that m_getm() would return the
top of the old chain (if an old chain was being passed in) it
actually [wrongly] was returning the tail mbuf in the old chain
instead.  This is a bug but since the one use of m_getm() in
the tree luckily did not depend on the behavior, it happened
to work out without notice.

Harti Brandt pointed out that the advertised behavior was actually
not the real behavior and so this change makes m_getm() ALWAYS
return the newly allocated chain (and fixes the comment).  This
is less confusing and is the best course of action as then the
caller is always able to have both a reference to the top of
the original chain (because it passes it in the call) and
a reference to the newly attached chain.  Although the API is
slightly modified, I don't think that any third-party code uses
m_getm() and if it does, it surely can't be working properly
because the old behavior was bogus.

API bug pointed out by: Harti Brandt <brandt@fokus.fraunhofer.de>
2003-02-14 16:50:13 +00:00
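
A small illustration of the return-value convention settled on above, using a generic linked chain instead of real mbufs: the routine always returns the head of the newly allocated sub-chain, and the caller keeps the head of the original chain because it passed it in. The chain_getm() name and node type are invented for this sketch:

#include <stdio.h>
#include <stdlib.h>

struct node { struct node *next; };

/*
 * Append 'count' newly allocated nodes to 'chain' (which may be NULL) and,
 * following the fixed m_getm() convention, always return the head of the
 * NEWLY allocated sub-chain -- never the old chain's head or tail.
 */
static struct node *
chain_getm(struct node *chain, int count)
{
        struct node *newhead = NULL, **tailp = &newhead;

        while (count-- > 0) {
                *tailp = calloc(1, sizeof(struct node));
                if (*tailp == NULL)
                        return (NULL);
                tailp = &(*tailp)->next;
        }
        if (chain != NULL) {
                while (chain->next != NULL)     /* find the old tail... */
                        chain = chain->next;
                chain->next = newhead;          /* ...and attach the new chain */
        }
        return (newhead);
}

int
main(void)
{
        struct node *orig = calloc(1, sizeof(struct node));
        struct node *fresh = chain_getm(orig, 3);

        /* The caller keeps 'orig' (it passed it in) and gets 'fresh' back. */
        printf("old head %p, new sub-chain head %p\n",
            (void *)orig, (void *)fresh);
        return (0);
}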
Dag-Erling Smørgrav
af2eed6648 Style nit. 2003-02-14 13:30:25 +00:00
Alfred Perlstein
3dc593c895 KASSERT format string does not need newline termination 2003-02-14 13:28:44 +00:00
Alfred Perlstein
0c5f7aaab5 Add kasserts to catch bad API usage.
Submitted by: Hiten Pandya <hiten@unixdaemons.com>
2003-02-14 13:18:51 +00:00
Alfred Perlstein
c11110eabe Fix crash dumps on ata and scsi.
To fix scsi, don't wait for ithreads if we're dumping; it makes the
debugger sad.

To fix ata, use what appears to be a polling method if we're dumping;
I stole this from tmm but added code to ensure that this change is
only in effect while dumping.

Tested by: des
2003-02-14 13:10:40 +00:00
Alfred Perlstein
e95499bd4c style. 2003-02-14 12:44:48 +00:00
Alfred Perlstein
aae87a3681 Print a backtrace in case we tsleep from inside of DDB. 2003-02-14 12:44:07 +00:00
Alan Cox
2bd63062b5 Use atomic ops to update amountpipekva. Amountpipekva represents the
total kernel virtual address space used by all pipes.  It is, thus, outside
the scope of any individual pipe lock.
2003-02-13 19:39:54 +00:00
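
The point above is that a counter shared by every pipe cannot be covered by any single pipe's lock, so it has to be updated atomically. A compact sketch using C11 stdatomic as a stand-in for the kernel's atomic operations:

#include <stdatomic.h>
#include <stdio.h>

/* Global across all pipes, so no per-pipe lock can cover it. */
static atomic_long amountpipekva;

static void
pipe_map_kva(long bytes)
{
        atomic_fetch_add(&amountpipekva, bytes);
}

static void
pipe_unmap_kva(long bytes)
{
        atomic_fetch_sub(&amountpipekva, bytes);
}

int
main(void)
{
        pipe_map_kva(16384);
        pipe_unmap_kva(4096);
        printf("pipe kva in use: %ld\n", atomic_load(&amountpipekva));
        return (0);
}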
Dag-Erling Smørgrav
f6cebd7310 It seems the extra precautions are no longer needed. 2003-02-13 10:05:20 +00:00
Tim J. Robbins
5ce623b8e0 Add an XXX comment noting that getrusage() accesses p_stats->p_ru
and p_stats->p_cru without holding the appropriate locks.
2003-02-13 09:53:59 +00:00
Peter Wemm
1c425b874c Add a 'debug.witness_trace' sysctl (and tunable) when DDB is present.
This causes LOR and could-sleep messages to come with a stack trace.
2003-02-13 01:35:56 +00:00
Peter Wemm
891e066864 Print "Stack backtrace:" right before dumping the backtrace. We cannot
expect end users to automatically recognize a stack trace for what it is.
2003-02-13 01:33:59 +00:00
Warner Losh
b235704d7c Implement rman_get_device
# I thought this was already implemented

Pointy hat on my head shown by: peter
2003-02-12 07:00:59 +00:00
Alfred Perlstein
42e1b74af2 Don't lock FILEDESC under PROC.
The locking here needs to be revisited, but this ought to get rid of the
LOR messages that people are complaining about for now.  I imagine either
I or someone else interested in SMP will eventually clear this up.
2003-02-11 07:20:52 +00:00
Jeff Roberson
25c4325446 - Add a comment about a race that will happen without Giant. 2003-02-10 22:47:34 +00:00
Jeff Roberson
c7b716cc2a - Unlock the nblock after the loop in bwillwrite(). 2003-02-10 22:33:59 +00:00
Jeff Roberson
783caefbbf - Enable STRICT_RESCHED until code that dynamically decides on resched
strictness based on the current workload is finished.
2003-02-10 14:11:23 +00:00
Jeff Roberson
407b015791 - Add a new variable 'kg_runtime' that tracks the amount of time we've run.
- Use the ratio of kg_runtime / kg_slptime to determine our dynamic priority.
 - Scale kg_runtime and kg_slptime back when the sum of the two exceeds
   SCHED_SLP_RUN_MAX.  This allows us to slowly forget old behavior.
 - Scale back the runtime and slptime in fork so that the new process has the
   same ratio but much less accumulated time.  This causes new behavior to be
   noticed more quickly.
2003-02-10 14:03:45 +00:00
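
A toy model of the bookkeeping described above: priority follows the ratio of run time to sleep time, both counters are scaled back once their sum passes a cap so old behavior is slowly forgotten, and a forked child inherits the parent's ratio with much less accumulated time. The constants and helper functions below are invented for illustration and do not match the real scheduler:

#include <stdio.h>

#define SLP_RUN_MAX     1000    /* illustrative cap, not the kernel's value */
#define PRI_RANGE       64

struct ksegrp {
        long kg_runtime;        /* time spent running */
        long kg_slptime;        /* time spent sleeping */
};

/* A higher run/sleep ratio yields a numerically larger (worse) priority. */
static int
dynamic_priority(const struct ksegrp *kg)
{
        long total = kg->kg_runtime + kg->kg_slptime;

        if (total == 0)
                return (0);
        return (int)((kg->kg_runtime * PRI_RANGE) / total);
}

/* Scale both counters back when their sum exceeds the cap. */
static void
decay_history(struct ksegrp *kg)
{
        long total = kg->kg_runtime + kg->kg_slptime;

        if (total > SLP_RUN_MAX) {
                kg->kg_runtime = kg->kg_runtime * SLP_RUN_MAX / total;
                kg->kg_slptime = kg->kg_slptime * SLP_RUN_MAX / total;
        }
}

/* Fork: the child keeps the parent's ratio but much less accumulated time. */
static void
fork_history(const struct ksegrp *parent, struct ksegrp *child)
{
        child->kg_runtime = parent->kg_runtime / 8;
        child->kg_slptime = parent->kg_slptime / 8;
}

int
main(void)
{
        struct ksegrp kg = { 900, 300 }, child;

        decay_history(&kg);
        fork_history(&kg, &child);
        printf("parent pri %d, child pri %d\n",
            dynamic_priority(&kg), dynamic_priority(&child));
        return (0);
}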
Tim J. Robbins
fbf70de6b0 Lock the proc around accessing p_siglist in ttycheckoutq() in the
unused wait != 0 case.
2003-02-10 06:06:46 +00:00
Jeff Roberson
7137d635ac - In getnewbuf() unlock the bq lock prior to sleeping when we're out of
buffers.

Submitted by:	tegge
2003-02-10 06:02:51 +00:00
Jake Burkholder
3749dff3f9 Remove mtx_lock_giant from functions which are mp-safe. 2003-02-10 04:42:20 +00:00
Jeff Roberson
3306adcfcf - Correct another atomic op.
Spotted by:	alc
2003-02-09 22:39:51 +00:00
Jeff Roberson
08883c8a85 - Claim we're 'fsync' and not 'spec_fsync' in vop_stdfsync. 2003-02-09 12:29:38 +00:00
Jeff Roberson
69953c8435 - Move some code out from #ifdef INVARIANTS. 2003-02-09 12:11:37 +00:00
Jeff Roberson
05e393f0cd - Update a printf format for b_flags. 2003-02-09 11:56:13 +00:00
Jeff Roberson
767b9a529d - Clean up unlocked accesses to buf flags by introducing a new b_vflag member
that is protected by the vnode lock.
 - Move B_SCANNED into b_vflags and call it BV_SCANNED.
 - Create a vop_stdfsync() modeled after spec's sync.
 - Replace spec_fsync, msdos_fsync, and hpfs_fsync with the stdfsync and some
   fs specific processing.  This gives all of these filesystems proper
   behavior wrt MNT_WAIT/NOWAIT and the use of the B_SCANNED flag.
 - Annotate the locking in buf.h
2003-02-09 11:28:35 +00:00
Jeff Roberson
15553af710 - Spell 'add' and not 'subtract' in an atomic op.
Spotted by:	alc
Pointy hat to:	jeff
2003-02-09 11:21:40 +00:00
Jeff Roberson
d85be48243 - Lock down the buffer cache's infrastructure code. This includes locks on
buf lists, synchronization variables, and atomic ops for the counters.
   This change does not remove giant from any code although some pushdown
   may be possible.
 - In vfs_bio_awrite() don't access buf fields without the buf lock.
2003-02-09 09:47:31 +00:00
Julian Elischer
a282253a29 A little infrastructure, preceding some upcoming changes
to the profiling and statistics code.

Submitted by:	DavidXu@
Reviewed by:	peter@
2003-02-08 02:58:16 +00:00
Jeffrey Hsu
67c0ddef59 Remove vestiges of no longer needed unp_rvnode field.
Approved by:	phk (who originally added it in rev 1.8 of unpcb.h)
2003-02-06 01:34:43 +00:00
Julian Elischer
822ded67fe The lock manager has to keep track of locks per thread, not per process.
Submitted by:	David Xu (davidxu@)
Reviewed by:	jhb@
2003-02-05 19:36:58 +00:00
Dag-Erling Smørgrav
c524b1a8cf Correct grammatical error in previous commit. 2003-02-04 18:47:17 +00:00