mirror of https://git.FreeBSD.org/src.git synced 2024-12-15 10:17:20 +00:00
Commit Graph

10184 Commits

Author SHA1 Message Date
Konstantin Belousov
f231de478e Implement fetching of the __FreeBSD_version from the ELF ABI-tag note.
The value is read into the p_osrel member of struct proc. p_osrel
is set to 0 for binaries without the note.

MFC after:	3 days
2007-12-04 12:28:07 +00:00
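A minimal kernel-side sketch of how the new p_osrel field might be consulted; the helper name and the threshold value are purely illustrative and not part of this commit:

    #include <sys/param.h>
    #include <sys/proc.h>

    /* Sketch: gate behaviour on the __FreeBSD_version the binary was built
     * against; p_osrel is 0 for binaries without the ABI-tag note. */
    static int
    binary_built_after(struct thread *td, int osrel)
    {
            return (td->td_proc->p_osrel >= osrel);
    }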
Konstantin Belousov
93d1c72883 Check the alignment of the program headers of ELF images before
dereferencing them. Unaligned access could cause a panic on strict-alignment
architectures.

Reviewed by:	marcel, marius (also tested on sparc64, thanks !)
MFC after:	3 days
2007-12-04 12:21:27 +00:00
Alan Cox
ba63339a0a Introduce a UMA backend page allocator for the jumbo frame zones that
allocates physically contiguous memory.

MFC after: 3 months
Requested and reviewed by: Kip Macy
Tested by: Andrew Gallatin and Pyun YongHyeon
2007-12-04 07:06:08 +00:00
Robert Watson
56905239ae When a symbol name can't be resolved, return "??" as the name, rather
than "Unknown func", in order to avoid putting spaces in what is ideally
a whitespace-separated string.
2007-12-03 14:44:35 +00:00
Robert Watson
1cc8c45c54 Add another new sysctl in support of the forthcoming procstat(1) to
support its -k argument:

kern.proc.kstack - dump the kernel stack of a process, if debugging
  is permitted.

This sysctl is present if either "options DDB" or "options STACK" is
compiled into the kernel.  Having support for tracing the kernel
stacks of processes from user space makes it much easier to debug
(or understand) specific wmesg's while avoiding the need to enter
DDB in order to determine the path by which a process came to be
blocked on a particular wait channel or lock.
2007-12-02 21:52:18 +00:00
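A hedged userland sketch of how this sysctl might be consumed by something like procstat(1); the struct kinfo_kstack record layout (kkst_tid, kkst_trace) is assumed from <sys/user.h> and may differ:

    #include <sys/param.h>
    #include <sys/sysctl.h>
    #include <sys/user.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Fetch and print the kernel stacks of every thread in a process. */
    static int
    print_kstacks(pid_t pid)
    {
            struct kinfo_kstack *kkp;
            int mib[4];
            size_t miblen = 4, size, i;

            if (sysctlnametomib("kern.proc.kstack", mib, &miblen) != 0)
                    return (-1);
            mib[miblen] = pid;              /* the pid is the final MIB element */
            if (sysctl(mib, miblen + 1, NULL, &size, NULL, 0) != 0)
                    return (-1);
            if ((kkp = malloc(size)) == NULL)
                    return (-1);
            if (sysctl(mib, miblen + 1, kkp, &size, NULL, 0) != 0) {
                    free(kkp);
                    return (-1);
            }
            for (i = 0; i < size / sizeof(*kkp); i++)
                    printf("tid %d:\n%s\n", (int)kkp[i].kkst_tid,
                        kkp[i].kkst_trace);
            free(kkp);
            return (0);
    }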
Robert Watson
3c90d1ea74 Break out stack(9) from ddb(4):
- Introduce per-architecture stack_machdep.c to hold stack_save(9).
- Introduce per-architecture machine/stack.h to capture any common
  definitions required between db_trace.c and stack_machdep.c.
- Add new kernel option "options STACK"; we will build in stack(9) if it is
  defined, or also if "options DDB" is defined to provide compatibility
  with existing users of stack(9).

Add new stack_save_td(9) function, which allows the capture of a stacktrace
of another thread rather than the current thread, which the existing
stack_save(9) was limited to.  It requires that the thread be neither
swapped out nor running, which is the responsibility of the consumer to
enforce.

Update stack(9) man page.

Build tested:	amd64, arm, i386, ia64, powerpc, sparc64, sun4v
Runtime tested:	amd64 (rwatson), arm (cognet), i386 (rwatson)
2007-12-02 20:40:35 +00:00
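A rough kernel-side sketch of the new stack_save_td() in use, per this commit and the stack(9) page; error handling is omitted and, as noted above, the caller must ensure the target thread is neither running nor swapped out:

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <sys/stack.h>

    /* Capture and print another thread's kernel stack. */
    static void
    show_thread_stack(struct thread *td)
    {
            struct stack st;

            stack_zero(&st);
            /* td must be neither running nor swapped out (caller enforces). */
            stack_save_td(&st, td);
            stack_print(&st);
    }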
Robert Watson
cc43c38c87 Add two new sysctls in support of the forthcoming procstat(1) to support
its -f and -v arguments:

kern.proc.filedesc - dump file descriptor information for a process, if
  debugging is permitted, including socket addresses, open flags, file
  offsets, file paths, etc.

kern.proc.vmmap - dump virtual memory mapping information for a process,
  if debugging is permitted, including layout and information on
  underlying objects, such as the type of object and path.

These provide a superset of the information historically available
through the now-deprecated procfs(4), and are intended to be exported
in an ABI-robust form.
2007-12-02 10:10:27 +00:00
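A hedged userland sketch of reading kern.proc.filedesc; the struct kinfo_file field names and the packed-record layout keyed on kf_structsize are assumed from later <sys/user.h> versions and may not match the original export format:

    #include <sys/param.h>
    #include <sys/sysctl.h>
    #include <sys/user.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Print the descriptor number and path (if any) of each open file. */
    static int
    print_filedesc(pid_t pid)
    {
            struct kinfo_file *kif;
            char *buf, *bp;
            int mib[4];
            size_t miblen = 4, size;

            if (sysctlnametomib("kern.proc.filedesc", mib, &miblen) != 0)
                    return (-1);
            mib[miblen] = pid;
            if (sysctl(mib, miblen + 1, NULL, &size, NULL, 0) != 0)
                    return (-1);
            if ((buf = malloc(size)) == NULL)
                    return (-1);
            if (sysctl(mib, miblen + 1, buf, &size, NULL, 0) != 0) {
                    free(buf);
                    return (-1);
            }
            /* Records are packed back to back; kf_structsize gives each length. */
            bp = buf;
            while (bp < buf + size) {
                    kif = (struct kinfo_file *)(void *)bp;
                    printf("fd %d: %s\n", kif->kf_fd, kif->kf_path);
                    bp += kif->kf_structsize;
            }
            free(buf);
            return (0);
    }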
Alan Cox
30418ed31c Eliminate vfs_page_set_valid()'s unused argument. 2007-12-02 01:28:35 +00:00
Robert Watson
9ccca7d1b1 Modify stack(9) stack_print() and stack_sbuf_print() routines to use new
linker interfaces for looking up function names and offsets from
instruction pointers.  Create two variants of each call: one that is
"DDB-safe" and avoids locking in the linker, and one that is safe for
use in live kernels, by virtue of observing locking, and in particular
safe when kernel modules are being loaded and unloaded simultaneous to
their use.  This will allow them to be used outside of debugging
contexts.

Modify two of three current stack(9) consumers to use the DDB-safe
interfaces, as they run in low-level debugging contexts, such as inside
lockmgr(9) and the kernel memory allocator.

Update man page.
2007-12-01 22:04:16 +00:00
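A small kernel sketch of choosing between the two variants; the _ddb-suffixed names are assumed here, since the commit message does not spell them out:

    #include <sys/param.h>
    #include <sys/sbuf.h>
    #include <sys/stack.h>

    /* Format a previously captured stack trace into an sbuf. */
    static void
    format_stack(struct sbuf *sb, const struct stack *st, int in_debugger)
    {
            if (in_debugger)
                    stack_sbuf_print_ddb(sb, st);   /* avoids linker locking */
            else
                    stack_sbuf_print(sb, st);       /* observes linker locks */
    }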
Robert Watson
cdd475b347 The kernel linker includes a number of utility functions to look up symbol
information in support of DDB(4); these functions bypass normal linker
locking as they may run in contexts where locking is unsafe (such as the
kernel debugger).

Add a new interface linker_ddb_search_symbol_name(), which looks up a
symbol name and offset given an address, and also
linker_search_symbol_name() which does the same but *does* follow the
locking conventions of the linker.

Unlike existing functions, these functions place the name in a
caller-provided buffer, which is stable even after linker locks have been
released.  These functions will be used in upcoming revisions to stack(9)
to support kernel stack trace generation in contexts as part of a live,
rather than suspended, kernel.
2007-12-01 19:24:28 +00:00
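A hedged kernel-side sketch of the new interface; the exact signature (caddr_t value, caller-provided buffer and length, offset out-parameter) is inferred from the description above:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/linker.h>

    /* Resolve an instruction pointer into "name+offset" using the locked,
     * live-kernel variant; the DDB variant would be used inside the debugger. */
    static void
    print_symbol(uintptr_t pc)
    {
            char name[64];
            long offset;

            if (linker_search_symbol_name((caddr_t)pc, name, sizeof(name),
                &offset) == 0)
                    printf("%s+%#lx\n", name, offset);
            else
                    printf("0x%lx\n", (u_long)pc);
    }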
Peter Wemm
e16aed66ee Deal with the possibility of device_set_unit() being called when attaching
the associated devinfo sysctl tree.
2007-11-30 21:30:14 +00:00
Peter Wemm
cd17ceaab8 Add sysctl_rename_oid() to support device_set_unit() usage. Otherwise,
when unit numbers are changed, the sysctl devinfo tree gets out of sync
and attempts are made to attach duplicate trees with the original name.
2007-11-30 21:29:08 +00:00
Robert Watson
ef54068b54 Move use of 'i' in cp_time sysctl under SCTL_MASK32 so that it compiles
without warnings on systems that don't define it.
2007-11-29 08:38:22 +00:00
Peter Wemm
7628402b07 Move the shared cp_time array (counts %sys, %user, %idle etc) to the
per-cpu area.  cp_time[] goes away and a new function creates a merged
cp_time-like array for things like linprocfs, sysctl etc.  The
atomic ops for updating cp_time[] in statclock go away, and the scope
of the thread lock is reduced.

sysctl kern.cp_time returns a backwards compatible cp_time[] array.
A new kern.cp_times sysctl returns the individual per-cpu stats.

I have pending changes to make top and vmstat optionally show per-cpu
stats.

I'm very aware that there are something like 5 or 6 other versions "out
there" for doing this - but none were handy when I needed them.

I did merge my changes with John Baldwin's, and ended up replacing a
few chunks of my stuff with his, and stealing some other code.

Reviewed by:  jhb
Partly obtained from:  jhb
2007-11-29 06:34:30 +00:00
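A short userland sketch of reading the new per-CPU array; kern.cp_times is taken to return one long[CPUSTATES] block per CPU, matching the layout of the traditional kern.cp_time:

    #include <sys/types.h>
    #include <sys/resource.h>       /* CPUSTATES, CP_USER, CP_SYS, CP_IDLE */
    #include <sys/sysctl.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
            long *times;
            size_t size;
            u_int cpu, ncpu;

            if (sysctlbyname("kern.cp_times", NULL, &size, NULL, 0) != 0)
                    return (1);
            if ((times = malloc(size)) == NULL)
                    return (1);
            if (sysctlbyname("kern.cp_times", times, &size, NULL, 0) != 0)
                    return (1);
            ncpu = size / (sizeof(long) * CPUSTATES);
            for (cpu = 0; cpu < ncpu; cpu++)
                    printf("cpu%u: user %ld sys %ld idle %ld\n", cpu,
                        times[cpu * CPUSTATES + CP_USER],
                        times[cpu * CPUSTATES + CP_SYS],
                        times[cpu * CPUSTATES + CP_IDLE]);
            free(times);
            return (0);
    }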
Attilio Rao
573c6b82df Make ADAPTIVE_GIANT the default in the kernel and remove the option.
Currently, Giant is not contended all that heavily, so it is ok to treat
it like any other mutex.

Please don't forget to update your own custom config kernel files.

Approved by:	cognet, marcel (maintainers of arches where option is
		not enabled at the moment)
2007-11-28 05:50:45 +00:00
Attilio Rao
49aead8a10 Simplify the adaptive spinning algorithm in rwlock and mutex:
currently, the turnstile spinlock is acquired and the waiters flag is
set before spinning.
This is not strictly necessary, so just spin before acquiring the
spinlock and setting the flag.
This also simplifies a lot of other functions, as the waiters flag is
now set only if there actually are waiters.
This should make the wakeup/sleep coupling faster under intensive mutex
workloads.
This also fixes a bug in rw_try_upgrade() in the adaptive case, where
turnstile_lookup() would recurse on the ts_lock lock, which would never
really be released [1].

[1] Reported by: jeff with Nokia help
Tested by: pho, kris (earlier, bugged version of rwlock part)
Discussed with: jhb [2], jeff
MFC after: 1 week

[2] John probably had a similar patch for 6.x and/or 7.x mutexes
2007-11-26 22:37:35 +00:00
Attilio Rao
4a32616a77 Fix the spinlock static table by adding missing spinlocks:
- rm_spinlock has the turnstile chain as a child
- srclock has callout and clk as children, found by witness "emulation";
  just move it very high in our ranking
2007-11-24 04:32:32 +00:00
Attilio Rao
2c2bebfcb3 transferlockers() is a very dangerous and hack-ish function, as waiters
should never be moved from one lock to another.
Since, luckily, nothing in our tree is using it, axe the function.

This breaks the lockmgr KPI, so interested third-party modules should update
their source code with an appropriate replacement.

Ok'ed by: ups, rwatson
MFC after: 3 days
2007-11-24 04:22:28 +00:00
Kris Kennaway
e6d64a0f15 Remove remaining Giant acquisition around vn_fullpath1. This was missed
in r1.106 and has not been required for some years now.

Reviewed by:  jeff
MFC After:    1 week
2007-11-22 21:26:25 +00:00
Attilio Rao
557f5e51e9 Cache the value of c_lock, as it can change in the struct while the
global callout spinlock is not held, which can lead to PF#.

Reported by: dougb, Mark Atkinson <atkin901 at yahoo dot com>
Tested by: dougb
Diagnosed by: jhb
2007-11-22 12:15:54 +00:00
David Xu
110de0cf17 Add the UMTX_OP_WAIT_UINT operation, which causes a thread to wait for
an integer to change.
2007-11-21 04:21:02 +00:00
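A hedged userland sketch of how the new operation might be used through the _umtx_op(2) system call; the timeout is left NULL for an indefinite wait, and a matching waker is shown with UMTX_OP_WAKE:

    #include <sys/types.h>
    #include <sys/umtx.h>

    /* Sleep until *addr is observed to differ from 'expected'. */
    static int
    wait_for_change(u_int *addr, u_int expected)
    {
            return (_umtx_op(addr, UMTX_OP_WAIT_UINT, expected, NULL, NULL));
    }

    /* Wake up one waiter sleeping on addr. */
    static int
    wake_one(u_int *addr)
    {
            return (_umtx_op(addr, UMTX_OP_WAKE, 1, NULL, NULL));
    }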
Robert Watson
965b55e2b4 Test that p_textvp is non-NULL before dereferencing it, as no executable
vnode is set for kernel processes.

Reported by:	Skip Ford <skip at menantico dot com>
MFC after:	3 days
2007-11-20 18:03:09 +00:00
Attilio Rao
64b9ee201a Add the callout_init_rw() function to the callout facility in order to
use rwlocks in conjunction with callouts.  The function does basically
what callout_init_mtx() already does, except that it takes an rwlock as
the extra argument.
The CALLOUT_SHAREDLOCK flag can now be used in order to acquire the lock
only in read mode when running the callout handler.  It has no effect
when used in conjunction with a mutex.

In order to implement this, the underlying callout functions have been
made completely lock type-unaware; accordingly, the sysctl
debug.to_avg_mtxcalls has been renamed to the generic
debug.to_avg_lockcalls.

Note: currently the allowed lock classes are mutexes and rwlocks, because
callout handlers run in the softclock swi, so they cannot sleep and they
cannot acquire sleepable locks like sx or lockmgr.

Requested by: kmacy, pjd, rwatson
Reviewed by: jhb
2007-11-20 00:37:45 +00:00
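A rough kernel sketch of the new interface; the names are illustrative, and since the interaction between callout_reset() and a read-held rwlock is not addressed here, the reset is simply done under the write lock:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>         /* hz */
    #include <sys/callout.h>
    #include <sys/lock.h>
    #include <sys/rwlock.h>

    static struct rwlock stats_lock;
    static struct callout stats_callout;

    /* With CALLOUT_SHAREDLOCK, softclock acquires stats_lock in read mode
     * around this handler instead of write mode. */
    static void
    stats_tick(void *arg)
    {
            /* ... read-only periodic work ... */
    }

    static void
    stats_start(void)
    {
            rw_init(&stats_lock, "statslock");
            callout_init_rw(&stats_callout, &stats_lock, CALLOUT_SHAREDLOCK);
            rw_wlock(&stats_lock);
            callout_reset(&stats_callout, hz, stats_tick, NULL);
            rw_wunlock(&stats_lock);
    }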
John Baldwin
790c2471b9 Bump up the number of ttys supported by pty(4) to 512 by making use of
[pt]ty[lmnoLMNO][0-9a-v].

MFC after:	3 days
Reviewed by:	rwatson
2007-11-19 20:49:42 +00:00
Jean-Sébastien Pédron
4b5b09e744 The kernel uses two ways to write data to a pipe:
    o  buffered write, for chunks smaller than PIPE_MINDIRECT bytes
    o  direct write, for everything else

A call to writev(2) may receive struct iov entries of various sizes and
the kernel may have to switch from one method to the other. Before doing
so, it must wake up reader processes and any select/poll/kqueue waiters.

This commit fixes a bug where select/poll/kqueue are not triggered
when switching from buffered write to direct write. It adds calls to
pipeselwakeup().

I give more details on freebsd-arch@:
http://lists.freebsd.org/pipermail/freebsd-arch/2007-September/006790.html

This should fix issues with Erlang (lang/erlang) and kqueue.

Reported by:	Rickard Green (Erlang)
2007-11-19 15:05:20 +00:00
Attilio Rao
f9721b43ed Expand the lock class with the "virtual" function lc_assert, which
offers a unified way for all the lock primitives to express lock
assertions.  Currently, lockmgrs and rmlocks don't have assertions, so
just panic in that case.
This will be a base for more callout improvements.

Ok'ed by: jhb, jeff
2007-11-18 14:43:53 +00:00
Randall Stewart
7c7454fe95 - Add in missing event handler invokes for initial proc and thread. 2007-11-18 13:56:51 +00:00
John Birrell
f6c1530162 Add a function to list symbols in a file and their values at the
same time rather than having to list the symbols and then go back
and look each one up by name.
2007-11-18 00:23:31 +00:00
John Baldwin
cd808cec50 Acquire the process mutex and spin locks before calling thread_exit() in
kthread_exit() to fix panics when using INVARIANTS.
2007-11-15 21:45:17 +00:00
Randall Stewart
b209f88986 - Adds event handlers for process_ctor, process_dtor, process_init,
  process_fini, thread_ctor, thread_dtor, thread_init, and thread_fini. This
  will allow us to dynamically extend areas in proc/thread for dtrace ;-)
Reviewed by:    rwatson
2007-11-15 14:20:07 +00:00
Gleb Smirnoff
d8410b8edf Fix build. 2007-11-15 14:16:20 +00:00
Randall Stewart
4a62a3e556 Adds event handlers for:
  - process_ctor, dtor, init and fini
  - thread_ctor, dtor, init and fini
This makes it possible to do additional things during construction and
destruction of threads and processes.

Reviewed by:	rwatson
2007-11-15 13:28:54 +00:00
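A hedged sketch of hooking one of the new handlers; the handler prototype (an opaque argument plus the affected thread) is an assumption, and registration would typically happen at module load:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/eventhandler.h>
    #include <sys/proc.h>

    /* Assumed shape: called for every newly constructed thread. */
    static void
    my_thread_ctor(void *arg, struct thread *td)
    {
            /* e.g. set up per-thread auxiliary state for a tracing facility */
    }

    static eventhandler_tag my_thread_ctor_tag;

    static void
    my_hooks_attach(void)
    {
            my_thread_ctor_tag = EVENTHANDLER_REGISTER(thread_ctor,
                my_thread_ctor, NULL, EVENTHANDLER_PRI_ANY);
    }

    static void
    my_hooks_detach(void)
    {
            EVENTHANDLER_DEREGISTER(thread_ctor, my_thread_ctor_tag);
    }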
Julian Elischer
c67ddc21e7 This time REALLY copy the name from the proc to the thread as a default. 2007-11-15 06:35:26 +00:00
Julian Elischer
4b9322aee8 When forking, the new thread deserves a name too. Don't just use the
td_startcopy section, as that is not the right thing to do
in other cases (e.g. when starting a new thread from one that is already named).
2007-11-15 02:13:44 +00:00
Attilio Rao
6f5c319c12 Remove a bogus KASSERT which would prevent an rwlock from being acquired
recursively in exclusive mode with debugging kernels.

Submitted by: kmacy
Approved by: jeff
2007-11-14 21:21:48 +00:00
Marcel Moolenaar
0c3967e7fe o  Rename cpu_thread_setup() to cpu_thread_alloc() to better
   communicate that it relates to (is called by) thread_alloc().
o  Add cpu_thread_free(), which is called from thread_free()
   to counteract cpu_thread_alloc().

i386:	Have cpu_thread_free() call cpu_thread_clean() to
	preserve behaviour.
ia64:	Have cpu_thread_free() call mtx_destroy() for the
	mutex initialized in cpu_thread_alloc().

PR: ia64/118024
2007-11-14 20:21:54 +00:00
Julian Elischer
e01eafef2a A bunch more files that should probably print out a thread name
instead of a process name.
2007-11-14 06:51:33 +00:00
Julian Elischer
431f890614 Generally we are interested in which thread did something, as opposed
to which process. Since threads by default have the name of the process
unless overwritten with more useful information, just print the thread
name instead.
2007-11-14 06:21:24 +00:00
Julian Elischer
ca081fdbc5 Make sure there is a good default thread name for all threads. 2007-11-14 06:04:57 +00:00
Robert Watson
433ea89af4 Add rm_wowned(9) function to test whether the current thread owns an
exclusive lock on the passed rmlock.

Reviewed by:	ups
2007-11-10 15:06:30 +00:00
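A tiny kernel sketch of the kind of assertion this enables; the lock and function names are illustrative:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lock.h>
    #include <sys/rmlock.h>

    static struct rmlock cfg_rm;

    /* Callers must hold the write lock; rm_wowned() lets us assert that. */
    static void
    cfg_modify(void)
    {
            KASSERT(rm_wowned(&cfg_rm), ("cfg_modify: cfg_rm not write-locked"));
            /* ... mutate shared configuration ... */
    }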
John Baldwin
87a194514b A couple of optimizations to the last commit.
Submitted by:	Christoph Mallon christoph mallon of gmx de
2007-11-08 21:45:56 +00:00
Stephan Uphoff
dda7aec745 Use VM_FAULT_DIRTY to fault in pages for write access in
proc_rwmem().
Otherwise copy-on-write may create an anonymous page that is not marked
as dirty. Since writing data to these pages in this function also does
not dirty them, they may later be discarded by the pagedaemon.
2007-11-08 19:35:36 +00:00
John Baldwin
db27a9dac7 Make it easier to add more ptys to the pty(4) driver:
- Use unit2minor() and minor2unit() to generate minor numbers to support
  unit numbers higher than 255.
- Use simple string operations on the 'names' array rather than hard-coded
  constants and switch statements so that more ptys can be added by simply
  expanding the 'names' array.

MFC after:	1 week
2007-11-08 15:51:52 +00:00
Stephan Uphoff
f53d15fe1b Initial checkin for rmlock (read-mostly lock), a multi-reader,
single-writer lock optimized for almost exclusive reader access.
(See also rmlock(9).)

TODO:
    Convert to a per-cpu variable linker set as soon as it is available.
    Optimize the UP (single-processor) case.
2007-11-08 14:47:55 +00:00
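A short kernel sketch of the basic usage pattern described in rmlock(9): readers carry a struct rm_priotracker on their stack, while the rare writers take the lock exclusively. The names are illustrative:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/rmlock.h>

    static struct rmlock map_rm;    /* initialized elsewhere via rm_init() */

    /* Hot path: frequent, read-only lookups. */
    static void
    map_lookup(void)
    {
            struct rm_priotracker tracker;

            rm_rlock(&map_rm, &tracker);
            /* ... read shared state ... */
            rm_runlock(&map_rm, &tracker);
    }

    /* Cold path: rare updates take the lock exclusively. */
    static void
    map_update(void)
    {
            rm_wlock(&map_rm);
            /* ... modify shared state ... */
            rm_wunlock(&map_rm);
    }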
Robert Watson
088f584961 Remove unused variable td from sched_idletd().
MFC after:	3 days
Found with:	Coverity Prevent(tm)
CID:		3561
2007-11-05 12:01:12 +00:00
Konstantin Belousov
89b57fcf01 Fix for the panic("vm_thread_new: kstack allocation failed") and a
silent NULL pointer dereference in the i386 and sparc64 pmap_pinit()
when kmem_alloc_nofault() failed to allocate address space. Both
functions now return an error instead of panicking or dereferencing
NULL.

As a consequence, vmspace_exec() and vmspace_unshare() return the errno
int. A struct vmspace arg was added to vm_forkproc() to avoid dealing
with a failed allocation when most of the fork1() job is already done.

The kernel stack for the thread is now set up in thread_alloc(), which
itself may return NULL. Also, allocation of the first process thread is
performed in fork1() to properly deal with stack allocation failure.
proc_linkup() is separated into proc_linkup(), called from fork1(), and
proc_linkup0(), which is used to set up the kernel process (formerly
known as the swapper).

In collaboration with:	Peter Holm
Reviewed by:	jhb
2007-11-05 11:36:16 +00:00
Julian Elischer
f4bb4fc8f3 Completely remove the code for single-threading the mainline fork code.
Put in a little comment explaining why it went away.
Re-enable it in the case where an existing process is just splitting
off its address space and file descriptors.
(I don't think anything uses that code, but it needs some sort of
locking and this does the job.)

Reviewed by:	Davidxu, alc, others
MFC after:	3 days
2007-11-02 19:40:36 +00:00
Nate Lawson
a15e947d54 If we're on an SMP kernel and there is more than 1 CPU, reject any attempts
to change the freq before the other CPUs are active.  The current code
always attempts to change all CPUs to match each other, and the requisite
sched_bind() call won't work before APs are launched.
2007-10-30 22:18:08 +00:00
Julian Elischer
539976ffdf Fix typo in code normally not compiled in. 2007-10-29 20:45:31 +00:00
Julian Elischer
3c1ffc320f Fix typo in code obviously not being compiled on any of my machines.
Found by:	rdivacky@
2007-10-28 23:11:57 +00:00