1994-05-24 10:09:53 +00:00
/*
 * Copyright (c) 1982, 1986, 1989, 1991, 1993
 *	The Regents of the University of California.  All rights reserved.
 * (c) UNIX System Laboratories, Inc.
 * All or some portions of this file are derived from material licensed
 * to the University of California by American Telephone and Telegraph
 * Co. or Unix System Laboratories, Inc. and are reproduced herein with
 * the permission of UNIX System Laboratories, Inc.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *	This product includes software developed by the University of
 *	California, Berkeley and its contributors.
 * 4. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 * @(#)kern_sig.c	8.7 (Berkeley) 4/18/94
1999-08-28 01:08:13 +00:00
 * $FreeBSD$
1994-05-24 10:09:53 +00:00
 */
1997-12-16 17:40:42 +00:00
#include "opt_compat.h"
1996-01-03 21:42:35 +00:00
#include "opt_ktrace.h"

1994-05-24 10:09:53 +00:00
#include <sys/param.h>
1998-06-28 08:37:45 +00:00
#include <sys/kernel.h>
1995-11-12 06:43:28 +00:00
#include <sys/sysproto.h>
2000-09-10 13:54:52 +00:00
#include <sys/systm.h>
1994-05-24 10:09:53 +00:00
#include <sys/signalvar.h>
#include <sys/namei.h>
#include <sys/vnode.h>
2000-04-16 18:53:38 +00:00
#include <sys/event.h>
1994-05-24 10:09:53 +00:00
#include <sys/proc.h>
1997-12-06 04:11:14 +00:00
#include <sys/pioctl.h>
1994-05-24 10:09:53 +00:00
#include <sys/acct.h>
1997-03-23 03:37:54 +00:00
#include <sys/fcntl.h>
2001-01-16 01:00:43 +00:00
#include <sys/condvar.h>
2001-03-28 08:41:04 +00:00
#include <sys/lock.h>
2000-10-20 07:58:15 +00:00
#include <sys/mutex.h>
1994-05-24 10:09:53 +00:00
#include <sys/wait.h>
2000-09-07 01:33:02 +00:00
#include <sys/ktr.h>
1994-05-24 10:09:53 +00:00
#include <sys/ktrace.h>
2001-03-28 08:41:04 +00:00
#include <sys/resourcevar.h>
2001-04-27 19:28:25 +00:00
#include <sys/smp.h>
1994-05-24 10:09:53 +00:00
#include <sys/stat.h>
2001-03-28 11:52:56 +00:00
#include <sys/sx.h>
#include <sys/syslog.h>
Mega-commit for Linux emulator update.  This has been stress tested under
netscape-2.0 for Linux running all the Java stuff.  The scrollbars are now
working, at least on my machine.  (whew! :-)

I'm uncomfortable with the size of this commit, but it's too
inter-dependent to easily separate out.

The main changes:

COMPAT_LINUX is *GONE*.  Most of the code has been moved out of the i386
machine-dependent section into the Linux emulator itself.  The int 0x80
syscall code was almost identical to the lcall 7,0 code and a minor tweak
allows them to both be used with the same C code.  All kernels can now
just modload the lkm and it'll DTRT without having to rebuild the kernel
first.  Like IBCS2, you can statically compile it in with "options LINUX".

A pile of new syscalls implemented, including getdents(), llseek(),
readv(), writev(), msync(), personality().  The Linux-ELF libraries want
to use some of these.

linux_select() now obeys Linux semantics, ie: returns the time remaining
of the timeout value rather than leaving it the original value.

Quite a few bugs removed, including incorrect arguments being used in
syscalls, e.g.: mixups between passing the sigset as an int vs. passing
it as a pointer and doing a copyin(), missing return values, unhandled
cases, SIOC* ioctls, etc.

The build for the code has changed.  i386/conf/files now knows how
to build linux_genassym and generate linux_assym.h on the fly.

Supporting changes elsewhere in the kernel:

The user-mode signal trampoline has moved from the U area to immediately
below the top of the stack (below PS_STRINGS).  This allows the different
binary emulations to have their own signal trampoline code (which gets rid
of the hardwired syscall 103 (sigreturn on BSD, syslog on Linux)) and so
that the emulator can provide the exact "struct sigcontext *" argument to
the program's signal handlers.

The sigstack's "ss_flags" now uses SS_DISABLE and SS_ONSTACK flags, which
have the same values as the re-used SA_DISABLE and SA_ONSTACK which are
intended for sigaction only.  This enables the support of a SA_RESETHAND
flag to sigaction to implement the gross SysV and Linux SA_ONESHOT signal
semantics where the signal handler is reset when it's triggered.

makesyscalls.sh no longer appends the struct sysentvec on the end of the
generated init_sysent.c code.  It's a lot saner to have it in a separate
file rather than trying to update the structure inside the awk script. :-)

At exec time, the dozen bytes or so of signal trampoline code are copied
to the top of the user's stack, rather than obtaining the trampoline code
the old way by getting a clone of the parent's user area.  This allows
Linux and native binaries to freely exec each other without getting
trampolines mixed up.
1996-03-02 19:38:20 +00:00
#include <sys/sysent.h>
1998-06-28 08:37:45 +00:00
#include <sys/sysctl.h>
1998-07-08 06:38:39 +00:00
#include <sys/malloc.h>
2001-09-08 20:02:33 +00:00
#include <sys/unistd.h>
1994-05-24 10:09:53 +00:00

#include <machine/cpu.h>
1999-10-12 13:14:18 +00:00
#define	ONSIG	32		/* NSIG for osig* syscalls.  XXX. */

2001-09-12 08:38:13 +00:00
static int	coredump __P((struct thread *));
1999-09-29 15:03:48 +00:00
static int	do_sigaction __P((struct proc *p, int sig, struct sigaction *act,
1999-10-11 20:33:17 +00:00
		    struct sigaction *oact, int old));
1999-10-12 13:14:18 +00:00
static int	do_sigprocmask __P((struct proc *p, int how, sigset_t *set,
		    sigset_t *oset, int old));
static char	*expand_name __P((const char *, uid_t, pid_t));
static int	killpg1 __P((struct proc *cp, int sig, int pgid, int all));
static int	sig_ffs __P((sigset_t *set));
static int	sigprop __P((int sig));
1995-12-14 08:32:45 +00:00
static void	stop __P((struct proc *));
1994-05-25 09:21:21 +00:00

2000-04-16 18:53:38 +00:00
static int	filt_sigattach(struct knote *kn);
static void	filt_sigdetach(struct knote *kn);
static int	filt_signal(struct knote *kn, long hint);

struct filterops sig_filtops =
	{ 0, filt_sigattach, filt_sigdetach, filt_signal };
1998-07-28 22:34:12 +00:00
static int	kern_logsigexit = 1;
1999-05-03 23:57:32 +00:00
SYSCTL_INT(_kern, KERN_LOGSIGEXIT, logsigexit, CTLFLAG_RW,
    &kern_logsigexit, 0,
    "Log processes quitting on abnormal signals to syslog(3)");
1998-07-28 22:34:12 +00:00

2002-01-10 01:25:35 +00:00
/*
 * Policy -- Can ucred cr1 send SIGIO to process cr2?
 * Should use cr_cansignal() once cr_cansignal() allows SIGIO and SIGURG
 * in the right situations.
 */
#define CANSIGIO(cr1, cr2) \
	((cr1)->cr_uid == 0 || \
	    (cr1)->cr_ruid == (cr2)->cr_ruid || \
	    (cr1)->cr_uid == (cr2)->cr_ruid || \
	    (cr1)->cr_ruid == (cr2)->cr_uid || \
	    (cr1)->cr_uid == (cr2)->cr_uid)
1998-09-14 05:36:51 +00:00
int sugid_coredump;
1999-05-03 23:57:32 +00:00
SYSCTL_INT(_kern, OID_AUTO, sugid_coredump, CTLFLAG_RW,
    &sugid_coredump, 0, "Enable coredumping set user/group ID processes");
1998-06-28 08:37:45 +00:00

2000-03-21 07:10:42 +00:00
static int	do_coredump = 1;
SYSCTL_INT(_kern, OID_AUTO, coredump, CTLFLAG_RW,
    &do_coredump, 0, "Enable/Disable coredumps");
1999-09-29 15:03:48 +00:00
/*
 * Signal properties and actions.
 * The array below categorizes the signals and their default actions
 * according to the following properties:
 */
#define	SA_KILL		0x01		/* terminates process by default */
#define	SA_CORE		0x02		/* ditto and coredumps */
#define	SA_STOP		0x04		/* suspend process */
#define	SA_TTYSTOP	0x08		/* ditto, from tty */
#define	SA_IGNORE	0x10		/* ignore by default */
#define	SA_CONT		0x20		/* continue if suspended */
#define	SA_CANTMASK	0x40		/* non-maskable, catchable */

static int sigproptbl[NSIG] = {
	SA_KILL,		/* SIGHUP */
	SA_KILL,		/* SIGINT */
	SA_KILL|SA_CORE,	/* SIGQUIT */
	SA_KILL|SA_CORE,	/* SIGILL */
	SA_KILL|SA_CORE,	/* SIGTRAP */
	SA_KILL|SA_CORE,	/* SIGABRT */
	SA_KILL|SA_CORE,	/* SIGEMT */
	SA_KILL|SA_CORE,	/* SIGFPE */
	SA_KILL,		/* SIGKILL */
	SA_KILL|SA_CORE,	/* SIGBUS */
	SA_KILL|SA_CORE,	/* SIGSEGV */
	SA_KILL|SA_CORE,	/* SIGSYS */
	SA_KILL,		/* SIGPIPE */
	SA_KILL,		/* SIGALRM */
	SA_KILL,		/* SIGTERM */
	SA_IGNORE,		/* SIGURG */
	SA_STOP,		/* SIGSTOP */
	SA_STOP|SA_TTYSTOP,	/* SIGTSTP */
	SA_IGNORE|SA_CONT,	/* SIGCONT */
	SA_IGNORE,		/* SIGCHLD */
	SA_STOP|SA_TTYSTOP,	/* SIGTTIN */
	SA_STOP|SA_TTYSTOP,	/* SIGTTOU */
	SA_IGNORE,		/* SIGIO */
	SA_KILL,		/* SIGXCPU */
	SA_KILL,		/* SIGXFSZ */
	SA_KILL,		/* SIGVTALRM */
	SA_KILL,		/* SIGPROF */
	SA_IGNORE,		/* SIGWINCH */
	SA_IGNORE,		/* SIGINFO */
	SA_KILL,		/* SIGUSR1 */
	SA_KILL,		/* SIGUSR2 */
};
2000-09-17 14:28:33 +00:00
/*
 * Determine signal that should be delivered to process p, the current
 * process, 0 if none.  If there is a pending stop signal with default
 * action, the process stops in issignal().
 *
2000-09-17 15:12:04 +00:00
 * MP SAFE.
2000-09-17 14:28:33 +00:00
 */
int
CURSIG(struct proc *p)
{
	sigset_t tmpset;

2001-06-22 23:02:37 +00:00
	PROC_LOCK_ASSERT(p, MA_OWNED);
2000-09-17 15:12:04 +00:00
	if (SIGISEMPTY(p->p_siglist))
2001-06-22 23:02:37 +00:00
		return (0);
2000-09-17 14:28:33 +00:00
	tmpset = p->p_siglist;
	SIGSETNAND(tmpset, p->p_sigmask);
2000-09-17 15:12:04 +00:00
	if (SIGISEMPTY(tmpset) && (p->p_flag & P_TRACED) == 0)
2001-06-22 23:02:37 +00:00
		return (0);
	return (issignal(p));
2000-09-17 14:28:33 +00:00
}
1999-10-12 13:14:18 +00:00
static __inline int
sigprop(int sig)
1999-09-29 15:03:48 +00:00
{

	if (sig > 0 && sig < NSIG)
		return (sigproptbl[_SIG_IDX(sig)]);
1999-10-12 13:14:18 +00:00
	return (0);
1999-09-29 15:03:48 +00:00
}
1999-10-12 13:14:18 +00:00
static __inline int
sig_ffs(sigset_t *set)
1999-09-29 15:03:48 +00:00
{
	int i;

1999-10-12 13:14:18 +00:00
	for (i = 0; i < _SIG_WORDS; i++)
1999-09-29 15:03:48 +00:00
		if (set->__bits[i])
			return (ffs(set->__bits[i]) + (i * 32));
	return (0);
}

/*
 * do_sigaction
 * sigaction
 * osigaction
 */
static int
1999-10-11 20:33:17 +00:00
do_sigaction(p, sig, act, oact, old)
1999-09-29 15:03:48 +00:00
	struct proc *p;
	register int sig;
	struct sigaction *act, *oact;
1999-10-11 20:33:17 +00:00
	int old;
1999-09-29 15:03:48 +00:00
{
2001-03-07 02:59:54 +00:00
	register struct sigacts *ps;
1999-09-29 15:03:48 +00:00

2001-11-02 23:50:00 +00:00
	if (!_SIG_VALID(sig))
1999-09-29 15:03:48 +00:00
		return (EINVAL);

2001-03-07 02:59:54 +00:00
	PROC_LOCK(p);
	ps = p->p_sigacts;
1999-09-29 15:03:48 +00:00
	if (oact) {
		oact->sa_handler = ps->ps_sigact[_SIG_IDX(sig)];
		oact->sa_mask = ps->ps_catchmask[_SIG_IDX(sig)];
		oact->sa_flags = 0;
		if (SIGISMEMBER(ps->ps_sigonstack, sig))
			oact->sa_flags |= SA_ONSTACK;
		if (!SIGISMEMBER(ps->ps_sigintr, sig))
			oact->sa_flags |= SA_RESTART;
		if (SIGISMEMBER(ps->ps_sigreset, sig))
			oact->sa_flags |= SA_RESETHAND;
		if (SIGISMEMBER(ps->ps_signodefer, sig))
			oact->sa_flags |= SA_NODEFER;
		if (SIGISMEMBER(ps->ps_siginfo, sig))
			oact->sa_flags |= SA_SIGINFO;
1999-10-11 20:33:17 +00:00
		if (sig == SIGCHLD && p->p_procsig->ps_flag & PS_NOCLDSTOP)
1999-09-29 15:03:48 +00:00
			oact->sa_flags |= SA_NOCLDSTOP;
1999-10-11 20:33:17 +00:00
		if (sig == SIGCHLD && p->p_procsig->ps_flag & PS_NOCLDWAIT)
1999-09-29 15:03:48 +00:00
			oact->sa_flags |= SA_NOCLDWAIT;
	}
	if (act) {
		if ((sig == SIGKILL || sig == SIGSTOP) &&
2001-03-07 02:59:54 +00:00
		    act->sa_handler != SIG_DFL) {
			PROC_UNLOCK(p);
1999-09-29 15:03:48 +00:00
			return (EINVAL);
2001-03-07 02:59:54 +00:00
		}
1999-09-29 15:03:48 +00:00

		/*
		 * Change setting atomically.
		 */

		ps->ps_catchmask[_SIG_IDX(sig)] = act->sa_mask;
		SIG_CANTMASK(ps->ps_catchmask[_SIG_IDX(sig)]);
		if (act->sa_flags & SA_SIGINFO) {
2001-08-01 20:35:24 +00:00
			ps->ps_sigact[_SIG_IDX(sig)] =
			    (__sighandler_t *)act->sa_sigaction;
2001-10-07 16:11:37 +00:00
			SIGADDSET(ps->ps_siginfo, sig);
		} else {
			ps->ps_sigact[_SIG_IDX(sig)] = act->sa_handler;
1999-09-29 15:03:48 +00:00
			SIGDELSET(ps->ps_siginfo, sig);
		}
		if (!(act->sa_flags & SA_RESTART))
			SIGADDSET(ps->ps_sigintr, sig);
		else
			SIGDELSET(ps->ps_sigintr, sig);
		if (act->sa_flags & SA_ONSTACK)
			SIGADDSET(ps->ps_sigonstack, sig);
		else
			SIGDELSET(ps->ps_sigonstack, sig);
		if (act->sa_flags & SA_RESETHAND)
			SIGADDSET(ps->ps_sigreset, sig);
		else
			SIGDELSET(ps->ps_sigreset, sig);
		if (act->sa_flags & SA_NODEFER)
			SIGADDSET(ps->ps_signodefer, sig);
		else
			SIGDELSET(ps->ps_signodefer, sig);
#ifdef COMPAT_SUNOS
		if (act->sa_flags & SA_USERTRAMP)
			SIGADDSET(ps->ps_usertramp, sig);
		else
			SIGDELSET(ps->ps_usertramp, sig);
#endif
		if (sig == SIGCHLD) {
			if (act->sa_flags & SA_NOCLDSTOP)
1999-10-11 20:33:17 +00:00
				p->p_procsig->ps_flag |= PS_NOCLDSTOP;
1999-09-29 15:03:48 +00:00
			else
1999-10-11 20:33:17 +00:00
				p->p_procsig->ps_flag &= ~PS_NOCLDSTOP;
2001-08-01 20:35:24 +00:00
			if ((act->sa_flags & SA_NOCLDWAIT) ||
			    ps->ps_sigact[_SIG_IDX(SIGCHLD)] == SIG_IGN) {
1999-09-29 15:03:48 +00:00
				/*
				 * Paranoia: since SA_NOCLDWAIT is implemented
				 * by reparenting the dying child to PID 1 (and
				 * trust it to reap the zombie), PID 1 itself
				 * is forbidden to set SA_NOCLDWAIT.
				 */
				if (p->p_pid == 1)
1999-10-11 20:33:17 +00:00
					p->p_procsig->ps_flag &= ~PS_NOCLDWAIT;
1999-09-29 15:03:48 +00:00
				else
1999-10-11 20:33:17 +00:00
					p->p_procsig->ps_flag |= PS_NOCLDWAIT;
1999-09-29 15:03:48 +00:00
			} else
1999-10-11 20:33:17 +00:00
				p->p_procsig->ps_flag &= ~PS_NOCLDWAIT;
1999-09-29 15:03:48 +00:00
		}
		/*
		 * Set bit in p_sigignore for signals that are set to SIG_IGN,
		 * and for signals set to SIG_DFL where the default is to
		 * ignore. However, don't put SIGCONT in p_sigignore, as we
		 * have to restart the process.
		 */
		if (ps->ps_sigact[_SIG_IDX(sig)] == SIG_IGN ||
		    (sigprop(sig) & SA_IGNORE &&
		    ps->ps_sigact[_SIG_IDX(sig)] == SIG_DFL)) {
			/* never to be seen again */
			SIGDELSET(p->p_siglist, sig);
			if (sig != SIGCONT)
				/* easier in psignal */
				SIGADDSET(p->p_sigignore, sig);
			SIGDELSET(p->p_sigcatch, sig);
1999-10-11 20:33:17 +00:00
		} else {
1999-09-29 15:03:48 +00:00
			SIGDELSET(p->p_sigignore, sig);
			if (ps->ps_sigact[_SIG_IDX(sig)] == SIG_DFL)
				SIGDELSET(p->p_sigcatch, sig);
			else
				SIGADDSET(p->p_sigcatch, sig);
		}
2001-08-21 02:32:59 +00:00
#ifdef COMPAT_43
1999-10-11 20:33:17 +00:00
		if (ps->ps_sigact[_SIG_IDX(sig)] == SIG_IGN ||
		    ps->ps_sigact[_SIG_IDX(sig)] == SIG_DFL || !old)
			SIGDELSET(ps->ps_osigset, sig);
		else
			SIGADDSET(ps->ps_osigset, sig);
2001-08-21 02:32:59 +00:00
#endif
1999-09-29 15:03:48 +00:00
	}
2001-03-07 02:59:54 +00:00
	PROC_UNLOCK(p);
1999-09-29 15:03:48 +00:00
	return (0);
}

1995-11-12 06:43:28 +00:00
#ifndef _SYS_SYSPROTO_H_
1994-05-24 10:09:53 +00:00
struct sigaction_args {
1999-09-29 15:03:48 +00:00
	int	sig;
	struct	sigaction *act;
	struct	sigaction *oact;
1994-05-24 10:09:53 +00:00
};
1995-11-12 06:43:28 +00:00
#endif
2001-09-01 18:19:21 +00:00
/*
 * MPSAFE
 */
1994-05-24 10:09:53 +00:00
/* ARGSUSED */
1994-05-25 09:21:21 +00:00
int
2001-09-12 08:38:13 +00:00
sigaction(td, uap)
	struct thread *td;
1994-05-24 10:09:53 +00:00
	register struct sigaction_args *uap;
{
2001-09-12 08:38:13 +00:00
	struct proc *p = td->td_proc;
1999-09-29 15:03:48 +00:00
	struct sigaction act, oact;
	register struct sigaction *actp, *oactp;
	int error;
1994-05-24 10:09:53 +00:00

2001-09-01 18:19:21 +00:00
	mtx_lock(&Giant);

1999-10-12 13:14:18 +00:00
	actp = (uap->act != NULL) ? &act : NULL;
	oactp = (uap->oact != NULL) ? &oact : NULL;
1999-09-29 15:03:48 +00:00
	if (actp) {
1999-10-12 13:14:18 +00:00
		error = copyin(uap->act, actp, sizeof(act));
1999-09-29 15:03:48 +00:00
		if (error)
2001-09-01 18:19:21 +00:00
			goto done2;
1994-05-24 10:09:53 +00:00
	}
1999-10-11 20:33:17 +00:00
	error = do_sigaction(p, uap->sig, actp, oactp, 0);
1999-09-29 15:03:48 +00:00
	if (oactp && !error) {
1999-10-12 13:14:18 +00:00
		error = copyout(oactp, uap->oact, sizeof(oact));
1994-05-24 10:09:53 +00:00
	}
2001-09-01 18:19:21 +00:00
done2:
	mtx_unlock(&Giant);
1999-09-29 15:03:48 +00:00
	return (error);
1994-05-24 10:09:53 +00:00
}
2000-08-26 02:27:01 +00:00
#ifdef COMPAT_43	/* XXX - COMPAT_FBSD3 */
1999-09-29 15:03:48 +00:00
#ifndef _SYS_SYSPROTO_H_
struct osigaction_args {
	int	signum;
	struct	osigaction *nsa;
	struct	osigaction *osa;
};
#endif
2001-09-01 18:19:21 +00:00
/*
 * MPSAFE
 */
1999-09-29 15:03:48 +00:00
/* ARGSUSED */
int
2001-09-12 08:38:13 +00:00
osigaction(td, uap)
	struct thread *td;
1999-09-29 15:03:48 +00:00
	register struct osigaction_args *uap;
1994-05-24 10:09:53 +00:00
{
2001-09-12 08:38:13 +00:00
	struct proc *p = td->td_proc;
1999-09-29 15:03:48 +00:00
	struct osigaction sa;
	struct sigaction nsa, osa;
	register struct sigaction *nsap, *osap;
	int error;
1994-05-24 10:09:53 +00:00

1999-10-12 13:14:18 +00:00
	if (uap->signum <= 0 || uap->signum >= ONSIG)
		return (EINVAL);
2001-09-01 18:19:21 +00:00

1999-10-12 13:14:18 +00:00
	nsap = (uap->nsa != NULL) ? &nsa : NULL;
	osap = (uap->osa != NULL) ? &osa : NULL;
2001-09-01 18:19:21 +00:00

	mtx_lock(&Giant);

1999-09-29 15:03:48 +00:00
	if (nsap) {
1999-10-12 13:14:18 +00:00
		error = copyin(uap->nsa, &sa, sizeof(sa));
1999-09-29 15:03:48 +00:00
		if (error)
2001-09-01 18:19:21 +00:00
			goto done2;
1999-09-29 15:03:48 +00:00
		nsap->sa_handler = sa.sa_handler;
		nsap->sa_flags = sa.sa_flags;
		OSIG2SIG(sa.sa_mask, nsap->sa_mask);
Implement SA_SIGINFO for i386.  Thanks to Bruce Evans for much more
than a review; this was a nice puzzle.

This is supposed to be binary and source compatible with older
applications that access the old FreeBSD-style three arguments to a
signal handler.

Except those applications that access hidden signal handler arguments
beyond the documented third one.  If you have applications that do,
please let me know so that we take the opportunity to provide the
functionality they need in a documented manner.

Also except applications that use 'struct sigframe' directly.  You need
to recompile gdb and doscmd.  `make world` is recommended.

Example program that demonstrates how SA_SIGINFO and old-style FreeBSD
handlers (with their three args) may be used in the same process is at
http://www3.cons.org/tmp/fbsd-siginfo.c

Programs that use the old FreeBSD-style three arguments are easy to
change to SA_SIGINFO (although they don't need to, since the old style
will still work):

  Old args to signal handler:
    void handler_sn(int sig, int code, struct sigcontext *scp)
  New args:
    void handler_si(int sig, siginfo_t *si, void *third)
  where:
    old:code == new:second->si_code
    old:scp == &(new:si->si_scp)	/* Passed by value! */

The latter is also pointed to by new:third, but accessing it via
si->si_scp is preferred because it is type-safe.

FreeBSD implementation notes:
- This is just the framework to make the interface POSIX compatible.
  For now, no additional functionality is provided.  This is supposed
  to happen now, starting with floating point values.
- We don't use 'sigcontext_t.si_value' for now (POSIX meant it for
  realtime-related values).
- Documentation will be updated when new functionality is added and
  the exact arguments passed are determined.  The comments in
  sys/signal.h are meant to be useful.

Reviewed by: BDE
1999-07-06 07:13:48 +00:00
	}
1999-10-11 20:33:17 +00:00
	error = do_sigaction(p, uap->signum, nsap, osap, 1);
1999-09-29 15:03:48 +00:00
	if (osap && !error) {
		sa.sa_handler = osap->sa_handler;
		sa.sa_flags = osap->sa_flags;
		SIG2OSIG(osap->sa_mask, sa.sa_mask);
1999-10-12 13:14:18 +00:00
		error = copyout(&sa, uap->osa, sizeof(sa));
1994-05-24 10:09:53 +00:00
	}
2001-09-01 18:19:21 +00:00
done2:
	mtx_unlock(&Giant);
1999-09-29 15:03:48 +00:00
	return (error);
1994-05-24 10:09:53 +00:00
}
2000-08-26 02:27:01 +00:00
#endif /* COMPAT_43 */
1994-05-24 10:09:53 +00:00

/*
 * Initialize signal state for process 0;
 * set to ignore signals that are ignored by default.
 */
void
siginit(p)
	struct proc *p;
{
	register int i;

2001-03-07 02:59:54 +00:00
	PROC_LOCK(p);
1999-09-29 15:03:48 +00:00
	for (i = 1; i <= NSIG; i++)
		if (sigprop(i) & SA_IGNORE && i != SIGCONT)
			SIGADDSET(p->p_sigignore, i);
2001-03-07 02:59:54 +00:00
	PROC_UNLOCK(p);
1994-05-24 10:09:53 +00:00
}

/*
 * Reset signals for an exec of the specified process.
 */
void
execsigs(p)
	register struct proc *p;
{
2001-03-07 02:59:54 +00:00
	register struct sigacts *ps;
1999-09-29 15:03:48 +00:00
	register int sig;
1994-05-24 10:09:53 +00:00

	/*
	 * Reset caught signals.  Held signals remain held
	 * through p_sigmask (unless they were caught,
	 * and are now ignored by default).
	 */
2001-03-07 02:59:54 +00:00
	PROC_LOCK(p);
	ps = p->p_sigacts;
1999-09-29 15:03:48 +00:00
	while (SIGNOTEMPTY(p->p_sigcatch)) {
		sig = sig_ffs(&p->p_sigcatch);
		SIGDELSET(p->p_sigcatch, sig);
		if (sigprop(sig) & SA_IGNORE) {
			if (sig != SIGCONT)
				SIGADDSET(p->p_sigignore, sig);
			SIGDELSET(p->p_siglist, sig);
1994-05-24 10:09:53 +00:00
		}
1999-09-29 15:03:48 +00:00
		ps->ps_sigact[_SIG_IDX(sig)] = SIG_DFL;
1994-05-24 10:09:53 +00:00
	}
	/*
	 * Reset stack state to the user stack.
	 * Clear set of signals caught on the signal stack.
	 */
1999-10-11 20:33:17 +00:00
	p->p_sigstk.ss_flags = SS_DISABLE;
	p->p_sigstk.ss_size = 0;
	p->p_sigstk.ss_sp = 0;
2001-05-07 18:07:29 +00:00
	p->p_flag &= ~P_ALTSTACK;
1999-07-18 13:40:11 +00:00
	/*
	 * Reset no zombies if child dies flag as Solaris does.
	 */
1999-10-11 20:33:17 +00:00
	p->p_procsig->ps_flag &= ~PS_NOCLDWAIT;
2001-06-11 09:15:41 +00:00
	if (ps->ps_sigact[_SIG_IDX(SIGCHLD)] == SIG_IGN)
		ps->ps_sigact[_SIG_IDX(SIGCHLD)] = SIG_DFL;
2001-03-07 02:59:54 +00:00
	PROC_UNLOCK(p);
1994-05-24 10:09:53 +00:00
}

/*
2001-03-07 02:59:54 +00:00
 * do_sigprocmask()
2000-04-02 17:52:43 +00:00
 *
2001-03-07 02:59:54 +00:00
 *	Manipulate signal mask.
1994-05-24 10:09:53 +00:00
 */
1999-09-29 15:03:48 +00:00
static int
1999-10-11 20:33:17 +00:00
do_sigprocmask(p, how, set, oset, old)
1999-09-29 15:03:48 +00:00
	struct proc *p;
	int how;
	sigset_t *set, *oset;
1999-10-11 20:33:17 +00:00
	int old;
1999-09-29 15:03:48 +00:00
{
	int error;

2001-03-07 02:59:54 +00:00
	PROC_LOCK(p);
1999-09-29 15:03:48 +00:00
	if (oset != NULL)
		*oset = p->p_sigmask;

	error = 0;
	if (set != NULL) {
		switch (how) {
		case SIG_BLOCK:
1999-10-11 20:33:17 +00:00
			SIG_CANTMASK(*set);
1999-09-29 15:03:48 +00:00
			SIGSETOR(p->p_sigmask, *set);
			break;
		case SIG_UNBLOCK:
			SIGSETNAND(p->p_sigmask, *set);
			break;
		case SIG_SETMASK:
1999-10-11 20:33:17 +00:00
			SIG_CANTMASK(*set);
			if (old)
				SIGSETLO(p->p_sigmask, *set);
			else
				p->p_sigmask = *set;
1999-09-29 15:03:48 +00:00
			break;
		default:
			error = EINVAL;
			break;
		}
	}
2001-03-07 02:59:54 +00:00
	PROC_UNLOCK(p);
1999-09-29 15:03:48 +00:00
	return (error);
}

2000-04-02 17:52:43 +00:00
/*
2001-09-12 08:38:13 +00:00
 * sigprocmask() - MP SAFE (XXXKSE not under KSE it isn't)
2000-04-02 17:52:43 +00:00
 */

1995-11-12 06:43:28 +00:00
#ifndef _SYS_SYSPROTO_H_
1994-05-24 10:09:53 +00:00
struct sigprocmask_args {
	int	how;
1999-09-29 15:03:48 +00:00
	const sigset_t *set;
	sigset_t *oset;
1994-05-24 10:09:53 +00:00
};
1995-11-12 06:43:28 +00:00
#endif
1994-05-25 09:21:21 +00:00
int
2001-09-12 08:38:13 +00:00
sigprocmask(td, uap)
	register struct thread *td;
1994-05-24 10:09:53 +00:00
	struct sigprocmask_args *uap;
{
2001-09-12 08:38:13 +00:00
	struct proc *p = td->td_proc;
1999-09-29 15:03:48 +00:00
	sigset_t set, oset;
	sigset_t *setp, *osetp;
	int error;
1994-05-24 10:09:53 +00:00

1999-10-12 13:14:18 +00:00
	setp = (uap->set != NULL) ? &set : NULL;
	osetp = (uap->oset != NULL) ? &oset : NULL;
1999-09-29 15:03:48 +00:00
	if (setp) {
1999-10-12 13:14:18 +00:00
		error = copyin(uap->set, setp, sizeof(set));
1999-09-29 15:03:48 +00:00
		if (error)
			return (error);
	}
1999-10-11 20:33:17 +00:00
	error = do_sigprocmask(p, uap->how, setp, osetp, 0);
1999-09-29 15:03:48 +00:00
	if (osetp && !error) {
1999-10-12 13:14:18 +00:00
		error = copyout(osetp, uap->oset, sizeof(oset));
1999-09-29 15:03:48 +00:00
	}
	return (error);
}

2000-08-26 02:27:01 +00:00
#ifdef COMPAT_43	/* XXX - COMPAT_FBSD3 */
2000-04-02 17:52:43 +00:00
/*
 * osigprocmask() - MP SAFE
 */
1999-09-29 15:03:48 +00:00
#ifndef _SYS_SYSPROTO_H_
struct osigprocmask_args {
	int	how;
	osigset_t mask;
};
#endif
int
2001-09-12 08:38:13 +00:00
osigprocmask(td, uap)
	register struct thread *td;
1999-09-29 15:03:48 +00:00
	struct osigprocmask_args *uap;
{
2001-09-12 08:38:13 +00:00
	struct proc *p = td->td_proc;
1999-09-29 15:03:48 +00:00
	sigset_t set, oset;
	int error;
1995-05-30 08:16:23 +00:00

1999-09-29 15:03:48 +00:00
	OSIG2SIG(uap->mask, set);
1999-10-11 20:33:17 +00:00
	error = do_sigprocmask(p, uap->how, &set, &oset, 1);
2001-09-12 08:38:13 +00:00
	SIG2OSIG(oset, td->td_retval[0]);
1994-05-24 10:09:53 +00:00
	return (error);
}
2000-08-26 02:27:01 +00:00
#endif /* COMPAT_43 */

1995-11-12 06:43:28 +00:00
#ifndef _SYS_SYSPROTO_H_
1994-05-24 10:09:53 +00:00
struct sigpending_args {
1999-09-29 15:03:48 +00:00
	sigset_t *set;
1994-05-24 10:09:53 +00:00
};
1995-11-12 06:43:28 +00:00
#endif
2001-09-01 18:19:21 +00:00
/*
 * MPSAFE
 */
1994-05-24 10:09:53 +00:00
/* ARGSUSED */
1994-05-25 09:21:21 +00:00
int
2001-09-12 08:38:13 +00:00
sigpending(td, uap)
	struct thread *td;
1994-05-24 10:09:53 +00:00
	struct sigpending_args *uap;
{
2001-09-12 08:38:13 +00:00
	struct proc *p = td->td_proc;
2001-03-07 02:59:54 +00:00
	sigset_t siglist;
2001-09-01 18:19:21 +00:00
	int error;
1994-05-24 10:09:53 +00:00

2001-09-01 18:19:21 +00:00
	mtx_lock(&Giant);
2001-03-07 02:59:54 +00:00
	PROC_LOCK(p);
	siglist = p->p_siglist;
	PROC_UNLOCK(p);
2001-09-01 18:19:21 +00:00
	mtx_unlock(&Giant);
	error = copyout(&siglist, uap->set, sizeof(sigset_t));
	return (error);
1999-09-29 15:03:48 +00:00
}

2000-08-26 02:27:01 +00:00
#ifdef COMPAT_43	/* XXX - COMPAT_FBSD3 */
1999-09-29 15:03:48 +00:00
#ifndef _SYS_SYSPROTO_H_
struct osigpending_args {
	int	dummy;
};
#endif
2001-09-01 18:19:21 +00:00
/*
 * MPSAFE
 */
1999-09-29 15:03:48 +00:00
/* ARGSUSED */
int
2001-09-12 08:38:13 +00:00
osigpending(td, uap)
	struct thread *td;
1999-09-29 15:03:48 +00:00
	struct osigpending_args *uap;
{
2001-09-12 08:38:13 +00:00
	struct proc *p = td->td_proc;

2001-09-01 18:19:21 +00:00
	mtx_lock(&Giant);
2001-03-07 02:59:54 +00:00
	PROC_LOCK(p);
2001-09-12 08:38:13 +00:00
	SIG2OSIG(p->p_siglist, td->td_retval[0]);
2001-03-07 02:59:54 +00:00
	PROC_UNLOCK(p);
2001-09-01 18:19:21 +00:00
	mtx_unlock(&Giant);
1994-05-24 10:09:53 +00:00
	return (0);
}
2000-08-26 02:27:01 +00:00
#endif /* COMPAT_43 */
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
#if defined(COMPAT_43) || defined(COMPAT_SUNOS)
|
|
|
|
/*
|
|
|
|
* Generalized interface signal handler, 4.3-compatible.
|
|
|
|
*/
|
1995-11-12 06:43:28 +00:00
|
|
|
#ifndef _SYS_SYSPROTO_H_
|
1994-05-24 10:09:53 +00:00
|
|
|
struct osigvec_args {
|
|
|
|
int signum;
|
|
|
|
struct sigvec *nsv;
|
|
|
|
struct sigvec *osv;
|
|
|
|
};
|
1995-11-12 06:43:28 +00:00
|
|
|
#endif
|
2001-09-01 18:19:21 +00:00
|
|
|
/*
|
|
|
|
* MPSAFE
|
|
|
|
*/
|
1994-05-24 10:09:53 +00:00
|
|
|
/* ARGSUSED */
|
1994-05-25 09:21:21 +00:00
|
|
|
int
|
2001-09-12 08:38:13 +00:00
|
|
|
osigvec(td, uap)
|
|
|
|
struct thread *td;
|
1994-05-24 10:09:53 +00:00
|
|
|
register struct osigvec_args *uap;
|
|
|
|
{
|
2001-09-12 08:38:13 +00:00
|
|
|
struct proc *p = td->td_proc;
|
1994-05-24 10:09:53 +00:00
|
|
|
struct sigvec vec;
|
1999-09-29 15:03:48 +00:00
|
|
|
struct sigaction nsa, osa;
|
|
|
|
register struct sigaction *nsap, *osap;
|
|
|
|
int error;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
1999-10-12 13:14:18 +00:00
|
|
|
if (uap->signum <= 0 || uap->signum >= ONSIG)
|
|
|
|
return (EINVAL);
|
|
|
|
nsap = (uap->nsv != NULL) ? &nsa : NULL;
|
|
|
|
osap = (uap->osv != NULL) ? &osa : NULL;
|
1999-09-29 15:03:48 +00:00
|
|
|
if (nsap) {
|
1999-10-12 13:14:18 +00:00
|
|
|
error = copyin(uap->nsv, &vec, sizeof(vec));
|
1999-09-29 15:03:48 +00:00
|
|
|
if (error)
|
1994-05-24 10:09:53 +00:00
|
|
|
return (error);
|
1999-09-29 15:03:48 +00:00
|
|
|
nsap->sa_handler = vec.sv_handler;
|
|
|
|
OSIG2SIG(vec.sv_mask, nsap->sa_mask);
|
|
|
|
nsap->sa_flags = vec.sv_flags;
|
|
|
|
nsap->sa_flags ^= SA_RESTART; /* opposite of SV_INTERRUPT */
|
|
|
|
#ifdef COMPAT_SUNOS
|
|
|
|
nsap->sa_flags |= SA_USERTRAMP;
|
|
|
|
#endif
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
2001-09-01 18:19:21 +00:00
|
|
|
mtx_lock(&Giant);
|
1999-10-11 20:33:17 +00:00
|
|
|
error = do_sigaction(p, uap->signum, nsap, osap, 1);
|
2001-09-01 18:19:21 +00:00
|
|
|
mtx_unlock(&Giant);
|
1999-09-29 15:03:48 +00:00
|
|
|
if (osap && !error) {
|
|
|
|
vec.sv_handler = osap->sa_handler;
|
|
|
|
SIG2OSIG(osap->sa_mask, vec.sv_mask);
|
|
|
|
vec.sv_flags = osap->sa_flags;
|
|
|
|
vec.sv_flags &= ~SA_NOCLDWAIT;
|
|
|
|
vec.sv_flags ^= SA_RESTART;
|
1994-05-24 10:09:53 +00:00
|
|
|
#ifdef COMPAT_SUNOS
|
1999-09-29 15:03:48 +00:00
|
|
|
vec.sv_flags &= ~SA_NOCLDSTOP;
|
1994-05-24 10:09:53 +00:00
|
|
|
#endif
|
1999-10-12 13:14:18 +00:00
|
|
|
error = copyout(&vec, uap->osv, sizeof(vec));
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
1999-09-29 15:03:48 +00:00
|
|
|
return (error);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
1995-11-12 06:43:28 +00:00
|
|
|
#ifndef _SYS_SYSPROTO_H_
|
1994-05-24 10:09:53 +00:00
|
|
|
struct osigblock_args {
|
|
|
|
int mask;
|
|
|
|
};
|
1995-11-12 06:43:28 +00:00
|
|
|
#endif
|
2001-09-01 18:19:21 +00:00
|
|
|
/*
|
|
|
|
* MPSAFE
|
|
|
|
*/
|
1994-05-25 09:21:21 +00:00
|
|
|
int
|
2001-09-12 08:38:13 +00:00
|
|
|
osigblock(td, uap)
|
|
|
|
register struct thread *td;
|
1994-05-24 10:09:53 +00:00
|
|
|
struct osigblock_args *uap;
|
|
|
|
{
|
2001-09-12 08:38:13 +00:00
|
|
|
struct proc *p = td->td_proc;
|
1999-09-29 15:03:48 +00:00
|
|
|
sigset_t set;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
1999-09-29 15:03:48 +00:00
|
|
|
OSIG2SIG(uap->mask, set);
|
|
|
|
SIG_CANTMASK(set);
|
2001-09-01 18:19:21 +00:00
|
|
|
mtx_lock(&Giant);
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_LOCK(p);
|
2001-09-12 08:38:13 +00:00
|
|
|
SIG2OSIG(p->p_sigmask, td->td_retval[0]);
|
1999-09-29 15:03:48 +00:00
|
|
|
SIGSETOR(p->p_sigmask, set);
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_UNLOCK(p);
|
2001-09-01 18:19:21 +00:00
|
|
|
mtx_unlock(&Giant);
|
1994-05-24 10:09:53 +00:00
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
1995-11-12 06:43:28 +00:00
|
|
|
#ifndef _SYS_SYSPROTO_H_
|
1994-05-24 10:09:53 +00:00
|
|
|
struct osigsetmask_args {
|
|
|
|
int mask;
|
|
|
|
};
|
1995-11-12 06:43:28 +00:00
|
|
|
#endif
|
2001-09-01 18:19:21 +00:00
|
|
|
/*
|
|
|
|
* MPSAFE
|
|
|
|
*/
|
1994-05-25 09:21:21 +00:00
|
|
|
int
|
2001-09-12 08:38:13 +00:00
|
|
|
osigsetmask(td, uap)
|
|
|
|
struct thread *td;
|
1994-05-24 10:09:53 +00:00
|
|
|
struct osigsetmask_args *uap;
|
|
|
|
{
|
2001-09-12 08:38:13 +00:00
|
|
|
struct proc *p = td->td_proc;
|
1999-09-29 15:03:48 +00:00
|
|
|
sigset_t set;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
1999-09-29 15:03:48 +00:00
|
|
|
OSIG2SIG(uap->mask, set);
|
|
|
|
SIG_CANTMASK(set);
|
2001-09-01 18:19:21 +00:00
|
|
|
mtx_lock(&Giant);
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_LOCK(p);
|
2001-09-12 08:38:13 +00:00
|
|
|
SIG2OSIG(p->p_sigmask, td->td_retval[0]);
|
1999-10-11 20:33:17 +00:00
|
|
|
SIGSETLO(p->p_sigmask, set);
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_UNLOCK(p);
|
2001-09-01 18:19:21 +00:00
|
|
|
mtx_unlock(&Giant);
|
1994-05-24 10:09:53 +00:00
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
#endif /* COMPAT_43 || COMPAT_SUNOS */
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Suspend process until signal, providing mask to be set
|
|
|
|
* in the meantime. Note nonstandard calling convention:
|
|
|
|
 * the old osigsuspend() libc stub passes the mask by value,
 * not a pointer, to save a copyin.
|
2001-09-12 08:38:13 +00:00
|
|
|
***** XXXKSE this doesn't make sense under KSE.
|
|
|
|
***** Do we suspend the thread or all threads in the process?
|
|
|
|
***** How do we suspend threads running NOW on another processor?
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
1995-11-12 06:43:28 +00:00
|
|
|
#ifndef _SYS_SYSPROTO_H_
|
1994-05-24 10:09:53 +00:00
|
|
|
struct sigsuspend_args {
|
1999-09-29 15:03:48 +00:00
|
|
|
const sigset_t *sigmask;
|
1994-05-24 10:09:53 +00:00
|
|
|
};
|
1995-11-12 06:43:28 +00:00
|
|
|
#endif
|
2001-09-01 18:19:21 +00:00
|
|
|
/*
|
|
|
|
* MPSAFE
|
|
|
|
*/
|
1994-05-24 10:09:53 +00:00
|
|
|
/* ARGSUSED */
|
1994-05-25 09:21:21 +00:00
|
|
|
int
|
2001-09-12 08:38:13 +00:00
|
|
|
sigsuspend(td, uap)
|
|
|
|
struct thread *td;
|
1994-05-24 10:09:53 +00:00
|
|
|
struct sigsuspend_args *uap;
|
|
|
|
{
|
2001-09-12 08:38:13 +00:00
|
|
|
struct proc *p = td->td_proc;
|
1999-09-29 15:03:48 +00:00
|
|
|
sigset_t mask;
|
2001-03-07 02:59:54 +00:00
|
|
|
register struct sigacts *ps;
|
1999-09-29 15:03:48 +00:00
|
|
|
int error;
|
|
|
|
|
1999-10-12 13:14:18 +00:00
|
|
|
error = copyin(uap->sigmask, &mask, sizeof(mask));
|
1999-09-29 15:03:48 +00:00
|
|
|
if (error)
|
|
|
|
return (error);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
/*
|
1999-10-11 20:33:17 +00:00
|
|
|
* When returning from sigsuspend, we want
|
1994-05-24 10:09:53 +00:00
|
|
|
* the old mask to be restored after the
|
|
|
|
* signal handler has finished. Thus, we
|
|
|
|
* save it here and mark the sigacts structure
|
|
|
|
* to indicate this.
|
|
|
|
*/
|
2001-09-01 18:19:21 +00:00
|
|
|
mtx_lock(&Giant);
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_LOCK(p);
|
|
|
|
ps = p->p_sigacts;
|
1998-12-19 02:55:34 +00:00
|
|
|
p->p_oldsigmask = p->p_sigmask;
|
1999-10-11 20:33:17 +00:00
|
|
|
p->p_flag |= P_OLDMASK;
|
|
|
|
|
|
|
|
SIG_CANTMASK(mask);
|
1999-09-29 15:03:48 +00:00
|
|
|
p->p_sigmask = mask;
|
2001-03-07 02:59:54 +00:00
|
|
|
while (msleep((caddr_t) ps, &p->p_mtx, PPAUSE|PCATCH, "pause", 0) == 0)
|
1999-09-29 15:03:48 +00:00
|
|
|
/* void */;
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_UNLOCK(p);
|
2001-09-01 18:19:21 +00:00
|
|
|
mtx_unlock(&Giant);
|
1999-09-29 15:03:48 +00:00
|
|
|
/* always return EINTR rather than ERESTART... */
|
|
|
|
return (EINTR);
|
|
|
|
}
|
|
|
|
|
2000-08-26 02:27:01 +00:00
|
|
|
#ifdef COMPAT_43 /* XXX - COMPAT_FBSD3 */
|
1999-09-29 15:03:48 +00:00
|
|
|
#ifndef _SYS_SYSPROTO_H_
|
|
|
|
struct osigsuspend_args {
|
|
|
|
osigset_t mask;
|
|
|
|
};
|
|
|
|
#endif
|
2001-09-01 18:19:21 +00:00
|
|
|
/*
|
|
|
|
* MPSAFE
|
|
|
|
*/
|
1999-09-29 15:03:48 +00:00
|
|
|
/* ARGSUSED */
|
|
|
|
int
|
2001-09-12 08:38:13 +00:00
|
|
|
osigsuspend(td, uap)
|
|
|
|
struct thread *td;
|
1999-09-29 15:03:48 +00:00
|
|
|
struct osigsuspend_args *uap;
|
|
|
|
{
|
2001-09-12 08:38:13 +00:00
|
|
|
struct proc *p = td->td_proc;
|
1999-10-11 20:33:17 +00:00
|
|
|
sigset_t mask;
|
2001-03-07 02:59:54 +00:00
|
|
|
register struct sigacts *ps;
|
1999-09-29 15:03:48 +00:00
|
|
|
|
2001-09-01 18:19:21 +00:00
|
|
|
mtx_lock(&Giant);
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_LOCK(p);
|
|
|
|
ps = p->p_sigacts;
|
1999-09-29 15:03:48 +00:00
|
|
|
p->p_oldsigmask = p->p_sigmask;
|
1999-10-11 20:33:17 +00:00
|
|
|
p->p_flag |= P_OLDMASK;
|
|
|
|
OSIG2SIG(uap->mask, mask);
|
|
|
|
SIG_CANTMASK(mask);
|
|
|
|
SIGSETLO(p->p_sigmask, mask);
|
2001-03-07 02:59:54 +00:00
|
|
|
while (msleep((caddr_t) ps, &p->p_mtx, PPAUSE|PCATCH, "opause", 0) == 0)
|
1994-05-24 10:09:53 +00:00
|
|
|
/* void */;
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_UNLOCK(p);
|
2001-09-01 18:19:21 +00:00
|
|
|
mtx_unlock(&Giant);
|
1994-05-24 10:09:53 +00:00
|
|
|
/* always return EINTR rather than ERESTART... */
|
|
|
|
return (EINTR);
|
|
|
|
}
|
2000-08-26 02:27:01 +00:00
|
|
|
#endif /* COMPAT_43 */
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
#if defined(COMPAT_43) || defined(COMPAT_SUNOS)
|
1995-11-12 06:43:28 +00:00
|
|
|
#ifndef _SYS_SYSPROTO_H_
|
1994-05-24 10:09:53 +00:00
|
|
|
struct osigstack_args {
|
|
|
|
struct sigstack *nss;
|
|
|
|
struct sigstack *oss;
|
|
|
|
};
|
1995-11-12 06:43:28 +00:00
|
|
|
#endif
|
2001-09-01 18:19:21 +00:00
|
|
|
/*
|
|
|
|
* MPSAFE
|
|
|
|
*/
|
1994-05-24 10:09:53 +00:00
|
|
|
/* ARGSUSED */
|
1994-05-25 09:21:21 +00:00
|
|
|
int
|
2001-09-12 08:38:13 +00:00
|
|
|
osigstack(td, uap)
|
|
|
|
struct thread *td;
|
1994-05-24 10:09:53 +00:00
|
|
|
register struct osigstack_args *uap;
|
|
|
|
{
|
2001-09-12 08:38:13 +00:00
|
|
|
struct proc *p = td->td_proc;
|
1994-05-24 10:09:53 +00:00
|
|
|
struct sigstack ss;
|
2001-09-01 18:19:21 +00:00
|
|
|
int error = 0;
|
|
|
|
|
|
|
|
mtx_lock(&Giant);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2000-11-30 05:23:49 +00:00
|
|
|
if (uap->oss != NULL) {
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_LOCK(p);
|
2000-11-30 05:23:49 +00:00
|
|
|
ss.ss_sp = p->p_sigstk.ss_sp;
|
2001-09-12 08:38:13 +00:00
|
|
|
ss.ss_onstack = sigonstack(cpu_getstack(td));
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_UNLOCK(p);
|
2000-11-30 05:23:49 +00:00
|
|
|
error = copyout(&ss, uap->oss, sizeof(struct sigstack));
|
|
|
|
if (error)
|
2001-09-01 18:19:21 +00:00
|
|
|
goto done2;
|
2000-11-30 05:23:49 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
if (uap->nss != NULL) {
|
|
|
|
if ((error = copyin(uap->nss, &ss, sizeof(ss))) != 0)
|
2001-09-01 18:19:21 +00:00
|
|
|
goto done2;
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_LOCK(p);
|
1999-10-11 20:33:17 +00:00
|
|
|
p->p_sigstk.ss_sp = ss.ss_sp;
|
|
|
|
p->p_sigstk.ss_size = 0;
|
|
|
|
p->p_sigstk.ss_flags |= ss.ss_onstack & SS_ONSTACK;
|
|
|
|
p->p_flag |= P_ALTSTACK;
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_UNLOCK(p);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
2001-09-01 18:19:21 +00:00
|
|
|
done2:
|
|
|
|
mtx_unlock(&Giant);
|
|
|
|
return (error);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
#endif /* COMPAT_43 || COMPAT_SUNOS */
|
|
|
|
|
1995-11-12 06:43:28 +00:00
|
|
|
#ifndef _SYS_SYSPROTO_H_
|
1994-05-24 10:09:53 +00:00
|
|
|
struct sigaltstack_args {
|
1999-09-29 15:03:48 +00:00
|
|
|
stack_t *ss;
|
|
|
|
stack_t *oss;
|
1994-05-24 10:09:53 +00:00
|
|
|
};
|
1995-11-12 06:43:28 +00:00
|
|
|
#endif
|
2001-09-01 18:19:21 +00:00
|
|
|
/*
|
|
|
|
* MPSAFE
|
|
|
|
*/
|
1994-05-24 10:09:53 +00:00
|
|
|
/* ARGSUSED */
|
1994-05-25 09:21:21 +00:00
|
|
|
int
|
2001-09-12 08:38:13 +00:00
|
|
|
sigaltstack(td, uap)
|
|
|
|
struct thread *td;
|
1994-05-24 10:09:53 +00:00
|
|
|
register struct sigaltstack_args *uap;
|
|
|
|
{
|
2001-09-12 08:38:13 +00:00
|
|
|
struct proc *p = td->td_proc;
|
1999-09-29 15:03:48 +00:00
|
|
|
stack_t ss;
|
2001-09-01 18:19:21 +00:00
|
|
|
int oonstack;
|
|
|
|
int error = 0;
|
|
|
|
|
|
|
|
mtx_lock(&Giant);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2001-09-12 08:38:13 +00:00
|
|
|
oonstack = sigonstack(cpu_getstack(td));
|
2000-11-30 05:23:49 +00:00
|
|
|
|
|
|
|
if (uap->oss != NULL) {
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_LOCK(p);
|
2000-11-30 05:23:49 +00:00
|
|
|
ss = p->p_sigstk;
|
|
|
|
ss.ss_flags = (p->p_flag & P_ALTSTACK)
|
|
|
|
? ((oonstack) ? SS_ONSTACK : 0) : SS_DISABLE;
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_UNLOCK(p);
|
2000-11-30 05:23:49 +00:00
|
|
|
if ((error = copyout(&ss, uap->oss, sizeof(stack_t))) != 0)
|
2001-09-01 18:19:21 +00:00
|
|
|
goto done2;
|
2000-11-30 05:23:49 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
if (uap->ss != NULL) {
|
2001-09-01 18:19:21 +00:00
|
|
|
if (oonstack) {
|
|
|
|
error = EPERM;
|
|
|
|
goto done2;
|
|
|
|
}
|
2000-11-30 05:23:49 +00:00
|
|
|
if ((error = copyin(uap->ss, &ss, sizeof(ss))) != 0)
|
2001-09-01 18:19:21 +00:00
|
|
|
goto done2;
|
|
|
|
if ((ss.ss_flags & ~SS_DISABLE) != 0) {
|
|
|
|
error = EINVAL;
|
|
|
|
goto done2;
|
|
|
|
}
|
2000-11-30 05:23:49 +00:00
|
|
|
if (!(ss.ss_flags & SS_DISABLE)) {
|
2001-09-01 18:19:21 +00:00
|
|
|
if (ss.ss_size < p->p_sysent->sv_minsigstksz) {
|
|
|
|
error = ENOMEM;
|
|
|
|
goto done2;
|
|
|
|
}
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_LOCK(p);
|
2000-11-30 05:23:49 +00:00
|
|
|
p->p_sigstk = ss;
|
|
|
|
p->p_flag |= P_ALTSTACK;
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_UNLOCK(p);
|
|
|
|
} else {
|
|
|
|
PROC_LOCK(p);
|
2000-11-30 05:23:49 +00:00
|
|
|
p->p_flag &= ~P_ALTSTACK;
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_UNLOCK(p);
|
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
2001-09-01 18:19:21 +00:00
|
|
|
done2:
|
|
|
|
mtx_unlock(&Giant);
|
|
|
|
return (error);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
1994-10-10 01:00:49 +00:00
|
|
|
/*
|
|
|
|
* Common code for kill process group/broadcast kill.
|
|
|
|
 * cp is the calling process.
|
|
|
|
*/
|
|
|
|
int
|
1999-09-29 15:03:48 +00:00
|
|
|
killpg1(cp, sig, pgid, all)
|
1994-10-10 01:00:49 +00:00
|
|
|
register struct proc *cp;
|
1999-09-29 15:03:48 +00:00
|
|
|
int sig, pgid, all;
|
1994-10-10 01:00:49 +00:00
|
|
|
{
|
|
|
|
register struct proc *p;
|
|
|
|
struct pgrp *pgrp;
|
|
|
|
int nfound = 0;
|
1995-05-30 08:16:23 +00:00
|
|
|
|
2000-11-22 07:42:04 +00:00
|
|
|
if (all) {
|
1995-05-30 08:16:23 +00:00
|
|
|
/*
|
|
|
|
* broadcast
|
1994-10-10 01:00:49 +00:00
|
|
|
*/
|
2001-03-28 11:52:56 +00:00
|
|
|
sx_slock(&allproc_lock);
|
1999-11-16 10:56:05 +00:00
|
|
|
LIST_FOREACH(p, &allproc, p_list) {
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_LOCK(p);
|
|
|
|
if (p->p_pid <= 1 || p->p_flag & P_SYSTEM || p == cp) {
|
|
|
|
PROC_UNLOCK(p);
|
|
|
|
continue;
|
|
|
|
}
|
2001-04-24 00:51:53 +00:00
|
|
|
if (p_cansignal(cp, p, sig) == 0) {
|
|
|
|
nfound++;
|
|
|
|
if (sig)
|
|
|
|
psignal(p, sig);
|
2001-03-07 02:59:54 +00:00
|
|
|
}
|
2001-04-24 00:51:53 +00:00
|
|
|
PROC_UNLOCK(p);
|
1994-10-10 01:00:49 +00:00
|
|
|
}
|
2001-03-28 11:52:56 +00:00
|
|
|
sx_sunlock(&allproc_lock);
|
2000-11-22 07:42:04 +00:00
|
|
|
} else {
|
1995-05-30 08:16:23 +00:00
|
|
|
if (pgid == 0)
|
|
|
|
/*
|
1994-10-10 01:00:49 +00:00
|
|
|
* zero pgid means send to my process group.
|
|
|
|
*/
|
|
|
|
pgrp = cp->p_pgrp;
|
|
|
|
else {
|
|
|
|
pgrp = pgfind(pgid);
|
|
|
|
if (pgrp == NULL)
|
|
|
|
return (ESRCH);
|
|
|
|
}
|
1999-11-16 10:56:05 +00:00
|
|
|
LIST_FOREACH(p, &pgrp->pg_members, p_pglist) {
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_LOCK(p);
|
|
|
|
if (p->p_pid <= 1 || p->p_flag & P_SYSTEM) {
|
|
|
|
PROC_UNLOCK(p);
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
mtx_lock_spin(&sched_lock);
|
|
|
|
if (p->p_stat == SZOMB) {
|
|
|
|
mtx_unlock_spin(&sched_lock);
|
2001-04-24 00:51:53 +00:00
|
|
|
PROC_UNLOCK(p);
|
2001-03-07 02:59:54 +00:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
mtx_unlock_spin(&sched_lock);
|
2001-04-24 00:51:53 +00:00
|
|
|
if (p_cansignal(cp, p, sig) == 0) {
|
|
|
|
nfound++;
|
|
|
|
if (sig)
|
|
|
|
psignal(p, sig);
|
2001-03-07 02:59:54 +00:00
|
|
|
}
|
2001-04-24 00:51:53 +00:00
|
|
|
PROC_UNLOCK(p);
|
1994-10-10 01:00:49 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
return (nfound ? 0 : ESRCH);
|
|
|
|
}
|
|
|
|
|
1995-11-12 06:43:28 +00:00
|
|
|
#ifndef _SYS_SYSPROTO_H_
|
1994-05-24 10:09:53 +00:00
|
|
|
struct kill_args {
|
|
|
|
int pid;
|
|
|
|
int signum;
|
|
|
|
};
|
1995-11-12 06:43:28 +00:00
|
|
|
#endif
|
2001-09-01 18:19:21 +00:00
|
|
|
/*
|
|
|
|
* MPSAFE
|
|
|
|
*/
|
1994-05-24 10:09:53 +00:00
|
|
|
/* ARGSUSED */
|
1994-05-25 09:21:21 +00:00
|
|
|
int
|
2001-09-12 08:38:13 +00:00
|
|
|
kill(td, uap)
|
|
|
|
register struct thread *td;
|
1994-05-24 10:09:53 +00:00
|
|
|
register struct kill_args *uap;
|
|
|
|
{
|
2001-09-12 08:38:13 +00:00
|
|
|
register struct proc *cp = td->td_proc;
|
1994-05-24 10:09:53 +00:00
|
|
|
register struct proc *p;
|
2001-09-01 18:19:21 +00:00
|
|
|
int error = 0;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2001-11-03 13:26:15 +00:00
|
|
|
if ((u_int)uap->signum > _SIG_MAXSIG)
|
1994-05-24 10:09:53 +00:00
|
|
|
return (EINVAL);
|
2001-09-01 18:19:21 +00:00
|
|
|
|
|
|
|
mtx_lock(&Giant);
|
1994-05-24 10:09:53 +00:00
|
|
|
if (uap->pid > 0) {
|
|
|
|
/* kill single process */
|
2001-09-01 18:19:21 +00:00
|
|
|
if ((p = pfind(uap->pid)) == NULL) {
|
|
|
|
error = ESRCH;
|
|
|
|
} else if (p_cansignal(cp, p, uap->signum)) {
|
|
|
|
PROC_UNLOCK(p);
|
|
|
|
error = EPERM;
|
|
|
|
} else {
|
|
|
|
if (uap->signum)
|
|
|
|
psignal(p, uap->signum);
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_UNLOCK(p);
|
2001-09-01 18:19:21 +00:00
|
|
|
error = 0;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
switch (uap->pid) {
|
|
|
|
case -1: /* broadcast signal */
|
|
|
|
error = killpg1(cp, uap->signum, 0, 1);
|
|
|
|
break;
|
|
|
|
case 0: /* signal own process group */
|
|
|
|
error = killpg1(cp, uap->signum, 0, 0);
|
|
|
|
break;
|
|
|
|
default: /* negative explicit process group */
|
|
|
|
error = killpg1(cp, uap->signum, -uap->pid, 0);
|
|
|
|
break;
|
2001-03-07 02:59:54 +00:00
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
2001-09-01 18:19:21 +00:00
|
|
|
mtx_unlock(&Giant);
|
|
|
|
	return (error);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
#if defined(COMPAT_43) || defined(COMPAT_SUNOS)
|
1995-11-12 06:43:28 +00:00
|
|
|
#ifndef _SYS_SYSPROTO_H_
|
1994-05-24 10:09:53 +00:00
|
|
|
struct okillpg_args {
|
|
|
|
int pgid;
|
|
|
|
int signum;
|
|
|
|
};
|
1995-11-12 06:43:28 +00:00
|
|
|
#endif
|
2001-09-01 18:19:21 +00:00
|
|
|
/*
|
|
|
|
* MPSAFE
|
|
|
|
*/
|
1994-05-24 10:09:53 +00:00
|
|
|
/* ARGSUSED */
|
1994-05-25 09:21:21 +00:00
|
|
|
int
|
2001-09-12 08:38:13 +00:00
|
|
|
okillpg(td, uap)
|
|
|
|
struct thread *td;
|
1994-05-24 10:09:53 +00:00
|
|
|
register struct okillpg_args *uap;
|
|
|
|
{
|
2001-09-01 18:19:21 +00:00
|
|
|
int error;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2001-11-03 13:26:15 +00:00
|
|
|
if ((u_int)uap->signum > _SIG_MAXSIG)
|
1994-05-24 10:09:53 +00:00
|
|
|
return (EINVAL);
|
2001-09-01 18:19:21 +00:00
|
|
|
mtx_lock(&Giant);
|
2001-09-12 08:38:13 +00:00
|
|
|
error = killpg1(td->td_proc, uap->signum, uap->pgid, 0);
|
2001-09-01 18:19:21 +00:00
|
|
|
mtx_unlock(&Giant);
|
|
|
|
return (error);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
#endif /* COMPAT_43 || COMPAT_SUNOS */
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Send a signal to a process group.
|
|
|
|
*/
|
|
|
|
void
|
1999-09-29 15:03:48 +00:00
|
|
|
gsignal(pgid, sig)
|
|
|
|
int pgid, sig;
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
|
|
|
struct pgrp *pgrp;
|
|
|
|
|
|
|
|
if (pgid && (pgrp = pgfind(pgid)))
|
1999-09-29 15:03:48 +00:00
|
|
|
pgsignal(pgrp, sig, 0);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
1997-02-10 02:22:35 +00:00
|
|
|
 * Send a signal to a process group.  If checkctty is 1,
|
1994-05-24 10:09:53 +00:00
|
|
|
* limit to members which have a controlling terminal.
|
|
|
|
*/
|
|
|
|
void
|
1999-09-29 15:03:48 +00:00
|
|
|
pgsignal(pgrp, sig, checkctty)
|
1994-05-24 10:09:53 +00:00
|
|
|
struct pgrp *pgrp;
|
1999-09-29 15:03:48 +00:00
|
|
|
int sig, checkctty;
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
|
|
|
register struct proc *p;
|
|
|
|
|
2001-03-07 02:59:54 +00:00
|
|
|
if (pgrp) {
|
|
|
|
LIST_FOREACH(p, &pgrp->pg_members, p_pglist) {
|
|
|
|
PROC_LOCK(p);
|
1994-05-24 10:09:53 +00:00
|
|
|
if (checkctty == 0 || p->p_flag & P_CONTROLT)
|
1999-09-29 15:03:48 +00:00
|
|
|
psignal(p, sig);
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_UNLOCK(p);
|
|
|
|
}
|
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Send a signal caused by a trap to the current process.
|
|
|
|
 * If it will be caught immediately, deliver it with the correct code.
|
|
|
|
* Otherwise, post it normally.
|
2001-08-30 18:50:57 +00:00
|
|
|
*
|
|
|
|
* MPSAFE
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
|
|
|
void
|
1999-09-29 15:03:48 +00:00
|
|
|
trapsignal(p, sig, code)
|
1994-05-24 10:09:53 +00:00
|
|
|
struct proc *p;
|
1999-09-29 15:03:48 +00:00
|
|
|
register int sig;
|
1996-03-11 02:22:02 +00:00
|
|
|
u_long code;
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
|
|
|
register struct sigacts *ps = p->p_sigacts;
|
|
|
|
|
2001-08-30 18:50:57 +00:00
|
|
|
mtx_lock(&Giant);
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_LOCK(p);
|
1999-09-29 15:03:48 +00:00
|
|
|
if ((p->p_flag & P_TRACED) == 0 && SIGISMEMBER(p->p_sigcatch, sig) &&
|
2000-12-16 21:03:48 +00:00
|
|
|
!SIGISMEMBER(p->p_sigmask, sig)) {
|
1994-05-24 10:09:53 +00:00
|
|
|
p->p_stats->p_ru.ru_nsignals++;
|
|
|
|
#ifdef KTRACE
|
|
|
|
if (KTRPOINT(p, KTR_PSIG))
|
1999-09-29 15:03:48 +00:00
|
|
|
ktrpsig(p->p_tracep, sig, ps->ps_sigact[_SIG_IDX(sig)],
|
|
|
|
&p->p_sigmask, code);
|
1994-05-24 10:09:53 +00:00
|
|
|
#endif
|
1999-09-29 15:03:48 +00:00
|
|
|
(*p->p_sysent->sv_sendsig)(ps->ps_sigact[_SIG_IDX(sig)], sig,
|
|
|
|
&p->p_sigmask, code);
|
|
|
|
SIGSETOR(p->p_sigmask, ps->ps_catchmask[_SIG_IDX(sig)]);
|
|
|
|
if (!SIGISMEMBER(ps->ps_signodefer, sig))
|
|
|
|
SIGADDSET(p->p_sigmask, sig);
|
|
|
|
if (SIGISMEMBER(ps->ps_sigreset, sig)) {
|
1996-03-30 15:15:30 +00:00
|
|
|
/*
|
1999-09-29 15:03:48 +00:00
|
|
|
* See do_sigaction() for origin of this code.
|
1996-03-30 15:15:30 +00:00
|
|
|
*/
|
1999-09-29 15:03:48 +00:00
|
|
|
SIGDELSET(p->p_sigcatch, sig);
|
|
|
|
if (sig != SIGCONT &&
|
|
|
|
sigprop(sig) & SA_IGNORE)
|
|
|
|
SIGADDSET(p->p_sigignore, sig);
|
|
|
|
ps->ps_sigact[_SIG_IDX(sig)] = SIG_DFL;
|
1996-03-15 08:01:33 +00:00
|
|
|
}
|
1999-10-12 13:14:18 +00:00
|
|
|
} else {
|
1998-12-19 02:55:34 +00:00
|
|
|
p->p_code = code; /* XXX for core dump/debugger */
|
1999-09-29 15:03:48 +00:00
|
|
|
p->p_sig = sig; /* XXX to verify code */
|
|
|
|
psignal(p, sig);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_UNLOCK(p);
|
2001-08-30 18:50:57 +00:00
|
|
|
mtx_unlock(&Giant);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Send the signal to the process. If the signal has an action, the action
|
|
|
|
* is usually performed by the target process rather than the caller; we add
|
|
|
|
* the signal to the set of pending signals for the process.
|
|
|
|
*
|
|
|
|
* Exceptions:
|
|
|
|
* o When a stop signal is sent to a sleeping process that takes the
|
|
|
|
* default action, the process is stopped without awakening it.
|
|
|
|
* o SIGCONT restarts stopped processes (or puts them back to sleep)
|
|
|
|
 * regardless of the signal action (e.g., blocked or ignored).
|
|
|
|
*
|
|
|
|
* Other ignored signals are discarded immediately.
|
|
|
|
*/
|
|
|
|
void
|
1999-09-29 15:03:48 +00:00
|
|
|
psignal(p, sig)
|
1994-05-24 10:09:53 +00:00
|
|
|
register struct proc *p;
|
1999-09-29 15:03:48 +00:00
|
|
|
register int sig;
|
1994-05-24 10:09:53 +00:00
|
|
|
{
|
2001-01-24 11:08:02 +00:00
|
|
|
register int prop;
|
1994-05-24 10:09:53 +00:00
|
|
|
register sig_t action;
|
2001-09-12 08:38:13 +00:00
|
|
|
struct thread *td;
|
|
|
|
struct ksegrp *kg;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2001-11-02 23:50:00 +00:00
|
|
|
KASSERT(_SIG_VALID(sig),
|
|
|
|
("psignal(): invalid signal %d\n", sig));
|
1999-09-29 15:03:48 +00:00
|
|
|
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_LOCK_ASSERT(p, MA_OWNED);
|
2000-04-16 18:53:38 +00:00
|
|
|
KNOTE(&p->p_klist, NOTE_SIGNAL | sig);
|
|
|
|
|
1999-09-29 15:03:48 +00:00
|
|
|
prop = sigprop(sig);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
/*
|
1997-12-06 04:11:14 +00:00
|
|
|
* If proc is traced, always give parent a chance;
|
|
|
|
* if signal event is tracked by procfs, give *that*
|
|
|
|
* a chance, as well.
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
2001-09-12 08:38:13 +00:00
|
|
|
if ((p->p_flag & P_TRACED) || (p->p_stops & S_SIG)) {
|
1994-05-24 10:09:53 +00:00
|
|
|
action = SIG_DFL;
|
2001-09-12 08:38:13 +00:00
|
|
|
} else {
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* If the signal is being ignored,
|
|
|
|
* then we forget about it immediately.
|
|
|
|
* (Note: we don't set SIGCONT in p_sigignore,
|
|
|
|
* and if it is set to SIG_IGN,
|
|
|
|
* action will be SIG_DFL here.)
|
|
|
|
*/
|
2001-03-07 02:59:54 +00:00
|
|
|
if (SIGISMEMBER(p->p_sigignore, sig) || (p->p_flag & P_WEXIT))
|
1994-05-24 10:09:53 +00:00
|
|
|
return;
|
1999-09-29 15:03:48 +00:00
|
|
|
if (SIGISMEMBER(p->p_sigmask, sig))
|
1994-05-24 10:09:53 +00:00
|
|
|
action = SIG_HOLD;
|
1999-09-29 15:03:48 +00:00
|
|
|
else if (SIGISMEMBER(p->p_sigcatch, sig))
|
1994-05-24 10:09:53 +00:00
|
|
|
action = SIG_CATCH;
|
|
|
|
else
|
|
|
|
action = SIG_DFL;
|
|
|
|
}
|
|
|
|
|
2001-09-12 08:38:13 +00:00
|
|
|
/*
|
|
|
|
* bring the priority of a process up if we want it to get
|
|
|
|
* killed in this lifetime.
|
|
|
|
 * XXXKSE think of a better way to do this.
|
|
|
|
*
|
|
|
|
* What we need to do is see if there is a thread that will
|
|
|
|
* be able to accept the signal. e.g.
|
|
|
|
* FOREACH_THREAD_IN_PROC() {
|
|
|
|
* if runnable, we're done
|
|
|
|
* else pick one at random.
|
|
|
|
* }
|
|
|
|
*/
|
|
|
|
/* XXXKSE
|
|
|
|
* For now there is one thread per proc.
|
|
|
|
 * Effectively select one sucker thread.
|
|
|
|
*/
|
2002-02-07 20:58:47 +00:00
|
|
|
td = FIRST_THREAD_IN_PROC(p);
|
Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:
mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)
similarily, for releasing a lock, we now have:
mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.
The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.
Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:
MTX_QUIET and MTX_NOSWITCH
The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:
mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.
Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.
Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.
Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.
Finally, caught up to the interface changes in all sys code.
Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock_spin(&sched_lock);
|
2001-09-12 08:38:13 +00:00
|
|
|
if ((p->p_ksegrp.kg_nice > NZERO) && (action == SIG_DFL) &&
|
|
|
|
(prop & SA_KILL) && ((p->p_flag & P_TRACED) == 0))
|
|
|
|
p->p_ksegrp.kg_nice = NZERO; /* XXXKSE */
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_unlock_spin(&sched_lock);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
if (prop & SA_CONT)
|
1999-09-29 15:03:48 +00:00
|
|
|
SIG_STOPSIGMASK(p->p_siglist);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
if (prop & SA_STOP) {
|
|
|
|
/*
|
|
|
|
* If sending a tty stop signal to a member of an orphaned
|
|
|
|
* process group, discard the signal here if the action
|
|
|
|
* is default; don't stop the process below if sleeping,
|
|
|
|
* and don't clear any pending SIGCONT.
|
|
|
|
*/
|
|
|
|
if (prop & SA_TTYSTOP && p->p_pgrp->pg_jobc == 0 &&
|
2001-03-07 02:59:54 +00:00
|
|
|
action == SIG_DFL)
|
1994-05-24 10:09:53 +00:00
|
|
|
return;
|
1999-09-29 15:03:48 +00:00
|
|
|
SIG_CONTSIGMASK(p->p_siglist);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
1999-09-29 15:03:48 +00:00
|
|
|
SIGADDSET(p->p_siglist, sig);
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Defer further processing for signals which are held,
|
|
|
|
* except that stopped processes must be continued by SIGCONT.
|
|
|
|
*/
|
2001-02-09 06:11:45 +00:00
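The inlined spin-lock recursion fast path described in the commit message above can be sketched in userland C. This is a minimal illustrative model, not the kernel implementation: `struct mtx`, `struct thread`, `curthread`, and the `_mtx_lock_spin()` slow path are simplified stand-ins for the real definitions in sys/mutex.h and machine/mutex.h.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins for the kernel types. */
struct thread { int id; };
struct mtx {
	struct thread	*mtx_owner;	/* NULL when unowned */
	int		 mtx_recurse;	/* recursion depth */
};

static struct thread t0 = { 0 };
static struct thread *curthread = &t0;

/* Slow path: the real one spins until the lock is released; stubbed here. */
static void
_mtx_lock_spin(struct mtx *m)
{
	m->mtx_owner = curthread;
}

/*
 * The inlined fast path: recursion is handled inline, and a function
 * call is made only when we would actually have to spin.
 */
static inline void
mtx_lock_spin(struct mtx *m)
{
	if (m->mtx_owner == curthread)
		m->mtx_recurse++;	/* recursive acquire: no call */
	else
		_mtx_lock_spin(m);	/* first or contended acquire */
}

static inline void
mtx_unlock_spin(struct mtx *m)
{
	if (m->mtx_recurse > 0)
		m->mtx_recurse--;
	else
		m->mtx_owner = NULL;
}
```

Since heavily used spin locks such as sched_lock do recurse, the common recursive case costs only a compare and an increment at the call site.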
|
|
|
mtx_lock_spin(&sched_lock);
|
2000-12-01 23:43:15 +00:00
|
|
|
if (action == SIG_HOLD && (!(prop & SA_CONT) || p->p_stat != SSTOP)) {
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_unlock_spin(&sched_lock);
|
1994-05-24 10:09:53 +00:00
|
|
|
return;
|
2000-12-01 23:43:15 +00:00
|
|
|
}
|
2001-09-12 08:38:13 +00:00
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
switch (p->p_stat) {
|
|
|
|
|
|
|
|
case SSLEEP:
|
|
|
|
/*
|
|
|
|
* If process is sleeping uninterruptibly
|
|
|
|
* we can't interrupt the sleep... the signal will
|
|
|
|
* be noticed when the process returns through
|
|
|
|
* trap() or syscall().
|
|
|
|
*/
|
2001-09-12 08:38:13 +00:00
|
|
|
if ((td->td_flags & TDF_SINTR) == 0) {
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_unlock_spin(&sched_lock);
|
1994-05-24 10:09:53 +00:00
|
|
|
goto out;
|
2001-01-02 18:54:09 +00:00
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* Process is sleeping and traced... make it runnable
|
|
|
|
* so it can discover the signal in issignal() and stop
|
|
|
|
* for the parent.
|
|
|
|
*/
|
|
|
|
if (p->p_flag & P_TRACED)
|
|
|
|
goto run;
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_unlock_spin(&sched_lock);
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* If SIGCONT is default (or ignored) and process is
|
|
|
|
* asleep, we are finished; the process should not
|
|
|
|
* be awakened.
|
|
|
|
*/
|
|
|
|
if ((prop & SA_CONT) && action == SIG_DFL) {
|
1999-09-29 15:03:48 +00:00
|
|
|
SIGDELSET(p->p_siglist, sig);
|
1994-05-24 10:09:53 +00:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
/*
|
|
|
|
* When a sleeping process receives a stop
|
|
|
|
* signal, process immediately if possible.
|
|
|
|
* All other (caught or default) signals
|
|
|
|
* cause the process to run.
|
|
|
|
*/
|
|
|
|
if (prop & SA_STOP) {
|
2001-01-02 18:54:09 +00:00
|
|
|
if (action != SIG_DFL)
|
1994-05-24 10:09:53 +00:00
|
|
|
goto runfast;
|
|
|
|
/*
|
|
|
|
* If a child holding parent blocked,
|
|
|
|
* stopping could cause deadlock.
|
|
|
|
*/
|
|
|
|
if (p->p_flag & P_PPWAIT)
|
|
|
|
goto out;
|
1999-09-29 15:03:48 +00:00
|
|
|
SIGDELSET(p->p_siglist, sig);
|
|
|
|
p->p_xstat = sig;
|
2001-03-07 02:59:54 +00:00
|
|
|
if ((p->p_pptr->p_procsig->ps_flag & PS_NOCLDSTOP) == 0) {
|
|
|
|
PROC_LOCK(p->p_pptr);
|
1994-05-24 10:09:53 +00:00
|
|
|
psignal(p->p_pptr, SIGCHLD);
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_UNLOCK(p->p_pptr);
|
|
|
|
}
|
2001-04-03 01:39:23 +00:00
|
|
|
mtx_lock_spin(&sched_lock);
|
1994-05-24 10:09:53 +00:00
|
|
|
stop(p);
|
2001-04-03 01:39:23 +00:00
|
|
|
mtx_unlock_spin(&sched_lock);
|
1994-05-24 10:09:53 +00:00
|
|
|
goto out;
|
2001-01-02 18:54:09 +00:00
|
|
|
} else
|
1994-05-24 10:09:53 +00:00
|
|
|
goto runfast;
|
2001-01-02 18:54:09 +00:00
|
|
|
/* NOTREACHED */
|
1994-05-24 10:09:53 +00:00
|
|
|
|
|
|
|
case SSTOP:
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_unlock_spin(&sched_lock);
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* If traced process is already stopped,
|
|
|
|
* then no further action is necessary.
|
|
|
|
*/
|
2001-01-24 11:08:02 +00:00
|
|
|
if (p->p_flag & P_TRACED)
|
1994-05-24 10:09:53 +00:00
|
|
|
goto out;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Kill signal always sets processes running.
|
|
|
|
*/
|
1999-09-29 15:03:48 +00:00
|
|
|
if (sig == SIGKILL)
|
1994-05-24 10:09:53 +00:00
|
|
|
goto runfast;
|
|
|
|
|
|
|
|
if (prop & SA_CONT) {
|
|
|
|
/*
|
|
|
|
* If SIGCONT is default (or ignored), we continue the
|
|
|
|
* process but don't leave the signal in p_siglist, as
|
|
|
|
* it has no further action. If SIGCONT is held, we
|
|
|
|
* continue the process and leave the signal in
|
|
|
|
* p_siglist. If the process catches SIGCONT, let it
|
|
|
|
* handle the signal itself. If it isn't waiting on
|
|
|
|
* an event, then it goes back to run state.
|
|
|
|
* Otherwise, process goes back to sleep state.
|
|
|
|
*/
|
|
|
|
if (action == SIG_DFL)
|
1999-09-29 15:03:48 +00:00
|
|
|
SIGDELSET(p->p_siglist, sig);
|
1994-05-24 10:09:53 +00:00
|
|
|
if (action == SIG_CATCH)
|
|
|
|
goto runfast;
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock_spin(&sched_lock);
|
2001-09-12 08:38:13 +00:00
|
|
|
/*
|
|
|
|
* XXXKSE
|
|
|
|
* do this for each thread.
|
|
|
|
*/
|
|
|
|
if (p->p_flag & P_KSES) {
|
|
|
|
mtx_assert(&sched_lock,
|
|
|
|
MA_OWNED | MA_NOTRECURSED);
|
|
|
|
FOREACH_THREAD_IN_PROC(p, td) {
|
|
|
|
if (td->td_wchan == NULL) {
|
|
|
|
setrunnable(td); /* XXXKSE */
|
|
|
|
} else {
|
|
|
|
/* mark it as sleeping */
|
|
|
|
}
|
|
|
|
}
|
|
|
|
mtx_unlock_spin(&sched_lock);
|
|
|
|
} else {
|
2002-02-07 20:58:47 +00:00
|
|
|
if (td->td_wchan == NULL)
|
2001-09-12 08:38:13 +00:00
|
|
|
goto run;
|
|
|
|
p->p_stat = SSLEEP;
|
|
|
|
mtx_unlock_spin(&sched_lock);
|
|
|
|
}
|
2001-09-17 20:42:25 +00:00
|
|
|
goto out;
|
1994-05-24 10:09:53 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
if (prop & SA_STOP) {
|
|
|
|
/*
|
|
|
|
* Already stopped, don't need to stop again.
|
|
|
|
* (If we did the shell could get confused.)
|
|
|
|
*/
|
1999-09-29 15:03:48 +00:00
|
|
|
SIGDELSET(p->p_siglist, sig);
|
1994-05-24 10:09:53 +00:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If process is sleeping interruptibly, then simulate a
|
|
|
|
* wakeup so that when it is continued, it will be made
|
|
|
|
* runnable and can look at the signal. But don't make
|
|
|
|
* the process runnable, leave it stopped.
|
2001-09-12 08:38:13 +00:00
|
|
|
* XXXKSE should we wake ALL blocked threads?
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock_spin(&sched_lock);
|
2001-09-12 08:38:13 +00:00
|
|
|
if (p->p_flag & P_KSES) {
|
|
|
|
FOREACH_THREAD_IN_PROC(p, td) {
|
|
|
|
if (td->td_wchan && (td->td_flags & TDF_SINTR)){
|
|
|
|
if (td->td_flags & TDF_CVWAITQ)
|
|
|
|
cv_waitq_remove(td);
|
|
|
|
else
|
|
|
|
unsleep(td); /* XXXKSE */
|
|
|
|
}
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
if (td->td_wchan && td->td_flags & TDF_SINTR) {
|
|
|
|
if (td->td_flags & TDF_CVWAITQ)
|
|
|
|
cv_waitq_remove(td);
|
|
|
|
else
|
|
|
|
unsleep(td); /* XXXKSE */
|
|
|
|
}
|
2001-01-16 01:00:43 +00:00
|
|
|
}
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_unlock_spin(&sched_lock);
|
1994-05-24 10:09:53 +00:00
|
|
|
goto out;
|
|
|
|
|
|
|
|
default:
|
|
|
|
/*
|
|
|
|
* SRUN, SIDL, SZOMB do nothing with the signal,
|
|
|
|
* other than kicking ourselves if we are running.
|
|
|
|
* It will either never be noticed, or noticed very soon.
|
|
|
|
*/
|
2001-02-19 09:40:58 +00:00
|
|
|
if (p->p_stat == SRUN) {
|
1998-03-03 20:55:26 +00:00
|
|
|
#ifdef SMP
|
2001-09-12 08:38:13 +00:00
|
|
|
struct kse *ke;
|
|
|
|
struct thread *td = curthread;
|
|
|
|
signotify(&p->p_kse); /* XXXKSE */
|
|
|
|
/* we should only deliver to one thread.. but which one? */
|
|
|
|
FOREACH_KSEGRP_IN_PROC(p, kg) {
|
|
|
|
FOREACH_KSE_IN_GROUP(kg, ke) {
|
|
|
|
if (ke->ke_thread == td) {
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
forward_signal(ke->ke_thread);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
#else
|
|
|
|
signotify(&p->p_kse); /* XXXKSE */
|
1998-03-03 20:55:26 +00:00
|
|
|
#endif
|
2001-04-27 19:28:25 +00:00
|
|
|
}
|
|
|
|
mtx_unlock_spin(&sched_lock);
|
1994-05-24 10:09:53 +00:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
/*NOTREACHED*/
|
|
|
|
|
|
|
|
runfast:
|
|
|
|
/*
|
|
|
|
* Raise priority to at least PUSER.
|
2001-09-12 08:38:13 +00:00
|
|
|
* XXXKSE Should we make them all run fast?
|
|
|
|
* Maybe just one would be enough?
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_lock_spin(&sched_lock);
|
2002-02-11 20:37:54 +00:00
|
|
|
|
|
|
|
if (FIRST_THREAD_IN_PROC(p)->td_priority > PUSER) {
|
|
|
|
FIRST_THREAD_IN_PROC(p)->td_priority = PUSER;
|
2001-09-12 08:38:13 +00:00
|
|
|
}
|
1994-05-24 10:09:53 +00:00
|
|
|
run:
|
2001-01-02 18:54:09 +00:00
|
|
|
/* If we jump here, sched_lock has to be owned. */
|
|
|
|
mtx_assert(&sched_lock, MA_OWNED | MA_NOTRECURSED);
|
2001-09-12 08:38:13 +00:00
|
|
|
setrunnable(td); /* XXXKSE */
|
2001-02-09 06:11:45 +00:00
|
|
|
mtx_unlock_spin(&sched_lock);
|
2001-01-02 18:54:09 +00:00
|
|
|
out:
|
|
|
|
/* If we jump here, sched_lock should not be owned. */
|
|
|
|
mtx_assert(&sched_lock, MA_NOTOWNED);
|
1994-05-24 10:09:53 +00:00
|
|
|
}
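The `mtx_assert()` checks at the run: and out: labels above combine `MA_*` flags to state what must be true of sched_lock at each label. The following userland sketch models those checks; the `MA_*` values, struct fields, and the `mtx_assert_holds()` helper are illustrative assumptions, not the kernel's actual definitions.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the kernel's MA_* assertion flags. */
#define	MA_OWNED	0x01	/* lock held by curthread */
#define	MA_NOTOWNED	0x02	/* lock not held by curthread */
#define	MA_NOTRECURSED	0x04	/* lock held exactly once */

struct thread { int id; };
struct mtx {
	struct thread	*mtx_owner;
	int		 mtx_recurse;
};

static struct thread t0 = { 0 };
static struct thread *curthread = &t0;

/*
 * Returns nonzero when the lock state satisfies the requested
 * conditions; the real mtx_assert() panics on failure instead.
 */
static int
mtx_assert_holds(struct mtx *m, int what)
{
	if ((what & MA_OWNED) && m->mtx_owner != curthread)
		return (0);
	if ((what & MA_NOTOWNED) && m->mtx_owner == curthread)
		return (0);
	if ((what & MA_NOTRECURSED) && m->mtx_recurse != 0)
		return (0);
	return (1);
}
```

So `MA_OWNED | MA_NOTRECURSED` at run: demands a single, non-recursive hold, while `MA_NOTOWNED` at out: demands the lock already be dropped.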
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If the current process has received a signal (should be caught or cause
|
|
|
|
* termination, should interrupt current syscall), return the signal number.
|
|
|
|
* Stop signals with default action are processed immediately, then cleared;
|
|
|
|
* they aren't returned. This is checked after each entry to the system for
|
|
|
|
* a syscall or trap (though this can usually be done without calling issignal
|
|
|
|
* by checking the pending signal masks in the CURSIG macro.) The normal call
|
|
|
|
* sequence is
|
|
|
|
*
|
1999-09-29 15:03:48 +00:00
|
|
|
* while (sig = CURSIG(curproc))
|
|
|
|
* postsig(sig);
|
1994-05-24 10:09:53 +00:00
|
|
|
*/
|
1994-05-25 09:21:21 +00:00
|
|
|
int
|
1994-05-24 10:09:53 +00:00
|
|
|
issignal(p)
|
|
|
|
register struct proc *p;
|
|
|
|
{
|
1999-09-29 15:03:48 +00:00
|
|
|
sigset_t mask;
|
|
|
|
register int sig, prop;
|
1994-05-24 10:09:53 +00:00
|
|
|
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_LOCK_ASSERT(p, MA_OWNED);
|
1994-05-24 10:09:53 +00:00
|
|
|
for (;;) {
|
1997-12-06 04:11:14 +00:00
|
|
|
int traced = (p->p_flag & P_TRACED) || (p->p_stops & S_SIG);
|
|
|
|
|
1999-09-29 15:03:48 +00:00
|
|
|
mask = p->p_siglist;
|
|
|
|
SIGSETNAND(mask, p->p_sigmask);
|
1994-05-24 10:09:53 +00:00
|
|
|
if (p->p_flag & P_PPWAIT)
|
1999-09-29 15:03:48 +00:00
|
|
|
SIG_STOPSIGMASK(mask);
|
2001-03-07 02:59:54 +00:00
|
|
|
if (!SIGNOTEMPTY(mask)) /* no signal to send */
|
1994-05-24 10:09:53 +00:00
|
|
|
return (0);
|
1999-09-29 15:03:48 +00:00
|
|
|
sig = sig_ffs(&mask);
|
|
|
|
prop = sigprop(sig);
|
1997-12-06 04:11:14 +00:00
|
|
|
|
2001-03-07 02:59:54 +00:00
|
|
|
_STOPEVENT(p, S_SIG, sig);
|
1997-12-06 04:11:14 +00:00
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* We should see pending but ignored signals
|
|
|
|
* only if P_TRACED was on when they were posted.
|
|
|
|
*/
|
1999-09-29 15:03:48 +00:00
|
|
|
if (SIGISMEMBER(p->p_sigignore, sig) && (traced == 0)) {
|
|
|
|
SIGDELSET(p->p_siglist, sig);
|
1994-05-24 10:09:53 +00:00
|
|
|
continue;
|
|
|
|
}
		if (p->p_flag & P_TRACED && (p->p_flag & P_PPWAIT) == 0) {
			/*
			 * If traced, always stop, and stay
			 * stopped until released by the parent.
			 */
			p->p_xstat = sig;
			PROC_LOCK(p->p_pptr);
			psignal(p->p_pptr, SIGCHLD);
			PROC_UNLOCK(p->p_pptr);
			do {
				mtx_lock_spin(&sched_lock);
				stop(p);
				PROC_UNLOCK(p);
				DROP_GIANT();
				p->p_stats->p_ru.ru_nivcsw++;
				mi_switch();
				mtx_unlock_spin(&sched_lock);
				PICKUP_GIANT();
				PROC_LOCK(p);
			} while (!trace_req(p)
			    && p->p_flag & P_TRACED);

			/*
			 * If the traced bit got turned off, go back up
			 * to the top to rescan signals.  This ensures
			 * that p_sig* and ps_sigact are consistent.
			 */
			if ((p->p_flag & P_TRACED) == 0)
				continue;

			/*
			 * If parent wants us to take the signal,
			 * then it will leave it in p->p_xstat;
			 * otherwise we just look for signals again.
			 */
			SIGDELSET(p->p_siglist, sig);	/* clear old signal */
			sig = p->p_xstat;
			if (sig == 0)
				continue;

			/*
			 * Put the new signal into p_siglist.  If the
			 * signal is being masked, look for other signals.
			 */
			SIGADDSET(p->p_siglist, sig);
			if (SIGISMEMBER(p->p_sigmask, sig))
				continue;
		}

		/*
		 * Decide whether the signal should be returned.
		 * Return the signal's number, or fall through
		 * to clear it from the pending mask.
		 */
		switch ((int)(intptr_t)p->p_sigacts->ps_sigact[_SIG_IDX(sig)]) {

		case (int)SIG_DFL:
			/*
			 * Don't take default actions on system processes.
			 */
			if (p->p_pid <= 1) {
#ifdef DIAGNOSTIC
				/*
				 * Are you sure you want to ignore SIGSEGV
				 * in init? XXX
				 */
				printf("Process (pid %lu) got signal %d\n",
					(u_long)p->p_pid, sig);
#endif
				break;		/* == ignore */
			}
			/*
			 * If there is a pending stop signal to process
			 * with default action, stop here,
			 * then clear the signal.  However,
			 * if process is member of an orphaned
			 * process group, ignore tty stop signals.
			 */
			if (prop & SA_STOP) {
				if (p->p_flag & P_TRACED ||
				    (p->p_pgrp->pg_jobc == 0 &&
				    prop & SA_TTYSTOP))
					break;	/* == ignore */
				p->p_xstat = sig;
				if ((p->p_pptr->p_procsig->ps_flag & PS_NOCLDSTOP) == 0) {
					PROC_LOCK(p->p_pptr);
					psignal(p->p_pptr, SIGCHLD);
					PROC_UNLOCK(p->p_pptr);
				}
				mtx_lock_spin(&sched_lock);
				stop(p);
				PROC_UNLOCK(p);
				DROP_GIANT();
				p->p_stats->p_ru.ru_nivcsw++;
				mi_switch();
				mtx_unlock_spin(&sched_lock);
				PICKUP_GIANT();
				PROC_LOCK(p);
				break;
			} else if (prop & SA_IGNORE) {
				/*
				 * Except for SIGCONT, shouldn't get here.
				 * Default action is to ignore; drop it.
				 */
				break;		/* == ignore */
			} else
				return (sig);
			/*NOTREACHED*/

		case (int)SIG_IGN:
			/*
			 * Masking above should prevent us ever trying
			 * to take action on an ignored signal other
			 * than SIGCONT, unless process is traced.
			 */
			if ((prop & SA_CONT) == 0 &&
			    (p->p_flag & P_TRACED) == 0)
				printf("issignal\n");
			break;		/* == ignore */

		default:
			/*
			 * This signal has an action, let
			 * postsig() process it.
			 */
			return (sig);
		}
		SIGDELSET(p->p_siglist, sig);		/* take the signal! */
	}
	/* NOTREACHED */
}

/*
 * Put the argument process into the stopped state and notify the parent
 * via wakeup.  Signals are handled elsewhere.  The process must not be
 * on the run queue.  Must be called with the proc p locked and the scheduler
 * lock held.
 */
static void
stop(p)
	register struct proc *p;
{

	PROC_LOCK_ASSERT(p, MA_OWNED);
	mtx_assert(&sched_lock, MA_OWNED);
	p->p_stat = SSTOP;
	p->p_flag &= ~P_WAITED;
	wakeup((caddr_t)p->p_pptr);
}

/*
 * Take the action for the specified signal
 * from the current set of pending signals.
 */
void
postsig(sig)
	register int sig;
{
	struct thread *td = curthread;
	register struct proc *p = td->td_proc;
	struct sigacts *ps;
	sig_t action;
	sigset_t returnmask;
	int code;

	KASSERT(sig != 0, ("postsig"));

	PROC_LOCK_ASSERT(p, MA_OWNED);
	ps = p->p_sigacts;
	SIGDELSET(p->p_siglist, sig);
	action = ps->ps_sigact[_SIG_IDX(sig)];
#ifdef KTRACE
	if (KTRPOINT(p, KTR_PSIG))
		ktrpsig(p->p_tracep, sig, action, p->p_flag & P_OLDMASK ?
		    &p->p_oldsigmask : &p->p_sigmask, 0);
#endif
	_STOPEVENT(p, S_SIG, sig);

	if (action == SIG_DFL) {
		/*
		 * Default action, where the default is to kill
		 * the process.  (Other cases were ignored above.)
		 */
		sigexit(td, sig);
		/* NOTREACHED */
	} else {
		/*
		 * If we get here, the signal must be caught.
		 */
		KASSERT(action != SIG_IGN && !SIGISMEMBER(p->p_sigmask, sig),
		    ("postsig action"));
		/*
		 * Set the new mask value and also defer further
		 * occurrences of this signal.
		 *
		 * Special case: user has done a sigsuspend.  Here the
		 * current mask is not of interest, but rather the
		 * mask from before the sigsuspend is what we want
		 * restored after the signal processing is completed.
		 */
		if (p->p_flag & P_OLDMASK) {
			returnmask = p->p_oldsigmask;
			p->p_flag &= ~P_OLDMASK;
		} else
			returnmask = p->p_sigmask;

		SIGSETOR(p->p_sigmask, ps->ps_catchmask[_SIG_IDX(sig)]);
		if (!SIGISMEMBER(ps->ps_signodefer, sig))
			SIGADDSET(p->p_sigmask, sig);

		if (SIGISMEMBER(ps->ps_sigreset, sig)) {
			/*
			 * See do_sigaction() for origin of this code.
			 */
			SIGDELSET(p->p_sigcatch, sig);
			if (sig != SIGCONT &&
			    sigprop(sig) & SA_IGNORE)
				SIGADDSET(p->p_sigignore, sig);
			ps->ps_sigact[_SIG_IDX(sig)] = SIG_DFL;
		}
		p->p_stats->p_ru.ru_nsignals++;
		if (p->p_sig != sig) {
			code = 0;
		} else {
			code = p->p_code;
			p->p_code = 0;
			p->p_sig = 0;
		}
		(*p->p_sysent->sv_sendsig)(action, sig, &returnmask, code);
	}
}

/*
 * Kill the current process for stated reason.
 */
void
killproc(p, why)
	struct proc *p;
	char *why;
{

	PROC_LOCK_ASSERT(p, MA_OWNED);
	CTR3(KTR_PROC, "killproc: proc %p (pid %d, %s)",
		p, p->p_pid, p->p_comm);
	log(LOG_ERR, "pid %d (%s), uid %d, was killed: %s\n", p->p_pid, p->p_comm,
		p->p_ucred ? p->p_ucred->cr_uid : -1, why);
	psignal(p, SIGKILL);
}

/*
 * Force the current process to exit with the specified signal, dumping core
 * if appropriate.  We bypass the normal tests for masked and caught signals,
 * allowing unrecoverable failures to terminate the process without changing
 * signal state.  Mark the accounting record with the signal termination.
 * If dumping core, save the signal number for the debugger.  Calls exit and
 * does not return.
 */
void
sigexit(td, sig)
	struct thread *td;
	int sig;
{
	struct proc *p = td->td_proc;

	PROC_LOCK_ASSERT(p, MA_OWNED);
	p->p_acflag |= AXSIG;
	if (sigprop(sig) & SA_CORE) {
		p->p_sig = sig;
		/*
		 * Log signals which would cause core dumps
		 * (Log as LOG_INFO to appease those who don't want
		 * these messages.)
		 * XXX : Todo, as well as euid, write out ruid too
		 */
		PROC_UNLOCK(p);
		if (!mtx_owned(&Giant))
			mtx_lock(&Giant);
		if (coredump(td) == 0)
			sig |= WCOREFLAG;
		if (kern_logsigexit)
			log(LOG_INFO,
			    "pid %d (%s), uid %d: exited on signal %d%s\n",
			    p->p_pid, p->p_comm,
			    p->p_ucred ? p->p_ucred->cr_uid : -1,
			    sig &~ WCOREFLAG,
			    sig & WCOREFLAG ? " (core dumped)" : "");
	} else {
		PROC_UNLOCK(p);
		if (!mtx_owned(&Giant))
			mtx_lock(&Giant);
	}
	exit1(td, W_EXITCODE(0, sig));
	/* NOTREACHED */
}

static char corefilename[MAXPATHLEN+1] = {"%N.core"};
SYSCTL_STRING(_kern, OID_AUTO, corefile, CTLFLAG_RW, corefilename,
	      sizeof(corefilename), "process corefile name format string");

/*
 * expand_name(name, uid, pid)
 * Expand the name described in corefilename, using name, uid, and pid.
 * corefilename is a printf-like string, with three format specifiers:
 *	%N	name of process ("name")
 *	%P	process id (pid)
 *	%U	user id (uid)
 * For example, "%N.core" is the default; they can be disabled completely
 * by using "/dev/null", or all core files can be stored in "/cores/%U/%N-%P".
 * This is controlled by the sysctl variable kern.corefile (see above).
 */
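/*
 * Illustrative example (not from the original comment): with
 * kern.corefile set to "/cores/%U/%N-%P", a process named "sh"
 * running with uid 1001 as pid 123 would dump core to
 * "/cores/1001/sh-123".
 */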

static char *
expand_name(name, uid, pid)
	const char *name;
	uid_t uid;
	pid_t pid;
{
	char *temp;
	char buf[11];		/* Buffer for pid/uid -- max 4B */
	int i, n;
	char *format = corefilename;
	size_t namelen;

	temp = malloc(MAXPATHLEN + 1, M_TEMP, M_NOWAIT);
	if (temp == NULL)
		return NULL;
	namelen = strlen(name);
	for (i = 0, n = 0; n < MAXPATHLEN && format[i]; i++) {
		int l;

		switch (format[i]) {
		case '%':	/* Format character */
			i++;
			switch (format[i]) {
			case '%':
				temp[n++] = '%';
				break;
			case 'N':	/* process name */
				if ((n + namelen) > MAXPATHLEN) {
					log(LOG_ERR, "pid %d (%s), uid (%u): Path `%s%s' is too long\n",
					    pid, name, uid, temp, name);
					free(temp, M_TEMP);
					return NULL;
				}
				memcpy(temp + n, name, namelen);
				n += namelen;
				break;
			case 'P':	/* process id */
				l = sprintf(buf, "%u", pid);
				if ((n + l) > MAXPATHLEN) {
					log(LOG_ERR, "pid %d (%s), uid (%u): Path `%s%s' is too long\n",
					    pid, name, uid, temp, name);
					free(temp, M_TEMP);
					return NULL;
				}
				memcpy(temp + n, buf, l);
				n += l;
				break;
			case 'U':	/* user id */
				l = sprintf(buf, "%u", uid);
				if ((n + l) > MAXPATHLEN) {
					log(LOG_ERR, "pid %d (%s), uid (%u): Path `%s%s' is too long\n",
					    pid, name, uid, temp, name);
					free(temp, M_TEMP);
					return NULL;
				}
				memcpy(temp + n, buf, l);
				n += l;
				break;
			default:
				log(LOG_ERR, "Unknown format character %c in `%s'\n", format[i], format);
			}
			break;
		default:
			temp[n++] = format[i];
		}
	}
	temp[n] = '\0';
	return temp;
}
|
|
|
|
|
1999-09-01 00:29:56 +00:00
|
|
|
/*
|
|
|
|
* Dump a process' core. The main routine does some
|
|
|
|
* policy checking, and creates the name of the coredump;
|
|
|
|
* then it passes on a vnode and a size limit to the process-specific
|
|
|
|
* coredump routine if there is one; if there _is not_ one, it returns
|
|
|
|
* ENOSYS; otherwise it returns the error from the process-specific routine.
|
2002-02-10 21:45:16 +00:00
|
|
|
*
|
|
|
|
* XXX: VOP_GETATTR() here requires holding the vnode lock.
|
1999-09-01 00:29:56 +00:00
|
|
|
*/
|
|
|
|
|
|
|
|
static int
|
2001-09-12 08:38:13 +00:00
|
|
|
coredump(struct thread *td)
|
1999-09-01 00:29:56 +00:00
|
|
|
{
|
2001-09-12 08:38:13 +00:00
|
|
|
struct proc *p = td->td_proc;
|
1999-09-01 00:29:56 +00:00
|
|
|
register struct vnode *vp;
|
1999-11-21 12:38:21 +00:00
|
|
|
register struct ucred *cred = p->p_ucred;
|
2001-09-08 20:02:33 +00:00
|
|
|
struct flock lf;
|
1999-09-01 00:29:56 +00:00
|
|
|
struct nameidata nd;
|
|
|
|
struct vattr vattr;
|
2000-07-04 03:34:11 +00:00
|
|
|
int error, error1, flags;
|
2000-07-11 22:07:57 +00:00
|
|
|
struct mount *mp;
|
1999-09-01 00:29:56 +00:00
|
|
|
char *name; /* name of corefile */
|
|
|
|
off_t limit;
|
|
|
|
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_LOCK(p);
|
|
|
|
_STOPEVENT(p, S_CORE, 0);
|
|
|
|
|
|
|
|
if (((sugid_coredump == 0) && p->p_flag & P_SUGID) || do_coredump == 0) {
|
|
|
|
PROC_UNLOCK(p);
|
1999-09-01 00:29:56 +00:00
|
|
|
return (EFAULT);
|
2001-03-07 02:59:54 +00:00
|
|
|
}
|
1999-09-01 00:29:56 +00:00
|
|
|
|
|
|
|
/*
|
1999-10-30 18:55:11 +00:00
|
|
|
* Note that the bulk of limit checking is done after
|
|
|
|
* the corefile is created. The exception is if the limit
|
|
|
|
* for corefiles is 0, in which case we don't bother
|
|
|
|
* creating the corefile at all. This layout means that
|
|
|
|
* a corefile larger than the limit is truncated
|
|
|
|
* rather than not created at all.
|
1999-09-01 00:29:56 +00:00
|
|
|
*/
|
1999-10-30 18:55:11 +00:00
|
|
|
limit = p->p_rlimit[RLIMIT_CORE].rlim_cur;
|
2001-03-07 02:59:54 +00:00
|
|
|
if (limit == 0) {
|
|
|
|
PROC_UNLOCK(p);
|
1999-10-30 18:55:11 +00:00
|
|
|
return 0;
|
2001-03-07 02:59:54 +00:00
|
|
|
}
|
|
|
|
PROC_UNLOCK(p);
|
1999-10-30 18:55:11 +00:00
|
|
|
|
2000-07-11 22:07:57 +00:00
|
|
|
restart:
|
1999-09-01 00:29:56 +00:00
|
|
|
name = expand_name(p->p_comm, p->p_ucred->cr_uid, p->p_pid);
|
2001-08-24 15:49:30 +00:00
|
|
|
if (name == NULL)
|
|
|
|
return (EINVAL);
|
2001-09-12 08:38:13 +00:00
|
|
|
NDINIT(&nd, LOOKUP, NOFOLLOW, UIO_SYSSPACE, name, td); /* XXXKSE */
|
2000-07-04 03:34:11 +00:00
|
|
|
flags = O_CREAT | FWRITE | O_NOFOLLOW;
|
|
|
|
error = vn_open(&nd, &flags, S_IRUSR | S_IWUSR);
|
1999-09-01 00:29:56 +00:00
|
|
|
free(name, M_TEMP);
|
|
|
|
if (error)
|
|
|
|
return (error);
|
1999-12-15 23:02:35 +00:00
|
|
|
NDFREE(&nd, NDF_ONLY_PNBUF);
|
1999-09-01 00:29:56 +00:00
|
|
|
vp = nd.ni_vp;
|
2001-09-08 20:02:33 +00:00
|
|
|
|
2001-09-12 08:38:13 +00:00
|
|
|
VOP_UNLOCK(vp, 0, td);
|
2001-09-08 20:02:33 +00:00
|
|
|
lf.l_whence = SEEK_SET;
|
|
|
|
lf.l_start = 0;
|
|
|
|
lf.l_len = 0;
|
|
|
|
lf.l_type = F_WRLCK;
|
|
|
|
error = VOP_ADVLOCK(vp, (caddr_t)p, F_SETLK, &lf, F_FLOCK);
|
|
|
|
if (error)
|
|
|
|
goto out2;
|
|
|
|
|
2000-07-11 22:07:57 +00:00
|
|
|
if (vn_start_write(vp, &mp, V_NOWAIT) != 0) {
|
2001-09-08 20:02:33 +00:00
|
|
|
lf.l_type = F_UNLCK;
|
|
|
|
VOP_ADVLOCK(vp, (caddr_t)p, F_UNLCK, &lf, F_FLOCK);
|
2001-09-12 08:38:13 +00:00
|
|
|
if ((error = vn_close(vp, FWRITE, cred, td)) != 0)
|
2000-07-11 22:07:57 +00:00
|
|
|
return (error);
|
|
|
|
if ((error = vn_start_write(NULL, &mp, V_XSLEEP | PCATCH)) != 0)
|
|
|
|
return (error);
|
|
|
|
goto restart;
|
|
|
|
}
|
1999-09-01 00:29:56 +00:00
|
|
|
|
|
|
|
/* Don't dump to non-regular files or files with links. */
|
|
|
|
if (vp->v_type != VREG ||
|
2001-09-12 08:38:13 +00:00
|
|
|
VOP_GETATTR(vp, &vattr, cred, td) || vattr.va_nlink != 1) {
|
1999-09-01 00:29:56 +00:00
|
|
|
error = EFAULT;
|
2001-09-08 20:02:33 +00:00
|
|
|
goto out1;
|
1999-09-01 00:29:56 +00:00
|
|
|
}
|
|
|
|
VATTR_NULL(&vattr);
|
|
|
|
vattr.va_size = 0;
|
2001-09-26 01:24:07 +00:00
|
|
|
vn_lock(vp, LK_EXCLUSIVE | LK_RETRY, td);
|
2001-09-12 08:38:13 +00:00
|
|
|
VOP_LEASE(vp, td, cred, LEASE_WRITE);
|
|
|
|
VOP_SETATTR(vp, &vattr, cred, td);
|
2001-09-26 01:24:07 +00:00
|
|
|
VOP_UNLOCK(vp, 0, td);
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_LOCK(p);
|
1999-09-01 00:29:56 +00:00
|
|
|
p->p_acflag |= ACORE;
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_UNLOCK(p);
|
1999-09-01 00:29:56 +00:00
|
|
|
|
|
|
|
error = p->p_sysent->sv_coredump ?
|
2001-09-12 08:38:13 +00:00
|
|
|
p->p_sysent->sv_coredump(td, vp, limit) :
|
1999-09-01 00:29:56 +00:00
|
|
|
ENOSYS;
|
|
|
|
|
2001-09-08 20:02:33 +00:00
|
|
|
out1:
|
|
|
|
lf.l_type = F_UNLCK;
|
|
|
|
VOP_ADVLOCK(vp, (caddr_t)p, F_UNLCK, &lf, F_FLOCK);
|
2000-07-11 22:07:57 +00:00
|
|
|
vn_finished_write(mp);
|
2001-09-08 20:02:33 +00:00
|
|
|
out2:
|
2001-09-12 08:38:13 +00:00
|
|
|
error1 = vn_close(vp, FWRITE, cred, td);
|
1999-09-01 00:29:56 +00:00
|
|
|
if (error == 0)
|
|
|
|
error = error1;
|
|
|
|
return (error);
|
|
|
|
}
|
|
|
|
|
1994-05-24 10:09:53 +00:00
|
|
|
/*
|
|
|
|
* Nonexistent system call; signal the process (it may want to handle it).
|
|
|
|
* Flag an error in case the process won't see the signal immediately (blocked or ignored).
|
|
|
|
*/
|
1995-11-12 06:43:28 +00:00
|
|
|
#ifndef _SYS_SYSPROTO_H_
|
1994-05-24 10:09:53 +00:00
|
|
|
struct nosys_args {
|
|
|
|
int dummy;
|
|
|
|
};
|
1995-11-12 06:43:28 +00:00
|
|
|
#endif
|
2001-09-01 18:19:21 +00:00
|
|
|
/*
|
|
|
|
* MPSAFE
|
|
|
|
*/
|
1994-05-24 10:09:53 +00:00
|
|
|
/* ARGSUSED */
|
1994-05-25 09:21:21 +00:00
|
|
|
int
|
2001-09-12 08:38:13 +00:00
|
|
|
nosys(td, args)
|
|
|
|
struct thread *td;
|
1994-05-24 10:09:53 +00:00
|
|
|
struct nosys_args *args;
|
|
|
|
{
|
2001-09-12 08:38:13 +00:00
|
|
|
struct proc *p = td->td_proc;
|
|
|
|
|
2001-09-01 18:19:21 +00:00
|
|
|
mtx_lock(&Giant);
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_LOCK(p);
|
1994-05-24 10:09:53 +00:00
|
|
|
psignal(p, SIGSYS);
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_UNLOCK(p);
|
2001-09-01 18:19:21 +00:00
|
|
|
mtx_unlock(&Giant);
|
1994-05-24 10:09:53 +00:00
|
|
|
return (EINVAL);
|
|
|
|
}
|
Installed the second patch attached to kern/7899 with some changes suggested
by bde, a few other tweaks to get the patch to apply cleanly again and
some improvements to the comments.
This change closes some fairly minor security holes associated with
F_SETOWN, fixes a few bugs, and removes some limitations that F_SETOWN
had on tty devices. For more details, see the description on the PR.
Because this patch increases the size of the proc and pgrp structures,
it is necessary to re-install the includes and recompile libkvm,
the vinum lkm, fstat, gcore, gdb, ipfilter, ps, top, and w.
PR: kern/7899
Reviewed by: bde, elvind
1998-11-11 10:04:13 +00:00
|
|
|
|
|
|
|
/*
|
2001-12-14 00:38:01 +00:00
|
|
|
* Send a SIGIO or SIGURG signal to a process or process group using
|
1998-11-11 10:04:13 +00:00
|
|
|
* stored credentials rather than those of the current process.
|
|
|
|
*/
|
|
|
|
void
|
1999-09-29 15:03:48 +00:00
|
|
|
pgsigio(sigio, sig, checkctty)
|
1998-11-11 10:04:13 +00:00
|
|
|
struct sigio *sigio;
|
1999-09-29 15:03:48 +00:00
|
|
|
int sig, checkctty;
|
1998-11-11 10:04:13 +00:00
|
|
|
{
|
|
|
|
if (sigio == NULL)
|
|
|
|
return;
|
|
|
|
|
|
|
|
if (sigio->sio_pgid > 0) {
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_LOCK(sigio->sio_proc);
|
2002-01-10 01:25:35 +00:00
|
|
|
if (CANSIGIO(sigio->sio_ucred, sigio->sio_proc->p_ucred))
|
1999-09-29 15:03:48 +00:00
|
|
|
psignal(sigio->sio_proc, sig);
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_UNLOCK(sigio->sio_proc);
|
1998-11-11 10:04:13 +00:00
|
|
|
} else if (sigio->sio_pgid < 0) {
|
|
|
|
struct proc *p;
|
|
|
|
|
2001-03-07 02:59:54 +00:00
|
|
|
LIST_FOREACH(p, &sigio->sio_pgrp->pg_members, p_pglist) {
|
|
|
|
PROC_LOCK(p);
|
2002-01-10 01:25:35 +00:00
|
|
|
if (CANSIGIO(sigio->sio_ucred, p->p_ucred) &&
|
1998-11-11 10:04:13 +00:00
|
|
|
(checkctty == 0 || (p->p_flag & P_CONTROLT)))
|
1999-09-29 15:03:48 +00:00
|
|
|
psignal(p, sig);
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_UNLOCK(p);
|
|
|
|
}
|
1998-11-11 10:04:13 +00:00
|
|
|
}
|
|
|
|
}
|
2000-04-16 18:53:38 +00:00
|
|
|
|
|
|
|
static int
|
|
|
|
filt_sigattach(struct knote *kn)
|
|
|
|
{
|
|
|
|
struct proc *p = curproc;
|
|
|
|
|
|
|
|
kn->kn_ptr.p_proc = p;
|
|
|
|
kn->kn_flags |= EV_CLEAR; /* automatically set */
|
|
|
|
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_LOCK(p);
|
2000-04-16 18:53:38 +00:00
|
|
|
SLIST_INSERT_HEAD(&p->p_klist, kn, kn_selnext);
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_UNLOCK(p);
|
2000-04-16 18:53:38 +00:00
|
|
|
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
filt_sigdetach(struct knote *kn)
|
|
|
|
{
|
|
|
|
struct proc *p = kn->kn_ptr.p_proc;
|
|
|
|
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_LOCK(p);
|
2000-05-26 02:09:24 +00:00
|
|
|
SLIST_REMOVE(&p->p_klist, kn, knote, kn_selnext);
|
2001-03-07 02:59:54 +00:00
|
|
|
PROC_UNLOCK(p);
|
2000-04-16 18:53:38 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Signal knotes are shared with proc knotes, so we apply a mask to
|
|
|
|
* the hint in order to differentiate them from process hints. This
|
|
|
|
* could be avoided by using a signal-specific knote list, but probably
|
|
|
|
* isn't worth the trouble.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
filt_signal(struct knote *kn, long hint)
|
|
|
|
{
|
|
|
|
|
|
|
|
if (hint & NOTE_SIGNAL) {
|
|
|
|
hint &= ~NOTE_SIGNAL;
|
|
|
|
|
|
|
|
if (kn->kn_id == hint)
|
|
|
|
kn->kn_data++;
|
|
|
|
}
|
|
|
|
return (kn->kn_data != 0);
|
|
|
|
}
|