mirror of https://git.FreeBSD.org/src.git synced 2024-12-23 11:18:54 +00:00
Commit Graph

264 Commits

Author SHA1 Message Date
Dag-Erling Smørgrav
096ba44775 _pthread_mutex_isowned_np(): use a more reliable method; the current code
will work in simple cases, but may fail in more complicated ones.

Reviewed by:	davidxu
2008-02-14 12:37:58 +00:00
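A minimal sketch of the ownership test described above, for illustration only; the struct layout, field name, and _get_curthread() helper follow libthr's internal conventions but are assumptions here, not the committed code:

    #include <pthread.h>

    struct pthread;                          /* opaque thread handle */
    struct pthread_mutex { struct pthread *m_owner; /* ... */ };
    struct pthread *_get_curthread(void);    /* assumed libthr helper */

    /* A mutex is "owned" iff its recorded owner is the calling thread. */
    int
    _pthread_mutex_isowned_np(pthread_mutex_t *mutex)
    {
            struct pthread_mutex *m = *mutex;

            return (m->m_owner == _get_curthread());
    }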
Dag-Erling Smørgrav
d29b081fee Remove unnecessary prototype. 2008-02-06 20:43:19 +00:00
Dag-Erling Smørgrav
1cbdac2668 Per discussion on -threads, rename _islocked_np() to _isowned_np(). 2008-02-06 19:34:31 +00:00
Dag-Erling Smørgrav
fcd61d9141 After careful consideration (and a brief discussion with attilio@), change
the semantics of pthread_mutex_islocked_np() to return true if and only if
the mutex is held by the current thread.

Obviously, change the regression test to match.

MFC after:	2 weeks
2008-02-04 12:35:23 +00:00
Dag-Erling Smørgrav
5fd410a787 Add pthread_mutex_islocked_np(), a cheap way to verify that a mutex is
locked.  This is intended primarily to support the userland equivalent
of the various *_ASSERT_LOCKED() macros we have in the kernel.

MFC after:	2 weeks
2008-02-03 22:38:10 +00:00
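A usage sketch of the stated intent; MUTEX_ASSERT_LOCKED and the queue helper are hypothetical names, and the function still carries its original name here (it becomes pthread_mutex_isowned_np in the later commits above):

    #include <assert.h>
    #include <pthread.h>
    #include <pthread_np.h>

    /* Hypothetical userland analogue of the kernel's *_ASSERT_LOCKED() */
    #define MUTEX_ASSERT_LOCKED(m) assert(pthread_mutex_islocked_np(m))

    static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Internal helper that requires its caller to hold queue_lock. */
    static void
    queue_remove_locked(void)
    {
            MUTEX_ASSERT_LOCKED(&queue_lock);
            /* ... manipulate the shared queue ... */
    }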
David Xu
0c921dadbb sem_post() is required to return -1 on error. 2008-01-07 02:26:29 +00:00
David Xu
9ba01c866b Call the underscore version of pthread_cleanup_pop() instead. 2007-12-20 04:40:12 +00:00
David Xu
06c8eb55ce Remove vfork() overloading, it is no longer needed. 2007-12-20 04:32:28 +00:00
David Xu
c5f411515c Add function prototypes. 2007-12-17 02:53:11 +00:00
David Xu
093fcf1694 1. Add function pthread_mutex_setspinloops_np to tune a mutex's spin
loop count.
2. Add function pthread_mutex_setyieldloops_np to tune a mutex's yield
   loop count.
3. Make environment variables PTHREAD_SPINLOOPS and PTHREAD_YIELDLOOPS
   apply only to tuning PTHREAD_MUTEX_ADAPTIVE_NP mutexes.
2007-12-14 06:25:57 +00:00
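A usage sketch for the two new functions, assuming they take a plain loop count and apply to an already-initialized mutex (the counts 200 and 100 are arbitrary examples):

    #include <pthread.h>
    #include <pthread_np.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    static void
    tune_mutex(void)
    {
            /* Spin up to 200 times, then sched_yield() up to 100 times,
             * before finally sleeping in the kernel. */
            pthread_mutex_setspinloops_np(&m, 200);
            pthread_mutex_setyieldloops_np(&m, 100);
    }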
David Xu
6a663207e7 Enclose all code for the ENQUEUE_MUTEX macro in a do-while statement, and
add the missing braces.

MFC after:	1 day
2007-12-11 08:00:58 +00:00
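The do-while wrapper matters because it turns a multi-statement macro into a single statement. A generic illustration of the pattern (the body shown is not the actual ENQUEUE_MUTEX code; the field names are placeholders):

    #include <sys/queue.h>

    #define ENQUEUE_MUTEX(curthread, m)                             \
            do {                                                    \
                    (m)->m_owner = (curthread);                     \
                    TAILQ_INSERT_TAIL(&(curthread)->mutexq,         \
                        (m), m_qe);                                 \
            } while (0)

    /* Without do/while (0), using the macro as the unbraced body of
     * "if (error == 0) ENQUEUE_MUTEX(curthread, m); else ..." would
     * either fail to compile or silently run the second statement
     * unconditionally. */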
Jason Evans
b6b7fd3e2a Fix pointer dereferencing problems in _pthread_mutex_init_calloc_cb() that
were obscured by pseudo-opaque pthreads API pointer casting.
2007-11-28 00:16:24 +00:00
Jason Evans
e1636e1f97 Add _pthread_mutex_init_calloc_cb() to libthr and libkse, so that malloc(3)
(part of libc) can use pthreads mutexes without causing infinite recursion
during initialization.
2007-11-27 03:16:44 +00:00
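A sketch of how an allocator can use this hook; base_calloc and malloc_lock_init are hypothetical names, and the prototype shown is an assumption about how the callback variant looks (a calloc-compatible function replaces the allocation the normal init path would perform):

    #include <pthread.h>
    #include <stddef.h>

    /* The new libthr entry point: initialize a mutex, allocating its
     * storage through the supplied calloc-style callback. */
    int _pthread_mutex_init_calloc_cb(pthread_mutex_t *mutex,
        void *(*calloc_cb)(size_t, size_t));

    /* Hypothetical bootstrap allocator inside malloc(3) that does not
     * depend on any pthread mutex. */
    void *base_calloc(size_t number, size_t size);

    static pthread_mutex_t malloc_lock;

    static void
    malloc_lock_init(void)
    {
            /* No calloc() call here, so no recursion back into malloc. */
            _pthread_mutex_init_calloc_cb(&malloc_lock, base_calloc);
    }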
David Xu
da4410f25f Simplify code, fix a thread cancellation bug in sem_wait and sem_timedwait. 2007-11-23 05:42:52 +00:00
David Xu
4877aaebc1 Reuse the nwaiter member field to record the number of waiters; in sem_post(),
this should reduce the chance of having to do a syscall when there is no
waiter on the semaphore.
2007-11-21 06:01:02 +00:00
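A sketch of the optimization with assumed field and wrapper names (the real structures differ; error handling elided):

    #include <sys/types.h>
    #include <machine/atomic.h>

    /* Assumed internal wrapper around the umtx wake syscall. */
    int _thr_umtx_wake(volatile void *mtx, int count);

    struct sem_sketch {
            volatile u_int  count;          /* semaphore value */
            volatile u_int  nwaiters;       /* recorded by sem_wait() */
    };

    static void
    sem_post_sketch(struct sem_sketch *sem)
    {
            atomic_add_rel_int(&sem->count, 1);
            /* Only pay for a syscall when a waiter is actually asleep. */
            if (sem->nwaiters > 0)
                    _thr_umtx_wake(&sem->count, 1);
    }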
David Xu
9e1ddd5fa0 Convert the ceiling type to unsigned integer before comparing; fixes compiler
warnings.
2007-11-21 05:25:27 +00:00
David Xu
922d56f9de Add some function prototypes. 2007-11-21 05:23:54 +00:00
David Xu
6fdfcacb4a Remove the umtx_t definition and use type long directly; add wrapper function
_thr_umtx_wait_uint() for the umtx operation UMTX_OP_WAIT_UINT and use the
function in semaphore operations. This fixes compiler warnings.
2007-11-21 05:21:58 +00:00
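A sketch of the new wrapper, assuming the usual _umtx_op(2) calling convention (obj, op, val, uaddr, uaddr2) with the timeout passed in uaddr2; return-value handling is simplified:

    #include <sys/types.h>
    #include <sys/umtx.h>
    #include <time.h>

    /* Block until *mtx changes from the expected value; the
     * UMTX_OP_WAIT_UINT op makes the kernel compare a 32-bit
     * unsigned int instead of a long. */
    static int
    _thr_umtx_wait_uint(volatile u_int *mtx, u_int expected,
        const struct timespec *timeout)
    {
            return (_umtx_op(__DEVOLATILE(void *, mtx),
                UMTX_OP_WAIT_UINT, expected, NULL,
                __DECONST(void *, timeout)));
    }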
Marius Strobl
9a2706abcc In _pthread_key_create() ensure that libthr is initialized. This
fixes a NULL dereference of curthread when libstdc++ initializes
the exception handling globals on archs where we can't yet use GNU
TLS due to lack of support in binutils 2.15 (i.e. arm and sparc64),
thus making threaded C++ programs compiled with GCC 4.2.1 work
again on these archs.

Reviewed by:	davidxu
MFC after:	3 days
2007-11-06 21:50:43 +00:00
David Xu
56b45d9067 Avoid doing adaptive spinning for priority-protected mutexes; the current
implementation always locks in the kernel.
2007-10-31 01:50:48 +00:00
David Xu
55f18e070f Don't do adaptive spinning when running on a UP kernel. 2007-10-31 01:44:50 +00:00
David Xu
e8ef3c283b Restore revision 1.55, kris's adaptive mutex type. 2007-10-31 01:37:13 +00:00
Kris Kennaway
83941f797f Adaptive mutexes should have the same deadlock detection properties that
default (errorcheck) mutexes do.

Noticed by:          davidxu
2007-10-30 09:24:23 +00:00
David Xu
7416cdabcd Add my recent work of adaptive spin mutex code. Use two environment variables
to tune pthread mutex performance:
1. LIBPTHREAD_SPINLOOPS
	If a pthread mutex is being locked by another thread, this environment
	variable sets the total number of spin loops before the current thread
	sleeps in the kernel; this saves the syscall overhead if the mutex will
	be unlocked very soon (well-written application code).
2. LIBPTHREAD_YIELDLOOPS
	If a pthread mutex is being locked by other threads, this environment
	variable sets the total number of sched_yield() loops before the current
	thread sleeps in the kernel. If a pthread mutex is locked, the current
	thread gives up the CPU but does not sleep in the kernel; this means the
	current thread does not set the contention bit in the mutex, but lets
	the lock owner run again if the owner is on the kernel's run queue. When
	the lock owner then unlocks the mutex, it does not need to enter the
	kernel and do lots of work to resume the mutex waiters; in some cases,
	this saves a lot of syscall overhead for the mutex owner.

In my experience, LIBPTHREAD_YIELDLOOPS can sometimes improve performance far
more than LIBPTHREAD_SPINLOOPS; this depends on the application. These two
environment variables are global to all pthread mutexes; there is no
interface to set them per mutex. The default values are zero, which means
spinning is turned off by default.
2007-10-30 05:57:37 +00:00
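A sketch of the lock path these two variables tune; all names except sched_yield() are assumptions standing in for libthr internals, and error handling is elided:

    #include <sched.h>

    static int _thr_spinloops;      /* from LIBPTHREAD_SPINLOOPS */
    static int _thr_yieldloops;     /* from LIBPTHREAD_YIELDLOOPS */

    static void
    mutex_lock_sketch(struct pthread_mutex *m)
    {
            int count;

            /* Stage 1: pure userland spinning, no syscalls at all. */
            for (count = _thr_spinloops; count > 0; count--) {
                    if (mutex_trylock_userland(m) == 0)
                            return;
                    CPU_SPINWAIT;   /* per-arch pause hint */
            }
            /* Stage 2: yield the CPU to the owner without sleeping,
             * so the contention bit is still not set. */
            for (count = _thr_yieldloops; count > 0; count--) {
                    if (mutex_trylock_userland(m) == 0)
                            return;
                    sched_yield();
            }
            /* Stage 3: sleep in the kernel; the owner must now make a
             * syscall at unlock time to wake us. */
            mutex_lock_kernel(m);
    }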
Kris Kennaway
2017a7cdfe Add a new "non-portable" mutex type, PTHREAD_MUTEX_ADAPTIVE_NP. This
is also implemented in glibc and is used by a number of existing
applications (mysql, firefox, etc).

This mutex type is a default mutex with the additional property that
it spins briefly when attempting to acquire a contested lock, doing
trylock operations in userland before entering the kernel to block if
eventually unsuccessful.

The expectation is that applications requesting this mutex type know
that the mutex is likely to be only held for very brief periods, so it
is faster to spin in userland and probably succeed in acquiring the
mutex, than to enter the kernel and sleep, only to be woken up almost
immediately.  This can help significantly in certain cases when
pthread mutexes are heavily contended and held for brief durations
(such as mysql).

Spin up to 200 times before entering the kernel, which represents only
a few us on modern CPUs.  No performance degradation was observed with
this value and it is sufficient to avoid a large performance drop in
mysql performance in the heavily contended pthread mutex case.

The libkse implementation is a NOP.

Reviewed by:      jeff
MFC after:        3 days
2007-10-29 21:01:47 +00:00
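Requesting the new type from an application is plain pthreads attribute code; PTHREAD_MUTEX_ADAPTIVE_NP is the only non-standard identifier used:

    #include <pthread.h>

    static pthread_mutex_t m;

    static int
    init_adaptive_mutex(void)
    {
            pthread_mutexattr_t attr;
            int error;

            pthread_mutexattr_init(&attr);
            error = pthread_mutexattr_settype(&attr,
                PTHREAD_MUTEX_ADAPTIVE_NP);
            if (error == 0)
                    error = pthread_mutex_init(&m, &attr);
            pthread_mutexattr_destroy(&attr);
            return (error);
    }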
David Xu
5150e987d2 Use the THR_CLEANUP_PUSH/POP macros; they are cheaper than pthread_cleanup_push/pop. 2007-10-16 07:46:15 +00:00
David Xu
286b41104d Reverse the logic of UP and SMP.
Submitted by: jasone
2007-10-16 07:36:02 +00:00
David Xu
4aa80591b6 Output error message to STDERR_FILENO.
Approved by: re (bmah)
2007-08-07 04:50:14 +00:00
David Xu
00784f8b10 Back out the experimental adaptive spinning mutex code; it is not ready for production use. 2007-05-09 08:39:33 +00:00
David Xu
6839e793cf If the thread whose name is being set is not the current thread, use the
THR_THREAD_LOCK and THR_THREAD_UNLOCK macros instead; this should fix the
wrong-lock-level problem.

Bug reported by: ed dot maste at gmail dot com
2007-04-05 07:20:31 +00:00
Warner Losh
fed32d7544 Remove 3rd clause, renumber, ok per email 2007-01-12 07:26:21 +00:00
David Xu
03779e5c2b Insert mutex at tail if it has highest ceiling. 2007-01-05 03:57:11 +00:00
David Xu
da20a63dbb Oops, don't corrupt the list. 2007-01-05 03:33:47 +00:00
David Xu
5470bb56fc Check if the PP mutex is recursive and we have already locked it; place the
mutex in the right order, sorted by priority ceiling.
2007-01-05 03:29:15 +00:00
David Xu
74c751131b Get LIBPTHREAD_ADAPTIVE_SPIN early, so it can be used for some global
mutexes.
2006-12-20 05:05:44 +00:00
David Xu
842a092b74 Check the environment variable PTHREAD_ADAPTIVE_SPIN; if it is set, use
it as the default spin cycle count.
2006-12-20 04:43:34 +00:00
David Xu
d99f6dac14 - Remove variable _thr_scope_system, all threads are system scope.
- Rename _thr_smp_cpus to boolean variable _thr_is_smp.
- Define CPU_SPINWAIT macro for each arch, only X86 supports it.
2006-12-15 11:52:01 +00:00
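A sketch of what the per-arch CPU_SPINWAIT macro can expand to; on x86 the natural choice is the PAUSE instruction, and archs without an equivalent hint simply define the macro empty:

    #if defined(__i386__) || defined(__amd64__)
    #define CPU_SPINWAIT    __asm __volatile("pause")
    #else
    #define CPU_SPINWAIT            /* no spin-wait hint on this arch */
    #endif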
David Xu
8a8178c010 Create inline function _thr_umutex_trylock2 to try only one atomic
operation; if it fails, we call the syscall directly. This saves
one atomic operation per lock contention.
2006-12-14 13:22:02 +00:00
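A sketch of the inline described above, assuming the umutex owner field and the UMUTEX_UNOWNED constant from sys/umtx.h; on failure the caller falls through to the syscall path instead of retrying the atomic:

    #include <sys/types.h>
    #include <sys/umtx.h>
    #include <machine/atomic.h>
    #include <errno.h>

    static inline int
    _thr_umutex_trylock2(struct umutex *mtx, uint32_t id)
    {
            /* One compare-and-set from "unowned" to our thread id. */
            if (atomic_cmpset_acq_32((volatile uint32_t *)&mtx->m_owner,
                UMUTEX_UNOWNED, id))
                    return (0);
            return (EBUSY);         /* caller goes to the kernel */
    }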
David Xu
9e8a8aa551 Correctly check failed syscall. 2006-12-12 05:26:39 +00:00
David Xu
347126a2e2 Move the check for c_has_waiters into the low-level _thr_ucond_signal and
_thr_ucond_broadcast; clear the condition variable pointer in the cancellation
info after returning from _thr_ucond_wait, since the kernel has already
dropped the internal lock, so we don't need to unlock it in the cancellation
handler again.
2006-12-12 03:08:49 +00:00
David Xu
b774466b61 Test cancel_pending to save a thr_wake call in some special cases. 2006-12-06 00:15:35 +00:00
David Xu
a8a343d2e6 _thr_ucond_wait drops the lock; we should pick it up again. 2006-12-05 23:46:11 +00:00
David Xu
3b8a017442 The c_has_waiters field is lazily updated; temporarily disable the false
alarm code.
2006-12-05 07:23:58 +00:00
David Xu
4d617f2d10 Use ucond to implement barrier. 2006-12-05 06:54:25 +00:00
David Xu
670b44d65a Add _thr_ucond_init(). 2006-12-05 06:53:44 +00:00
David Xu
3ce4e91d4e Tweak _thr_cancel_leave_defer a bit to fix a possible race. 2006-12-05 05:01:57 +00:00
David Xu
3c61d00ab6 Fix typo; I was using the wrong header file, and the typo was not detected
by the compiler.
2006-12-04 14:27:42 +00:00
David Xu
2bd2c90703 Use the kernel-provided userspace condition variable to implement the pthread
condition variable.
2006-12-04 14:20:41 +00:00
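A sketch of the kernel-backed path, assuming the _umtx_op(2) condition-variable operations; error handling is elided. Waiting atomically releases the associated umutex and sleeps on the ucond, both of which the kernel can see:

    #include <sys/types.h>
    #include <sys/umtx.h>

    static int
    cond_wait_sketch(struct ucond *cv, struct umutex *m)
    {
            return (_umtx_op(cv, UMTX_OP_CV_WAIT, 0, m, NULL));
    }

    static int
    cond_signal_sketch(struct ucond *cv)
    {
            return (_umtx_op(cv, UMTX_OP_CV_SIGNAL, 0, NULL, NULL));
    }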
David Xu
6f54e82927 If a thread was detached, return EINVAL instead; this error code
is also returned by pthread_detach() if a thread was already
detached. The error code is already documented:

>    [EINVAL]	The implementation has detected that the value
>		specified by thread does not refer to a joinable
>		thread.
2006-11-28 11:05:31 +00:00
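A short caller-side example of the documented behaviour:

    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>

    static void
    join_example(pthread_t tid)
    {
            /* Joining a detached (non-joinable) thread now yields
             * EINVAL, matching the text quoted above. */
            int error = pthread_join(tid, NULL);

            if (error == EINVAL)
                    fprintf(stderr, "thread is not joinable\n");
    }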
David Xu
f08e1bf682 Eliminate atomic operations in the thread cancellation functions; this should
reduce the overhead of cancellation points.
2006-11-24 09:57:38 +00:00