adding (weak definitions to) stubs for some of the pthread
functions. If the threads library is linked in, the real
pthread functions will be pulled in.
Use the following convention for system calls wrapped by the
threads library:
__sys_foo - actual system call
_foo - weak definition to __sys_foo
foo - weak definition to __sys_foo
Change all libc uses of system calls wrapped by the threads
library from foo to _foo. In order to define the prototypes
for _foo(), we introduce namespace.h and un-namespace.h
(suggested by bde). All files that need to reference these
system calls, should include namespace.h before any standard
includes, then include un-namespace.h after the standard
includes and before any local includes. <db.h> is an exception
and shouldn't be included in between namespace.h and
un-namespace.h. namespace.h will define foo to _foo, and
un-namespace.h will undefine foo.
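As a minimal sketch of the convention (the macro contents shown in the
comments and the local header name are illustrative assumptions, not the
literal contents of namespace.h), a libc source file would look roughly
like this:

	#include "namespace.h"		/* e.g. #define open _open, #define close _close */
	#include <fcntl.h>		/* standard headers now declare _open() */
	#include <unistd.h>		/* ... and _close() */
	#include "un-namespace.h"	/* #undef open, #undef close */

	#include "local.h"		/* hypothetical local include, after un-namespace.h */

	int
	example_touch(const char *path)
	{
		/* libc-internal code calls the _foo() names directly, so a
		 * linked-in threads library can wrap the system calls without
		 * changing the application's view of foo(). */
		int fd = _open(path, O_RDONLY);

		if (fd == -1)
			return (-1);
		return (_close(fd));
	}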
Try to eliminate some of the recursive calls to MT-safe
functions in libc/stdio in preparation for adding a mutex
to FILE. We have recursive mutexes, but would like to avoid
using them if possible.
Remove unneeded includes of <errno.h> from a few files.
Add $FreeBSD$ to a few files in order to pass commitprep.
Approved by: -arch
just use _foo() <-- foo(). In the case of a libpthread that doesn't do
call conversion (such as linuxthreads and our upcoming libpthread), this
is adequate. In the case of libc_r, we still need three names, which are
now _thread_sys_foo() <-- _foo() <-- foo().
Convert all internal libc usage of: aio_suspend(), close(), fsync(), msync(),
nanosleep(), open(), fcntl(), read(), and write() to _foo() instead of foo().
Remove all internal libc usage of: creat(), pause(), sleep(), system(),
tcdrain(), wait(), and waitpid().
Make thread cancellation fully POSIX-compliant.
Suggested by: deischen
Updated date. 1987 was a while ago.
Removed trailing comma in NAME section.
Uncapitalised Bindresvport and Bindresvport_sa in DESCRIPTION section.
Don't use .Nm there either.
Added bindresvport_sa() to the RETURN VALUES and ERROR sections.
-changed bindresvport2 to bindresvport_sa
-merged the man into bindresvport.3
All discussion between Jean-Luc Richier <Jean-Luc.Richier@imag.fr>,
Theo de Raadt <deraadt@cvs.openbsd.org>, itojun, is reflected to
this code. (Actually, Theo de Raadt wrote the code simultaneously, as the
discussion evolved.)
points. For library functions, the pattern is __sleep() <--
_libc_sleep() <-- sleep(). The arrows represent weak aliases. For
system calls, the pattern is _read() <-- _libc_read() <-- read().
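A minimal sketch of how such weak aliases are typically spelled in C, using
the __weak_reference() macro from <sys/cdefs.h>; the __sleep() body is a
placeholder, and only the symbol names follow the pattern quoted above:

	#include <sys/cdefs.h>
	#include <unistd.h>

	unsigned int
	__sleep(unsigned int seconds)
	{
		/* ... real implementation lives here ... */
		return (0);
	}

	/* _libc_sleep() and sleep() are weak aliases that resolve to
	 * __sleep() unless a threads library provides strong definitions,
	 * e.g. to insert a cancellation point around the call. */
	__weak_reference(__sleep, _libc_sleep);
	__weak_reference(__sleep, sleep);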
is an application space macro and the applications are supposed to be free
to use it as they please (but cannot). This is consistent with the other
BSDs, which made this change quite some time ago. More commits to come.
MAN8+= rstat_svc.8
The file it talks about doesn't exist on FreeBSD, so there's no point in
installing the manual page. There was already a comment to this effect in
this file, but the entry hadn't been commented out.
rstat.1 and rstat_svc.8 can probably actually be removed.
PR: docs/13767
Submitted by: Seth <seth@freebie.dp.ny.frb.org>
mode. This addresses a well-known race condition that can cause
servers to hang in accept(). The relevant case is when somebody
connects to the server and then immediately kills the connection
by sending a TCP reset. On the server this causes select to report
a ready condition on the socket, after which the accept call blocks
because there is no longer any pending connection to accept.
In -current there is already a work-around for this in the kernel.
It was merged into -stable some time ago, but then David Greenman
reverted it because it seemed to be causing a socket leak in some
cases. (See uipc_socket.c revision 1.51.2.3.) Hence this userland
fix is needed in -stable, and I plan to merge it into that branch
soon because it fixes a potential DoS attack. It may also be needed
in -current if the suspected socket leak turns out to be real. In
any case, after thinking it over I believe the fix belongs in
userland. An application shouldn't assume that a ready return from
select guarantees that the subsequent I/O operation cannot block.
A lot can happen between the select and the accept.
A similar fix should most likely be applied to the Unix domain
socket transport too.
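The general shape of the userland fix is roughly the sketch below (function
and variable names are illustrative, not the actual svc code): the listening
socket is made non-blocking, and an accept() that finds nothing pending
simply hands control back to the select() loop.

	#include <sys/socket.h>
	#include <errno.h>
	#include <fcntl.h>

	static int
	accept_one(int listen_fd, struct sockaddr *sa, socklen_t *salen)
	{
		int newfd;

		/* Ensure the listening descriptor is non-blocking; in the
		 * real code this would be done once, when the transport is
		 * created. */
		if (fcntl(listen_fd, F_SETFL,
		    fcntl(listen_fd, F_GETFL, 0) | O_NONBLOCK) == -1)
			return (-1);

		newfd = accept(listen_fd, sa, salen);
		if (newfd == -1 &&
		    (errno == EWOULDBLOCK || errno == ECONNABORTED)) {
			/* The pending connection vanished (e.g. an immediate
			 * RST); do not block, just return to select(). */
			return (-1);
		}
		return (newfd);
	}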
Submitted by: peter
Reviewed by: jdp
It used to loop back up to the accept() call and block there,
shutting out all other transports until a new connection came in.
Now it returns instead after dropping the connection. That will
take it back to the select() loop where all transports can be
serviced. I intend to MFC this within a day or two since it
fixes a DoS vulnerability.
track.
The $Id$ line is normally at the bottom of the main comment block in the
man page, separated from the rest of the manpage by an empty comment,
like so:
.\" $Id$
.\"
If the immediately preceding comment is a @(#) format ID marker, then the
$Id$ will line up underneath it with no intervening blank lines.
Otherwise, an additional blank line is inserted.
Approved by: bde
uses readtcp() to gather data from the network; readtcp() uses select(),
with a timeout of 35 seconds. The problem with this is that if you
connect to a TCP server, send two bytes of data, then just pause, the
server will remain blocked in readtcp() for up to 35 seconds, which is
sort of a long time. If you keep doing this every 35 seconds, you can
keep the server occupied indefinitely.
To fix this, I modified readtcp() (and its cousin, readunix() in svc_unix.c)
to monitor all service transport handles instead of just the current socket.
This allows the server to keep handling new connections that arrive while
readtcp() is running. This prevents one client from potentially monopolizing
a server.
Also, while I was here, I fixed a bug in the timeout calculations. Someone
attempted to adjust the timeout so that if select() returned EINTR and the
loop was restarted, the timeout would be reduced so that rather than waiting
for another 35 seconds, you could never wait for more than 35 seconds total.
Unfortunately, the calculation was wrong, and the timeout could expire much
sooner than 35 seconds.
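As a hedged illustration of the timeout bookkeeping (not the actual
readtcp() code), the remaining wait can be recomputed from a fixed deadline
so that EINTR restarts never extend the total beyond 35 seconds:

	#include <sys/select.h>
	#include <sys/time.h>
	#include <errno.h>
	#include <stddef.h>

	#define WAIT_SECONDS 35

	static int
	wait_readable(int fd)
	{
		struct timeval deadline, now, tv;
		fd_set readfds;
		int n;

		gettimeofday(&deadline, NULL);
		deadline.tv_sec += WAIT_SECONDS;

		for (;;) {
			gettimeofday(&now, NULL);
			timersub(&deadline, &now, &tv);
			if (tv.tv_sec < 0)
				return (0);	/* total budget exhausted */

			FD_ZERO(&readfds);
			FD_SET(fd, &readfds);
			n = select(fd + 1, &readfds, NULL, NULL, &tv);
			if (n > 0)
				return (1);	/* data available */
			if (n == 0)
				return (0);	/* timed out */
			if (errno != EINTR)
				return (-1);	/* real error */
			/* EINTR: retry with the reduced remaining time. */
		}
	}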
recently in BUGTRAQ. If a stream oriented transport fails to properly decode
an RPC message header structure where there should be one, it should mark
the stream as dead so that the connection will be dropped.
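Schematically (types from <rpc/rpc.h>; the surrounding structure is
simplified from the svc_tcp.c dispatch path), the idea is:

	#include <rpc/rpc.h>

	static enum xprt_stat
	recv_one(XDR *xdrs, struct rpc_msg *msg, enum xprt_stat *strm_stat)
	{
		if (!xdr_callmsg(xdrs, msg)) {
			/* Garbage where an RPC call header should be:
			 * mark the stream dead so the connection drops. */
			*strm_stat = XPRT_DIED;
			return (XPRT_DIED);
		}
		return (XPRT_MOREREQS);
	}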
partway through its attempt to decode the result structure sent by
the server. If this happens, it can leave the result partially
populated with dynamically allocated memory. In this event, the
xdr_replymsg() failure is detected and RPC_CANTDECODERES is returned,
but the memory in the partially populated result struct is not
free()d.
The end result is that memory is leaked when an RPC_CANTDECODERES
error occurs. (This condition can occur if a CLIENT * handle is created
using clntudp_bufcreate() with a receive buffer size that is too small
to handle the result sent by the server.)
Fixed by setting reply_xdrs.x_op to XDR_FREE and calling
xdr_replymsg() again to free the memory if an RPC_CANTDECODERES error
is detected.
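In terms of the XDR API, the fix follows this pattern (names here are
schematic rather than the exact ones in clnt_udp.c):

	#include <rpc/rpc.h>

	static enum clnt_stat
	decode_reply(XDR *xdrs, struct rpc_msg *reply_msg)
	{
		if (xdr_replymsg(xdrs, reply_msg))
			return (RPC_SUCCESS);

		/* Partial decode: switch the same XDR stream to free mode
		 * and let xdr_replymsg() walk the structure again, releasing
		 * anything it allocated before failing. */
		xdrs->x_op = XDR_FREE;
		(void)xdr_replymsg(xdrs, reply_msg);
		return (RPC_CANTDECODERES);
	}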
I suspect that the clnt_tcp.c, clnt_unix.c and clnt_raw.c modules
may have a similar problem, but I haven't duplicated the condition with
those yet.
Found by: dbmalloc
to fail under certain circumstances.
1. In one spot, the ifr_flags member was being examined in the
wrong structure, thus it contained garbage. On a machine in which
only the loopback interface was up, this caused everything that
wanted to talk to the portmapper to fail -- a particular problem
with laptops, where the pccard ethernet interface is likely to come
up long after the attempt to start mountd, nfsd, amd, etc.
2. Compounding the above problem, get_myaddress() returned a
successful status even though it failed to find an address that it
considered good enough.
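For reference, the correct way to obtain the flags is a separate
SIOCGIFFLAGS ioctl on a scratch struct ifreq seeded with the interface name,
roughly as sketched below (simplified; iface_flags() is a made-up helper):

	#include <sys/ioctl.h>
	#include <sys/socket.h>
	#include <net/if.h>
	#include <string.h>

	static int
	iface_flags(int sock, const char *name, short *flags)
	{
		struct ifreq ifreq;

		memset(&ifreq, 0, sizeof(ifreq));
		strncpy(ifreq.ifr_name, name, sizeof(ifreq.ifr_name) - 1);
		if (ioctl(sock, SIOCGIFFLAGS, (char *)&ifreq) < 0)
			return (-1);
		/* Flags come from the freshly filled struct, not from the
		 * SIOCGIFCONF buffer, which does not carry them. */
		*flags = ifreq.ifr_flags;
		return (0);
	}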
made to the RPC code some months ago. The value of __svc_fdsetsize is being
calculated incorrectly.
Logically, one would assume that __svc_fdsetsize is being used as a
substitute for FD_SETSIZE, with the difference being that __svc_fdsetsize
can be expanded on the fly to accommodate more descriptors if need be.
There are two problems: first, __svc_fdsetsize is not initialized to 0.
Second, __svc_fdsetsize is being calculated in svc.c:xprt_register() as:
__svc_fdsetsize = howmany(sock+1, NFDBITS);
This is wrong. If we are adding a socket with index value 4 to the
descriptor set, then __svc_fdsetsize will be 1 (since fds_bits is
an unsigned long, it can support any descriptor from 0 to 31, so we
only need one of them). In order for this to make sense with the
rest of the code though, it should be:
__svc_fdsetsize = howmany(sock+1, NFDBITS) * NFDBITS;
Now if sock == 4, __svc_fdsetsize will be 32.
This bug causes 2 errors to occur. First, in xprt_register(), it
causes the __svc_fdset descriptor array to be freed and reallocated
unnecessarily. The code checks if it needs to expand the array using
the test: if (sock + 1 > __svc_fdsetsize). The very first time through,
__svc_fdsetsize is 0, which is fine: an array has to be allocated the
first time out. However __svc_fdsetsize is incorrectly set to 1, so
on the second time through, the test (sock + 1 > __svc_fdsetsize)
will still succeed, and the __svc_fdset array will be destroyed and
reallocated for no reason.
Second, the code in svc_run.c:svc_run() can become hopelessly confused.
The svc_run() routine malloc()s its own fd_set array using the value
of __svc_fdsetsize to decide how much memory to allocate. Once the
xprt_register() function expands the __svc_fdset array the first time,
the value for __svc_fdsetsize becomes 2, which is too small: the resulting
calculation causes the code to allocate an array that's only 32 bits wide
when it actually needs 64 bits. It also uses the value of __svc_fdsetsize
when copying the contents of the __svc_fdset array into the new array.
The end result is that all but the first 32 file descriptors get lost.
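A small sketch of the corrected bookkeeping (schematic, not the actual
svc.c code; it assumes 32-bit fd_mask words as in the example above):

	#include <sys/param.h>
	#include <sys/select.h>

	static int __svc_fdsetsize = 0;	/* must start at 0, as noted above */

	/* Return nonzero when the descriptor array needs to grow to hold
	 * `sock', updating __svc_fdsetsize with the corrected calculation. */
	static int
	need_expand(int sock)
	{
		if (sock + 1 > __svc_fdsetsize) {
			/* howmany(sock + 1, NFDBITS) counts fd_mask words;
			 * multiplying by NFDBITS converts back to a
			 * descriptor count.  For sock == 4:
			 * howmany(5, 32) * 32 == 32. */
			__svc_fdsetsize = howmany(sock + 1, NFDBITS) * NFDBITS;
			return (1);
		}
		return (0);
	}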
Note: from what I can tell, this bug originated in OpenBSD and was
brought over to us when the code was merged. The bug is still there
in the OpenBSD source.
Total nervous breakdown averted by: Electric Fence 2.0.5
undefined symbol referenced from libc. Without the stub, it is
impossible to execute any program using the shared library if
LD_BIND_NOW=1 is in the environment. The stub always returns
failure, but it can be overridden outside the library when necessary.
I don't know whether this is the "correct" fix, but it is intolerable
to have any undefined symbols referenced from libc.
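The stub itself amounts to a weak definition that unconditionally reports
failure; a hedged sketch (example_hook() is a made-up name, since the commit
message does not name the symbol):

	/* Weak stub: satisfies the reference at startup (including with
	 * LD_BIND_NOW=1), but any strong definition outside the library
	 * still takes precedence. */
	int example_hook(void) __attribute__((weak));

	int
	example_hook(void)
	{
		return (-1);	/* always report failure from the stub */
	}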
The logic in get_myaddress() is broken: it always returns the loopback
address due to the following rule:
if ((ifreq.ifr_flags & IFF_UP) &&
ifr->ifr_addr.sa_family == AF_INET &&
(loopback == 1 && (ifreq.ifr_flags & IFF_LOOPBACK))) {
The idea is that we want to select the interface address only if it's
up and it's in the AF_INET family. If it turns out we don't have
such an interface available, we make a second pass through the loop,
this time settling for the loopback interface. But the logic inadvertently
locks out all cases when loopback == 0, so nothing is ever selected until
the second pass (when loopback == 1).
This is changed to:
if (((ifreq.ifr_flags & IFF_UP) &&
ifr->ifr_addr.sa_family == AF_INET) ||
(loopback == 1 && (ifreq.ifr_flags & IFF_LOOPBACK))) {
which I think does the right thing.
This is yet another bogon I discovered during NIS+ testing; I need
get_myaddress() to work correctly so that the callback code in the
client library will work.