stale obj directory and we wouldn't want to do that! I trust he knows
what he's talking about. 8-)
Also avoid building libm at all until the NetBSD asm code is imported.
I wrongly commented this out last time. Oops.
that this source is compiled against. This source is referenced by
install, which is needed as a build tool and must be able to compile
against NetBSD headers and libraries if we have a hope of supporting
another architecture.
With this change, that's two working programs down and 3945 (?) to go.
The other one was make, but that didn't need any changes to work under
FreeBSD/Alpha. 8-)
to another architecture (in this case the Alpha) we can continue to use
the host csu objects (from NetBSD). This should be a non-functional change
to FreeBSD/i386.
case has very little to do with the output size being larger than
INT_MAX.
2. The new #include of <limits.h> was disordered.
3. The new declaration of `on' was disordered (integer types go together).
4. Testing an unsigned value for > 0 was fishy.
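On point 4, a minimal illustration (the name `on' is taken from the text
above; its exact type in the original source is an assumption here):

    #include <stddef.h>

    size_t on;          /* assumed unsigned; the real type is a guess */

    int
    on_is_set(void)
    {
            /* For an unsigned value, `on > 0' is just `on != 0'. */
            return (on > 0);
    }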
Submitted by: bde
mlock, mmap, mprotect, msync, munlock, and munmap are defined by
POSIX as taking void *. The const modifier has been added to
mlock, munlock, and mprotect as the standard dictates.
minherit comes from OpenBSD and has been updated to conform with
their recent change to void *.
madvise and mincore are not defined by POSIX, but their arguments
have been modified to be consistent with the POSIX-defined functions.
mincore takes a const pointer, but madvise does not due to the
MADV_FREE case.
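For reference, a sketch of the resulting prototypes as described above
(a summary only; the actual layout of <sys/mman.h> may differ):

    #include <sys/types.h>

    /* POSIX-defined: addresses are void *, const where the memory is
     * only inspected. */
    int      mlock(const void *addr, size_t len);
    int      munlock(const void *addr, size_t len);
    int      mprotect(const void *addr, size_t len, int prot);
    void    *mmap(void *addr, size_t len, int prot, int flags, int fd,
                off_t offset);
    int      msync(void *addr, size_t len, int flags);
    int      munmap(void *addr, size_t len);

    /* Not in POSIX, but made consistent with the above. */
    int      minherit(void *addr, size_t len, int inherit);
    int      madvise(void *addr, size_t len, int behav); /* non-const: MADV_FREE */
    int      mincore(const void *addr, size_t len, char *vec);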
Discussed with: bde
at the first position on either of the last two lines of the
screen. I.e., append the contents of the current line to the previous
line and scroll the next line's contents up.
PR: 5392
Submitted by: Kouichi Hirabayashi <kh@mogami-wire.co.jp>
instead of Single Unix; thanks Bruce for explaining, I did not realize
a standards war was there.
But now, fix the n == 0 case to not return an error and fix the check
for too big n.
Things left to do: check for overflow in arguments.
The final word is Bruce's quote:
C9x specifies the BSD4.4-Lite behaviour:
[#3] ... Thus, the
null-terminated output has been completely written if and
only if the returned value is less than n.
It means that if we do not have any null-terminated output, as for
n == 0, we can't return a value less than n, so we are forced to return
a value equal to n, i.e. 0.
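A minimal caller-side sketch of that rule (the buffer and values are
arbitrary, for illustration only): the output is known to be completely
written exactly when the return value is non-negative and less than n.

    #include <stdio.h>

    int
    main(void)
    {
            char buf[8];
            int ret;

            ret = snprintf(buf, sizeof(buf), "value=%d", 12345);
            if (ret < 0)
                    printf("output error\n");
            else if ((size_t)ret < sizeof(buf))
                    printf("complete: \"%s\"\n", buf);
            else
                    printf("truncated: \"%s\"\n", buf);
            return (0);
    }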
The next good thing is glibc compatibility, of course.
2) Do the check for too big n in a machine-independent way.
3) Minor optimization assuming EOF is < 0
The main argument is that it is impossible to determine whether %n was
evaluated or not when snprintf returns 0, because it can happen for both
n == 0 and n == 1.
Although EOF here is a good indication of the end of the process, if n
is decreased in the loop...
Since it is already assumed in many places that EOF *is* negative, e.g.
from the Single Unix specs for snprintf
"return ... a negative value if an output error was encountered"
this does not make the situation worse.
to pass not more than the buffer size to the %n argument; the old
variant always assumed an infinite buffer.
%n is for actually transmitted characters, not for planned ones.
"return the number of bytes needed, rather the number used"
According to Single Unix specs:
Upon successful completion, these functions return the number of bytes
transmitted excluding the terminating null
1) If the buffer size is smaller than the arguments size, return the
buffer size, not the arguments size as before.
2) If the buffer size is 0, return 0, not EOF as before.
(Now it is compatible with the Linux and Apache implementations too.)
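Purely as a sketch, the return value these two rules describe can be
written out as follows (the helper name is hypothetical; this mirrors
only the behaviour described here, not the C9x behaviour quoted further
up, and not what a current snprintf does):

    #include <stddef.h>

    /*
     * Illustrative only: the return value per rules 1) and 2) above,
     * given the buffer size n and the size the formatted arguments
     * would need (excluding the terminating NUL).
     */
    size_t
    described_snprintf_return(size_t n, size_t args_size)
    {
            if (n == 0)
                    return (0);             /* rule 2): 0, not EOF */
            if (args_size > n)
                    return (n);             /* rule 1): the buffer size */
            return (args_size);             /* everything fit */
    }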
NOTE: the Single Unix specs say:
If the value of n {buffer size} is zero on a call to snprintf(), an
unspecified value less than 1 is returned.
It means we can't return EOF, since EOF can take *any* value in general,
not necessarily one < 1. A better variant would be to return -1 (it is
less than 1 and distinct from the n == 1 case), but the -1 value is
already occupied by EOF in our implementation, so we couldn't
distinguish a true I/O error in that case. So 0 here is the only
possible case still conforming to the Single Unix specs.