encryption compatibility with Windows 2000. Stateful encryption
uses less CPU but performs poorly over lossy transports.
The ``set mppe'' command has been expanded. If it's used with any
arguments, ppp will insist on encryption, closing LCP if the other
end refuses.
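For example (the key-length and stateful/stateless argument forms here
follow ppp(8); the exact values are illustrative):
set mppe 128 stateless
insists on 128-bit stateless MPPE and closes LCP if the peer refuses it.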
Unfortunately, Microsoft have abused the CCP reset request so that
receiving a reset request does not result in a reset ack when using
MPPE...
Sponsored by: Monzoon Networks AG and FreeBSD Services Limited
Allow MRU/MTU negotiations to exceed 1492.
Add an optional ``max'' specifier to ``set m[rt]u'', e.g.
set mtu max 1480
Bump the ppp version number.
Sponsored by: Monzoon Networks AG and FreeBSD Services Limited
CLOSE_NORMAL meanings. CLOSE_NORMAL doesn't change the currently
required state; the others do. This should stop ppp from entering
DATALINK_READY when LCP shutdown doesn't end up happening cleanly.
Bump our version number to reflect this change.
Only show the mask in ``show bundle'' when it's been specified.
Complain about unexpected arguments after ``set server {none,open,closed}''.
Log re-open failures as warnings rather than phase messages.
Fix some markup for the ``set server'' man page description.
Allow ``set server open'' to re-open the diagnostic socket.
Handle SIGUSR1 by re-opening the diagnostic socket.
When receiving SIGUSR2 (and in ``set server none''), don't forget the
socket details, so that ``set server open'' and SIGUSR1 can open it again.
Don't create the diagnostic socket as uid 0! It's far too dangerous.
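A hedged sketch of how these fit together (the socket path and mask are
illustrative, not a recommendation):
set server /var/run/ppp/command "" 0177
set server none
set server open
After the ``set server none'', either ``set server open'' or a SIGUSR1
re-opens the same socket, since the details are no longer forgotten.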
of the two when calculating the MP throughput average for the ``set
autoload'' implementation.
This makes more sense as all links I know of are full-duplex. This
also means that people may need to adjust their autoload settings
as 100% bandwidth is now the theoretical maximum rather than 200%
(but of course, halving the current settings is probably not the
correct answer either!).
This involves a ppp version bump as we need to pass an extra
throughput array through the MP local domain socket.
cumulative total of all active links rather than basing it on the
total of PROTO_MP traffic.
This fixes a problem whereby Cisco routers send PROTO_IP (rather than
PROTO_MP) packets when there's only one link (hmm, what a good idea!).
o If the new ``filter-decapsulation'' option is enabled, delve into UDP
packets that contain 0xff 0x03 as the first two bytes, and if we recognise
the payload as PROTO_IP, decapsulate it for the purpose of filter checking.
If we recognise it as PROTO_<anything else>, mention this for logging
purposes only.
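As a rough illustration of the check (a sketch only, not ppp's actual
code; the protocol-field-compression handling is an assumption):

/*
 * Sketch only; not ppp's actual code.  It shows the kind of check
 * described above: does a UDP payload start with the PPP HDLC bytes
 * 0xff 0x03, and is the protocol that follows PROTO_IP (0x0021)?
 */
#include <stddef.h>
#include <stdint.h>

#define PROTO_IP 0x0021          /* PPP protocol number for IPv4 */

static int
looks_like_ppp_ip(const uint8_t *p, size_t len)
{
  uint16_t proto;

  if (len < 4 || p[0] != 0xff || p[1] != 0x03)
    return 0;                    /* no HDLC address/control pair */
  if (p[2] & 1)                  /* odd first byte: compressed protocol field */
    proto = p[2];
  else
    proto = (uint16_t)((p[2] << 8) | p[3]);
  return proto == PROTO_IP;      /* only PROTO_IP gets decapsulated */
}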
This change is aimed at people running PPPoUDP where the UDP traffic is
being sent over another PPP link. It's desirable to have the top level
link connected all the time, but to have the bottom level link capable
of decapsulating the traffic and comparing the payload against the filters,
thus allowing ``set filter dial ...'' to work in tunnelled environments.
The caveat here is that the top ppp cannot employ any compression layers
without making the data unreadable for the bottom ppp. ``disable deflate
pred1 vj'' and ``deny deflate pred1 vj'' are suggested.
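A sketch of the two configurations (the profile layout and the catch-all
dial filter are illustrative only):

# top-level ppp (the PPPoUDP link): keep its payload readable
disable deflate pred1 vj
deny deflate pred1 vj

# bottom-level ppp (the dial-on-demand link carrying the UDP traffic)
enable filter-decapsulation
set filter dial 0 permit 0/0 0/0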
This is invaluable for dial-on-demand connections...
In ppp.linkup:
set log -dns -tcp/ip
and in ppp.linkdown:
set log +dns +tcp/ip
giving a much better account of why the link came up.
method avoided all race conditions, but suffered from
sometimes running out of buffer space if enough clients
were piled up at the same time.
Now, the client pushes the link descriptor, one end of a
socketpair() and the ppp version via sendmsg() to the
server. The server replies with a pid. The client then
transfers any link lock with uu_lock_txfr() and writev()s
the actual link contents. The socketpair is now the only
place we need to have large socket buffers and the bind()ed
socket can keep the default 4k buffer while still handling
around 90 racing clients.
Previously, ppp attempted to bind() to a local domain tcp socket
based on the peer authname & enddisc. If it succeeded, it listen()ed
and became MP server. If it failed, it connect()ed and became MP
client. The server then select()ed on the descriptor, accept()ed
it, wrote its pid to it, read the link data & link file descriptor,
and finally sent an ack (``!''). The client would read() the server
pid, transfer the link lock to that pid, send the link data & descriptor
and read the ack. It would then close the descriptor and clean up.
There was a race between the bind() and listen() where someone could
attempt to connect() and fail.
This change removes the race. Now ppp makes the RCVBUF big enough on a
socket descriptor and attempts to bind() to a local domain *udp* socket
(same name as before). If it succeeds, it becomes MP server. If it
fails, it sets the SNDBUF and connect()s, becoming MP client. The server
select()s on the descriptor and recvmsg()s the message, insisting on at
least two descriptors (plus the link data). It uses the second descriptor
to write() its pid, then read()s an ack (``!''). The client creates a
socketpair() and sendmsg()s the link data, link descriptor and one of
the socketpair descriptors. It then read()s the server pid from the
other socketpair descriptor, transfers any locks and write()s an ack.
Now, there can be no race, and a connect() failure indicates a stale
socket file.
This also fixes MP ppp over ethernet, where the struct msghdr was being
misconstructed when transferring the control socket descriptor.
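For reference, here is a minimal sketch of the client side of this
exchange (not ppp's actual code; names and the payload layout are
invented): a single sendmsg() carries the link data plus two descriptors
as SCM_RIGHTS ancillary data.

/* Sketch only; not ppp's actual code. */
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <string.h>

static int
send_link(int server, int linkfd, int replyfd, void *data, size_t len)
{
  struct msghdr msg;
  struct iovec iov;
  struct cmsghdr *cmsg;
  union {                        /* correctly aligned ancillary buffer */
    struct cmsghdr align;
    char space[CMSG_SPACE(sizeof(int) * 2)];
  } cbuf;

  memset(&msg, 0, sizeof msg);
  memset(&cbuf, 0, sizeof cbuf);

  iov.iov_base = data;           /* the link data itself */
  iov.iov_len = len;
  msg.msg_iov = &iov;
  msg.msg_iovlen = 1;
  msg.msg_control = cbuf.space;  /* the descriptors travel here */
  msg.msg_controllen = sizeof cbuf.space;

  cmsg = CMSG_FIRSTHDR(&msg);
  cmsg->cmsg_len = CMSG_LEN(sizeof(int) * 2);
  cmsg->cmsg_level = SOL_SOCKET;
  cmsg->cmsg_type = SCM_RIGHTS;  /* pass descriptors, not bytes */
  memcpy(CMSG_DATA(cmsg), &linkfd, sizeof linkfd);
  memcpy((char *)CMSG_DATA(cmsg) + sizeof linkfd, &replyfd, sizeof replyfd);

  /* the server recvmsg()s this and write()s its pid down replyfd */
  return sendmsg(server, &msg, 0) == -1 ? -1 : 0;
}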
Also, if we fail to send the link, don't hang around in a ``session
owner'' state; just do the setsid() and fork() if it's required to
disown a tty.
UDP idea suggested by: Chris Bennet from Mindspring at FreeBSDCon