invocations). It also fixes some edge cases that were not handled in
the previous version.
TODO: Correctly report IPv6 sockets (already in use by the sparc64 build)
ordering, which had become too limited.
We now build packages ordered by those that are part of the longest
dependency chains first. This has the effect of building the deepest
parts of the tree first and levelling out the tree height, hopefully
avoiding the situation we currently face, where bottlenecks appear late
in the build and the cluster sits mostly idle waiting for a few long
dependency chains to finish building before it can become fully loaded
again.
The algorithm is that we sort the list of remaining packages according
to height (length of the longest dependency chain), then add leaf
packages from each of them in order until we have filled a queue of
between 100 and 200 entries; this amortises the cost of the queue
rebalancing while not losing the height-averaging property. Jobs are
dispatched from this queue to worker threads as machine slots become
available.
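For illustration only, a minimal python sketch of the refill step
described above; the data layout (a dict mapping each package to the set
of packages it depends on) and all names are assumptions, not the actual
pointyhat code:

    def chain_heights(deps):
        # Map each package to the length of its longest dependency chain.
        heights = {}
        def height(pkg):
            if pkg not in heights:
                heights[pkg] = 1 + max((height(d) for d in deps[pkg]),
                                       default=0)
            return heights[pkg]
        for pkg in deps:
            height(pkg)
        return heights

    def buildable_leaves(pkg, deps, built, seen=None):
        # Unbuilt packages in pkg's dependency tree whose own
        # dependencies have all been built already.
        if seen is None:
            seen = set()
        if pkg in seen or pkg in built:
            return []
        seen.add(pkg)
        unbuilt = [d for d in deps[pkg] if d not in built]
        if not unbuilt:
            return [pkg]          # pkg itself is ready to build
        leaves = []
        for dep in unbuilt:
            leaves.extend(buildable_leaves(dep, deps, built, seen))
        return leaves

    def refill_queue(remaining, deps, built, min_len=100, max_len=200):
        # Deepest dependency chains first, until the queue holds
        # between 100 and 200 jobs.
        heights = chain_heights(deps)
        queue = []
        for pkg in sorted(remaining, key=lambda p: heights.get(p, 0),
                          reverse=True):
            for leaf in buildable_leaves(pkg, deps, built):
                if leaf not in queue:
                    queue.append(leaf)
            if len(queue) >= min_len:   # full enough; amortises rebalancing
                break
        return queue[:max_len]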
Unlike the make-based solution, which required a fixed -j concurrency
value and could not respond to the addition or removal of build
resources, we can now dynamically add new machines to the queue as they
become available.
The other advantage of using python is that we get more customisability
and visibility into the build status: for example, we periodically
report the number of remaining packages, as well as the list of deepest
packages that we are still working on.
TODO:
* Implement mtime checking for parent package staleness, so that
parents are rebuilt if their dependencies are touched more recently
(a sketch follows this list). Currently packages will not be rebuilt
if they exist, whether or not they are "stale" wrt their dependencies.
* Offload the machine selection into an external queue manager.
Currently the queue manager used here doesn't interoperate with the
old one (getmachine/releasemachine) because it's not possible to use
the lockf()-based mutual exclusion within a multithreaded client.
Doing that will also allow for a more flexible job placement
algorithm as well as finer queue customization.
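A rough sketch of the mtime check from the first TODO item; the
package-file paths are hypothetical and the real staleness rule may need
to walk the whole dependency tree:

    import os

    def is_stale(pkg_path, dep_paths):
        # A package is "stale" if it is missing or older than any of
        # its dependencies' package files.
        if not os.path.exists(pkg_path):
            return True
        pkg_mtime = os.path.getmtime(pkg_path)
        return any(os.path.getmtime(d) > pkg_mtime
                   for d in dep_paths if os.path.exists(d))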
just the plist ones. If the log is less than 1000 lines after the header,
include it all; otherwise, trim it to the last 1000 lines.
This should help when deciding where to forward logs.
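A minimal sketch of the trimming rule, assuming the header has already
been emitted separately and body is the list of log lines after it; the
real script may differ:

    def trim_log(body, limit=1000):
        # Keep the whole log if it is short enough, otherwise only
        # the tail, with a marker noting the trim.
        if len(body) < limit:
            return body
        return ["(log trimmed to last %d lines)" % limit] + body[-limit:]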
Tested on: pointyhat
makes it possible to correctly analyze why packages were not built for a
specific run.
Add beginning and ending email notifications to help coordinate between
multiple portmgrs doing runs.
lines have 3 spaces before the SUBDIR word while all other categories have 4.
I've asked pav@ if there is a default format for category Makefiles and he
said the number of spaces doesn't matter, so I fixed addport to respect the
current number of spaces and/or tabs the file has.
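A small sketch of how the existing whitespace can be reused when building
the new SUBDIR line; the regex and the fallback are assumptions, not
addport's actual code:

    import re

    def subdir_line(makefile_text, newport):
        # Reuse whatever spaces/tabs the category Makefile already uses
        # before SUBDIR, falling back to four spaces if none is found.
        m = re.search(r'^([ \t]+)SUBDIR[ \t]*\+=', makefile_text,
                      re.MULTILINE)
        indent = m.group(1) if m else "    "
        return "%sSUBDIR += %s" % (indent, newport)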
Reported by: miwi, erwin
- check if an installed libtool records dependencies recursively and
print a warning if it does
currently it prints the warning on every system which has libtool
installed from ports (only my local version doesn't do this; the
version in the ports tree is not correctly patched for this, and a
patch similar in complexity (= simple) to the ltdl.m4 one in the
libtool port's patch directory is needed)
- enhance the regex which is responsible for not printing a dependency
on the port we are just checking
- add a work in progress (not executed) to collapse the USE_* variables
which can have more than one value
neededlibs.sh:
- we also care about shared libs
resolveportsfromlibs.sh:
- take care about USE_OPENSSL, USE_EFL, USE_GL, USE_FAM, USE_OPENLDAP,
USE_SDL
- search in the "ldconfig -r" output if we can not find the lib ourself
- a better way of getting the first part of the LIB_DEPENDS stuff
(lib/libXYZ.so can be specified now too)
- some line wrapping + whitespace
- print the origin for the USE_* too (except USE_OPENSSL), so a user
can make some sanity checks and explicit_lib_depends.sh can DTRT
if we check the USE_* port itself
- warn if we cannot unambiguously determine the right component (can
happen for XORG)
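As a rough illustration of the "ldconfig -r" fallback mentioned above
(the parsing and the helper name are assumptions about the script, not
its actual code):

    import re
    import subprocess

    def find_lib_via_ldconfig(libname):
        # Fall back to the hints dump when the library file was not
        # found directly; lines look like
        # "12:-lfoo.1 => /usr/lib/libfoo.so.1".
        out = subprocess.run(["ldconfig", "-r"],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            m = re.search(r'=>\s*(\S+)$', line)
            if m and m.group(1).split("/")[-1] == libname:
                return m.group(1)
        return None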
dependencies of a port:
neededlibs.sh
Extract direct library dependencies (filenames) from binaries.
resolveportsfromlibs.sh
Prints the name(s) of the port(s) given a library filename,
suitable for direct use (copy&paste) in LIB_DEPENDS.
Example usage is included in the scripts. The following combined usage may
be helpful for further porting/testing automation:
resolveportsfromlibs.sh -b /usr/local $(neededlibs.sh /test/bin/*)
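For comparison, a rough python equivalent of what neededlibs.sh is
described as doing, built on readelf's dynamic-section output; the real
script is a shell script and may work differently:

    import re
    import subprocess
    import sys

    def needed_libs(binary):
        # Direct shared-library dependencies (filenames) of one binary,
        # taken from the (NEEDED) entries of its dynamic section.
        out = subprocess.run(["readelf", "-d", binary],
                             capture_output=True, text=True).stdout
        return re.findall(r'\(NEEDED\).*\[([^\]]+)\]', out)

    if __name__ == "__main__":
        libs = set()
        for path in sys.argv[1:]:
            libs.update(needed_libs(path))
        print("\n".join(sorted(libs)))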
Requested by: kris, lofi (sort of)
bsd.commands.mk and can be easily reused within the infrastructure.
- Revert old DESTDIR implementation.
- Add a new, fully chrooted DESTDIR implementation as bsd.destdir.mk.
Sponsored by: Google Summer of Code 2007
Approved by: portmgr (pav)
zfs:
* Enabled by use_zfs=1 in portbuild.conf
* Populate build chroots by cloning a zfs snapshot instead of maintaining
many duplicate copies (a sketch follows this list). In principle this is
very efficient since everything is copy-on-write and zfs snapshot creation
is almost instantaneous. There might be additional overheads from building
on zfs though. Currently the snapshot base is hard-wired to y/${branch}@base
but should be parametrized. This also must be populated beforehand, e.g.
during machine startup.
* Clean build chroots by just destroying the clone.
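A minimal sketch of the clone/destroy cycle described above; the dataset
names mirror the hard-wired y/${branch}@base base, but everything else is
an assumption, and the real portbuild scripts handle more cases:

    import subprocess

    def create_chroot(branch, chroot_fs):
        # Clone the pre-populated snapshot, e.g.
        # create_chroot("7", "y/7-chroot-12345").
        snapshot = "y/%s@base" % branch
        subprocess.run(["zfs", "clone", snapshot, chroot_fs], check=True)

    def destroy_chroot(chroot_fs):
        # Cleaning up is just destroying the clone ("-f" forces unmount).
        subprocess.run(["zfs", "destroy", "-f", chroot_fs], check=True)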
tmpfs:
* Enabled by use_tmpfs=1 and tmpfs_size in portbuild.conf
* The previous md strategy of mounting in used/, populating and then
remounting (to avoid possible races from multiple builds claiming the
same chroot) doesn't work here because tmpfs instances are destroyed at
umount. I am not entirely sure the simpler approach is free from races.