Mirror of https://git.FreeBSD.org/src.git, synced 2024-12-21 11:13:30 +00:00
Commit Graph

17 Commits

Author SHA1 Message Date
Tim Kientzle
cc1e3ebe54 Eliminate an unused assignment. 2009-12-28 02:29:21 +00:00
Tim Kientzle
bfe2732de8 LZW bugfix: when we hit end-of-file, return an invalid code. 2009-04-17 00:58:44 +00:00
Tim Kientzle
634fb9dd48 Merge r491,493,500,507,510,530,543 from libarchive.googlecode.com:
This implements the new generic options framework that provides a way
to override format- and compression-specific parameters.
2009-03-06 05:58:56 +00:00
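The options framework merged here is exposed through public calls such as archive_read_set_options(); a minimal sketch of how a caller might override a module-specific parameter follows.  The option string shown is illustrative only; consult the manual page for what each format or filter actually accepts.

/*
 * Sketch, not from the commit: overriding a module-specific option
 * through the generic "module:option=value" interface.
 */
#include <archive.h>
#include <stdio.h>

int
read_with_options(const char *path)
{
    struct archive *a = archive_read_new();
    int r;

    archive_read_support_format_all(a);
    archive_read_support_compression_all(a);  /* name of this era; newer code uses ..._filter_all() */

    /* Unknown modules or options are reported through the error string. */
    r = archive_read_set_options(a, "iso9660:joliet");
    if (r != ARCHIVE_OK)
        fprintf(stderr, "%s\n", archive_error_string(a));

    r = archive_read_open_filename(a, path, 10240);
    /* ... iterate entries as usual ... */
    archive_read_finish(a);                   /* name of this era; newer code uses archive_read_free() */
    return (r);
}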
Tim Kientzle
ffd201719e Argh. r189389 was supposed to include r539 from libarchive.googlecode.com
but those compile fixes somehow got lost.  This should fix the build.
2009-03-05 06:26:08 +00:00
Tim Kientzle
facbbae9f9 Merge r364, r378, r379, r393, and r539 from libarchive.googlecode.com:
This is the last phase of the "big decompression refactor" that
puts a lazy reblocking layer between each pair of read filters.
I've also changed the terminology for this area---the two kinds
of objects are now called "read filters" and "read filter bidders"---and
moved ownership of these objects to the archive_read core.

This greatly simplifies implementing new read filters, which
can now use peek/consume I/O semantics both for bidding (arbitrary
look-ahead!) and for reading streams (look-ahead simplifies handling
concatenated streams, for instance).

The first merge here is the overhaul proper; the remainder are small
fixes to correct errors in the initial implementation.
2009-03-05 02:19:42 +00:00
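A "read filter bidder" essentially peeks at the upcoming bytes and returns a bid proportional to its confidence.  The sketch below only illustrates that bidding pattern; the helper __archive_read_filter_ahead() and the structure names come from libarchive's private headers and their exact signatures have varied between releases, so treat them as assumptions.

/*
 * Illustrative bid function for a read filter bidder (gzip magic used
 * as the example).  Internal names are assumptions.
 */
static int
example_bidder_bid(struct archive_read_filter_bidder *self,
    struct archive_read_filter *upstream)
{
    const unsigned char *p;
    ssize_t avail;

    (void)self; /* unused in this sketch */

    /* Arbitrary look-ahead: ask for two bytes without consuming them. */
    p = __archive_read_filter_ahead(upstream, 2, &avail);
    if (p == NULL || avail < 2)
        return (0);

    /* Bid if the magic matches. */
    if (p[0] == 0x1f && p[1] == 0x8b)
        return (16);    /* roughly: bits of magic matched */
    return (0);
}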
Tim Kientzle
bc14277c79 Merge r282 from libarchive.googlecode.com: Close multiple filters
by walking the filter list in archive_read_close().
2009-03-03 03:33:25 +00:00
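The idea is simply that each read filter keeps a pointer to the filter upstream of it, so closing becomes a walk along that chain.  A sketch of the shape only; the field and callback names below are assumptions, not the committed code.

/*
 * Sketch: close every filter by walking the chain from the innermost
 * filter outward.  Field names (filter, upstream, close) are assumed.
 */
static void
close_all_filters(struct archive_read *a)
{
    struct archive_read_filter *f = a->filter;

    while (f != NULL) {
        struct archive_read_filter *next = f->upstream;
        if (f->close != NULL)
            (f->close)(f);
        f = next;
    }
}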
Tim Kientzle
b1ff9c25b8 MfP4: Big read filter refactoring.
This is an attempt to eliminate a lot of redundant
code from the read ("decompression") filters by
changing them to juggle arbitrary-sized blocks
and consolidate reblocking code at a single point
in archive_read.c.

Along the way, I've changed the internal read/consume
API used by the format handlers to a slightly
different style originally suggested by des@.  It
does seem to simplify a lot of common cases.

The most dramatic change is, of course, to
archive_read_support_compression_none(), which
has just evaporated into a no-op as the blocking
code this used to hold has all been moved up
a level.

There's at least one more big round of refactoring
yet to come before the individual filters are as
straightforward as I think they should be...
2008-12-06 06:45:15 +00:00
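The peek/consume style referred to here lets a format handler ask for at least N contiguous bytes, parse them in place, and only then consume what it actually used.  A hedged sketch under assumed internal names (__archive_read_ahead() and __archive_read_consume() are the private helpers later format code uses; exact prototypes may differ in this revision):

/*
 * Sketch of the peek/consume pattern inside a format handler.
 */
static int
read_one_header(struct archive_read *a)
{
    const void *p;
    ssize_t avail;

    /* Peek: get a pointer to at least 512 contiguous bytes. */
    p = __archive_read_ahead(a, 512, &avail);
    if (p == NULL)
        return (ARCHIVE_FATAL);     /* truncated input */

    /* ... parse the 512-byte header found at p ... */

    /* Consume: tell the core those bytes are no longer needed. */
    __archive_read_consume(a, 512);
    return (ARCHIVE_OK);
}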
Tim Kientzle
b48b40f1f8 libarchive 2.2.3
* "compression_program" support uses an external program
  * Portability: no longer uses "struct stat" as a primary
    data interchange structure internally
  * Part of the above: refactor archive_entry to separate
    out copy_stat() and stat() functions
  * More complete tests for archive_entry
  * Finish archive_entry_clone()
  * Isolate major()/minor()/makedev() in archive_entry; remove
    these from everywhere else.
  * Bug fix: properly handle decompression look-ahead at end-of-data
  * Bug fixes to 'ar' support
  * Fix memory leak in ZIP reader
  * Portability: better timegm() emulation in iso9660 reader
  * New write_disk flags to suppress auto dir creation and not
    overwrite newer files (for future cpio front-end)
  * Simplify trailing-'/' fixup when writing tar and pax
  * Test enhancements:  fix various compiler warnings, improve
    portability, add lots of new tests.
  * Documentation: document new functions, first draft of
    libarchive_internals.3

MFC after: 14 days
Thanks to: Joerg Sonnenberger (compression_program)
Thanks to: Kai Wang (ar)
Thanks to: Colin Percival (many small fixes)
Thanks to: Many others who sent me various patches and problem reports.
2007-05-29 01:00:21 +00:00
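Two of the items above map directly onto public API: archive_entry_copy_stat() fills an entry from a struct stat, and the new write_disk flags suppress automatic directory creation and skip files that are newer on disk.  A small example; the flag names are as found in current archive.h, and the combination is illustrative.

#include <sys/stat.h>
#include <archive.h>
#include <archive_entry.h>

static struct archive_entry *
entry_from_stat(const char *path)
{
    struct stat st;
    struct archive_entry *e;

    if (stat(path, &st) != 0)
        return (NULL);
    e = archive_entry_new();
    archive_entry_set_pathname(e, path);
    archive_entry_copy_stat(e, &st);   /* copies mode, times, ids, size, dev/ino */
    return (e);
}

static struct archive *
new_disk_writer(void)
{
    struct archive *ad = archive_write_disk_new();

    archive_write_disk_set_options(ad,
        ARCHIVE_EXTRACT_TIME |
        ARCHIVE_EXTRACT_NO_AUTODIR |            /* don't auto-create parent dirs */
        ARCHIVE_EXTRACT_NO_OVERWRITE_NEWER);    /* keep newer files already on disk */
    return (ad);
}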
Tim Kientzle
72654d08e1 From Joerg Sonnenberger: Fix a number of style gaffes,
including type puns and avoidable casts.
2007-04-05 05:18:16 +00:00
Tim Kientzle
f81da3e584 libarchive 2.0
* libarchive_test program exercises many of the core features
  * Refactored old "read_extract" into new "archive_write_disk", which
    uses archive_write methods to put entries onto disk.  In particular,
    you can now use archive_write_disk to create objects on disk
    without having an archive available.
  * Pushed some security checks from bsdtar down into libarchive, where
    they can be better optimized.
  * Rearchitected the logic for creating objects on disk to reduce
    the number of system calls.  Several common cases now use a
    minimum number of system calls.
  * Virtualized some internal interfaces to provide a clearer separation
    of read and write handling and make it simpler to override key
    methods.
  * New "empty" format reader.
  * Corrected return types (this ABI breakage required the "2.0" version bump)
  * Many bug fixes.
2007-03-03 07:37:37 +00:00
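A minimal sketch of the "create objects on disk without having an archive available" use of archive_write_disk.  The calls are the public ones, but the sequence is illustrative and error handling is omitted.

#include <archive.h>
#include <archive_entry.h>
#include <string.h>

static void
write_file_to_disk(const char *path, const char *data)
{
    struct archive *ad = archive_write_disk_new();
    struct archive_entry *e = archive_entry_new();
    size_t len = strlen(data);

    archive_entry_set_pathname(e, path);
    archive_entry_set_filetype(e, AE_IFREG);
    archive_entry_set_perm(e, 0644);
    archive_entry_set_size(e, (int64_t)len);

    archive_write_header(ad, e);            /* creates the file */
    archive_write_data(ad, data, len);      /* writes its contents */
    archive_write_finish_entry(ad);

    archive_entry_free(e);
    archive_write_finish(ad);               /* name of this era; newer code uses archive_write_free() */
}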
Tim Kientzle
63165a380d Fix the copyright notice; it was always intended to be
a vanilla 2-clause BSD license, but somehow some confusing
extra verbiage got copied from somewhere.

Also, update the copyright dates to 2007 for all of the files.

Prompted by: several questions about what those extra words really mean
2007-01-09 08:05:56 +00:00
Tim Kientzle
aa1eeda578 Portability and style fixes:
* Actually use the HAVE_<header>_H macros to conditionally include
    system headers.  They've been defined for a long time, but only
    used in a few places.  Now they're used pretty consistently
    throughout.
  * Fill in a lot of missing casts for conversions from void*.
    Although Standard C doesn't require this, some people have been
    trying to use C++ compilers with this code, and they do require it.

Bit-for-bit, the compiled object files are identical, except for
one assert() whose line number changed, so I'm pretty confident I
didn't break anything.  ;-)
2006-11-10 06:39:46 +00:00
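The two idioms described above look roughly like this; a generic sketch, not copied from the tree.

/*
 * Conditional system-header inclusion plus an explicit cast on the
 * void* returned by malloc(), which C++ compilers insist on.
 */
#include "config.h"         /* or archive_platform.h in-tree */

#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
#include <stdlib.h>

struct reader_state {
    unsigned char *buffer;
    size_t size;
};

static struct reader_state *
state_new(size_t size)
{
    /* The cast is redundant in Standard C but required by C++. */
    struct reader_state *s =
        (struct reader_state *)malloc(sizeof(struct reader_state));
    if (s == NULL)
        return (NULL);
    s->buffer = (unsigned char *)malloc(size);
    s->size = size;
    return (s);
}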
Tim Kientzle
693285bc87 Use 'skip' when ignoring data in tar archives. This dramatically
increases performance when extracting a single entry from a large
uncompressed archive, especially on slow devices such as USB hard
drives.

Requires a number of changes:
   * New archive_read_open2() supports a 'skip' client function
   * Old archive_read_open() is implemented as a wrapper now, to
     continue supporting the old API/ABI.
   * _read_open_fd and _read_open_file sprout new 'skip' functions.
   * compression layer gets a new 'skip' operation.
   * compression_none passes skip requests through to client.
   * compression_{gzip,bzip2,compress} simply ignore skip requests.

Thanks to: Benjamin Lutz, who designed and implemented the whole thing.
   I'm just committing it.  ;-)

TODO: Need to update the documentation a little bit.
2006-07-30 00:29:01 +00:00
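A hedged sketch of what a client 'skip' callback for archive_read_open2() might look like.  The exact callback typedef has changed over the years (off_t vs. int64_t), so treat the signature as an assumption; the point is that the client can satisfy a skip request with lseek() instead of reading and discarding bytes.

#include <sys/types.h>
#include <unistd.h>
#include <archive.h>

struct my_client {
    int fd;
};

static off_t
my_skip(struct archive *a, void *client_data, off_t request)
{
    struct my_client *c = (struct my_client *)client_data;

    (void)a;
    /* A regular file can seek; report how much was actually skipped. */
    if (lseek(c->fd, request, SEEK_CUR) < 0)
        return (0);     /* 0 means "couldn't skip"; the core falls back to reading */
    return (request);
}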
Tim Kientzle
f0e9186bf9 Correctly clean up if gzip format gets mis-identified as compress format.
(This can only happen in the pathological case where the client is
providing single-byte blocks.)
2005-11-08 07:42:42 +00:00
Tim Kientzle
5255e61f1a Refine the error-checking and reporting in the
"compress" format decompression code.  In particular,
distinguish between EOF and fatal data errors.
2004-10-17 23:40:10 +00:00
Tim Kientzle
57b665990a Eliminate reliance on non-portable <err.h> by implementing a very
simple errx() function.
Improve behavior when bzlib/zlib are missing by detecting and
issuing an error message on attempts to read gzip/bzip2 compressed
archives.
2004-08-14 03:45:45 +00:00
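A minimal errx()-style replacement along the lines described; a sketch rather than the committed code, with the function and variable names chosen here for illustration.

#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

static const char *progname = "bsdtar";   /* set from argv[0] in main() */

/* Print a message prefixed with the program name, then exit. */
static void
my_errx(int eval, const char *fmt, ...)
{
    va_list ap;

    fprintf(stderr, "%s: ", progname);
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
    fprintf(stderr, "\n");
    exit(eval);
}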
Tim Kientzle
72271236bb Add read-only support for .Z compressed archives. 2004-05-27 03:58:55 +00:00