general format of a zip file

-------------------------------------------------

editorial note:

this version was downloaded from the file appnote.txt

from the pkzip website on july 13, 1998.

to obtain this file go to: 

http://www.pkware.com/download.html

 

then download --> appnote.zip

end of editorial note

-------------------------------------------------

disclaimer

----------

although pkware will attempt to supply current and accurate

information relating to its file formats, algorithms, and the

subject programs, the possibility of error cannot be eliminated.

pkware therefore expressly disclaims any warranty that the

information contained in the associated materials relating to the

subject programs and/or the format of the files created or

accessed by the subject programs and/or the algorithms used by

the subject programs, or any other matter, is current, correct or

accurate as delivered.  any risk of damage due to any possible

inaccurate information is assumed by the user of the information.

furthermore, the information relating to the subject programs

and/or the file formats created or accessed by the subject

programs and/or the algorithms used by the subject programs is

subject to change without notice.

general format of a zip file

----------------------------

  files stored in arbitrary order.  large zipfiles can span multiple

  diskette media.

  overall zipfile format:

    [local file header + file data + data_descriptor] . . .

    [central directory] end of central directory record

  a.  local file header:

        local file header signature     4 bytes  (0x04034b50)

        version needed to extract       2 bytes

        general purpose bit flag        2 bytes

        compression method              2 bytes

        last mod file time              2 bytes

        last mod file date              2 bytes

        crc-32                          4 bytes

        compressed size                 4 bytes

        uncompressed size               4 bytes

        filename length                 2 bytes

        extra field length              2 bytes

        filename (variable size)

        extra field (variable size)
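      as a rough illustration, the fixed 30-byte portion of this header
      can be read with a short python sketch (the function name and the
      returned dict layout are illustrative, not part of the format):

```python
import struct

LOCAL_SIG = 0x04034b50

def parse_local_header(buf, offset=0):
    # the fixed part of the local file header is 30 bytes,
    # all fields little-endian (intel byte order)
    (sig, version, flags, method, mod_time, mod_date, crc,
     comp_size, uncomp_size, name_len, extra_len) = struct.unpack_from(
        "<IHHHHHIIIHH", buf, offset)
    if sig != LOCAL_SIG:
        raise ValueError("not a local file header")
    name = buf[offset + 30 : offset + 30 + name_len]
    extra = buf[offset + 30 + name_len : offset + 30 + name_len + extra_len]
    return {"version": version, "flags": flags, "method": method,
            "crc": crc, "compressed_size": comp_size,
            "uncompressed_size": uncomp_size, "filename": name,
            "extra": extra,
            # the file data begins right after the variable-size fields
            "data_offset": offset + 30 + name_len + extra_len}
```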

  b.  data descriptor:

        crc-32                          4 bytes

        compressed size                 4 bytes

        uncompressed size               4 bytes

      this descriptor exists only if bit 3 of the general

      purpose bit flag is set (see below).  it is byte aligned

      and immediately follows the last byte of compressed data.

      this descriptor is used only when it was not possible to

      seek in the output zip file, e.g., when the output zip file

      was standard output or a non-seekable device.

  c.  central directory structure:

      [file header] . . .  end of central dir record

      file header:

        central file header signature   4 bytes  (0x02014b50)

        version made by                 2 bytes

        version needed to extract       2 bytes

        general purpose bit flag        2 bytes

        compression method              2 bytes

        last mod file time              2 bytes

        last mod file date              2 bytes

        crc-32                          4 bytes

        compressed size                 4 bytes

        uncompressed size               4 bytes

        filename length                 2 bytes

        extra field length              2 bytes

        file comment length             2 bytes

        disk number start               2 bytes

        internal file attributes        2 bytes

        external file attributes        4 bytes

        relative offset of local header 4 bytes

        filename (variable size)

        extra field (variable size)

        file comment (variable size)

      end of central dir record:

        end of central dir signature    4 bytes  (0x06054b50)

        number of this disk             2 bytes

        number of the disk with the

        start of the central directory  2 bytes

        total number of entries in

        the central dir on this disk    2 bytes

        total number of entries in

        the central dir                 2 bytes

        size of the central directory   4 bytes

        offset of start of central

        directory with respect to

        the starting disk number        4 bytes

        zipfile comment length          2 bytes

        zipfile comment (variable size)
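      a reader typically locates this record first, by scanning backwards
      from the end of the file for its signature (the record is 22 bytes
      plus an optional zipfile comment of up to 65,535 bytes).  a minimal
      python sketch; names are illustrative:

```python
import struct

EOCD_SIG = b"PK\x05\x06"   # 0x06054b50, as it appears on disk (little-endian)

def find_eocd(data):
    # scan backwards: the record may be followed by a zipfile comment
    # of up to 65535 bytes, so the signature is not at a fixed offset
    for pos in range(len(data) - 22, -1, -1):
        if data[pos:pos + 4] == EOCD_SIG:
            (_, disk_no, cd_disk, disk_entries, total_entries,
             cd_size, cd_offset, comment_len) = struct.unpack_from(
                "<IHHHHIIH", data, pos)
            return {"disk_no": disk_no, "cd_disk": cd_disk,
                    "disk_entries": disk_entries,
                    "total_entries": total_entries,
                    "cd_size": cd_size, "cd_offset": cd_offset,
                    "comment": data[pos + 22 : pos + 22 + comment_len]}
    raise ValueError("end of central dir record not found")
```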

  d.  explanation of fields:

      version made by (2 bytes)

          the upper byte indicates the compatibility of the file

          attribute information.  if the external file attributes

          are compatible with ms-dos and can be read by pkzip for

          dos version 2.04g then this value will be zero.  if these

          attributes are not compatible, then this value will identify

          the host system on which the attributes are compatible.

          software can use this information to determine the line

          record format for text files etc.  the current

          mappings are:

          0 - ms-dos and os/2 (fat / vfat / fat32 file systems)

          1 - amiga                     2 - vax/vms

          3 - unix                      4 - vm/cms

          5 - atari st                  6 - os/2 h.p.f.s.

          7 - macintosh                 8 - z-system

          9 - cp/m                     10 - windows ntfs

         11 thru 255 - unused

          the lower byte indicates the version number of the

          software used to encode the file.  the value/10

          indicates the major version number, and the value

          mod 10 is the minor version number.
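          for example, splitting the two bytes apart (the sample value
          0x031e here is hypothetical):

```python
def split_version_made_by(value):
    host = value >> 8           # upper byte: attribute compatibility host
    encoder = value & 0xFF      # lower byte: encoding software version
    return host, encoder // 10, encoder % 10   # host, major, minor
```

          so a hypothetical value of 0x031e decodes as host system 3
          (unix) and software version 3.0.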

      version needed to extract (2 bytes)

          the minimum software version needed to extract the

          file, mapped as above.

      general purpose bit flag: (2 bytes)

          bit 0: if set, indicates that the file is encrypted.

          (for method 6 - imploding)

          bit 1: if the compression method used was type 6,

                 imploding, then this bit, if set, indicates

                 an 8k sliding dictionary was used.  if clear,

                 then a 4k sliding dictionary was used.

          bit 2: if the compression method used was type 6,

                 imploding, then this bit, if set, indicates

                 3 shannon-fano trees were used to encode the

                 sliding dictionary output.  if clear, then 2

                 shannon-fano trees were used.

          (for method 8 - deflating)

          bit 2  bit 1

            0      0    normal (-en) compression option was used.

            0      1    maximum (-ex) compression option was used.

            1      0    fast (-ef) compression option was used.

            1      1    super fast (-es) compression option was used.

          note:  bits 1 and 2 are undefined if the compression

                 method is any other.

          bit 3: if this bit is set, the fields crc-32, compressed size

                 and uncompressed size are set to zero in the local

                 header.  the correct values are put in the data descriptor

                 immediately following the compressed data.  (note: pkzip

                 version 2.04g for dos only recognizes this bit for method 8

                 compression, newer versions of pkzip recognize this bit

                 for any compression method.)

          the upper three bits are reserved and used internally

          by the software when processing the zipfile.  the

          remaining bits are unused.
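          a python sketch of the bit tests described above (the dict keys
          are illustrative names; the deflate option is only meaningful
          when the compression method is 8):

```python
def decode_flags(flags):
    # bits 1-2 are read with bit 1 as the low bit, so the table above
    # (bit2 bit1: 00/01/10/11) maps to option values 0 through 3
    deflate_option = ["normal", "maximum", "fast", "super fast"]
    return {
        "encrypted": bool(flags & 0x0001),                     # bit 0
        "deflate_option": deflate_option[(flags >> 1) & 0x3],  # bits 1-2
        "has_data_descriptor": bool(flags & 0x0008),           # bit 3
    }
```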

      compression method: (2 bytes)

          (see accompanying documentation for algorithm

          descriptions)

          0 - the file is stored (no compression)

          1 - the file is shrunk

          2 - the file is reduced with compression factor 1

          3 - the file is reduced with compression factor 2

          4 - the file is reduced with compression factor 3

          5 - the file is reduced with compression factor 4

          6 - the file is imploded

          7 - reserved for tokenizing compression algorithm

          8 - the file is deflated

          9 - reserved for enhanced deflating

         10 - pkware data compression library imploding

      date and time fields: (2 bytes each)

          the date and time are encoded in standard ms-dos format.

          if input came from standard input, the date and time are

          those at which compression was started for this data.
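          the ms-dos packing can be undone with simple shifts and masks;
          a python sketch:

```python
def decode_dos_datetime(dos_date, dos_time):
    # ms-dos date: bits 0-4 day, bits 5-8 month, bits 9-15 year since 1980
    # ms-dos time: bits 0-4 seconds/2, bits 5-10 minutes, bits 11-15 hours
    year = 1980 + (dos_date >> 9)
    month = (dos_date >> 5) & 0x0F
    day = dos_date & 0x1F
    hour = dos_time >> 11
    minute = (dos_time >> 5) & 0x3F
    second = (dos_time & 0x1F) * 2   # stored at two-second resolution
    return year, month, day, hour, minute, second
```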

      crc-32: (4 bytes)

          the crc-32 algorithm was generously contributed by

          david schwaderer and can be found in his excellent

          book "c programmers guide to netbios" published by

          howard w. sams & co. inc.  the 'magic number' for

          the crc is 0xdebb20e3.  the proper crc pre and post

          conditioning is used, meaning that the crc register

          is pre-conditioned with all ones (a starting value

          of 0xffffffff) and the value is post-conditioned by

          taking the one's complement of the crc residual.

          if bit 3 of the general purpose flag is set, this

          field is set to zero in the local header and the correct

          value is put in the data descriptor and in the central

          directory.
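          a bit-at-a-time sketch of this crc in python.  note that
          0xdebb20e3 quoted above is a check constant rather than the
          generator polynomial; the generator, in its bit-reversed form,
          is 0xedb88320, and the pre/post conditioning matches the text:

```python
def crc32_zip(data):
    crc = 0xFFFFFFFF                          # pre-condition: all ones
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xEDB88320  # reflected generator polynomial
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF                   # post-condition: one's complement
```

          this is the same crc-32 computed by zlib, so results can be
          cross-checked against zlib.crc32.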

      compressed size: (4 bytes)

      uncompressed size: (4 bytes)

          the size of the file compressed and uncompressed,

          respectively.  if bit 3 of the general purpose bit flag

          is set, these fields are set to zero in the local header

          and the correct values are put in the data descriptor and

          in the central directory.

      filename length: (2 bytes)

      extra field length: (2 bytes)

      file comment length: (2 bytes)

          the length of the filename, extra field, and comment

          fields respectively.  the combined length of any

          directory record and these three fields should not

          generally exceed 65,535 bytes.  if input came from standard

          input, the filename length is set to zero.

      disk number start: (2 bytes)

          the number of the disk on which this file begins.

      internal file attributes: (2 bytes)

          the lowest bit of this field indicates, if set, that

          the file is apparently an ascii or text file.  if not

          set, that the file apparently contains binary data.

          the remaining bits are unused in version 1.0.

      external file attributes: (4 bytes)

          the mapping of the external attributes is

          host-system dependent (see 'version made by').  for

          ms-dos, the low order byte is the ms-dos directory

          attribute byte.  if input came from standard input, this

          field is set to zero.

      relative offset of local header: (4 bytes)

          this is the offset from the start of the first disk on

          which this file appears, to where the local header should

          be found.

      filename: (variable)

          the name of the file, with optional relative path.

          the path stored should not contain a drive or

          device letter, or a leading slash.  all slashes

          should be forward slashes '/' as opposed to

          backwards slashes '\' for compatibility with amiga

          and unix file systems etc.  if input came from standard

          input, there is no filename field.

      extra field: (variable)

          this is for future expansion.  if additional information

          needs to be stored in the future, it should be stored

          here.  earlier versions of the software can then safely

          skip this file, and find the next file or header.  this

          field will be 0 length in version 1.0.

          in order to allow different programs and different types

          of information to be stored in the 'extra' field in .zip

          files, the following structure should be used for all

          programs storing data in this field:

          header1+data1 + header2+data2 . . .

          each header should consist of:

            header id - 2 bytes

            data size - 2 bytes

          note: all fields stored in intel low-byte/high-byte order.

          the header id field indicates the type of data that is in

          the following data block.

          header id's of 0 thru 31 are reserved for use by pkware.

          the remaining id's can be used by third party vendors for

          proprietary usage.

          the current header id mappings defined by pkware are:

          0x0007        av info

          0x0009        os/2

          0x000c        vax/vms

          0x000d        reserved for unix

          several third party mappings commonly used are:

          0x4b46        fwkcs md5 (see below)

          0x07c8        macintosh

          0x4341        acorn/sparkfs

          0x4453        windows nt security descriptor (binary acl)

          0x4704        vm/cms

          0x470f        mvs

          0x4c41        os/2 access control list (text acl)

          0x4d49        info-zip vms (vax or alpha)

          0x5455        extended timestamp

          0x5855        info-zip unix (original, also os/2, nt, etc)

          0x6542        beos/bebox

          0x756e        asi unix

          0x7855        info-zip unix (new)

          0xfd4a        sms/qdos

          the data size field indicates the size of the following

          data block. programs can use this value to skip to the

          next header block, passing over any data blocks that are

          not of interest.

          note: as stated above, the size of the entire .zip file

                header, including the filename, comment, and extra

                field should not exceed 64k in size.

          in case two different programs should appropriate the same

          header id value, it is strongly recommended that each

          program place a unique signature of at least two bytes in

          size (and preferably 4 bytes or bigger) at the start of

          each data area.  every program should verify that its

          unique signature is present, in addition to the header id

          value being correct, before assuming that it is a block of

          known type.
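          walking the header+data sequence is a simple loop over the
          2-byte id and 2-byte size pairs; a python sketch (the function
          name is illustrative):

```python
import struct

def iter_extra_blocks(extra):
    # walk the header1+data1, header2+data2, ... sequence; each header
    # is a 2-byte id followed by a 2-byte data size, little-endian
    pos = 0
    while pos + 4 <= len(extra):
        header_id, size = struct.unpack_from("<HH", extra, pos)
        yield header_id, extra[pos + 4 : pos + 4 + size]
        pos += 4 + size   # skip over this data block to the next header
```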

         -os/2 extra field:

          the following is the layout of the os/2 attributes "extra" block.

          (last revision  09/05/95)

          note: all fields stored in intel low-byte/high-byte order.

          value         size            description

          -----         ----            -----------

  (os/2)  0x0009        short           tag for this "extra" block type

          tsize         short           size for the following data block

          bsize         long            uncompressed block size

          ctype         short           compression type

          eacrc         long            crc value for uncompressed block

          (var)         variable        compressed block

        the os/2 extended attribute structure (fea2list) is compressed

        and then stored in its entirety within this structure.  there

        will only ever be one "block" of data in varfields[].

         -vax/vms extra field:

          the following is the layout of the vax/vms attributes "extra"

          block.  (last revision 12/17/91)

          note: all fields stored in intel low-byte/high-byte order.

          value         size            description

          -----         ----            -----------

  (vms)   0x000c        short           tag for this "extra" block type

          tsize         short           size of the total "extra" block

          crc           long            32-bit crc for remainder of the block

          tag1          short           vms attribute tag value #1

          size1         short           size of attribute #1, in bytes

          (var.)        size1           attribute #1 data

          .

          .

          .

          tagn          short           vms attribute tag value #n

          sizen         short           size of attribute #n, in bytes

          (var.)        sizen           attribute #n data

          rules:

          1. there will be one or more attributes present, which will

             each be preceded by the above tagx & sizex values.  these

             values are identical to the atr$c_xxxx and atr$s_xxxx constants

             which are defined in atr.h under vms c.  neither of these values

             will ever be zero.

          2. no word alignment or padding is performed.

          3. a well-behaved pkzip/vms program should never produce more than

             one sub-block with the same tagx value.  also, there will never

             be more than one "extra" block of type 0x000c in a particular

             directory record.

          - fwkcs md5 extra field:

          the fwkcs contents_signature system, used in

          automatically identifying files independent of filename,

          optionally adds and uses an extra field to support the

          rapid creation of an enhanced contents_signature:

              header id = 0x4b46

              data size = 0x0013

              preface   = 'm','d','5'

              followed by 16 bytes containing the uncompressed

                  file's 128_bit md5 hash(1), low byte first.

          when fwkcs revises a zipfile central directory to add

          this extra field for a file, it also replaces the

          central directory entry for that file's uncompressed

          filelength with a measured value.

          fwkcs provides an option to strip this extra field, if

          present, from a zipfile central directory. in adding

          this extra field, fwkcs preserves zipfile authenticity

          verification; if stripping this extra field, fwkcs

          preserves all versions of av through pkzip version 2.04g.

          fwkcs, and fwkcs contents_signature system, are

          trademarks of frederick w. kantor.

          (1) r. rivest, rfc1321.txt, mit laboratory for computer

              science and rsa data security, inc., april 1992.

              ll.76-77: "the md5 algorithm is being placed in the

              public domain for review and possible adoption as a

              standard."

      file comment: (variable)

          the comment for this file.

      number of this disk: (2 bytes)

          the number of this disk, which contains central

          directory end record.

      number of the disk with the start of the central directory: (2 bytes)

          the number of the disk on which the central

          directory starts.

      total number of entries in the central dir on this disk: (2 bytes)

          the number of central directory entries on this disk.

      total number of entries in the central dir: (2 bytes)

          the total number of files in the zipfile.

      size of the central directory: (4 bytes)

          the size (in bytes) of the entire central directory.

      offset of start of central directory with respect to

      the starting disk number:  (4 bytes)

          offset of the start of the central directory on the

          disk on which the central directory starts.

      zipfile comment length: (2 bytes)

          the length of the comment for this zipfile.

      zipfile comment: (variable)

          the comment for this zipfile.

  d.  general notes:

      1)  all fields unless otherwise noted are unsigned and stored

          in intel low-byte:high-byte, low-word:high-word order.

      2)  string fields are not null terminated, since the

          length is given explicitly.

      3)  local headers should not span disk boundaries.  also, even

          though the central directory can span disk boundaries, no

          single record in the central directory should be split

          across disks.

      4)  the entries in the central directory may not necessarily

          be in the same order that files appear in the zipfile.

unshrinking - method 1

----------------------

shrinking is a dynamic ziv-lempel-welch compression algorithm

with partial clearing.  the initial code size is 9 bits, and

the maximum code size is 13 bits.  shrinking differs from

conventional dynamic ziv-lempel-welch implementations in several

respects:

1)  the code size is controlled by the compressor, and is not

    automatically increased when codes larger than the current

    code size are created (but not necessarily used).  when

    the decompressor encounters the code sequence 256

    (decimal) followed by 1, it should increase the code size

    read from the input stream to the next bit size.  no

    blocking of the codes is performed, so the next code at

    the increased size should be read from the input stream

    immediately after where the previous code at the smaller

    bit size was read.  again, the decompressor should not

    increase the code size used until the sequence 256,1 is

    encountered.

2)  when the table becomes full, total clearing is not

    performed.  rather, when the compressor emits the code

    sequence 256,2 (decimal), the decompressor should clear

    all leaf nodes from the ziv-lempel tree, and continue to

    use the current code size.  the nodes that are cleared

    from the ziv-lempel tree are then re-used, with the lowest

    code value re-used first, and the highest code value

    re-used last.  the compressor can emit the sequence 256,2

    at any time.

 

expanding - methods 2-5

-----------------------

the reducing algorithm is actually a combination of two

distinct algorithms.  the first algorithm compresses repeated

byte sequences, and the second algorithm takes the compressed

stream from the first algorithm and applies a probabilistic

compression method.

the probabilistic compression stores an array of 'follower

sets' s(j), for j=0 to 255, corresponding to each possible

ascii character.  each set contains between 0 and 32

characters, to be denoted as s(j)[0],...,s(j)[m], where m<32.

the sets are stored at the beginning of the data area for a

reduced file, in reverse order, with s(255) first, and s(0)

last.

the sets are encoded as { n(j), s(j)[0],...,s(j)[n(j)-1] },

where n(j) is the size of set s(j).  n(j) can be 0, in which

case the follower set for s(j) is empty.  each n(j) value is

encoded in 6 bits, followed by n(j) eight bit character values

corresponding to s(j)[0] to s(j)[n(j)-1] respectively.  if

n(j) is 0, then no values for s(j) are stored, and the value

for n(j-1) immediately follows.

immediately after the follower sets is the compressed data

stream.  the compressed data stream can be interpreted for the

probabilistic decompression as follows:

let last-character <- 0.

loop until done

    if the follower set s(last-character) is empty then

        read 8 bits from the input stream, and copy this

        value to the output stream.

    otherwise if the follower set s(last-character) is non-empty then

        read 1 bit from the input stream.

        if this bit is not zero then

            read 8 bits from the input stream, and copy this

            value to the output stream.

        otherwise if this bit is zero then

            read b(n(last-character)) bits from the input

            stream, and assign this value to i.

            copy the value of s(last-character)[i] to the

            output stream.

    assign the last value placed on the output stream to

    last-character.

end loop

b(n(j)) is defined as the minimal number of bits required to

encode the value n(j)-1.
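a sketch of b(n(j)) in python.  the b(1) edge case is not spelled out
above; this sketch pins it to 1 bit, and that choice should be treated
as an assumption:

```python
def b(n):
    # minimal number of bits required to encode the value n - 1,
    # where n is the follower-set size (1..32); b(1) is pinned to 1
    return max(1, (n - 1).bit_length())
```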

the decompressed stream from above can then be expanded to

re-create the original file as follows:

let state <- 0.

loop until done

    read 8 bits from the input stream into c.

    case state of

        0:  if c is not equal to dle (144 decimal) then

                copy c to the output stream.

            otherwise if c is equal to dle then

                let state <- 1.

        1:  if c is non-zero then

                let v <- c.

                let len <- l(v)

                let state <- f(len).

            otherwise if c is zero then

                copy the value 144 (decimal) to the output stream.

                let state <- 0

        2:  let len <- len + c

            let state <- 3.

        3:  move backwards d(v,c) bytes in the output stream

            (if this position is before the start of the output

            stream, then assume that all the data before the

            start of the output stream is filled with zeros).

            copy len+3 bytes from this position to the output stream.

            let state <- 0.

    end case

end loop

the functions f,l, and d are dependent on the 'compression

factor', 1 through 4, and are defined as follows:

for compression factor 1:

    l(x) equals the lower 7 bits of x.

    f(x) equals 2 if x equals 127 otherwise f(x) equals 3.

    d(x,y) equals the (upper 1 bit of x) * 256 + y + 1.

for compression factor 2:

    l(x) equals the lower 6 bits of x.

    f(x) equals 2 if x equals 63 otherwise f(x) equals 3.

    d(x,y) equals the (upper 2 bits of x) * 256 + y + 1.

for compression factor 3:

    l(x) equals the lower 5 bits of x.

    f(x) equals 2 if x equals 31 otherwise f(x) equals 3.

    d(x,y) equals the (upper 3 bits of x) * 256 + y + 1.

for compression factor 4:

    l(x) equals the lower 4 bits of x.

    f(x) equals 2 if x equals 15 otherwise f(x) equals 3.

    d(x,y) equals the (upper 4 bits of x) * 256 + y + 1.
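the four cases above follow one pattern, with the length field occupying
the lower (8 - factor) bits; a python sketch (the factory name make_fld
is illustrative, and f is applied to the already-masked length value, as
in the state machine above):

```python
def make_fld(factor):
    # factor is the compression factor, 1 through 4
    shift = 8 - factor
    mask = (1 << shift) - 1
    def l(x):            # low bits of x hold the (partial) length
        return x & mask
    def f(x):            # 2 means "read one more length byte" (state 2)
        return 2 if x == mask else 3
    def d(x, y):         # distance from the high bits of x plus byte y
        return (x >> shift) * 256 + y + 1
    return l, f, d
```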

imploding - method 6

--------------------

the imploding algorithm is actually a combination of two distinct

algorithms.  the first algorithm compresses repeated byte

sequences using a sliding dictionary.  the second algorithm is

used to compress the encoding of the sliding dictionary output,

using multiple shannon-fano trees.

the imploding algorithm can use a 4k or 8k sliding dictionary

size. the dictionary size used can be determined by bit 1 in the

general purpose flag word; a 0 bit indicates a 4k dictionary

while a 1 bit indicates an 8k dictionary.

the shannon-fano trees are stored at the start of the compressed

file. the number of trees stored is defined by bit 2 in the

general purpose flag word; a 0 bit indicates two trees stored, a

1 bit indicates three trees are stored.  if 3 trees are stored,

the first shannon-fano tree represents the encoding of the

literal characters, the second tree represents the encoding of

the length information, the third represents the encoding of the

distance information.  when 2 shannon-fano trees are stored, the

length tree is stored first, followed by the distance tree.

the literal shannon-fano tree, if present, is used to represent

the entire ascii character set, and contains 256 values.  this

tree is used to compress any data not compressed by the sliding

dictionary algorithm.  when this tree is present, the minimum

match length for the sliding dictionary is 3.  if this tree is

not present, the minimum match length is 2.

the length shannon-fano tree is used to compress the length part

of the (length,distance) pairs from the sliding dictionary

output.  the length tree contains 64 values, ranging from the

minimum match length, to 63 plus the minimum match length.

the distance shannon-fano tree is used to compress the distance

part of the (length,distance) pairs from the sliding dictionary

output. the distance tree contains 64 values, ranging from 0 to

63, representing the upper 6 bits of the distance value.  the

distance values themselves will be between 0 and the sliding

dictionary size, either 4k or 8k.

the shannon-fano trees themselves are stored in a compressed

format. the first byte of the tree data represents the number of

bytes of data representing the (compressed) shannon-fano tree

minus 1.  the remaining bytes represent the shannon-fano tree

data encoded as:

    high 4 bits: number of values at this bit length + 1. (1 - 16)

    low  4 bits: bit length needed to represent value + 1. (1 - 16)

the shannon-fano codes can be constructed from the bit lengths

using the following algorithm:

1)  sort the bit lengths in ascending order, while retaining the

    order of the original lengths stored in the file.

2)  generate the shannon-fano trees:

    code <- 0

    codeincrement <- 0

    lastbitlength <- 0

    i <- number of shannon-fano codes - 1   (either 255 or 63)

    loop while i >= 0

        code = code + codeincrement

        if bitlength(i) <> lastbitlength then

            lastbitlength=bitlength(i)

            codeincrement = 1 shifted left (16 - lastbitlength)

        shannoncode(i) = code

        i <- i - 1

    end loop

3)  reverse the order of all the bits in the above shannoncode()

    vector, so that the most significant bit becomes the least

    significant bit.  for example, the value 0x1234 (hex) would

    become 0x2c48 (hex).

4)  restore the order of shannon-fano codes as originally stored

    within the file.
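the four steps above, sketched in python.  the function returns
(code, bit length) pairs in the original value order; running it on the
bit lengths from the worked example that follows reproduces the
"order restored" column of the table:

```python
def build_shannon_fano(bit_lengths):
    n = len(bit_lengths)
    # step 1: sort ascending while remembering the original positions
    # (python's sort is stable, so equal lengths keep their file order)
    order = sorted(range(n), key=lambda i: bit_lengths[i])
    sorted_len = [bit_lengths[i] for i in order]
    # step 2: construct the 16-bit codes from the highest index down
    codes = [0] * n
    code = code_increment = last_bit_length = 0
    for i in range(n - 1, -1, -1):
        code += code_increment
        if sorted_len[i] != last_bit_length:
            last_bit_length = sorted_len[i]
            code_increment = 1 << (16 - last_bit_length)
        codes[i] = code
    # step 3: reverse the 16 bits, keeping only the significant ones
    def reverse16(v):
        r = 0
        for _ in range(16):
            r = (r << 1) | (v & 1)
            v >>= 1
        return r
    # step 4: restore the original order of the codes
    result = [None] * n
    for pos, orig in enumerate(order):
        result[orig] = (reverse16(codes[pos]) & ((1 << sorted_len[pos]) - 1),
                        sorted_len[pos])
    return result
```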

example:

    this example will show the encoding of a shannon-fano tree

    of size 8.  notice that the actual shannon-fano trees used

    for imploding are either 64 or 256 entries in size.

example:   0x02, 0x42, 0x01, 0x13

    the first byte, 0x02, indicates that 3 bytes of tree data follow
    (the count is stored minus 1).  decoding the

    bytes:

            0x42 = 5 codes of 3 bits long

            0x01 = 1 code  of 2 bits long

            0x13 = 2 codes of 4 bits long

    this would generate the original bit length array of:

    (3, 3, 3, 3, 3, 2, 4, 4)

    there are 8 codes in this table for the values 0 thru 7.  using the

    algorithm to obtain the shannon-fano codes produces:

                                  reversed     order     original

val  sorted   constructed code      value     restored    length

---  ------   -----------------   --------    --------    ------

0:     2      1100000000000000        11       101          3

1:     3      1010000000000000       101       001          3

2:     3      1000000000000000       001       110          3

3:     3      0110000000000000       110       010          3

4:     3      0100000000000000       010       100          3

5:     3      0010000000000000       100        11          2

6:     4      0001000000000000      1000      1000          4

7:     4      0000000000000000      0000      0000          4

the values in the val, order restored and original length columns

now represent the shannon-fano encoding tree that can be used for

decoding the shannon-fano encoded data.  how to parse the

variable length shannon-fano values from the data stream is beyond the

scope of this document.  (see the references listed at the end of

this document for more information.)  however, traditional decoding

schemes used for huffman variable length decoding, such as the

greenlaw algorithm, can be successfully applied.

the compressed data stream begins immediately after the

compressed shannon-fano data.  the compressed data stream can be

interpreted as follows:

loop until done

    read 1 bit from input stream.

    if this bit is non-zero then       (encoded data is literal data)

        if literal shannon-fano tree is present

            read and decode character using literal shannon-fano tree.

        otherwise

            read 8 bits from input stream.

        copy character to the output stream.

    otherwise                   (encoded data is sliding dictionary match)

        if 8k dictionary size

            read 7 bits for offset distance (lower 7 bits of offset).

        otherwise

            read 6 bits for offset distance (lower 6 bits of offset).

        using the distance shannon-fano tree, read and decode the

          upper 6 bits of the distance value.

        using the length shannon-fano tree, read and decode

          the length value.

        length <- length + minimum match length

        if length = 63 + minimum match length

            read 8 bits from the input stream,

            add this value to length.

        move backwards distance+1 bytes in the output stream, and

        copy length characters from this position to the output

        stream.  (if this position is before the start of the output

        stream, then assume that all the data before the start of

        the output stream is filled with zeros).

end loop
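the backwards-copy step, with its zero fill for positions before the
start of the output stream, can be sketched in python (copy_match is a
hypothetical helper name; decoding the shannon-fano values themselves
is omitted):

```python
def copy_match(out: bytearray, distance: int, length: int) -> None:
    # move backwards distance+1 bytes and copy length characters;
    # positions before the start of the output read as zero.
    pos = len(out) - (distance + 1)
    for _ in range(length):
        out.append(out[pos] if pos >= 0 else 0)
        pos += 1
```

because bytes are appended one at a time, a match may overlap its own
output, which is how short repeated runs are represented.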

tokenizing - method 7

--------------------

this method is not used by pkzip.

deflating - method 8

--------------------

the deflate algorithm is similar to the implode algorithm using

a sliding dictionary of up to 32k with secondary compression

from huffman/shannon-fano codes.

the compressed data is stored in blocks with a header describing

the block and the huffman codes used in the data block.  the header

format is as follows:

   bit 0: last block bit     this bit is set to 1 if this is the last

                             compressed block in the data.

   bits 1-2: block type

      00 (0) - block is stored - all stored data is byte aligned.

               skip bits until next byte, then next word = block length,

               followed by the one's complement of the block length word.

               remaining data in block is the stored data.

      01 (1) - use fixed huffman codes for literal and distance codes.

               lit code    bits             dist code   bits

               ---------   ----             ---------   ----

                 0 - 143    8                 0 - 31      5

               144 - 255    9

               256 - 279    7

               280 - 287    8

               literal codes 286-287 and distance codes 30-31 are never

               used but participate in the huffman construction.

      10 (2) - dynamic huffman codes.  (see expanding huffman codes)

      11 (3) - reserved - flag an "error in compressed data" condition if seen.
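reading the block header can be sketched in python, assuming the
standard deflate bit order (bits are packed starting at the least
significant bit of each byte; BitReader and read_block_header are
hypothetical names):

```python
class BitReader:
    """reads bits least-significant-bit first, as deflate packs them."""
    def __init__(self, data: bytes):
        self.data = data
        self.pos = 0  # position in bits
    def read(self, n: int) -> int:
        v = 0
        for i in range(n):
            v |= ((self.data[self.pos >> 3] >> (self.pos & 7)) & 1) << i
            self.pos += 1
        return v

def read_block_header(br: BitReader):
    bfinal = br.read(1)  # bit 0: last-block flag
    btype = br.read(2)   # bits 1-2: block type (0..3)
    return bfinal, btype
```

for example, a last block using fixed huffman codes begins with the
bits 1, 1, 0, i.e. a first byte of 0x03 when nothing precedes it.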

expanding huffman codes

-----------------------

if the data block is stored with dynamic huffman codes, the huffman

codes are sent in the following compressed format:

   5 bits: # of literal codes sent - 257 (257 - 286)

           all other codes are never sent.

   5 bits: # of dist codes - 1           (1 - 32)

   4 bits: # of bit length codes - 4     (4 - 19)

the huffman codes are sent as bit lengths and the codes are built as

described in the implode algorithm.  the bit lengths themselves are

compressed with huffman codes.  there are 19 bit length codes:

   0 - 15: represent bit lengths of 0 - 15

       16: copy the previous bit length 3 - 6 times.

           the next 2 bits indicate repeat length (0 = 3, ... ,3 = 6)

              example:  codes 8, 16 (+2 bits 11), 16 (+2 bits 10) will

                        expand to 12 bit lengths of 8 (1 + 6 + 5)

       17: repeat a bit length of 0 for 3 - 10 times. (3 bits of length)

       18: repeat a bit length of 0 for 11 - 138 times (7 bits of length)
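the run-length expansion performed by codes 16, 17 and 18 can be
sketched as follows (expand_bit_lengths is a hypothetical helper;
read_bits supplies the extra repeat-count bits):

```python
def expand_bit_lengths(symbols, read_bits):
    # symbols come from decoding with the bit length huffman tree;
    # read_bits(n) returns the n extra bits for codes 16, 17, 18.
    out = []
    for s in symbols:
        if s <= 15:
            out.append(s)            # a literal bit length 0-15
        elif s == 16:
            out.extend([out[-1]] * (3 + read_bits(2)))
        elif s == 17:
            out.extend([0] * (3 + read_bits(3)))
        else:  # 18
            out.extend([0] * (11 + read_bits(7)))
    return out
```

replaying the example above, codes 8, 16 (+2 bits 11), 16 (+2 bits 10)
expand to twelve bit lengths of 8.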

the lengths of the bit length codes are sent packed 3 bits per value

(0 - 7) in the following order:

   16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15
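reading the packed 3-bit lengths in this order can be sketched as
(CLC_ORDER and read_code_length_lengths are hypothetical names;
read_bits reads n bits from the stream):

```python
# the fixed transmission order of the 19 bit length codes
CLC_ORDER = [16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15]

def read_code_length_lengths(read_bits, count):
    # codes not sent keep a length of zero and drop out of the tree
    lengths = [0] * 19
    for i in range(count):
        lengths[CLC_ORDER[i]] = read_bits(3)
    return lengths
```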

the huffman codes should be built as described in the implode algorithm

except codes are assigned starting at the shortest bit length, i.e. the

shortest code should be all 0's rather than all 1's.  also, codes with

a bit length of zero do not participate in the tree construction.  the

codes are then used to decode the bit lengths for the literal and distance

tables.
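the zero-based canonical assignment can be sketched in python
(build_codes is a hypothetical name; it maps each symbol to a
(code, bit length) pair, with the shortest code all zeros):

```python
def build_codes(bit_lengths):
    # count how many codes exist at each bit length
    max_len = max(bit_lengths, default=0)
    bl_count = [0] * (max_len + 1)
    for l in bit_lengths:
        if l:
            bl_count[l] += 1
    # first code at each length: previous length's codes, shifted left
    code = 0
    next_code = [0] * (max_len + 1)
    for l in range(1, max_len + 1):
        code = (code + bl_count[l - 1]) << 1
        next_code[l] = code
    # assign codes in symbol order; zero-length symbols get no code
    codes = {}
    for sym, l in enumerate(bit_lengths):
        if l:
            codes[sym] = (next_code[l], l)
            next_code[l] += 1
    return codes
```

for the bit lengths [2, 1, 3, 3] this yields the codes 10, 0, 110
and 111 respectively.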

the bit lengths for the literal tables are sent first with the number

of entries sent described by the 5 bits sent earlier.  there are up

to 286 literal characters; the first 256 represent the respective 8

bit character, code 256 represents the end-of-block code, the remaining

29 codes represent copy lengths of 3 thru 258.  there are up to 30

distance codes representing distances from 1 thru 32k as described

below.

                             length codes

                             ------------

      extra             extra              extra              extra

 code bits length  code bits lengths  code bits lengths  code bits length(s)

 ---- ---- ------  ---- ---- -------  ---- ---- -------  ---- ---- ---------

  257   0     3     265   1   11,12    273   3   35-42    281   5  131-162

  258   0     4     266   1   13,14    274   3   43-50    282   5  163-194

  259   0     5     267   1   15,16    275   3   51-58    283   5  195-226

  260   0     6     268   1   17,18    276   3   59-66    284   5  227-257

  261   0     7     269   2   19-22    277   4   67-82    285   0    258

  262   0     8     270   2   23-26    278   4   83-98

  263   0     9     271   2   27-30    279   4   99-114

  264   0    10     272   2   31-34    280   4  115-130

                            distance codes

                            --------------

      extra           extra             extra               extra

 code bits dist  code bits  dist   code bits distance  code bits distance

 ---- ---- ----  ---- ---- ------  ---- ---- --------  ---- ---- --------

   0   0    1      8   3   17-24    16    7  257-384    24   11  4097-6144

   1   0    2      9   3   25-32    17    7  385-512    25   11  6145-8192

   2   0    3     10   4   33-48    18    8  513-768    26   12  8193-12288

   3   0    4     11   4   49-64    19    8  769-1024   27   12 12289-16384

   4   1   5,6    12   5   65-96    20    9 1025-1536   28   13 16385-24576

   5   1   7,8    13   5   97-128   21    9 1537-2048   29   13 24577-32768

   6   2   9-12   14   6  129-192   22   10 2049-3072

   7   2  13-16   15   6  193-256   23   10 3073-4096
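both tables transcribe directly into base-value and extra-bit arrays;
a sketch (the array and function names are hypothetical):

```python
# base values and extra-bit counts for length codes 257-285,
# transcribed from the length code table above
LEN_BASE  = [3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 15, 17, 19, 23, 27, 31,
             35, 43, 51, 59, 67, 83, 99, 115, 131, 163, 195, 227, 258]
LEN_EXTRA = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2,
             3, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 0]

# base values and extra-bit counts for distance codes 0-29
DIST_BASE  = [1, 2, 3, 4, 5, 7, 9, 13, 17, 25, 33, 49, 65, 97, 129, 193,
              257, 385, 513, 769, 1025, 1537, 2049, 3073, 4097, 6145,
              8193, 12289, 16385, 24577]
DIST_EXTRA = [0, 0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6,
              7, 7, 8, 8, 9, 9, 10, 10, 11, 11, 12, 12, 13, 13]

def decode_length(code, read_bits):
    i = code - 257
    return LEN_BASE[i] + (read_bits(LEN_EXTRA[i]) if LEN_EXTRA[i] else 0)

def decode_distance(code, read_bits):
    return DIST_BASE[code] + (read_bits(DIST_EXTRA[code]) if DIST_EXTRA[code] else 0)
```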

the compressed data stream begins immediately after the

compressed header data.  the compressed data stream can be

interpreted as follows:

do

   read header from input stream.

   if stored block

      skip bits until byte aligned

      read count and 1's complement of count

      copy count bytes of stored data to the output stream

   otherwise

      loop until end of block code sent

         decode literal character from input stream

         if literal < 256

            copy character to the output stream

         otherwise

            if literal = end of block

               break from loop

            otherwise

               decode the copy length (from the length code and its
               extra bits), then decode distance from input stream

               move backwards distance bytes in the output stream, and

               copy length characters from this position to the output

               stream.

      end loop

while not last block

if data descriptor exists

   skip bits until byte aligned

   read crc and sizes

endif
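the stored-block bookkeeping in the loop above can be sketched as
(read_stored_block is a hypothetical helper operating on data that is
already byte aligned):

```python
def read_stored_block(data: bytes, byte_pos: int):
    # a stored block holds a 16-bit count, its 1's complement,
    # and then count raw bytes, all byte aligned
    count = data[byte_pos] | (data[byte_pos + 1] << 8)
    ncount = data[byte_pos + 2] | (data[byte_pos + 3] << 8)
    if count != (~ncount) & 0xFFFF:
        raise ValueError("error in compressed data")
    start = byte_pos + 4
    return data[start:start + count], start + count
```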

decryption

----------

the encryption used in pkzip was generously supplied by roger

schlafly.  pkware is grateful to mr. schlafly for his expert

help and advice in the field of data encryption.

pkzip encrypts the compressed data stream.  encrypted files must

be decrypted before they can be extracted.

each encrypted file has an extra 12 bytes stored at the start of

the data area defining the encryption header for that file.  the

encryption header is originally set to random values, and then

itself encrypted, using three, 32-bit keys.  the key values are

initialized using the supplied encryption password.  after each byte

is encrypted, the keys are then updated using pseudo-random number

generation techniques in combination with the same crc-32 algorithm

used in pkzip and described elsewhere in this document.

the following are the basic steps required to decrypt a file:

1) initialize the three 32-bit keys with the password.

2) read and decrypt the 12-byte encryption header, further

   initializing the encryption keys.

3) read and decrypt the compressed data stream using the

   encryption keys.

step 1 - initializing the encryption keys

-----------------------------------------

key(0) <- 305419896

key(1) <- 591751049

key(2) <- 878082192

loop for i <- 0 to length(password)-1

    update_keys(password(i))

end loop

where update_keys() is defined as:

update_keys(char):

  key(0) <- crc32(key(0),char)

  key(1) <- key(1) + (key(0) & 000000ffh)

  key(1) <- key(1) * 134775813 + 1

  key(2) <- crc32(key(2),key(1) >> 24)

end update_keys

where crc32(old_crc,char) is a routine that given a crc value and a

character, returns an updated crc value after applying the crc-32

algorithm described elsewhere in this document.

step 2 - decrypting the encryption header

-----------------------------------------

the purpose of this step is to further initialize the encryption

keys, based on random data, to render a plaintext attack on the

data ineffective.

read the 12-byte encryption header into buffer, in locations

buffer(0) thru buffer(11).

loop for i <- 0 to 11

    c <- buffer(i) ^ decrypt_byte()

    update_keys(c)

    buffer(i) <- c

end loop

where decrypt_byte() is defined as:

unsigned char decrypt_byte()

    local unsigned short temp

    temp <- key(2) | 2

    decrypt_byte <- (temp * (temp ^ 1)) >> 8

end decrypt_byte

after the header is decrypted,  the last 1 or 2 bytes in buffer

should be the high-order word/byte of the crc for the file being

decrypted, stored in intel low-byte/high-byte order.  versions of

pkzip prior to 2.0 used a 2 byte crc check; a 1 byte crc check is

used in versions 2.0 and later.  this can be used to test whether the

password supplied is correct.

step 3 - decrypting the compressed data stream

----------------------------------------------

the compressed data stream can be decrypted as follows:

loop until done

    read a character into c

    temp <- c ^ decrypt_byte()

    update_keys(temp)

    output temp

end loop
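the three steps combine into a runnable python sketch (ZipCrypto and
crc32_update are hypothetical names; the crc update uses the usual
0xEDB88320 polynomial but without the initial/final inversions that
library routines such as zlib.crc32 apply):

```python
def crc32_update(crc: int, ch: int) -> int:
    # one byte of the crc-32 algorithm described elsewhere in this
    # document, applied without pre/post inversion
    crc ^= ch & 0xFF
    for _ in range(8):
        crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
    return crc

class ZipCrypto:
    def __init__(self, password: bytes):
        # step 1: initialize the three keys from the password
        self.keys = [305419896, 591751049, 878082192]
        for ch in password:
            self.update_keys(ch)

    def update_keys(self, ch: int) -> None:
        k = self.keys
        k[0] = crc32_update(k[0], ch)
        k[1] = (k[1] + (k[0] & 0xFF)) & 0xFFFFFFFF
        k[1] = (k[1] * 134775813 + 1) & 0xFFFFFFFF
        k[2] = crc32_update(k[2], (k[1] >> 24) & 0xFF)

    def decrypt_byte(self) -> int:
        # temp is an unsigned short, so key(2) is truncated to 16 bits
        temp = (self.keys[2] & 0xFFFF) | 2
        return ((temp * (temp ^ 1)) >> 8) & 0xFF

    def decrypt(self, data: bytes) -> bytes:
        # steps 2 and 3: feed this the 12-byte encryption header
        # first, then the compressed data stream
        out = bytearray()
        for c in data:
            p = c ^ self.decrypt_byte()
            self.update_keys(p)
            out.append(p)
        return bytes(out)
```

encryption is the mirror image: xor each plaintext byte with
decrypt_byte(), then update the keys with the plaintext byte, so a
decryptor initialized with the same password reproduces the keystream.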

in addition to the above mentioned contributors to pkzip and pkunzip,

i would like to extend special thanks to robert mahoney for suggesting

the extension .zip for this software.

references:

    fiala, edward r., and greene, daniel h., "data compression with

       finite windows",  communications of the acm, volume 32, number 4,

       april 1989, pages 490-505.

    held, gilbert, "data compression, techniques and applications,

                    hardware and software considerations",

       john wiley & sons, 1987.

    huffman, d.a., "a method for the construction of minimum-redundancy

       codes", proceedings of the ire, volume 40, number 9, september 1952,

       pages 1098-1101.

    nelson, mark, "lzw data compression", dr. dobbs journal, volume 14,

       number 10, october 1989, pages 29-37.

    nelson, mark, "the data compression book",  m&t books, 1991.

    storer, james a., "data compression, methods and theory",

       computer science press, 1988.

    welch, terry, "a technique for high-performance data compression",

       ieee computer, volume 17, number 6, june 1984, pages 8-19.

    ziv, j. and lempel, a., "a universal algorithm for sequential data

       compression", ieee transactions on information theory, volume 23,

       number 3, may 1977, pages 337-343.

    ziv, j. and lempel, a., "compression of individual sequences via

       variable-rate coding", ieee transactions on information theory,

       volume 24, number 5, september 1978, pages 530-536.

 

