A History of LZ Compression

Back in the 70's, the amount of data that people wanted to store started outstripping the media that people had to store it on.
Two computer scientists named A. Lempel and J. Ziv
published a paper called "A Universal Algorithm for Sequential Data
Compression" in the IEEE Transactions on Information Theory, May 1977.
This documented an algorithm that was subsequently referred to as LZ77. The basis for the LZ series of algorithms is the building of a dictionary of common phrases in a file. Instead of outputting characters, phrase numbers in this dictionary can be output, achieving compression when you can output the number 1832 (11 bits) instead of the string "foobarbaz" (72 bits). The decompressor needs nothing more than this sequence of output and a knowledge of how the dictionary is built to be able to build the dictionary itself, and therefore decompress the file. LZ77 is a window-based system: for each string that it compresses, it outputs an offset back into the input (go back N characters), a match length (copy this many characters), and then the first mismatching character.
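To make that concrete, here is a rough sketch of a naive LZ77-style compressor in Python. It only illustrates the (offset, length, next character) idea; the function name, window size and length cap are arbitrary choices of mine, and real implementations use much faster match-finding and pack the output into bits rather than a list of tuples.

    def lz77_compress(data, window=4096, max_len=15):
        """Naive LZ77: for each position, find the longest match in the
        window behind it and emit (distance back, match length, next char)."""
        i = 0
        output = []
        while i < len(data):
            best_dist, best_len = 0, 0
            for j in range(max(0, i - window), i):
                length = 0
                # Stop one byte early so there is always a "next character" to emit.
                while (length < max_len and i + length < len(data) - 1
                       and data[j + length] == data[i + length]):
                    length += 1
                if length > best_len:
                    best_dist, best_len = i - j, length
            output.append((best_dist, best_len, data[i + best_len]))
            i += best_len + 1
        return output

The decompressor just walks these triples, copying the given number of characters from that far back in its own output and then appending the literal character.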
Then, in 1978, they came up with a better algorithm (dubbed LZ78). LZ77/78 encoding remained the de-facto standard for data compression (along with some cool stuff like HuffmanCoding; that's how PKZIP works) until 1984, when Terry Welch published "A Technique for High-Performance Data Compression" in Computer magazine. He realised that with a dictionary that had all the possible characters in it already, you wouldn't have to output the mismatching character like in LZ78; you could output a phrase number that matched it. This algorithm is called LZW encoding (Lempel, Ziv, Welch).

How it works

The algorithm is really quite simple, and will be described here.

Compression
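As a sketch of what that compression loop looks like in Python (the function name is mine; a real implementation would cap the dictionary size and pack the codes into a bitstream rather than returning a list of integers):

    def lzw_compress(data):
        # First stage: seed the dictionary with every possible single byte.
        dictionary = {bytes([i]): i for i in range(256)}
        next_code = 256

        phrase = b""
        output = []
        for byte in data:
            candidate = phrase + bytes([byte])
            if candidate in dictionary:
                # The longer phrase is already known, so keep extending it.
                phrase = candidate
            else:
                # Emit the code for the longest known phrase, then add the
                # new, one-character-longer phrase to the dictionary.
                output.append(dictionary[phrase])
                dictionary[candidate] = next_code
                next_code += 1
                phrase = bytes([byte])
        if phrase:
            output.append(dictionary[phrase])
        return output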
You'll note that the first stage can be built into the while loop; I put it up the top to make it absolutely clear what to do.

Decompression
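Again as a hedged sketch in Python (same caveats as above), the matching decompressor rebuilds the dictionary as it decodes:

    def lzw_decompress(codes):
        dictionary = {i: bytes([i]) for i in range(256)}
        next_code = 256

        if not codes:
            return b""
        # The first code always names a single character.
        previous = dictionary[codes[0]]
        output = [previous]
        for code in codes[1:]:
            if code in dictionary:
                entry = dictionary[code]
            elif code == next_code:
                # The special case discussed below: a code we haven't built
                # yet must be the previous phrase plus its own first character.
                entry = previous + previous[:1]
            else:
                raise ValueError("corrupt LZW stream")
            output.append(entry)
            # Add the phrase the compressor would have added at this point.
            dictionary[next_code] = previous + entry[:1]
            next_code += 1
            previous = entry
        return b"".join(output)

A round trip like lzw_decompress(lzw_compress(b"abababab")) exercises that special branch, because the repeated phrase makes the compressor emit a code it has only just added to its own dictionary.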
The second case looks a bit strange; it is basically the case where you have a repeating phrase in your input file, so the code number you output is the newest code you added to the dictionary. Therefore, if you receive a number you haven't got yet, it must stand for the previous phrase plus the first character of that previous phrase. The easiest way to understand LZW is by looking at some worked examples. I won't reproduce them here, but you can find some (and a really good description of the algorithm) elsewhere.

Why we only talk about it in hushed terms
Something else that is important to understand is

(I just made it up.)

Part of CategoryAlgorithm