
File Data Compression

A few years ago I had a call from someone who wanted me to look at his new compression tool. I'm not sure what he really wanted - money, exposure, advice - doesn't matter, because in a very few seconds I knew he was a genuine nut-case. He claimed that he could take already compressed data and compress it again, and then again, and so on with no data loss. I laughed, and then he went into the typical explanation about how "they" don't want his discoveries published, and somewhere in there I hung up. Oh well.

There are two basic ways to compress data. One depends upon only having to deal with a subset of characters. For example, if you have a file consisting only of the eight characters "01234567", you can pack each character into three bits, which compresses the file by more than 50%. That's not particularly good compression, and it is obviously very limited in the data it can handle, so that sort of scheme isn't often used (though that's exactly what we are doing when we use bit maps).
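To make that concrete, here's a quick Python sketch of the idea. The alphabet and the names are just my own illustration, not any real tool's code: each symbol gets a 3-bit code and the codes are packed into bytes.

    ALPHABET = "01234567"   # the only characters this sketch knows about

    def pack3(text):
        bits = ""
        for ch in text:
            bits += format(ALPHABET.index(ch), "03b")   # 3-bit code per symbol
        bits += "0" * (-len(bits) % 8)                  # pad out to a whole byte
        return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

    packed = pack3("0123456701234567")
    print(len(packed), "bytes instead of 16")           # 6 bytes instead of 16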

The other approach looks for patterns. If we had "aaaabbbcddd", we could store that as "a4b3c1d3". If we have 30 "d" characters, we can store "d9d9d9d3". That can handle any character set, and if there is a lot of repetition, the compression can be pretty good. It's not a good general purpose compressor, though.
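Here's a minimal Python sketch of that run-length idea, capping each count at a single digit the way the "d9d9d9d3" example does (again, my own illustration, not anyone's real format):

    def rle(text):
        out = []
        i = 0
        while i < len(text):
            ch = text[i]
            run = 1
            while i + run < len(text) and text[i + run] == ch and run < 9:
                run += 1
            out.append(ch + str(run))        # e.g. "a4"
            i += run
        return "".join(out)

    print(rle("aaaabbbcddd"))    # a4b3c1d3
    print(rle("d" * 30))         # d9d9d9d3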

Suppose we knew that certain characters occur more often than others. In English text, "ETAOIN SHRDLU" supposedly represents the most frequently used characters (though the space would actually come first and there is disagreement about the other characters). That's 13 characters counting the space, so if we add two more, we can encode any of those in four bits and still have one code left over as an "escape" so that we can handle all the less popular characters: the 15 popular characters get packed into 4 bits each, and anything else takes 12 (the escape plus the real character). That sort of scheme is simple to implement, can handle any 8-bit character set, and can produce decent compression, though obviously never more than 50%.
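Here's a rough sketch of how that could look in Python. The 15-character table is an assumption made up for illustration - a real implementation would pick it from measured frequencies - and the sketch assumes 8-bit input. Common characters become a 4-bit code; everything else becomes the 4-bit escape followed by the full 8-bit character.

    COMMON = " etaoinshrdlucm"   # 15 "popular" characters - an assumption, not measured
    ESCAPE = 15                  # the 16th code means "a raw character follows"

    def encode_nibbles(text):
        nibbles = []
        for ch in text:
            if ch in COMMON:
                nibbles.append(COMMON.index(ch))              # 4 bits
            else:
                nibbles.append(ESCAPE)                        # 4-bit escape...
                nibbles.extend((ord(ch) >> 4, ord(ch) & 0xF)) # ...plus the 8-bit character
        if len(nibbles) % 2:
            nibbles.append(ESCAPE)   # pad; a real format would also store a length
        return bytes((nibbles[i] << 4) | nibbles[i + 1]
                     for i in range(0, len(nibbles), 2))

    data = "a shorter test string"
    print(len(data), "->", len(encode_nibbles(data)), "bytes")   # 21 -> 12 bytes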

We could take it further, though. Suppose that instead of common characters, we looked for common words. Pick the top 256, 512, or any other power of two words, subtract one (to leave room for an escape code), and you can replace a lot of text with a short run of bits. This is "dictionary" encoding. Anything not in the dictionary is output as the escape code, a length byte, and the original characters. Again, it's fairly simple to code, and the amount of compression can be pretty good.
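A rough sketch of that dictionary idea follows. The token values, escape byte and framing are all invented for illustration, and to keep it short the sketch simply throws word breaks away rather than storing them:

    from collections import Counter

    ESCAPE = 255   # token 255 means "literal word follows"

    def build_dictionary(text):
        # The 255 most common words get the one-byte tokens 0..254.
        return [w for w, _ in Counter(text.split()).most_common(255)]

    def encode_words(text, dictionary):
        out = bytearray()
        for word in text.split():
            if word in dictionary:
                out.append(dictionary.index(word))       # one-byte token
            else:
                raw = word.encode()
                out += bytes([ESCAPE, len(raw)]) + raw   # escape + length + literal
        return bytes(out)

    sample = "the quick brown fox jumps over the lazy dog " * 50
    d = build_dictionary(sample)
    print(len(sample), "->", len(encode_words(sample, d)), "bytes")   # 2200 -> 450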

But we're not always compressing English text, are we? Compression algorithms have to work with any kind of junk we toss at them. Therefore, they build their dictionaries using an adaptive approach - learning what is in the data as they go. Not as simple to code, of course, but this is how they do what they do. If you want to know more, http://www.fadden.com/techmisc/hdc/ takes a much more detailed look at how these algorithms work.
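The article above doesn't name a particular algorithm, but LZW is a classic example of an adaptive dictionary coder, so here's a bare-bones sketch of its compression side just to show the dictionary growing as the data goes by (decompression and packing the codes into bits are left out):

    def lzw_compress(data):
        # Start with every single byte in the dictionary, then grow it as we go.
        dictionary = {bytes([i]): i for i in range(256)}
        current = b""
        codes = []
        for byte in data:
            candidate = current + bytes([byte])
            if candidate in dictionary:
                current = candidate
            else:
                codes.append(dictionary[current])
                dictionary[candidate] = len(dictionary)   # learn the new pattern
                current = bytes([byte])
        if current:
            codes.append(dictionary[current])
        return codes

    print(lzw_compress(b"abababababababab"))   # 7 codes stand in for 16 bytes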







