Data Compression

Raw information to be transmitted is first encoded into a form suitable for transmission. When files are very large, transmission takes longer and latency rises, which degrades the overall performance of the system. With this in mind, engineers devised ways to compress data to achieve higher effective data rates and thus increase performance. Most compression schemes analyze the data itself so that quality is preserved as much as possible after compression. For example, a photo is composed of pixels, and if the photo is colored, each pixel carries three components (red, green, and blue). Under heavy compression, however, the photo may not be fully restored to its original quality, which can cancel out the performance gain of the higher data rate. Some compression schemes are openly documented and can be studied by anyone, but many are proprietary.

Run-length encoding compresses data by taking advantage of long runs of consecutive ones or consecutive zeros. Its algorithm identifies such runs and represents each one with a single bit value plus a count of how many times that bit repeats. This effectively reduces the number of bits, but only for data that actually contains long runs; applied to data without them, run-length encoding can increase the number of bits instead of decreasing it.
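As a minimal sketch of the idea, here is a run-length encoder and decoder for a bitstring; the function names are illustrative, not from any particular library:

```python
def rle_encode(bits):
    # Walk the string and collect (bit, run_length) pairs.
    runs = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

def rle_decode(runs):
    # Expand each (bit, count) pair back into a run of bits.
    return "".join(bit * count for bit, count in runs)

encoded = rle_encode("0000000011110000")
print(encoded)               # [('0', 8), ('1', 4), ('0', 4)]
print(rle_decode(encoded))   # 0000000011110000
```

Note how a 16-bit input collapses to three pairs, while an input like "0101" would expand to four pairs, which is the failure mode described above.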

Huffman encoding is based on the same idea as Morse code: it takes into account how frequently each character appears in the data, and the more frequently a character appears, the fewer bits are used to represent it. A more sophisticated relative of Huffman encoding is arithmetic encoding, which uses the probability of occurrence to determine the code for each character; it typically has an advantage of five to ten percent.
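The frequency-to-code-length idea can be sketched with a small Huffman code builder; this is an assumption-laden illustration using a standard min-heap construction, not any particular production codec:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    # Count symbol frequencies, then repeatedly merge the two
    # least-frequent nodes until one tree remains.
    freq = Counter(text)
    if len(freq) == 1:
        # Degenerate case: a single distinct symbol gets code "0".
        return {next(iter(freq)): "0"}
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)  # keeps tuple comparison from reaching the tree
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tiebreak, (left, right)))
        tiebreak += 1
    # Walk the tree: left edges append "0", right edges append "1".
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("aaaabbc")
```

With four 'a's, two 'b's, and one 'c', 'a' receives a 1-bit code while 'b' and 'c' receive 2-bit codes, matching the rule that the most frequent character gets the fewest bits.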

Delta encoding uses the idea that data samples are transmitted chronologically: if the difference between the next sample and the previous sample is known, the difference can be sent instead of the sample itself. Because successive samples are often close in value, the differences are small and can be represented with fewer bits.
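A minimal sketch of delta encoding, assuming integer samples (function names are illustrative): the first sample is sent as-is, and every later sample is replaced by its difference from the previous one.

```python
def delta_encode(samples):
    # Keep the first sample, then send successive differences.
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(deltas):
    # Rebuild each sample by accumulating the differences.
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

samples = [100, 101, 103, 102, 104]
print(delta_encode(samples))  # [100, 1, 2, -1, 2]
```

The deltas here fit in far fewer bits than the original three-digit values, which is where the savings come from.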

JPEG, from the Joint Photographic Experts Group, is a well-known compression scheme used for pictures. Its principle is to convert the data with a transform such as the Fourier or discrete cosine transform, which concentrates the image into a few dominant components; the minor components can then be ignored.

Posted 2010-12-14 and updated on Dec 14, 2010 7:26am by crisd



Since 2010 by Noel Allosa