Nine Algorithms That Changed the Future: The Ingenious Ideas That Drive Today's Computers

by John MacCormick


  Let's try and apply this idea to the coding system shown on the previous page. We already know that we can save the most effort by using abbreviations for things that are used frequently. Well, the letters “e” and “t” are the ones used most often in English, so let's try to use a shorter code for each of those letters. At the moment, “e” is 31 and “t” is 46—so it takes two digits to represent each of these letters. How about cutting them down to only one digit? Let's say “e” is now represented by the single digit 8, and “t” is 9. This is a great idea! Remember how we encoded the phrase “Meet your fiancé there.” earlier, using a total of 46 digits. Now we can do it as follows, using only 40 digits:

  M  e e t    y  o  u  r     f  i  a  n  c  é     t h  e r  e .

  13 8 8 9 00 51 41 47 44 00 32 35 27 40 29 82 00 9 34 8 44 8 66

  Unfortunately, there is a fatal flaw in this plan. Remember that the computer does not store the spaces between the individual letters. So the encoding doesn't really look like “13 8 8 9 00 51…44 8 66.” Instead it looks like “138890051…44866.” Can you see the problem yet? Concentrate on just the first five digits, which are 13889. Notice that the code 13 represents “M,” 8 represents “e,” and 9 represents “t,” so one way of decoding the digits 13889 is to split them up as 13-8-8-9, giving the word “Meet.” But 88 represents the accented symbol “ú,” so the digits 13889 might also be split up as 13-88-9, which represents “Mút.” In fact the situation is even worse, because 89 represents the slightly different accented symbol “ù,” so another possible split of 13889 is 13-8-89, representing “Meù.” There is absolutely no way to tell which of the three possible interpretations is correct.
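
  To see the ambiguity concretely, here is a small Python sketch (not from the book) that tries every possible way of splitting a digit string into codes. The tiny code table contains only the five codes mentioned above; the rest of the full table is left out.

    # Why the shortened codes are ambiguous: enumerate every valid split.
    # This toy table contains only the codes named in the text.
    CODES = {"13": "M", "8": "e", "9": "t", "88": "ú", "89": "ù"}

    def all_decodings(digits):
        # return every way of splitting the digit string into valid codes
        if digits == "":
            return [""]
        readings = []
        for code, letter in CODES.items():
            if digits.startswith(code):
                for rest in all_decodings(digits[len(code):]):
                    readings.append(letter + rest)
        return readings

    print(all_decodings("13889"))   # prints ['Meet', 'Meù', 'Mút']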

  Disaster! Our cunning plan to use shorter codes for the letters “e” and “t” has led to a coding system that doesn't work at all. Fortunately, it can be fixed with one more trick. The real problem is that whenever we see a digit 8 or 9, there is no way to tell if it is part of a one-digit code (for either “e” or “t”), or one of the two-digit codes that starts with 8 or 9 (for the various accented symbols like “á” and “è”). To solve this problem, we have to make a sacrifice: some of our codes will actually get longer. The ambiguous two-digit codes that start with 8 or 9 will become three-digit codes that do not start with 8 or 9. The table on page 114 shows one particular way of achieving this. Some of the punctuation characters are affected too, but we finally have a very nice situation: anything starting with a 7 is a three-digit code, anything starting with an 8 or 9 is a one-digit code, and anything starting with 0, 1, 2, 3, 4, 5, or 6 is the same two-digit code as before. So there is exactly one way to split up the digits 13889 now (13-8-8-9, representing “Meet”)—and this is true for any other correctly coded sequence of digits. All ambiguity has been removed, and our original message can be encoded like this:

  M  e e t    y  o  u  r     f  i  a  n  c  é      t h  e r  e .

  13 8 8 9 00 51 41 47 44 00 32 35 27 40 29 782 00 9 34 8 44 8 66

  The original encoding used 46 digits, and this uses only 41. This might seem like a small saving, but with a longer message the savings can be very significant. For example, the text of this book (that is, just the words, with images excluded) requires nearly 500 kilobytes of storage—that's half a million characters! But when compressed using the two tricks just described, the size is reduced to only 160 kilobytes, or less than one-third of the original.
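
  The new scheme is also easy to decode mechanically, because the first digit of each code tells you how long the code is. The following Python sketch (again, not from the book) decodes the 41-digit message above; the code values in its small table are simply read off the example, and the full table on page 114 would work the same way.

    # Decoding under the fixed scheme: "8" and "9" are complete one-digit
    # codes, any code beginning with "7" is three digits long, and every
    # other code is two digits long. The table lists only the codes that
    # appear in the example message above.
    CODES = {
        "13": "M", "8": "e", "9": "t", "00": " ", "51": "y", "41": "o",
        "47": "u", "44": "r", "32": "f", "35": "i", "27": "a", "40": "n",
        "29": "c", "782": "é", "34": "h", "66": ".",
    }

    def decode(digits):
        message = ""
        i = 0
        while i < len(digits):
            if digits[i] in "89":        # one-digit code
                length = 1
            elif digits[i] == "7":       # three-digit code
                length = 3
            else:                        # two-digit code
                length = 2
            message += CODES[digits[i:i + length]]
            i += length
        return message

    print(decode("13889005141474400323527402978200934844866"))
    # prints: Meet your fiancé there.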

  Summary: Where Did the Free Lunch Come From?

  At this point, we understand all the important concepts behind the creation of typical compressed ZIP files on a computer. Here's how it happens:

  Step 1. The original uncompressed file is transformed using the same-as-earlier trick, so that most of the repeated data in the file is replaced by much shorter instructions to go back and copy the data from somewhere else.

  Step 2. The transformed file is examined to see which symbols occur frequently. For example, if the original file was written in English, then the computer will probably discover that “e” and “t” are the two most common symbols. The computer then constructs a table like the one on the following page, in which frequently used symbols are given short numeric codes and rarely used symbols are given longer numeric codes.

  Step 3. The file is transformed again by translating its symbols directly into the numeric codes from Step 2.
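
  Step 1 is the same-as-earlier trick. As a rough illustration of the idea, here is a toy Python sketch (not the real ZIP/DEFLATE algorithm; the window size and minimum match length are arbitrary choices). The compressor scans the text and, whenever the next few characters have already appeared earlier, replaces them with a “go back this far and copy this many characters” instruction:

    MIN_MATCH = 4   # only bother with back-references at least this long

    def compress(text):
        output = []   # a mix of literal characters and (distance, length) pairs
        i = 0
        while i < len(text):
            best_len, best_dist = 0, 0
            # find the longest earlier stretch that matches the text starting at i
            for j in range(max(0, i - 255), i):
                length = 0
                while i + length < len(text) and text[j + length] == text[i + length]:
                    length += 1
                if length > best_len:
                    best_len, best_dist = length, i - j
            if best_len >= MIN_MATCH:
                output.append((best_dist, best_len))   # "go back and copy" instruction
                i += best_len
            else:
                output.append(text[i])                 # ordinary character
                i += 1
        return output

    def decompress(items):
        text = ""
        for item in items:
            if isinstance(item, tuple):
                distance, length = item
                # copy one character at a time, so a match may overlap itself
                for _ in range(length):
                    text += text[-distance]
            else:
                text += item
        return text

    message = "AB AB AB AB AB"
    packed = compress(message)
    print(packed)                            # ['A', 'B', ' ', (3, 11)]
    print(decompress(packed) == message)     # True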

  The table of numeric codes, computed in step 2, is also stored in the ZIP file—otherwise it would be impossible to decode (and hence decompress) the ZIP file later. Note that different uncompressed files will result in different tables of numeric codes. In fact, in a real ZIP file, the original file is broken up into chunks and each chunk can have a different numeric code table. All of this can be done efficiently and automatically, achieving excellent compression on many types of files.
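
  Step 2 is the shorter-symbol trick. Real ZIP files build their tables using Huffman coding over bits; the Python sketch below is a standard textbook Huffman construction, shown only to illustrate how frequent symbols end up with short codes, and it is not the actual DEFLATE table format.

    import heapq
    from collections import Counter

    def build_code_table(text):
        counts = Counter(text)
        # each heap entry: (total frequency, tie-breaker, {symbol: code so far})
        heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(counts.items())]
        heapq.heapify(heap)
        tie = len(heap)
        while len(heap) > 1:
            freq1, _, group1 = heapq.heappop(heap)   # the two rarest groups
            freq2, _, group2 = heapq.heappop(heap)
            merged = {sym: "0" + code for sym, code in group1.items()}
            merged.update({sym: "1" + code for sym, code in group2.items()})
            heapq.heappush(heap, (freq1 + freq2, tie, merged))
            tie += 1
        return heap[0][2]

    table = build_code_table("meet your fiancé there")
    for symbol, code in sorted(table.items(), key=lambda item: len(item[1])):
        print(repr(symbol), code)   # frequent symbols such as 'e' get the shortest codes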

  Numeric codes using the shorter-symbol trick. Changes to the previous table on page 111 are shown in bold. The codes for two common letters have been shortened, at the expense of lengthening the codes for a larger number of uncommon symbols. This results in a shorter total length for most messages.

  LOSSY COMPRESSION: NOT A FREE LUNCH, BUT A VERY GOOD DEAL

  So far, we have been talking about the type of compression known as lossless, because you can take a compressed file and reconstruct exactly the same file that you started with, without even one character or one punctuation mark being changed. In contrast, sometimes it is much more useful to use lossy compression, which lets you take a compressed file and reconstruct one that is very similar to the original, but not necessarily exactly the same. For example, lossy compression is used very frequently on files that contain images or audio data: as long as a picture looks the same to the human eye, it doesn't really matter whether the file that stores that picture on your computer is exactly the same as the file that stores it on your camera. And the same is true for audio data: as long as a song sounds the same to the human ear, it doesn't really matter whether the file storing that song on your digital music player is exactly the same as the file that stores that song on a compact disc.

  In fact, sometimes lossy compression is used in a much more extreme way. We have all seen low-quality videos and images on the internet in which the picture is blurry or the sound quality rather bad. This is the result of lossy compression being used in a more aggressive fashion to make the file size of the videos or images very small. The idea here is not that the video looks the same as the original to the human eye, but rather that it is at least recognizable. By tuning just how “lossy” the compression is, website operators can trade off between large, high-quality files that look and sound almost perfect, and low-quality files that have obvious defects but require much less bandwidth to transmit. You may have done the same thing on a digital camera, where you can usually choose different settings for the quality of images and videos. If you choose a high-quality setting, the number of pictures or videos you can store on the camera is smaller than when you choose a lower quality setting. That's because high-quality media files take up more space than low-quality ones. And it's all done by tuning just how “lossy” the compression is. In this section, we will find out a few of the tricks for doing this tuning.

  The Leave-It-Out Trick

  One simple and useful trick for lossy compression is to simply leave out some of the data. Let's take a look at how this “leave-it-out” trick works in the case of black-and-white pictures. First we need to understand a little about how black-and-white pictures are stored in a computer. A picture consists of a large number of small dots, called “pixels.” Each pixel has exactly one color, which could be black, white, or any shade of gray in between. Of course, we are not generally aware of these pixels because they are so small, but you can see the individual pixels if you look closely enough at a monitor or TV screen.

  In a black-and-white picture stored in a computer, each possible pixel color is represented by a number. For this example, let's assume that higher numbers represent whiter colors, with 100 being the highest. So 100 represents white, 0 represents black, 50 represents a medium shade of gray, 90 represents a light gray, and so on. The pixels are arranged in a rectangular array of rows and columns, with each pixel representing the color at some very small part of the picture. The total number of rows and columns tells you the “resolution” of the image. For example, many high-definition TV sets have a resolution of 1920 by 1080—that means there are 1920 columns of pixels and 1080 rows of pixels. The total number of pixels is found by multiplying 1920 by 1080, which gives over 2 million pixels! Digital cameras use the same terminology. A “megapixel” is just a fancy name for a million pixels. So a 5-megapixel camera has enough rows and columns of pixels so that when you multiply the number of rows by the number of columns, you get more than 5 million. When a picture is stored in a computer, it is just a list of numbers, one for each pixel.

  The picture of a house with a turret shown at the top left of the figure on the next page has a much lower resolution than a high-definition TV: only 320 by 240. Nevertheless, the number of pixels is still rather large (320 × 240 = 76,800), and the file that stores this picture in uncompressed form uses over 230 kilobytes of storage space. A kilobyte, by the way, is equivalent to about 1000 characters of text—roughly the size of a one-paragraph e-mail, for instance. Very approximately, then, the top-left picture, when stored as a file, requires the same amount of disk space as around 200 short e-mail messages.

  We can compress this file with the following extremely simple technique: ignore, or “leave out,” every second row of pixels and every second column of pixels. The leave-it-out trick really is that simple! In this case, it results in a picture with a smaller resolution of 160 by 120, shown below the original picture in the figure. The size of this file is only one-quarter of the original (about 57 kilobytes). This is because there are only one-quarter as many pixels—we reduced both the width and the height of the image by one-half. Effectively, the size of the image was reduced by 50% twice—once horizontally and once vertically—resulting in a final size that is only 25% of the original.

  And we can do this trick again. Take the new 160 by 120 image, and leave out every second row and column to get another new image, this time only 80 by 60—the result is shown at the bottom left of the figure. The image size is reduced by 75% again, resulting in a final file size of only 14 kilobytes. That's only about 6% of the original—some very impressive compression.
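
  In Python, the whole trick is one line: keep every second row and every second column. In the sketch below, a made-up 4-by-4 grid of numbers stands in for a real image, with the pixel values chosen purely for illustration.

    def leave_out(pixels):
        # keep every second row and, within each kept row, every second column
        return [row[::2] for row in pixels[::2]]

    # a made-up 4-by-4 grid of pixel values (0 = black, 100 = white)
    image = [
        [90, 90, 80, 80],
        [90, 90, 80, 80],
        [50, 50, 10, 10],
        [50, 50, 10, 10],
    ]

    smaller = leave_out(image)
    print(smaller)                # [[90, 80], [50, 10]]  (one-quarter of the pixels)
    print(leave_out(smaller))     # [[90]]  (applying the trick twice)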

  Compression using the leave-it-out trick. The left column shows the original image, and two smaller, reduced versions of this image. Each reduced image is computed by leaving out half of the rows and columns in the previous one. In the right column, we see the effect of decompressing the reduced images to the same size as the original. The reconstruction is not perfect and there are some noticeable differences between the reconstructions and the original.

  But remember, we are using lossy compression, so we don't get a free lunch this time. The lunch is cheap, but we do have to pay for it. Look at what happens when we take one of the compressed files and decompress it back to the original size. Because some of the rows and columns of pixels were deleted, the computer has to guess what the colors of those missing pixels should be. The simplest possible guess is to give any missing pixel the same color as one of its neighbors. Any choice of neighbor would work fine, but the examples shown here choose the pixel immediately above and to the left of the missing pixel.
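
  Here is a matching decompression step in the same sketchy spirit: the reduced image is blown back up to twice its width and height, and each missing pixel simply takes the value of the kept pixel above and/or to its left; in other words, every kept pixel fills a 2-by-2 block.

    def expand(pixels):
        # double the width and height: each kept pixel fills a 2-by-2 block,
        # so every missing pixel copies the kept pixel above and/or to its left
        result = []
        for row in pixels:
            doubled = []
            for value in row:
                doubled.extend([value, value])
            result.append(doubled)
            result.append(list(doubled))
        return result

    small = [[90, 80],
             [50, 10]]
    print(expand(small))
    # [[90, 90, 80, 80], [90, 90, 80, 80], [50, 50, 10, 10], [50, 50, 10, 10]]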

  The result of this decompression scheme is shown in the right-hand column of the figure. You can see that most of the visual features have been retained, but there is some definite loss of quality and detail, especially in complex areas like the tree, the turret's roof, and the fretwork in the gable of the house. Also, especially in the version decompressed from the 80 by 60 image, you can see some rather unpleasant jagged edges, for example, on the diagonal lines of the house's roof. These are what we call “compression artifacts”: not just a loss of detail, but noticeable new features that are introduced by a particular method of lossy compression followed by decompression.

  Although it's useful for understanding the basic idea of lossy compression, the leave-it-out trick is rarely used in the simple form described here. Computers do indeed “leave out” information to achieve lossy compression, but they are much more careful about which information they leave out. A common example of this is the JPEG image compression format. JPEG is a carefully designed image compression technique which has far better performance than leaving out every second row and column. Take a look at the figure on the facing page, and compare the quality and size of the images with the previous figure. At the top, we have a JPEG image whose size is 35 kilobytes, and yet it is virtually indistinguishable from the original image. By leaving out more information, but sticking with the JPEG format, we can get down to the 19-kilobyte image in the center, which still has excellent quality although you can see some blurring and loss of detail in the fretwork of the house. But even JPEG suffers from compression artifacts if the compression is too extreme: at the bottom you can see a JPEG image compressed down to only 12 kilobytes, and you'll notice some blocky effects in the sky and some unpleasant blotches in the sky right next to the diagonal line of the house.

  Although the details of JPEG's leave-it-out strategy are too technical to be described completely here, the basic flavor of the technique is fairly straightforward. JPEG first divides the whole image into small squares of 8 pixels by 8 pixels. Each of these squares is compressed separately. Note that without any compression, each square would be represented by 8 × 8 = 64 numbers. (We are assuming that the picture is black-and-white—if it is a color image, then there are three different colors and therefore three times as many numbers, but we won't worry about that detail here.) If the square happens to be all one color, the entire square can be represented by a single number, and the computer can “leave out” 63 numbers. If the square is mostly the same color, with a few very slight differences (perhaps a region of sky that is almost all the same shade of gray), the computer can decide to represent the square by a single number anyway, resulting in good compression for that square with only a small amount of error when it gets decompressed later. In the bottom image of the figure on the previous page, you can actually see some of the 8-by-8 blocks in the sky that have been compressed in exactly this way, resulting in small square blocks of uniform color.
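
  The following Python sketch captures only that flavor; it is not the real JPEG algorithm, and the tolerance value is made up for illustration. An 8-by-8 block is replaced by a single number (its average) whenever every pixel in the block is close to that average; otherwise all 64 numbers are kept.

    TOLERANCE = 3   # how much variation we are willing to throw away (made up)

    def compress_block(block):
        # block is an 8-by-8 list of lists of pixel values (0 = black, 100 = white)
        values = [value for row in block for value in row]
        average = sum(values) / len(values)
        if all(abs(value - average) <= TOLERANCE for value in values):
            return round(average)        # 1 number instead of 64
        return block                     # keep all 64 numbers

    # an almost-uniform patch of sky: compresses to a single number
    sky = [[78, 79, 80, 80, 79, 78, 80, 79] for _ in range(8)]
    print(compress_block(sky))           # 79

    # a block with a sharp edge in it: kept as-is
    edge = [[0] * 4 + [100] * 4 for _ in range(8)]
    print(compress_block(edge) is edge)  # True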

  With lossy compression schemes, higher compression produces lower quality. The same image is shown compressed at three different JPEG quality levels. At the top is the highest quality, which also requires the most storage. At the bottom is the lowest quality, which requires less than half the storage, but now there are noticeable compression artifacts—especially in the sky and along the border of the roof.

  If the 8-by-8 square varies smoothly from one color to another (say, dark gray on the left to light gray on the right), then the 64 numbers might be compressed down to just two: a value for the dark gray and a value for the light gray. The JPEG algorithm does not work exactly like this, but it uses the same ideas: if an 8-by-8 square is close enough to some combination of known patterns like a constant color or a smoothly varying color, then most of the information can be thrown away, and just the level or amount of each pattern is stored.
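
  Continuing the earlier sketch in the same hypothetical spirit (again, only the flavor of JPEG, not its real transform, and with a made-up tolerance), a block that blends smoothly from left to right can be stored as just its leftmost and rightmost values:

    def gradient_block(left, right, size=8):
        # rebuild a block that blends evenly from `left` to `right` across each row
        step = (right - left) / (size - 1)
        row = [round(left + step * column) for column in range(size)]
        return [list(row) for _ in range(size)]

    def compress_gradient(block, tolerance=3):
        left, right = block[0][0], block[0][-1]
        guess = gradient_block(left, right)
        close = all(abs(block[r][c] - guess[r][c]) <= tolerance
                    for r in range(8) for c in range(8))
        return (left, right) if close else block   # 2 numbers instead of 64

    shade = gradient_block(30, 72)       # dark gray on the left, light gray on the right
    print(compress_gradient(shade))      # (30, 72)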

  JPEG works well for pictures, but how about audio and music files? These are also compressed using lossy compression, and they use the same basic philosophy: leave out information that has little effect on the final product. Popular music compression formats, such as MP3 and AAC, generally use the same high-level approach as JPEG. The audio is divided into chunks, and each chunk is compressed separately. As with JPEG, chunks that vary in a predictable way can be described with only a few numbers. However, audio compression formats can also exploit known facts about the human ear. In particular, certain types of sounds have little or no effect on human listeners and can be eliminated by the compression algorithm without reducing the quality of the output.

  THE ORIGINS OF COMPRESSION ALGORITHMS

  The same-as-earlier trick described in this chapter—one of the main compression methods used in ZIP files—is known to computer scientists as the LZ77 algorithm. It was invented by two Israeli computer scientists, Abraham Lempel and Jacob Ziv, and published in 1977.

  To trace the origins of compression algorithms, however, we need to delve three decades further back into scientific history. We have already met Claude Shannon, the Bell Labs scientist who founded the field of information theory with his 1948 paper. Shannon was one of the two main heroes in our story of error-correcting codes (chapter 5), but he and his 1948 paper also figure importantly in the emergence of compression algorithms.

  This is no coincidence. In fact, error-correcting codes and compression algorithms are two sides of the same coin. It all comes down to the notion of redundancy, which featured quite heavily in chapter 5. If a file has redundancy, it is longer than necessary. To repeat a simple example from chapter 5, the file might use the word “five” instead of the numeral “5.” That way, an error such as “fivq” can be easily recognized and corrected. Thus, error-correcting codes can be viewed as a principled way of adding redundancy to a message or file.

  Compression algorithms do the opposite: they remove redundancy from a message or file. It's easy to imagine a compression algorithm that would notice the frequent use of the word “five” in a file and replace this with a shorter symbol (which might even be the symbol “5”), exactly reversing the error-correction encoding process. In practice, compression and error correction do not cancel each other out like this. Instead, good compression algorithms remove inefficient types of redundancy, while error-correction encoding adds a different, more efficient type of redundancy. Thus, it is very common to first compress a message and then add some error correction to it.

 
