
What Just Happened: A Chronicle From the Information Frontier


by James Gleick


  Shannon’s formulation of information theory seemed to invite researchers to look in a direction that he himself had not intended. He had declared, “The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point.” A psychologist could hardly fail to consider the case where the source of the message is the outside world and the receiver is the mind.

  Ears and eyes were to be understood as message channels, so why not test and measure them like microphones and cameras? “New concepts of the nature and measure of information,” wrote Homer Jacobson, a chemist at Hunter College in New York, “have made it possible to specify quantitatively the informational capacity of the human ear,”♦ and he proceeded to do so. Then he did the same for the eye, arriving at an estimate four hundred times greater, in bits per second. Many more subtle kinds of experiments were suddenly fair game, some of them directly suggested by Shannon’s work on noise and redundancy. A group in 1951 tested the likelihood that listeners would hear a word correctly when they knew it was one of just a few alternatives, as opposed to many alternatives.♦ It seemed obvious but had never been done. Experimenters explored the effect of trying to understand two conversations at once. They began considering how much information an ensemble of items contained—digits or letters or words—and how much could be understood or remembered. In standard experiments, with speech and buzzers and key pressing and foot tapping, the language of stimulus and response began to give way to transmission and reception of information.

  For a brief period, researchers discussed the transition explicitly; later it became invisible. Donald Broadbent, an English experimental psychologist exploring issues of attention and short-term memory, wrote of one experiment in 1958: “The difference between a description of the results in terms of stimulus and response, and a description in information theory terms, becomes most marked.… One could no doubt develop an adequate description of the results in S-R terms … but such a description is clumsy compared to the information theory description.”♦ Broadbent founded an applied psychology division at Cambridge University, and a flood of research followed, there and elsewhere, in the general realm of how people handle information: effects of noise on performance; selective attention and filtering of perception; short-term and long-term memory; pattern recognition; problem solving. And where did logic belong? To psychology or to computer science? Surely not just to philosophy.

  An influential counterpart of Broadbent’s in the United States was George Miller, who helped found the Center for Cognitive Studies at Harvard in 1960. He was already famous for a paper published in 1956 under the slightly whimsical title “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information.”♦ Seven seemed to be the number of items that most people could hold in working memory at any one time: seven digits (the typical American telephone number of the time), seven words, or seven objects displayed by an experimental psychologist. The number also kept popping up, Miller claimed, in other sorts of experiments. Laboratory subjects were fed sips of water with different amounts of salt, to see how many different levels of saltiness they could discriminate. They were asked to detect differences between tones of varying pitch or loudness. They were shown random patterns of dots, flashed on a screen, and asked how many (below seven, they almost always knew; above seven, they almost always estimated). In one way and another, the number seven kept recurring as a threshold. “This number assumes a variety of disguises,” he wrote, “being sometimes a little larger and sometimes a little smaller than usual, but never changing so much as to be unrecognizable.”

  Clearly this was a crude simplification of some kind; as Miller noted, people can identify any of thousands of faces or words and can memorize long sequences of symbols. To see what kind of simplification, he turned to information theory, and especially to Shannon’s understanding of information as a selection among possible alternatives. “The observer is considered to be a communication channel,” he announced—a formulation sure to appall the behaviorists who dominated the profession. Information is being transmitted and stored—information about loudness, or saltiness, or number. He explained about bits:

  One bit of information is the amount of information that we need to make a decision between two equally likely alternatives. If we must decide whether a man is less than six feet tall or more than six feet tall and if we know that the chances are 50-50, then we need one bit of information.…

  Two bits of information enable us to decide among four equally likely alternatives. Three bits of information enable us to decide among eight equally likely alternatives … and so on. That is to say, if there are 32 equally likely alternatives, we must make five successive binary decisions, worth one bit each, before we know which alternative is correct. So the general rule is simple: every time the number of alternatives is increased by a factor of two, one bit of information is added.

  The magical number seven is really just under three bits. Simple experiments measured discrimination, or channel capacity, in a single dimension; more complex measures arise from combinations of variables in multiple dimensions—for example, size, brightness, and hue. And people perform acts of what information theorists call “recoding,” grouping information into larger and larger chunks—for example, organizing telegraph dots and dashes into letters, letters into words, and words into phrases. By now Miller’s argument had become something in the nature of a manifesto. Recoding, he declared, “seems to me to be the very lifeblood of the thought processes.”
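
  (In symbols: the number of bits needed to choose among N equally likely alternatives is the base-two logarithm of N, so 32 alternatives cost log₂ 32 = 5 binary decisions, and seven alternatives cost log₂ 7 ≈ 2.81 bits, just under three.)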

  The concepts and measures provided by the theory of information provide a quantitative way of getting at some of these questions. The theory provides us with a yardstick for calibrating our stimulus materials and for measuring the performance of our subjects.… Informational concepts have already proved valuable in the study of discrimination and of language; they promise a great deal in the study of learning and memory; and it has even been proposed that they can be useful in the study of concept formation. A lot of questions that seemed fruitless twenty or thirty years ago may now be worth another look.

  This was the beginning of the movement called the cognitive revolution in psychology, and it laid the foundation for the discipline called cognitive science, combining psychology, computer science, and philosophy. Looking back, some philosophers have called this moment the informational turn. “Those who take the informational turn see information as the basic ingredient in building a mind,” writes Frederick Adams. “Information has to contribute to the origin of the mental.”♦ As Miller himself liked to say, the mind came in on the back of the machine.♦

  Shannon was hardly a household name—he never did become famous to the general public—but he had gained an iconic stature in his own academic communities, and sometimes he gave popular talks about “information” at universities and museums. He would explain the basic ideas; puckishly quote Matthew 5:37, “Let your communication be, Yea, yea; Nay, nay: for whatsoever is more than these cometh of evil” as a template for the notions of bits and of redundant encoding; and speculate about the future of computers and automata. “Well, to conclude,” he said at the University of Pennsylvania, “I think that this present century in a sense will see a great upsurge and development of this whole information business; the business of collecting information and the business of transmitting it from one point to another, and perhaps most important of all, the business of processing it.”♦

  With psychologists, anthropologists, linguists, economists, and all sorts of social scientists climbing aboard the bandwagon of information theory, some mathematicians and engineers were uncomfortable. Shannon himself called it a bandwagon. In 1956 he wrote a short warning notice—four paragraphs: “Our fellow scientists in many different fields, attracted by the fanfare and by the new avenues opened to scientific analysis, are using these ideas in their own problems.… Although this wave of popularity is certainly pleasant and exciting for those of us working in the field, it carries at the same time an element of danger.”♦ Information theory was in its hard core a branch of mathematics, he reminded them. He, personally, did believe that its concepts would prove useful in other fields, but not everywhere, and not easily: “The establishing of such applications is not a trivial matter of translating words to a new domain, but rather the slow tedious process of hypothesis and experimental verification.” Furthermore, he felt the hard slogging had barely begun in “our own house.” He urged more research and less exposition.

  As for cybernetics, the word began to fade. The Macy cyberneticians held their last meeting in 1953, at the Nassau Inn in Princeton; Wiener had fallen out with several of the group, who were barely speaking to him. Given the task of summing up, McCulloch sounded wistful. “Our consensus has never been unanimous,” he said. “Even had it been so, I see no reason why God should have agreed with us.”♦

  Throughout the 1950s, Shannon remained the intellectual leader of the field he had founded. His research produced dense, theorem-packed papers, pregnant with possibilities for development, laying foundations for broad fields of study. What Marshall McLuhan later called the “medium” was for Shannon the channel, and the channel was subject to rigorous mathematical treatment. The applications were immediate and the results fertile: broadcast channels and wiretap channels, noisy and noiseless channels, Gaussian channels, channels with input constraints and cost constraints, channels with feedback and channels with memory, multiuser channels and multiaccess channels. (When McLuhan announced that the medium was the message, he was being arch. The medium is both opposite to, and entwined with, the message.)

  CLAUDE SHANNON (1963) (Illustration credit 8.3)

  One of Shannon’s essential results, the noisy coding theorem, grew in importance, showing that error correction can effectively counter noise and corruption. At first this was just a tantalizing theoretical nicety; error correction requires computation, which was not yet cheap. But during the 1950s, work on error-correcting methods began to fulfill Shannon’s promise, and the need for them became apparent. One application was exploration of space with rockets and satellites; they needed to send messages very long distances with limited power. Coding theory became a crucial part of computer science, with error correction and data compression advancing side by side. Without it, modems, CDs, and digital television would not exist. For mathematicians interested in random processes, coding theorems are also measures of entropy.
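
  To make the flavor of error correction concrete, here is a minimal sketch in Python, using a three-fold repetition code rather than any of Shannon’s own constructions; the message and the single flipped bit are invented for illustration:

    # Each bit is sent three times; a majority vote at the receiver
    # corrects any single flipped bit per triple.
    def encode(bits):
        return [b for b in bits for _ in range(3)]

    def decode(received):
        out = []
        for i in range(0, len(received), 3):
            triple = received[i:i + 3]
            out.append(1 if sum(triple) >= 2 else 0)
        return out

    message = [1, 0, 1, 1]
    sent = encode(message)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
    sent[4] = 1                     # noise flips one bit of the second triple
    assert decode(sent) == message  # the majority vote recovers the message

  Repetition is the crudest such code (real systems use far more efficient schemes), but it shows the bargain Shannon’s theorem quantifies: redundant bits, spent wisely, buy back reliability.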

  Shannon, meanwhile, made other theoretical advances that planted seeds for future computer design. One discovery showed how to maximize flow through a network of many branches, where the network could be a communication channel or a railroad or a power grid or water pipes. Another was aptly titled “Reliable Circuits Using Crummy Relays” (though this was changed for publication to “… Less Reliable Relays”).♦ He studied switching functions, rate-distortion theory, and differential entropy. All this was invisible to the public, but the seismic tremors that came with the dawn of computing were felt widely, and Shannon was part of that, too.
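
  The flow problem can be stated concretely. What follows is a minimal sketch, using the later augmenting-path method of Edmonds and Karp rather than Shannon’s own argument, with an invented toy network of pipes:

    from collections import deque

    def max_flow(capacity, source, sink):
        # capacity: dict of dicts; capacity[u][v] is the remaining
        # capacity on the edge from u to v
        flow = 0
        while True:
            # breadth-first search for a path with spare capacity
            parent = {source: None}
            queue = deque([source])
            while queue and sink not in parent:
                u = queue.popleft()
                for v, c in capacity.get(u, {}).items():
                    if c > 0 and v not in parent:
                        parent[v] = u
                        queue.append(v)
            if sink not in parent:
                return flow  # no augmenting path remains
            # trace the path back, find its bottleneck, push flow through
            path, v = [], sink
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            bottleneck = min(capacity[u][v] for u, v in path)
            for u, v in path:
                capacity[u][v] -= bottleneck
                capacity.setdefault(v, {}).setdefault(u, 0)
                capacity[v][u] += bottleneck
            flow += bottleneck

    pipes = {'s': {'a': 3, 'b': 2}, 'a': {'t': 2}, 'b': {'t': 3}}
    print(max_flow(pipes, 's', 't'))  # 4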

  As early as 1948 he completed the first paper on a problem that, he said, “of course, is of no importance in itself”♦: how to program a machine to play chess. People had tried this before, beginning in the eighteenth and nineteenth centuries, when various chess automata toured Europe and were revealed every so often to have small humans hiding inside. In 1910 the Spanish mathematician and tinkerer Leonardo Torres y Quevedo built a real chess machine, entirely mechanical, called El Ajedrecista, that could play a simple three-piece endgame, king and rook against king.

  Shannon now showed that computers performing numerical calculations could be made to play a full chess game. As he explained, these devices, “containing several thousand vacuum tubes, relays, and other elements,” retained numbers in “memory,” and a clever process of translation could make these numbers represent the squares and pieces of a chessboard. The principles he laid out have been employed in every chess program since. In these salad days of computing, many people immediately assumed that chess would be solved: fully known, in all its pathways and combinations. They thought a fast electronic computer would play perfect chess, just as they thought it would make reliable long-term weather forecasts. Shannon made a rough calculation, however, and suggested that the number of possible chess games was more than 10^120—a number that dwarfs the age of the universe in nanoseconds. So computers cannot play chess by brute force; they must reason, as Shannon saw, along something like human lines.
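
  The reasoning he proposed rests on two ideas: score positions with a heuristic evaluation function, and look only a few moves ahead, assuming each side then plays its best reply. A minimal sketch of that minimax principle, with a tiny hand-built game tree standing in for chess (the tree and its scores are invented for illustration):

    # A position is either a number (a heuristic evaluation of a
    # terminal position) or a list of positions reachable in one move.
    def minimax(node, maximizing):
        if isinstance(node, (int, float)):
            return node
        scores = (minimax(child, not maximizing) for child in node)
        return max(scores) if maximizing else min(scores)

    tree = [[3, 5], [2, 9], [0, 7]]
    print(minimax(tree, maximizing=True))  # 3: the best guaranteed outcome

  Real programs generate the moves lazily from the rules of the game; the principle of alternating maxima and minima is the same.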

  He visited the American champion Edward Lasker in his apartment on East Twenty-third Street in New York, and Lasker offered suggestions for improvement.♦ When Scientific American published a simplified version of his paper in 1950, Shannon could not resist raising the question on everyone’s minds: “Does a chess-playing machine of this type ‘think’?”

  From a behavioristic point of view, the machine acts as though it were thinking. It has always been considered that skillful chess play requires the reasoning faculty. If we regard thinking as a property of external actions rather than internal method the machine is surely thinking.

  Nonetheless, as of 1952 he estimated that it would take three programmers working six months to enable a large-scale computer to play even a tolerable amateur game. “The problem of a learning chess player is even farther in the future than a preprogrammed type. The methods which have been suggested are obviously extravagantly slow. The machine would wear out before winning a single game.”♦ The point, though, was to look in as many directions as possible for what a general-purpose computer could do.

  He was exercising his sense of whimsy, too. He designed and actually built a machine to do arithmetic with Roman numerals: for example, IV times XII equals XLVIII. He dubbed this THROBAC I, an acronym for Thrifty Roman-numeral Backward-looking Computer. He created a “mind-reading machine” meant to play the child’s guessing game of odds and evens. What all these flights of fancy had in common was an extension of algorithmic processes into new realms—the abstract mapping of ideas onto mathematical objects. Later, he wrote thousands of words on scientific aspects of juggling♦—with theorems and corollaries—and included from memory a quotation from E. E. Cummings: “Some son-of-a-bitch will invent a machine to measure Spring with.”
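
  THROBAC’s trick, stated abstractly, is a mapping between notations. A toy in its spirit (not Shannon’s relay design) converts Roman numerals to integers, multiplies, and converts back:

    ROMAN = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'),
             (100, 'C'), (90, 'XC'), (50, 'L'), (40, 'XL'),
             (10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')]

    def to_int(roman):
        # consume the numeral greedily, largest symbols first
        value, i = 0, 0
        while i < len(roman):
            for n, symbol in ROMAN:
                if roman.startswith(symbol, i):
                    value, i = value + n, i + len(symbol)
                    break
        return value

    def to_roman(n):
        out = []
        for value, symbol in ROMAN:
            while n >= value:
                out.append(symbol)
                n -= value
        return ''.join(out)

    print(to_roman(to_int('IV') * to_int('XII')))  # XLVIII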

  In the 1950s Shannon was also trying to design a machine that would repair itself.♦ If a relay failed, the machine would locate and replace it. He speculated on the possibility of a machine that could reproduce itself, collecting parts from the environment and assembling them. Bell Labs was happy for him to travel and give talks on such things, often demonstrating his maze-learning machine, but audiences were not universally delighted. The word “Frankenstein” was heard. “I wonder if you boys realize what you’re toying around with there,” wrote a newspaper columnist in Wyoming.

  What happens if you switch on one of these mechanical computers but forget to turn it off before you leave for lunch? Well, I’ll tell you. The same thing would happen in the way of computers in America that happened to Australia with jack rabbits. Before you could multiply 701,945,240 by 879,030,546, every family in the country would have a little computer of their own.…

  Mr. Shannon, I don’t mean to knock your experiments, but frankly I’m not remotely interested in even one computer, and I’m going to be pretty sore if a gang of them crowd in on me to multiply or divide or whatever they do best.♦

  Two years after Shannon raised his warning flag about the bandwagon, a younger information theorist, Peter Elias, published a notice complaining about a paper titled “Information Theory, Photosynthesis, and Religion.”♦ There was, of course, no such paper. But there had been papers on information theory, life, and topology; information theory and the physics of tissue damage; and clerical systems; and psychopharmacology; and geophysical data interpretation; and crystal structure; and melody. Elias, whose father had worked for Edison as an engineer, was himself a serious specialist—a major contributor to coding theory. He mistrusted the softer, easier, platitudinous work flooding across disciplinary boundaries. The typical paper, he said, “discusses the surprisingly close relationship between the vocabulary and conceptual framework of information theory and that of psychology (or genetics, or linguistics, or psychiatry, or business organization).… The concepts of structure, pattern, entropy, noise, transmitter, receiver, and code are (when properly interpreted) central to both.” He declared this to be larceny. “Having placed the discipline of psychology for the first time on a sound scientific basis, the author modestly leaves the filling in of the outline to the psychologists.” He suggested his colleagues give up larceny for a life of honest toil.

  These warnings from Shannon and Elias appeared in one of the growing number of new journals entirely devoted to information theory.

  In these circles a notorious buzzword was entropy. Another researcher, Colin Cherry, complained, “We have heard of ‘entropies’ of languages, social systems, and economic systems and of its use in various method-starved studies. It is the kind of sweeping generality which people will clutch like a straw.”♦ He did not say, because it was not yet apparent, that information theory was beginning to change the course of theoretical physics and of the life sciences and that entropy was one of the reasons.

 
