What Just Happened: A Chronicle From the Information Frontier

by James Gleick


  A useful term of art emerged from computer science: namespace, a realm within which all names are distinct and unique. The world has long had namespaces based on geography and other namespaces based on economic niche. You could be Bloomingdale’s as long as you stayed out of New York; you could be Ford if you did not make automobiles. The world’s rock bands constitute a namespace, where Pretty Boy Floyd and Pink Floyd and Pink coexist, along with the 13th Floor Elevators and the 99th Floor Elevators and Hamadryad. Finding new names in this space becomes a challenge. The singer and songwriter long called simply “Prince” was given that name at birth; when he tired of it, he found himself tagged with a meta-name, “the Artist Formerly Known as Prince.” The Screen Actors Guild maintains a formal namespace of its own—only one Julia Roberts allowed. Traditional namespaces are overlapping and melting together. And many grow overcrowded.

  Pharmaceutical names are a special case: a subindustry has emerged to coin them, research them, and vet them. In the United States, the Food and Drug Administration reviews proposed drug names for possible collisions, and this process is complex and uncertain. Mistakes cause death. Methadone, for opiate dependence, has been administered in place of Metadate, for attention-deficit disorder, and Taxol, a cancer drug, for Taxotere, a different cancer drug, with fatal results. Doctors fear both look-alike errors and sound-alike errors: Zantac/Xanax; Verelan/Virilon. Linguists devise scientific measures of the “distance” between names. But Lamictal and Lamisil and Ludiomil and Lomotil are all approved drug names.

  In the corporate namespace, signs of overcrowding could be seen in the fading away of what might be called simple, meaningful names. No new company could be called anything like General Electric or First National Bank or International Business Machines. Similarly, A.1. Steak Sauce could only refer to a food product with a long history. Millions of company names exist, and vast sums of money go to professional consultants in the business of creating more. It is no coincidence that the spectacular naming triumphs of cyberspace verge on nonsense: Yahoo!, Google, Twitter.

  The Internet is not just a churner of namespaces; it is also a namespace of its own. Navigation around the globe’s computer networks relies on the special system of domain names, like COCA-COLA.COM. These names are actually addresses, in the modern sense of that word: “a register, location, or a device where information is stored.” The text encodes numbers; the numbers point to places in cyberspace, branching down networks, subnetworks, and devices. Although they are code, these brief text fragments also carry the great weight of meaning in the most vast of namespaces. They blend together features of trademarks, vanity license plates, postal codes, radio-station call letters, and graffiti. As with the telegraph code names, anyone could register a domain name, for a small fee, beginning in 1993. It was first come, first served. The demand exceeds the supply.
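
  The name-to-number mapping is easy to watch in action. The sketch below, in Python, is only an illustration: it assumes nothing beyond the standard library and uses COCA-COLA.COM simply because the text mentions it; whatever numeric address the lookup returns is wherever that name happens to point today.

    import socket

    # Ask the domain name system to turn a text name into a numeric address.
    name = "coca-cola.com"
    address = socket.gethostbyname(name)  # returns an IPv4 address as a string
    print(name, "->", address)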

  Too much work for short words. Many entities own “apple” trademarks, but there is only one APPLE.COM; when the domains of music and computing collided, so did the Beatles and the computer company. There is only one MCDONALDS.COM, and a journalist named Joshua Quittner registered it first. Much as the fashion empire of Giorgio Armani wanted ARMANI.COM, so did Anand Ramnath Mani of Vancouver, and he got there first. Naturally a secondary market emerged for trade in domain names. In 2006, one entrepreneur paid another entrepreneur $14 million for SEX.COM. By then nearly every word in every well-known language had been registered; so had uncountable combinations of words and variations of words—more than 100 million. It is a new business for corporate lawyers. A team working for DaimlerChrysler in Stuttgart, Germany, managed to wrest back MERCEDESSHOP.COM, DRIVEAMERCEDES.COM, DODGEVIPER.COM, CRYSLER.COM, CHRISLER.COM, CHRYSTLER.COM, and CHRISTLER.COM.

  The legal edifices of intellectual property were rattled. The response was a species of panic—a land grab in trademarks. As recently as 1980, the United States registered about ten thousand a year. Three decades later, the number approached three hundred thousand, jumping every year. The vast majority of trademark applications used to be rejected; now the opposite is true. All the words of the language, in all possible combinations, seem eligible for protection by governments. A typical batch of early twenty-first century United States trademarks: GREEN CIRCLE, DESERT ISLAND, MY STUDENT BODY, ENJOY A PARTY IN EVERY BOWL!, TECHNOLIFT, MEETINGS IDEAS, TAMPER PROOF KEY RINGS, THE BEST FROM THE WEST, AWESOME ACTIVITIES.

  The collision of names, the exhaustion of names—it has happened before, if never on this scale. Ancient naturalists knew perhaps five hundred different plants and, of course, gave each a name. Through the fifteenth century, that is as many as anyone knew. Then, in Europe, as printed books began to spread with lists and drawings, an organized, collective knowledge came into being, and with it, as the historian Brian Ogilvie has shown, the discipline called natural history.♦ The first botanists discovered a profusion of names. Caspar Ratzenberger, a student at Wittenberg in the 1550s, assembled a herbarium and tried to keep track: for one species he noted eleven names in Latin and German: Scandix, Pecten veneris, Herba scanaria, Cerefolium aculeatum, Nadelkrautt, Hechelkam, NadelKoerffel, Venusstrahl, Nadel Moehren, Schnabel Moehren, Schnabelkoerffel.♦ In England it would have been called shepherd’s needle or shepherd’s comb. Soon enough the profusion of species overtook the profusion of names. Naturalists formed a community; they corresponded, and they traveled. By the end of the century a Swiss botanist had published a catalogue of 6,000 plants.♦ Every naturalist who discovered a new one had the privilege and the responsibility of naming it; a proliferation of adjectives and compounds was inevitable, as were duplication and redundancy. To shepherd’s needle and shepherd’s comb were added, in English alone, shepherd’s bag, shepherd’s purse, shepherd’s beard, shepherd’s bedstraw, shepherd’s bodkin, shepherd’s cress, shepherd’s hour-glass, shepherd’s rod, shepherd’s gourd, shepherd’s joy, shepherd’s knot, shepherd’s myrtle, shepherd’s peddler, shepherd’s pouche, shepherd’s staff, shepherd’s teasel, shepherd’s scrip, and shepherd’s delight.

  Carl Linnaeus had yet to invent taxonomy; when he did, in the eighteenth century, he had 7,700 species of plants to name, along with 4,400 animals. Now there are about 300,000, not counting insects, which add millions more. Scientists still try to name them all: there are beetle species named after Barack Obama, Darth Vader, and Roy Orbison. Frank Zappa has lent his name to a spider, a fish, and a jellyfish.

  “The name of a man is like his shadow,”♦ said the Viennese onomatologist Ernst Pulgram in 1954. “It is not of his substance and not of his soul, but it lives with him and by him. Its presence is not vital, nor its absence fatal.” Those were simpler times.

  When Claude Shannon took a sheet of paper and penciled his outline of the measures of information in 1949, the scale went from tens of bits to hundreds to thousands, millions, billions, and trillions. The transistor was one year old and Moore’s law yet to be conceived. The top of the pyramid was Shannon’s estimate for the Library of Congress—one hundred trillion bits, 10¹⁴. He was about right, but the pyramid was growing.

  After bits came kilobits, naturally enough. After all, engineers had coined the word kilobuck—“a scientist’s idea of a short way to say ‘a thousand dollars,’”♦ The New York Times helpfully explained in 1951. The measures of information climbed up an exponential scale, as the realization dawned in the 1960s that everything to do with information would now grow exponentially. That idea was casually expressed by Gordon Moore, who had been an undergraduate studying chemistry when Shannon jotted his note and who later found his way to electronic engineering and the development of integrated circuits. In 1965, three years before he founded the Intel Corporation, Moore was merely, modestly suggesting that within a decade, by 1975, as many as 65,000 transistors could be combined on a single wafer of silicon. He predicted a doubling every year or two—a doubling of the number of components that could be packed on a chip, but then also, as it turned out, the doubling of all kinds of memory capacity and processing speed, a halving of size and cost, seemingly without end.
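
  Moore’s extrapolation was plain compound doubling, and it can be retraced in a few lines of Python. The 1965 starting figure of roughly 64 components per chip is an assumption made here for illustration, not a number from the text.

    # Compound doubling, once a year, from an assumed ~64 components in 1965.
    start_year, components = 1965, 64
    for year in range(start_year, 1976):
        print(year, components * 2 ** (year - start_year))
    # By 1975 the count reaches 64 * 2**10 = 65,536, the order of the 65,000
    # transistors Moore suggested.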

  Kilobits could be used to express speed of transmission as well as quantity of storage. As of 1972, businesses could lease high-speed lines carrying data as fast as 240 kilobits per second. Following the lead of IBM, whose hardware typically processed information in chunks of eight bits, engineers soon adopted the modern and slightly whimsical unit, the byte. Bits and bytes. A kilobyte, then, represented 8,000 bits; a megabyte (following hard upon), 8 million. In the order of things as worked out by international standards committees, mega- led to giga-, tera-, peta-, and exa-, drawn from Greek, though with less and less linguistic fidelity. That was enough, for everything measured, until 1991, when the need was seen for the zettabyte (1,000,000,000,000,000,000,000) and the inadvertently comic-sounding yottabyte (1,000,000,000,000,000,000,000,000). In this climb up the exponential ladder information left other gauges behind. Money, for example, is scarce by comparison. After kilobucks, there were megabucks and gigabucks, and people can joke about inflation leading to terabucks, but all the money in the world, all the wealth amassed by all the generations of humanity, does not amount to a petabuck.
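
  The arithmetic behind those prefixes is simple to retrace. A minimal Python sketch, taking each prefix as a power of a thousand and a byte as eight bits, as the text does:

    prefixes = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]
    for i, prefix in enumerate(prefixes, start=1):
        n_bytes = 1000 ** i  # kilobyte = 10**3 bytes, megabyte = 10**6 bytes, ...
        print(f"1 {prefix}byte = {n_bytes * 8:,} bits")
    # A kilobyte is 8,000 bits; a megabyte, 8 million; a yottabyte, 8 * 10**24.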

  The 1970s were the decade of megabytes. In the summer of 1970, IBM introduced two new computer models with more memory than ever before: the Model 155, with 768,000 bytes of memory, and the larger Model 165, with a full megabyte, in a large cabinet. One of these room-filling mainframes could be purchased for $4,674,160. By 1982 Prime Computer was marketing a megabyte of memory on a single circuit board, for $36,000. When the publishers of the Oxford English Dictionary began digitizing its contents in 1987 (120 typists; an IBM mainframe), they estimated its size at a gigabyte. A gigabyte also encompasses the entire human genome. A thousand of those would fill a terabyte. A terabyte was the amount of disk storage Larry Page and Sergey Brin managed to patch together with the help of $15,000 spread across their personal credit cards in 1998, when they were Stanford graduate students building a search-engine prototype, which they first called BackRub and then renamed Google. A terabyte is how much data a typical analog television station broadcasts daily, and it was the size of the United States government’s database of patent and trademark records when it went online in 1998. By 2010, one could buy a terabyte disk drive for a hundred dollars and hold it in the palm of one hand. The books in the Library of Congress represent about 10 terabytes (as Shannon guessed), and the number is many times more when images and recorded music are counted. The library now archives web sites; by February 2010 it had collected 160 terabytes’ worth.

  As the train hurtled onward, its passengers sometimes felt the pace foreshortening their sense of their own history. Moore’s law had looked simple on paper, but its consequences left people struggling to find metaphors with which to understand their experience. The computer scientist Jaron Lanier describes the feeling this way: “It’s as if you kneel to plant the seed of a tree and it grows so fast that it swallows your whole town before you can even rise to your feet.”♦

  A more familiar metaphor is the cloud. All that information—all that information capacity—looms over us, not quite visible, not quite tangible, but awfully real; amorphous, spectral; hovering nearby, yet not situated in any one place. Heaven must once have felt this way to the faithful. People talk about shifting their lives to the cloud—their informational lives, at least. You may store photographs in the cloud; Google will manage your business in the cloud; Google is putting all the world’s books into the cloud; e-mail passes to and from the cloud and never really leaves the cloud. All traditional ideas of privacy, based on doors and locks, physical remoteness and invisibility, are upended in the cloud.

  Money lives in the cloud; the old forms are vestigial tokens of knowledge about who owns what, who owes what. To the twenty-first century these will be seen as anachronisms, quaint or even absurd: bullion carried from shore to shore in fragile ships, subject to the tariffs of pirates and the god Poseidon; metal coins tossed from moving cars into baskets at highway tollgates and thereafter trucked about (now the history of your automobile is in the cloud); paper checks torn from pads and signed in ink; tickets for trains, performances, air travel, or anything at all, printed on weighty perforated paper with watermarks, holograms, or fluorescent fibers; and, soon enough, all forms of cash. The economy of the world is transacted in the cloud.

  Its physical aspect could not be less cloudlike. Server farms proliferate in unmarked brick buildings and steel complexes, with smoked windows or no windows, miles of hollow floors, diesel generators, cooling towers, seven-foot intake fans, and aluminum chimney stacks.♦ This hidden infrastructure grows in a symbiotic relationship with the electrical infrastructure it increasingly resembles. There are information switchers, control centers, and substations. They are clustered and distributed. These are the wheel-works; the cloud is their avatar.

  The information produced and consumed by humankind used to vanish—that was the norm, the default. The sights, the sounds, the songs, the spoken word just melted away. Marks on stone, parchment, and paper were the special case. It did not occur to Sophocles’ audiences that it would be sad for his plays to be lost; they enjoyed the show. Now expectations have inverted. Everything may be recorded and preserved, at least potentially: every musical performance; every crime in a shop, elevator, or city street; every volcano or tsunami on the remotest shore; every card played or piece moved in an online game; every rugby scrum and cricket match. Having a camera at hand is normal, not exceptional; something like 500 billion images were captured in 2010. YouTube was streaming more than a billion videos a day. Most of this is haphazard and unorganized, but there are extreme cases. The computer pioneer Gordon Bell, at Microsoft Research in his seventies, began recording every moment of his day, every conversation, message, document, a megabyte per hour or a gigabyte per month, wearing around his neck what he called a “SenseCam” to create what he called a “LifeLog.” Where does it end? Not with the Library of Congress.

  It is finally natural—even inevitable—to ask how much information is in the universe. It is the consequence of Charles Babbage and Edgar Allan Poe saying, “No thought can perish.” Seth Lloyd does the math. He is a moon-faced, bespectacled quantum engineer at MIT, a theorist and designer of quantum computers. The universe, by existing, registers information, he says. By evolving in time, it processes information. How much? To figure that out, Lloyd takes into account how fast this “computer” works and how long it has been working. Considering the fundamental limit on speed, 2E/πℏ operations per second (“where E is the system’s average energy above the ground state and ℏ = 1.0545 × 10⁻³⁴ joule-sec is Planck’s reduced constant”), and on memory space, limited by entropy to S/kB ln 2 (“where S is the system’s thermodynamic entropy and kB = 1.38 × 10⁻²³ joules/K is Boltzmann’s constant”), along with the speed of light and the age of the universe since the Big Bang, Lloyd calculates that the universe can have performed something on the order of 10¹²⁰ “ops” in its entire history.♦ Considering “every degree of freedom of every particle in the universe,” it could now hold something like 10⁹⁰ bits. And counting.
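
  Lloyd’s order of magnitude can be retraced roughly. In the Python sketch below, the universe’s total energy and age are assumed round inputs, not figures from the text, so the result is meant only to land in the neighborhood of Lloyd’s 10¹²⁰.

    import math

    hbar = 1.0545e-34  # Planck's reduced constant, joule-seconds (quoted above)
    E = 1e70           # assumed: rough mass-energy of the observable universe, joules
    age = 4.3e17       # assumed: age of the universe in seconds (~13.8 billion years)

    ops_per_second = 2 * E / (math.pi * hbar)  # the speed limit quoted above, 2E / (pi * hbar)
    total_ops = ops_per_second * age
    print(f"about 10^{math.log10(total_ops):.0f} ops")  # within an order of Lloyd's 10^120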

  15 | NEW NEWS EVERY DAY

  (And Such Like)

  Sorry for all the ups and downs of the web site in recent days. The way I understand it, freakish accumulations of ice weigh down the branches of the Internet and trucks carrying packets of information skid all over the place.

  —Andrew Tobias (2007)♦

  AS THE PRINTING PRESS, the telegraph, the typewriter, the telephone, the radio, the computer, and the Internet prospered, each in its turn, people said, as if for the first time, that a burden had been placed on human communication: new complexity, new detachment, and a frightening new excess. In 1962 the president of the American Historical Association, Carl Bridenbaugh, warned his colleagues that human existence was undergoing a “Great Mutation”—so sudden and so radical “that we are now suffering something like historical amnesia.”♦ He lamented the decline of reading; the distancing from nature (which he blamed in part on “ugly yellow Kodak boxes” and “the transistor radio everywhere”); and the loss of shared culture. Most of all, for the preservers and recorders of the past, he worried about the new tools and techniques available to scholars: “that Bitch-goddess, Quantification”; “the data processing machines”; as well as “those frightening projected scanning devices, which we are told will read documents and books for us.” More was not better, he declared:

  Notwithstanding the incessant chatter about communication that we hear daily, it has not improved; actually it has become more difficult.♦

  These remarks became well known in several iterations: first, the oral address, heard by about a thousand people in the ballroom of Conrad Hilton’s hotel in Chicago on the last Saturday evening of 1962;♦ next, the printed version in the society’s journal in 1963; and then, a generation later, an online version, with its far greater reach and perhaps greater durability as well.

  Elizabeth Eisenstein encountered the printed version in 1963, when she was teaching history as a part-time adjunct lecturer at American University in Washington (the best job she could get, as a woman with a Harvard Ph.D.). Later she identified that moment as the starting point of fifteen years of research that culminated in her landmark of scholarship, two volumes titled The Printing Press as an Agent of Change. Before Eisenstein’s work appeared in 1979, no one had attempted a comprehensive study of printing as the communications revolution essential to the transition from medieval times to modernity. Textbooks, as she noted, tended to slot the printing press somewhere between the Black Death and the discovery of America.♦ She placed Gutenberg’s invention at center stage: the shift from script to print; the rise of printing shops in the cities of fifteenth-century Europe; the transformation in “data collection, storage and retrieval systems and communications networks.”♦ She emphasized modestly that she would treat printing only as an agent of change, but she left readers convinced of its indispensable part in the transformations of early modern Europe: the Renaissance, the Protestant Reformation, and the birth of science. It was “a decisive point of no return in human history.”♦ It shaped the modern mind.

 
