Turing's Cathedral
Among those who foresaw the transformation was Swedish astrophysicist Hannes Alfvén, who remained as opposed to nuclear weapons as von Neumann and Teller were enamored of them. He was a founding member, and later president, of the Pugwash disarmament movement founded by Joseph Rotblat—the only Los Alamos physicist to quit work, in late 1944, in response to secret intelligence that the Germans were not making a serious effort to build an atomic bomb.
As a child, Alfvén had been given a copy of Camille Flammarion’s Popular Astronomy, where he learned what was known, and what was not known, about the solar system at the time. He then joined his school’s shortwave radio club, where he began to understand how much of the universe lay beyond the wavelengths of visible light, and how much consisted not of conventional solids, liquids, or gases, but of plasma—a fourth state of matter, where electrons were unbound. He was awarded the Nobel Prize in Physics in 1970 for his work in magnetohydrodynamics, a field he pioneered with a letter to Nature in 1942. The behavior of electromagnetic waves in solid conductors was well understood, while the behavior of electromagnetic waves in ionized plasma remained mysterious, whether within a star or in interstellar space. In any conducting fluid, including plasma, electrodynamics and hydrodynamics were coupled, and Alfvén put this relationship on solid mathematical and experimental ground. “Playing with mercury in the presence of a magnetic field of 10,000 gauss gives the general impression that the magnetic field has completely changed its hydrodynamic properties,” he explained in 1949.2
Alfvén’s cosmos was permeated by magnetohydrodynamic waves—now termed Alfvén waves—rendering “empty” space much less empty, and helping to explain phenomena ranging from the Aurora Borealis to sunspots to cosmic rays. He developed a detailed theory of the formation of the solar system, using electrodynamics to explain how the different planets coalesced. “To trace the origin of the solar system is archaeology, not physics,” he wrote in 1954.3
Alfvén also argued, without convincing the orthodoxy, that the large-scale structure of the universe might be hierarchical to infinity, rather than expanding from a single source. Such a universe—fulfilling Leibniz’s ideal of everything from nothing—would have an average density of zero but infinite mass. According to Alfvén, the “big bang” was based on wishful thinking. “They fight against popular creationism, but at the same time they fight fanatically for their own creationism,” he noted in 1984.4
Alfvén divided his later years between La Jolla, California, where he held a position as professor of physics at UC–San Diego, and the Royal Institute of Technology in Stockholm, where he had been appointed to the School of Electrical Engineering in 1940, just in time to witness the arrival of the computer age firsthand. Sweden’s BESK (Binär Elektronisk Sekvens Kalkylator) was a first-generation copy of the IAS machine, becoming operational in 1953. It had faster memory and arithmetic, partly through clever Swedish engineering (including the use of 400 germanium diodes) and partly through reducing the memory of each Williams tube to 512 bits.
“I saw the Swedish machine,” von Neumann reported to Klári from Stockholm in September 1954. “Very elegant, perhaps average 25% faster than ours, with only 500 words Williams memory and 4000 on a drum (this to be doubled) a Teletype input (fast, an electrical reader) and only a typewriter output (slow).”5 The construction of this machine left an indelible impression on Alfvén, which he eventually set down on paper in The Tale of the Big Computer: A Vision, published in Sweden in 1966 and in the United States in 1968.
“When one of my daughters had given me my first grandchild, she said to me: You are writing so many scientific papers and books, but why don’t you write something more sensible—a fairy tale for this little boy,” Alfvén recalled in 1981. Choosing a “monozygotic relative by the name of Olof Johannesson” as a pseudonym, Alfvén recounted, from an indefinite time in the future, a brief natural history of the origin and development of computers and their subsequent domination of life on Earth. “Life, which evolved into ever more complex structures, was nature’s substitute for directly bred computers,” he wrote. “Yet it was more than a substitute: it was a road—a winding road, yet one which despite all errors and hazards, arrived at last at its destination.”6
“I was Scientific Advisor to the Swedish government, and had access to their plans to restructure Swedish society, which obviously could be made much more efficient with the help of computers, in the same way as earlier inventions had relieved us of heavy physical work,” he added, explaining how he came to write the book. In Alfvén’s vision, computers quickly eliminated two of the world’s greatest threats: nuclear weapons and politicians. “When the computers developed, they would take over a good deal of the burden of the politicians, and sooner or later would also take over their power,” he explained. “This need not be done by an ugly coup d’état; they would simply systematically outwit the politicians. It might even take a long time before the politicians understood that they had been rendered powerless. This is not a threat to us.”7
“Computers are designed to be problem solvers, whereas the politicians have inherited the stone age syndrome of the tribal chieftains, who take for granted that they can rule their people only by making them hate and fight all other tribes,” Alfvén continued. “If we have the choice of being governed by problem generating trouble makers, or by problem solvers, every sensible man of course would prefer the latter.”8
The mathematicians who were designing and programming the growing computer network began to suspect that “the problem of organizing society is so highly complex as to be insoluble by the human brain, or even by many brains working in collaboration.” Their subsequent proof of the “Sociological Complexity Theorem” led to a decision to turn the organization of human society, and the management of its social networks, over to the machines.9 All individuals were issued a device called “teletotal,” connected to a global computer network with features similar to the Google and Facebook of today. “Teletotal threw a bridge between the thought world of the computer—which operated via pulse sequences at the speed of nano-seconds—and the thought world of the human brain, with its electrochemical nerve impulses,” Alfvén explained.10 “Since universal knowledge was stored in the memory units of the computers and was thus easily accessible to one and all, the gap between those who knew and those who did not was closed … and it was quite unnecessary to store any wisdom at all in the human brain.”11
Teletotal was followed by a miniaturized, wireless successor known as “minitotal,” later supplemented by “neurototal,” an implant kept “in permanent contact via VHF with the subject’s minitotal” and surgically inserted into a nerve channel for direct connection to the brain. Human technicians maintained the growing computer network, with the computers, in return, looking after the health and welfare of their human symbionts as carefully as the Swedish government does today. “Health factories” kept human beings in good repair, cities were abandoned in favor of a decentralized, telecommuting life, and “shops became superfluous, for the goods in them could be examined from the customer’s home.… If one wanted to buy something … one pressed the purchase button.”12
Then, one day, the entire system ground to a halt. A small group of humans had conspired to seize control of the network for themselves. “Factions had formed—just how many is unknown—and they fought each other for power,” Alfvén explained. “One group attempted to knock out its rivals by disorganizing their data systems, and was paid back in its own coin. The result was total disruption. How long the battle lasted we do not know. It must have been prepared over a long period, but the conflict itself may have taken less than one second. For computers this is a considerable time.”13
The failure was complete. With the network down, there was no way to distribute the instructions to bring it back up. "The breakdowns seem to have set in almost—or even precisely—at the same time all over the world, and it was evident that the international computer network was dead," Johannesson reported.14 "It was utter disaster. Within less than a year the greater part of the population had perished from hunger and privation.… Museums were plundered of [axes] and other tools."15
Society was slowly reconstructed from the ruins, and the computer system rebooted from backups preserved by a Martian outpost that had escaped the collapse. This time, the computers were given full control from the start, it being recognized that “Man had to be excluded altogether from the more important organizational tasks.”16 In the new society, the number of human beings was kept small. “A great number of data machines had been destroyed at the time of the disaster, yet their numbers had diminished by nothing compared with the proportion of human casualties.… Thus when they were put into action again, the proportion of computers to people was greatly increased.”17 Once the computers were running again, and equipped with facilities to repair and reproduce themselves, human beings became increasingly superfluous, and the story leaves off with Olof Johannesson wondering how large a human population will be preserved. “It is likely that they will at least reduce their numbers; but will this be done quickly or gradually? Will they retain a human colony and, if so, of what size?”18
Alfvén’s tale is now forgotten, but the future he envisioned has arrived. Data centers and server farms are proliferating in rural areas; “Android” phones with Bluetooth headsets are only one step away from neural implants; unemployment is pandemic among those not working on behalf of the machines. Facebook defines who we are, Amazon defines what we want, and Google defines what we think. Teletotal was the personal computer; minitotal is the iPhone; neurototal will be next. “How much human life can we absorb?” answers one of Facebook’s founders, when asked what the goal of the company really is.19 “We want Google to be the third half of your brain,” says Google cofounder Sergey Brin.20
The ability of computers to predict (and influence) how people will vote, with as much precision as the actual vote can be counted, has rendered politicians subservient to computers, much as Alfvén prescribed. Computers have no need for weapons to enforce their power, since, as Alfvén explained, they “control all production, and this would automatically stop in the event of an attempted revolt. The same is true of communications, so that if anyone should attempt anything so foolish as a revolt against the data machines, it could only be local in character. Lastly, man’s attitude to computers is a very positive one.”21 Recent developments have outpaced what even Alfvén could imagine—from the explosive growth of optical data networks (anticipated in the nineteenth century by optical telegraph networks in Sweden) to the dominance of virtual machines.
The progenitor of virtualization was Turing’s Universal Machine. Two-way translation between logical function and strings of symbols is no longer the mathematical abstraction it was in 1936. A single computer may host multiple, concurrent virtual machines; “apps” are coded sequences that locally implement a specific virtual machine on an individual device; Google’s one million (at last count) servers constitute a collective, metazoan organism whose physical manifestation changes from one instant to the next.
Virtual machines never sleep. Only one-third of a search engine is devoted to fulfilling search requests. The other two-thirds are divided between crawling (sending a host of single-minded digital organisms out to gather information) and indexing (building data structures from the results). The load shifts freely between the archipelagoes of server farms. Twenty-four hours a day, 365 days a year, algorithms with names such as BigTable, MapReduce, and Percolator are systematically converting the numerical address matrix into a content-addressable memory, effecting a transformation that constitutes the largest computation ever undertaken on planet Earth. We see only the surface of a search engine—by entering a search string and retrieving a list of addresses, with contents, that contain a match. The aggregate of all our random searches for meaningful strings of bits is a continuously updated mapping among content, meaning, and address space: a Monte Carlo process for indexing the matrix that underlies the World Wide Web.
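The indexing step described above—building data structures that map content back to addresses—can be sketched, in drastically simplified form, as a toy inverted index. This is an illustration only; the function names and three-document corpus are invented here, and real search indexes are incomparably larger and more sophisticated:

```python
# A toy inverted index: mapping words (content) back to the
# numerical addresses (document ids) where they occur.

def build_index(documents):
    """documents: {doc_id: text}. Returns {word: set of doc_ids}."""
    index = {}
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(doc_id)
    return index

def search(index, query):
    """Return the doc_ids whose contents contain every word in the query."""
    results = None
    for word in query.lower().split():
        hits = index.get(word, set())
        results = hits if results is None else results & hits
    return results or set()

docs = {
    1: "the quick brown fox",
    2: "the lazy brown dog",
    3: "a quick brown dog",
}
index = build_index(docs)
print(search(index, "brown dog"))  # documents 2 and 3
```

Entering a search string and retrieving the addresses whose contents match is, at bottom, exactly this inversion: the numerical address matrix turned into a content-addressable one.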
The address matrix that began, in 1951, with a single 40-floor hotel, with 1,024 rooms on every floor, has now expanded to billions of 64-floor hotels with billions of rooms, yet the contents are still addressed by numerical coordinates that have to be specified exactly, or everything comes to a halt. There is, however, another way of addressing memory, and that is to use an identifiable (but not necessarily unique) string within the contents of the specified block of memory as a template-based address.
Given access to content-addressable memory, codes based on instructions that say, “Do this with that”—without having to specify a precise location—will begin to evolve. The instructions may even say, “Do this with something like that”—without the template having to be exact. The first epoch in the digital era began with the introduction of the random-access storage matrix in 1951. The second era began with the introduction of the Internet. With the introduction of template-based addressing, a third era in computation has begun. What was once a cause for failure—not specifying a precise numerical address—will become a prerequisite to real-world success.
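The contrast between exact numerical addressing and template-based addressing can be made concrete with a toy memory. Everything below is illustrative: the block contents, the helper names, and the fuzzy-matching scheme (difflib similarity with an arbitrary cutoff) are inventions for this sketch, not a description of any real machine:

```python
import difflib

# A toy memory of blocks. Numerical addressing requires the exact
# coordinate; template-based addressing retrieves a block by an
# identifiable string within its contents -- and "do this with
# something like that" tolerates an inexact template.

memory = {
    0x00: "load accumulator from drum",
    0x01: "add contents of register two",
    0x02: "store result in Williams tube",
}

def fetch_exact(address):
    """Numerical addressing: the coordinate must be specified
    exactly, or everything comes to a halt."""
    return memory[address]  # raises KeyError on any mistake

def fetch_by_template(template, cutoff=0.5):
    """Template-based addressing: return addresses of blocks whose
    contents resemble the template, best match first."""
    scored = [(difflib.SequenceMatcher(None, template, text).ratio(), addr)
              for addr, text in memory.items()]
    return [addr for score, addr in sorted(scored, reverse=True)
            if score >= cutoff]

print(fetch_exact(0x01))                  # exact address required
print(fetch_by_template("store result"))  # best match is 0x02
```

The first call fails on any misspecified coordinate; the second succeeds even though the template is incomplete. That tolerance of imprecision is the shift from the second epoch to the third.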
The Monte Carlo method was invoked as a means of using statistical, probabilistic tools to identify approximate solutions to physical problems resistant to analytical approach. Since the underlying physical phenomena actually are probabilistic and statistical, the Monte Carlo approximation is often closer to reality than the analytical solutions that Monte Carlo was originally called upon to approximate. Template-based addressing and pulse-frequency coding are similarly closer to the way the world really works and, like Monte Carlo, will outperform methods that require address references or instruction strings to be exact. The power of the genetic code, as both Barricelli and von Neumann immediately recognized, lies in its ambiguity: exact transcription but redundant expression. In this lies the future of digital code.
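As a reminder of how the method works, here is the standard textbook illustration—estimating π by random sampling—standing in for the neutron-diffusion problems Monte Carlo was actually invented to attack. The sample count and seed are arbitrary choices for this sketch:

```python
import random

# Estimate pi by scattering random points in the unit square and
# counting how many fall inside the quarter circle of radius 1.
# The answer is statistical and approximate -- which, as with the
# physical problems Monte Carlo was built for, is often enough.

def estimate_pi(samples=1_000_000, seed=1951):
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / samples

print(estimate_pi())  # close to 3.14159; error shrinks as samples grow
```

No analytical formula is consulted; the statistics of the samples converge on the answer, just as the statistics of our random searches converge on a map of content, meaning, and address space.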
A fine line separates approximation from simulation, and developing a model is the better part of assuming control. So as not to shoot down commercial airliners, the SAGE (Semi-Automatic Ground Environment) air defense system that developed out of MIT’s Project Whirlwind in the 1950s kept track of all passenger flights, developing a real-time model that led to the SABRE (Semi-Automated Business Research Environment) airline reservation system that still controls much of the passenger traffic today. Google sought to gauge what people were thinking, and became what people were thinking. Facebook sought to map the social graph, and became the social graph. Algorithms developed to model fluctuations in financial markets gained control of those markets, leaving human traders behind. “Toto,” said Dorothy in The Wizard of Oz, “I’ve a feeling we’re not in Kansas anymore.”
What the Americans termed “artificial intelligence” the British termed “mechanical intelligence,” a designation that Alan Turing considered more precise. We began by observing intelligent behavior (such as language, vision, goal-seeking, and pattern-recognition) in organisms, and struggled to reproduce this behavior by encoding it into logically deterministic machines. We knew from the beginning that this logical, intelligent behavior evident in organisms was the result of fundamentally statistical, probabilistic processes, but we ignored that (or left the details to the biologists), while building “models” of intelligence—with mixed success.
Through large-scale statistical, probabilistic information processing, real progress is being made on some of the hard problems, such as speech recognition, language translation, protein folding, and stock market prediction—even if only for the next millisecond, now enough time to complete a trade. How can this be intelligence, since we are just throwing statistical, probabilistic horsepower at the problem, and seeing what sticks, without any underlying understanding? There’s no model. And how does a brain do it? With a model? These are not models of intelligent processes. They are intelligent processes.
The behavior of a search engine, when not actively conducting a search, resembles the activity of a dreaming brain. Associations made while “awake” are retraced and reinforced, while memories gathered while “awake” are replicated and moved around. William C. Dement, who helped make the original discovery of what became known as REM (rapid eye movement) sleep, did so while investigating newborn infants, who spend much of their time in dreaming sleep. Dement hypothesized that dreaming was an essential step in the initialization of the brain. Eventually, if all goes well, awareness of reality evolves from the internal dream—a state we periodically return to during sleep. “The prime role of ‘dreaming sleep’ in early life may be in the development of the central nervous system,” Dement announced in Science in 1966.22
Since the time of Leibniz, we have been waiting for machines to begin to think. Before Turing’s Universal Machines colonized our desktops, we had a less-encumbered view of the form in which true artificial intelligence would first appear. “Is it a fact—or have I dreamed it—that, by means of electricity, the world of matter has become a great nerve, vibrating thousands of miles in a breathless point of time?” asked Nathaniel Hawthorne in 1851. “Rather, the round globe is a vast head, a brain, instinct with intelligence! Or, shall we say, it is itself a thought, nothing but thought, and no longer the substance which we deemed it?” In 1950, Turing asked us to “consider the question, ‘Can machines think?’ ”23 Machines will dream first.
What about von Neumann’s question—whether machines would begin to reproduce? We gave digital computers the ability to modify their own coded instructions—and now they are beginning to exercise the ability to modify our own. Are we using digital computers to sequence, store, and better replicate our own genetic code, thereby optimizing human beings, or are digital computers optimizing our genetic code—and our way of thinking—so that we can better assist in replicating them?