Philadelphia, PA—The war department tonight unveiled “the world’s fastest calculating machine” and said the robot possibly opened the mathematical way to better living for every man.
Improved industrial products, better communication and transportation, superior weather forecasting and other advances in science and engineering may be made possible, the army said, from the development of “the first all-electronic general-purpose computer.”
The army described the machine as 1,000 times faster than the most advanced calculating machine previously built and declared the apparatus makes it possible “to solve in hours problems which would take years” on any other machine.
Does Everything
The machine, which can add, subtract, multiply, divide and compute square root, as well as do most complex calculations based on those operations, is called the “ENIAC”—short for “electronic numerical integrator and computer.” It also has been nicknamed the “mechanical Einstein.”10
The ENIAC didn’t come fast enough to help with the war, but it stayed in operation for nearly a decade, crunching firing tables, running atomic bomb calculations, and building weather models of the Soviet climate, including mapping the potential spread of fallout from a nuclear war.11 As powerful as it was, the ENIAC wasn’t enough.
To develop the computer and networking technology necessary to power a modern radar defense system, a special research division known as the Lincoln Laboratory was created. Attached to the Massachusetts Institute of Technology and based at a research campus ten miles northwest of Cambridge, the Lincoln Lab was a joint project of the navy, air force, army, and IBM. Its sole objective was to build a modern air defense system, and an astounding amount of resources was thrown at the effort. Thousands of civilian contractors and military personnel were involved over a ten-year period, and the software alone took about a thousand man-years to program.12 The entire project cost more than the Manhattan Project, the effort to develop the first atomic weapon.
The Lincoln Lab assembled a monster: the Semi-Automatic Ground Environment, or SAGE. It was the biggest computer system in history and the first real computer network. SAGE was controlled by two dozen “Direction Centers” located strategically around the country. These giant nuclear-proof concrete bunkers housed two IBM computers that together cost $4 billion in today’s dollars, weighed six hundred tons, and took up an acre of floor space; one was always on standby in case the other failed.13 Each control center employed hundreds of people and was connected to land-based and coastal radar arrays, missile silos, and nearby interceptor aircraft bases. The system could track up to four hundred airplanes in real time, scramble fighter jets, launch Nike missiles, and aim antiaircraft cannons.14 SAGE was the eyes, ears, and brains of a massive weapon. It was also the first nationwide computerized surveillance machine—surveillance in the broader sense: a system that collected information from remote sensors, analyzed it, and allowed the military to act on the intelligence it produced.
SAGE was an incredibly sophisticated machine, but in practice it was obsolete before it was ever turned on. It went online in the early 1960s, more than three years after the Soviet Union launched Sputnik and thereby demonstrated its intercontinental ballistic missile capability. The Soviets could shoot a nuclear payload into space and have it come down anywhere in the United States, and no fancy radar defense system could do anything about it.
On the surface, SAGE was a boondoggle. But in a bigger historical sense it was a phenomenal success. MIT Lincoln Laboratory—with its top-notch engineering talent and nearly limitless resources directed at a narrow set of problems—became more than just a research and development center for a single military project. It turned into a training ground for a new engineering elite: a multidisciplinary group of scientists, academics, government officials, businessmen, and mathematicians who would go on to create the modern computer industry and build the Internet.
And J. C. R. Licklider was at the center of it all. At the Lincoln Laboratory, he worked on the human side of this vast radar computer system and helped develop the system’s graphic display, which had to integrate data from multiple radars and to display real-time heading and speed information that could then be used to guide aircraft interceptors. It was a small but vital component of SAGE, and the work opened his eyes to the possibilities of building tools that integrated people and computers into one continuous system: a man-machine that broke through human physical limitations and created powerful new hybrid beings.
Cyborgs and Cybernetics
The Massachusetts Institute of Technology was ground zero for a new science called cybernetics. Developed by MIT professor Norbert Wiener, cybernetics defined the world as a giant computational machine. It offered a conceptual and mathematical framework for thinking about and designing complex information systems.
Wiener was an odd and brilliant man. He was short and pudgy, with a meaty round head and thick glasses. In his later years, he looked a bit like Hans Moleman from The Simpsons. He was also a true wunderkind. The son of a strict and ambitious academic and Slavic scholar, Wiener was forced to memorize entire books and recite them on demand and to perform complex algebra and trigonometry in his head.15 “My father would be doing his homework for Harvard and I had to stand beside him and recite my lessons by memory, even in Greek, at six years old, and he would ignore me until I made the simplest mistake, then he would verbally reduce me to dust,” he recalled in his autobiography.16
With this kind of training, Wiener went to college at the age of eleven—the “infant prodigy of Boston,” one newspaper called him—earned a PhD in mathematics by age eighteen, and, after being rejected for a job at Harvard, began teaching at MIT. His years of frantic study and pitiless criticism from his father didn’t prepare him for the social dimension of life: he was clumsy, couldn’t talk to women, had few true friends, was depressive, and could barely take care of himself.
His parents arranged his marriage to Margaret Engemann, an immigrant from Germany who had had trouble finding a husband. They had two normal daughters, and the marriage seemed fine, except for one little detail: Margaret was a steadfast supporter of Adolf Hitler and forced their daughters to read Mein Kampf. “One day she told us that the members of her family in Germany had been certified as Judenrein—‘free of Jewish taint.’ She thought we’d be pleased to know,” recalled her daughter. “She said I should not feel sorry for the Jews of Germany because they were not very nice people.” During a Christmas party, she tried to convince guests that Aryan lineage stretched back to the son of God himself: “Jesus was the son of a German mercenary stationed in Jerusalem, and this had been scientifically proven.” It was an awkward situation given that her husband was a Jew of German descent and her daughters were thereby half Jewish. But this was no ordinary household.
Wiener’s mind was perpetually hungry, devouring everything in its path. He crossed just about every disciplinary boundary, cutting through philosophy, mathematics, engineering, linguistics, physics, psychology, evolutionary biology, neurobiology, and computer science. During World War II, Wiener encountered a problem that tested the limits of his brilliant multidisciplinary brain. He was recruited to work on a quixotic top-secret venture to build an automatic aiming and targeting mechanism that could increase the effectiveness of ground-to-air antiaircraft cannons. All through the war, he worked on a specialized computer apparatus that used microwave radar to watch, pinpoint, and then predict a plane’s future position on the basis of its pilot’s actions in order to more effectively blast it out of the sky. It was a machine that studied the actions of a human being and responded dynamically to them. While building it, he had a profound insight about the nature of information. He began to see that the communication of information wasn’t just an abstract or ephemeral act but had a powerful physical property to it. Like an invisible force, it could be relied on to trigger a reaction. He then made a second, equally profound leap: he realized that communication and the transmission of messages were not limited to humans but pervaded all living organisms and could be designed into the mechanical world as well.
Wiener published these ideas in a dense 1948 tract called Cybernetics: Control and Communication in the Animal and the Machine. What was cybernetics? The concept was slippery and maddeningly difficult to define. In simple terms, he described cybernetics as the idea that the biological nervous system and the computer or automatic machine were basically the same thing. They were “devices which make decisions on the basis of decisions they have made in the past,” he explained.17 To Wiener, people and the entire living world could be seen as one giant interlocking information machine, everything responding to everything else in an intricate system of cause, effect, and feedback. He predicted that our lives would increasingly be mediated and enhanced by computers and integrated to the point that there would cease to be any difference between us and the larger cybernetic machine in which we lived.
Despite being full of incomprehensible mathematical proofs and jargon, the book excited the public’s imagination and became an instant best seller. Military circles received it as a revolutionary work as well. What Karl Marx’s Das Kapital did for nineteenth-century socialists, Wiener’s Cybernetics did for America’s anticommunist Cold Warriors. On a very basic level, cybernetics posited that human beings, like all living things, were information processing machines. We were all computers—highly complex, but computers nonetheless. That meant that the military could construct machines that could think and act like people: scan for enemy planes and ships, transcribe enemy radio communications, spy on subversives, analyze foreign news for hidden meaning and secret messages—all without needing sleep or food or rest. With computer technology like this, America’s dominance was guaranteed. The book triggered an elusive, decades-long military quest to fulfill this vision, an effort to create computers with what we now call artificial intelligence.18
Cybernetic concepts, backed by huge amounts of military funding, began to pervade academic disciplines: economics, engineering, psychology, political science, biology, and environmental studies. Neoclassical economists integrated cybernetics into their theories and began looking at markets as distributed information machines.19 Ecologists began to look at the earth itself as a self-regulating computational “bio system,” and cognitive psychologists and cognitive scientists approached the study of the human brain as if it were literally a complex digital computer.20 Political scientists and sociologists began to dream of using cybernetics to create a controlled utopian society, a perfectly well-oiled system where computers and people were integrated into a cohesive whole, managed and controlled to ensure security and prosperity.21 “Put most clearly: in the 1950s both the military and U.S. industry explicitly advocated a messianic understanding of computing, in which computation was the underlying matter of everything in the social world, and could therefore be brought under state-capitalist military control—centralized, hierarchical control,” writes historian David Golumbia in The Cultural Logic of Computation, a groundbreaking study of computational ideology.22
In a big way, this intermeshing of cybernetics and big power was what caused Norbert Wiener to turn against cybernetics almost as soon as he introduced it to the world. He saw scientists and military men taking the narrowest possible interpretation of his ideas to create better killing machines and more efficient systems of surveillance, control, and exploitation. He saw giant corporations using those same ideas to automate production and cut labor in their quest for greater wealth and economic power. He began to see that in a society mediated by computer and information systems, those who controlled the infrastructure wielded ultimate power.
Wiener envisioned a bleak future and realized that he himself was culpable, comparing his work on cybernetics to that of the world’s greatest scientists who unleashed the destructive power of atomic weapons. In fact, he saw cybernetics in even starker terms than nukes. “The impact of the thinking machine will be a shock certainly of comparable order to that of the atomic bomb,” he said in a 1949 interview. The replacement of human labor with machines—and the social destabilization, mass unemployment, and concentrated economic power that such change would cause—is what worried Wiener the most.23 “Let us remember that the automatic machine, whatever we think of any feelings it may have or may not have, is the precise economic equivalent of slave labor. Any labor which competes with slave labor must accept the economic conditions of slave labor. It is perfectly clear that this will produce an unemployment situation, in comparison with which the present recession and even the depression of the thirties will seem a pleasant joke,” Wiener wrote in a dark and prescient follow-up book, The Human Use of Human Beings: Cybernetics and Society.24
The destruction would be political and economic.
After popularizing cybernetics, Wiener became a kind of labor and antiwar activist. He reached out to unions to warn them of the danger of automation and the need to take the threat seriously. He turned down offers from giant corporations that wanted help automating their assembly lines according to his cybernetic principles, and refused to work on military research projects. He was against the massive peacetime arms buildup taking place after World War II and publicly lashed out at colleagues for working to help the military build bigger, more efficient tools of destruction. He increasingly hinted at his insider knowledge that a “colossal state machine” was being constructed by government agencies “for the purposes of combat and domination,” a computerized information system that was “sufficiently extensive to include all civilian activities during war, before war and possibly even between wars,” as he described it in The Human Use of Human Beings.
Wiener’s vocal support of labor and his public opposition to corporate and military work made him a pariah among his military contractor–engineer colleagues.25 It also earned him a spot on J. Edgar Hoover’s FBI subversive surveillance list. For years, he was suspected of having communist sympathies, his life documented in a thick FBI file that was closed upon his death in 1964.26
Of Mice and Keyboards
J. C. R. Licklider interacted with Norbert Wiener at MIT and participated in conferences and dinner parties where cybernetic ideas were hashed out and debated. He was galvanized by Wiener’s cybernetic vision. Where Wiener saw danger, Lick saw opportunity. He had no qualms about putting this technology in the service of US corporate and military power.
Though most computer engineers thought of computers as little more than oversized calculators, Lick saw them as extensions of the human mind, and he became obsessed with designing machines that could be seamlessly coupled to human beings. In 1960, he published a paper that outlined his vision for the coming “man-computer symbiosis” and described in simple terms the kinds of computer components that needed to be invented to make it happen. The paper essentially described a modern multipurpose computer, complete with a display, keyboard, speech recognition software, networking capabilities, and applications that could be used in real time for a variety of tasks.27 It seems obvious to us now, but back then Lick’s ideas were visionary. His paper was widely circulated in defense circles and earned him an invitation from the Pentagon to give a series of lectures on the topic.28
“My first experience with computers had been listening to a talk by [mathematician John] von Neumann in Chicago back in nineteen forty-eight. It sounded like science fiction then: a machine that could carry out algorithms automatically,” recalled Charles Herzfeld, a physicist who would go on to serve as the director of ARPA in the mid-1960s.29 “But the next big shock was Lick: not only could we use these machines for massive calculations, but we could make them useful in our everyday lives. I listened. I got very excited. And in a very real sense, I became a disciple from then on.”
Indeed, Lick’s papers and interviews show that he thought almost any problem could be solved with the right application of computers. He even came up with a plan to end poverty and “stimulate young ghetto blacks” by having them tinker with computers. He called the process “dynamations,” a 1960s version of an idea that is very popular in Silicon Valley even today, fifty years later: the belief that teaching poor kids to code will somehow magically lift them out of poverty and boost global literacy and education rates.30 “What is difficult to convey in a few words is the almost messianic view carried by Licklider of the potential for advances in the use of computers, the way people could relate to computers, and the resultant impact on how people would come to make decisions,” explained an internal declassified ARPA report.31 Lick infected everyone with his enthusiasm for the coming computer revolution, including top people at ARPA, who were also on a quest to leverage computers to boost military effectiveness.
In 1962, after a brief job interview at the Pentagon, Lick moved his family from Boston to Washington, DC, and went to work building ARPA’s Command and Control Research program from scratch.32
At the time, computers were giant metal monsters that occupied entire basements and were attended by multiple technicians. Despite their complexity and size, they were primitive, with less computational power than a 1990s graphing calculator. They also ran only one program at a time, and each program had to be fed in by hand using punch cards. “Imagine trying, for example, to direct a battle with the aid of a computer on such a schedule as this,” Lick explained in his 1960 paper. “You formulate your problem today. Tomorrow you spend with a programmer. Next week the computer devotes 5 minutes to assembling your program and 47 seconds to calculating the answer to your problem. You get a sheet of paper 20 feet long, full of numbers that, instead of providing a final solution, only suggest a tactic that should be explored by simulation. Obviously, the battle would be over before the second step in its planning was begun.”33