VI. Created by the Byte Shop’s owner Paul Terrell, who had launched the Apple I by ordering the first fifty for his store.
VII. The one written by Bill Gates.
VIII. Gates donated to computer buildings at Harvard, Stanford, MIT, and Carnegie Mellon. The one at Harvard, cofunded with Steve Ballmer, was named Maxwell Dworkin, after their mothers.
IX. The Oxford English Dictionary added google as a verb in 2006.
CHAPTER TWELVE
* * *
ADA FOREVER
LADY LOVELACE’S OBJECTION
Ada Lovelace would have been pleased. To the extent that we are permitted to surmise the thoughts of someone who’s been dead for more than 150 years, we can imagine her writing a proud letter boasting about her intuition that calculating devices would someday become general-purpose computers, beautiful machines that can not only manipulate numbers but make music and process words and “combine together general symbols in successions of unlimited variety.”
Machines such as these emerged in the 1950s, and during the subsequent thirty years there were two historic innovations that caused them to revolutionize how we live: microchips allowed computers to become small enough to be personal appliances, and packet-switched networks allowed them to be connected as nodes on a web. This merger of the personal computer and the Internet allowed digital creativity, content sharing, community formation, and social networking to blossom on a mass scale. It made real what Ada called “poetical science,” in which creativity and technology were the warp and woof, like a tapestry from Jacquard’s loom.
Ada might also be justified in boasting that she was correct, at least thus far, in her more controversial contention: that no computer, no matter how powerful, would ever truly be a “thinking” machine. A century after she died, Alan Turing dubbed this “Lady Lovelace’s Objection” and tried to dismiss it by providing an operational definition of a thinking machine—that a person submitting questions could not distinguish the machine from a human—and predicting that a computer would pass this test within a few decades. But it’s now been more than sixty years, and the machines that attempt to fool people on the test are at best engaging in lame conversation tricks rather than actual thinking. Certainly none has cleared Ada’s higher bar of being able to “originate” any thoughts of its own.
* * *
Ever since Mary Shelley conceived her Frankenstein tale during a vacation with Ada’s father, Lord Byron, the prospect that a man-made contraption might originate its own thoughts has unnerved generations. The Frankenstein motif became a staple of science fiction. A vivid example was Stanley Kubrick’s 1968 movie, 2001: A Space Odyssey, featuring the frighteningly intelligent computer HAL. With its calm voice, HAL exhibits attributes of a human: the ability to speak, reason, recognize faces, appreciate beauty, show emotion, and (of course) play chess. When HAL appears to malfunction, the human astronauts decide to shut it down. HAL becomes aware of the plan and kills all but one of them. After a lot of heroic struggle, the remaining astronaut gains access to HAL’s cognitive circuits and disconnects them one by one. HAL regresses until, at the end, it intones “Daisy Bell”—an homage to the first computer-generated song, sung by an IBM 704 at Bell Labs in 1961.
Artificial intelligence enthusiasts have long been promising, or threatening, that machines like HAL would soon emerge and prove Ada wrong. Such was the premise of the 1956 conference at Dartmouth organized by John McCarthy and Marvin Minsky, where the field of artificial intelligence was launched. The conferees concluded that a breakthrough was about twenty years away. It wasn’t. Decade after decade, new waves of experts have claimed that artificial intelligence was on the visible horizon, perhaps only twenty years away. Yet it has remained a mirage, always about twenty years away.
John von Neumann was working on the challenge of artificial intelligence shortly before he died in 1957. Having helped devise the architecture of modern digital computers, he realized that the architecture of the human brain is fundamentally different. Digital computers deal in precise units, whereas the brain, to the extent we understand it, is also partly an analog system, which deals with a continuum of possibilities. In other words, a human’s mental process includes many signal pulses and analog waves from different nerves that flow together to produce not just binary yes-no data but also answers such as “maybe” and “probably” and infinite other nuances, including occasional bafflement. Von Neumann suggested that the future of intelligent computing might require abandoning the purely digital approach and creating “mixed procedures” that include a combination of digital and analog methods. “Logic will have to undergo a pseudomorphosis to neurology,” he declared, which, roughly translated, meant that computers were going to have to become more like the human brain.1
In 1958 a Cornell professor, Frank Rosenblatt, attempted to do this by devising a mathematical approach for creating an artificial neural network like that of the brain, which he called a Perceptron. Using weighted statistical inputs, it could, in theory, process visual data. When the Navy, which was funding the work, unveiled the system, it drew the type of press hype that has accompanied many subsequent artificial intelligence claims. “The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence,” the New York Times reported. The New Yorker was equally enthusiastic: “The Perceptron, . . . as its name implies, is capable of what amounts to original thought. . . . It strikes us as the first serious rival to the human brain ever devised.”2
That was almost sixty years ago. Rosenblatt did build a Perceptron machine, but the conscious, self-reproducing computer those stories promised still does not exist.3 Nevertheless, almost every year since then there have been breathless reports about some marvel on the horizon that would replicate and surpass the human brain, many of them using almost the exact same phrases as the 1958 stories about the Perceptron.
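For readers curious what "weighted statistical inputs" amounts to, here is a minimal sketch in Python of the perceptron idea: a weighted sum of inputs passed through a threshold, with the weights nudged after every mistake. The toy AND-gate data and all of the names below are this sketch's own illustrative assumptions, not details of Rosenblatt's 1958 machine.

```python
# A toy perceptron in the spirit of Rosenblatt's idea: take a weighted sum of the
# inputs, fire if it crosses a threshold, and nudge the weights whenever the
# prediction is wrong. The AND-gate data and every name here are illustrative
# choices of this sketch, not details of the 1958 Navy hardware.

def predict(weights, bias, inputs):
    """Fire (return 1) when the weighted sum of the inputs exceeds the threshold."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def train(samples, labels, epochs=25, rate=0.1):
    """Adjust the weights toward the correct answers, one labeled example at a time."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for inputs, label in zip(samples, labels):
            error = label - predict(weights, bias, inputs)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Usage: learn the logical AND of two binary inputs.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
weights, bias = train(samples, labels)
print([predict(weights, bias, s) for s in samples])  # expected: [0, 0, 0, 1]
```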
* * *
Discussion about artificial intelligence flared up a bit, at least in the popular press, after IBM’s Deep Blue, a chess-playing machine, beat the world champion Garry Kasparov in 1997 and then Watson, its natural-language question-answering computer, won at Jeopardy! against champions Brad Rutter and Ken Jennings in 2011. “I think it awakened the entire artificial intelligence community,” said IBM CEO Ginni Rometty.4 But as she was the first to admit, these were not true breakthroughs of humanlike artificial intelligence. Deep Blue won its chess match by brute force; it could evaluate 200 million positions per second and match them against 700,000 past grandmaster games. Deep Blue’s calculations were fundamentally different, most of us would agree, from what we mean by real thinking. “Deep Blue was only intelligent the way your programmable alarm clock is intelligent,” Kasparov said. “Not that losing to a $10 million alarm clock made me feel any better.”5
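For a sense of what "brute force" looks like in practice, here is a small, runnable Python sketch of exhaustive game-tree search, the general approach that Deep Blue scaled up with specialized hardware, deep search, and its database of grandmaster games. The game here is a deliberately tiny stand-in, an assumption made only so the example runs without any chess machinery.

```python
# A runnable sketch of exhaustive game-tree search, the general "brute force"
# idea behind engines like Deep Blue. The game is a toy stand-in (players
# alternate taking 1 to 3 stones; whoever takes the last stone wins), chosen
# only so the example needs no chess library.

def best_score(stones, my_turn):
    """Score a position for 'me' by exploring every line of play to the end."""
    if stones == 0:
        # The previous player took the last stone and won the game.
        return -1 if my_turn else 1
    scores = [best_score(stones - take, not my_turn)
              for take in (1, 2, 3) if take <= stones]
    # Maximize on my turns; assume the opponent minimizes on theirs.
    return max(scores) if my_turn else min(scores)

def best_move(stones):
    """Choose the take whose exhaustively searched outcome is best for me."""
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: best_score(stones - take, my_turn=False))

print(best_move(10))  # expected: 2, leaving a multiple of 4 for the opponent
```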
Likewise, Watson won at Jeopardy! by using megadoses of computing power: it had 200 million pages of information in its four terabytes of storage, of which the entire Wikipedia accounted for merely 0.2 percent. It could search the equivalent of a million books per second. It was also rather good at processing colloquial English. Still, no one who watched would bet on its passing the Turing Test. In fact, the IBM team leaders were afraid that the show’s writers might try to turn the game into a Turing Test by composing questions designed to trick a machine, so they insisted that only old questions from unaired contests be used. Nevertheless, the machine tripped up in ways that showed it wasn’t human. For example, one question was about the “anatomical oddity” of the former Olympic gymnast George Eyser. Watson answered, “What is a leg?” The correct answer was that Eyser was missing a leg. The problem was understanding oddity, explained David Ferrucci, who ran the Watson project at IBM. “The computer wouldn’t know that a missing leg is odder than anything else.”6
John Searle, the Berkeley philosophy professor who devised the "Chinese room" rebuttal to the Turing Test, scoffed at the notion that Watson represented even a glimmer of artificial intelligence. "Watson did not understand the questions, nor its answers, nor that some of its answers were right and some wrong, nor that it was playing a game, nor that it won—because it doesn't understand anything," Searle contended. "IBM's computer was not and could not have been designed to understand. Rather, it was designed to simulate understanding, to act as if it understood."7
Even the IBM folks agreed with that. They never held Watson out to be an “intelligent” machine. “Computers today are brilliant idiots,” said the company’s director of research, John E. Kelly III, after the Deep Blue and Watson victories. “They have tremendous capacities for storing information and performing numerical calculations—far superior to those of any human. Yet when it comes to another class of skills, the capacities for understanding, learning, adapting, and interacting, computers are woefully inferior to humans.”8
Rather than demonstrating that machines are getting close to artificial intelligence, Deep Blue and Watson actually indicated the contrary. “These recent achievements have, ironically, underscored the limitations of computer science and artificial intelligence,” argued Professor Tomaso Poggio, director of the Center for Brains, Minds, and Machines at MIT. “We do not yet understand how the brain gives rise to intelligence, nor do we know how to build machines that are as broadly intelligent as we are.”9
Douglas Hofstadter, a professor at Indiana University, combined the arts and sciences in his unexpected 1979 best seller, Gödel, Escher, Bach. He believed that the only way to achieve meaningful artificial intelligence was to understand how human imagination worked. His approach was pretty much abandoned in the 1990s, when researchers found it more cost-effective to tackle complex tasks by throwing massive processing power at huge amounts of data, the way Deep Blue played chess.10
This approach produced a peculiarity: computers can do some of the toughest tasks in the world (assessing billions of possible chess positions, finding correlations in hundreds of Wikipedia-size information repositories), but they cannot perform some of the tasks that seem most simple to us mere humans. Ask Google a hard question like “What is the depth of the Red Sea?” and it will instantly respond, “7,254 feet,” something even your smartest friends don’t know. Ask it an easy one like “Can a crocodile play basketball?” and it will have no clue, even though a toddler could tell you, after a bit of giggling.11
At Applied Minds near Los Angeles, you can get an exciting look at how a robot is being programmed to maneuver, but it soon becomes apparent that it still has trouble navigating an unfamiliar room, picking up a crayon, and writing its name. A visit to Nuance Communications near Boston shows the wondrous advances in speech-recognition technologies that underpin Siri and other systems, but it’s also apparent to anyone using Siri that you still can’t have a truly meaningful conversation with a computer, except in a fantasy movie. At the Computer Science and Artificial Intelligence Laboratory of MIT, interesting work is being done on getting computers to perceive objects visually, but even though the machine can discern pictures of a girl with a cup, a boy at a water fountain, and a cat lapping up cream, it cannot do the simple abstract thinking required to figure out that they are all engaged in the same activity: drinking. A visit to the New York City police command system in Manhattan reveals how computers scan thousands of feeds from surveillance cameras as part of a Domain Awareness System, but the system still cannot reliably identify your mother’s face in a crowd.
All of these tasks have one thing in common: even a four-year-old can do them. “The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard,” according to Steven Pinker, the Harvard cognitive scientist.12 As the futurist Hans Moravec and others have noted, this paradox stems from the fact that the computational resources needed to recognize a visual or verbal pattern are huge.
* * *
Moravec’s paradox reinforces von Neumann’s observations from a half century ago about how the carbon-based chemistry of the human brain works differently from the silicon-based binary logic circuits of a computer. Wetware is different from hardware. The human brain not only combines analog and digital processes, it also is a distributed system, like the Internet, rather than a centralized one, like a computer. A computer’s central processing unit can execute instructions much faster than a brain’s neuron can fire. “Brains more than make up for this, however, because all the neurons and synapses are active simultaneously, whereas most current computers have only one or at most a few CPUs,” according to Stuart Russell and Peter Norvig, authors of the foremost textbook on artificial intelligence.13
So why not make a computer that mimics the processes of the human brain? “Eventually we’ll be able to sequence the human genome and replicate how nature did intelligence in a carbon-based system,” Bill Gates speculates. “It’s like reverse-engineering someone else’s product in order to solve a challenge.”14 That won’t be easy. It took scientists forty years to map the neurological activity of the one-millimeter-long roundworm, which has 302 neurons and 8,000 synapses.I The human brain has 86 billion neurons and up to 150 trillion synapses.15
At the end of 2013, the New York Times reported on “a development that is about to turn the digital world on its head” and “make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control.” The phrases were reminiscent of those used in its 1958 story on the Perceptron (“will be able to walk, talk, see, write, reproduce itself and be conscious of its existence”). Once again, the strategy was to replicate the way the human brain’s neural networks operate. As the Times explained, “the new computing approach is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information.”16 IBM and Qualcomm each disclosed plans to build “neuromorphic,” or brainlike, computer processors, and a European research consortium called the Human Brain Project announced that it had built a neuromorphic microchip that incorporated “fifty million plastic synapses and 200,000 biologically realistic neuron models on a single 8-inch silicon wafer.”17
Perhaps this latest round of reports does in fact mean that, in a few more decades, there will be machines that think like humans. “We are continually looking at the list of things machines cannot do—play chess, drive a car, translate language—and then checking them off the list when machines become capable of these things,” said Tim Berners-Lee. “Someday we will get to the end of the list.”18
These latest advances may even lead to the singularity, a term that von Neumann coined and the futurist Ray Kurzweil and the science fiction writer Vernor Vinge popularized, which is sometimes used to describe the moment when computers are not only smarter than humans but also can design themselves to be even supersmarter, and will thus no longer need us mortals. Vinge says this will occur by 2030.19
On the other hand, these latest stories might turn out to be like the similarly phrased ones from the 1950s, glimpses of a receding mirage. True artificial intelligence may take a few more generations or even a few more centuries. We can leave that debate to the futurists. Indeed, depending on your definition of consciousness, it may never happen. We can leave that debate to the philosophers and theologians. “Human ingenuity,” wrote Leonardo da Vinci, whose Vitruvian Man became the ultimate symbol of the intersection of art and science, “will never devise any inventions more beautiful, nor more simple, nor more to the purpose than Nature does.”
There is, however, yet another possibility, one that Ada Lovelace would like, which is based on the half century of computer development in the tradition of Vannevar Bush, J. C. R. Licklider, and Doug Engelbart.
HUMAN-COMPUTER SYMBIOSIS: “WATSON, COME HERE”
"The Analytical Engine has no pretensions whatever to originate anything," Ada Lovelace declared. "It can do whatever we know how to order it to perform." In her mind, machines would not replace humans but instead become their partners. What humans would bring to this relationship, she said, was originality and creativity.
This was the idea behind an alternative to the quest for pure artificial intelligence: pursuing instead the augmented intelligence that occurs when machines become partners with people. The strategy of combining computer and human capabilities, of creating a human-computer symbiosis, turned out to be more fruitful than the pursuit of machines that could think on their own.
Licklider helped chart that course back in 1960 in his paper “Man-Computer Symbiosis,” which proclaimed: “Human brains and computing machines will be coupled together very tightly, and the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.”20 His ideas built on the memex personal computer that Vannevar Bush had imagined in his 1945 essay, “As We May Think.” Licklider also drew on his work designing the SAGE air defense system, which required an intimate collaboration between humans and machines.
The Bush-Licklider approach was given a friendly interface by Engelbart, who in 1968 demonstrated a networked computer system with an intuitive graphical display and a mouse. In a manifesto titled “Augmenting Human Intellect,” he echoed Licklider. The goal, Engelbart wrote, should be to create “an integrated domain where hunches, cut-and-try, intangibles, and the human ‘feel for a situation’ usefully co-exist with . . . high-powered electronic aids.” Richard Brautigan, in his poem “All Watched Over by Machines of Loving Grace,” expressed that dream a bit more lyrically: “a cybernetic meadow / where mammals and computers / live together in mutually / programming harmony.”