Machines of Loving Grace

by John Markoff


  While Grudin has oscillated between the AI and IA worlds throughout his career, Terry Winograd became the first high-profile deserter from the world of AI. He walked away from the field after creating one of the defining software programs of the early artificial intelligence era, and he has devoted the rest of his career to human-centered computing, or IA. He crossed over.

  Winograd’s interest in computing was sparked while he was a junior studying math at Colorado College, when a professor of medicine asked his department for help doing radiation therapy calculations.6 The computer available at the medical center was a piano-sized Control Data minicomputer, the CDC 160A, one of Seymour Cray’s first designs. One person at a time used it, feeding in programs written in Fortran by way of a telex-like punched paper tape. On one of Winograd’s first days using the machine, the room was hot and a fan sat behind the desk that housed the computer terminal. He managed to feed his paper tape into the computer and then, by mistake, straight into the fan.7

  Terry Winograd was a brilliant young graduate student at MIT who developed an early program capable of processing natural language. Years later he rejected artificial intelligence research in favor of human-centered software design. (Photo courtesy of Terry Winograd)

  In addition to his fascination with computing, Winograd had become intrigued by some of the early papers about artificial intelligence. As a math whiz with an interest in linguistics, the obvious place for graduate studies was MIT. When he arrived, at the height of the Vietnam War, Winograd discovered there was a deep gulf between the rival fiefdoms of Marvin Minsky and Noam Chomsky, leaders in the respective fields of artificial intelligence and linguistics. The schism was so deep that when Winograd would bump into Chomsky’s students at parties and mention that he was in the AI Lab, they would turn and walk away.

  Winograd tried to bridge the gap by taking a course from Chomsky, but he received a C on a paper in which he argued for the AI perspective. Despite the conflict, it was a heady time for AI research. The Vietnam War had opened the Pentagon’s research coffers and ARPA was essentially writing blank checks to researchers at the major research laboratories. As at Stanford, at MIT there was a clear sense of what “serious” research in computer science was about. Doug Engelbart came around on a tour and showed a film demonstration of his NLS system. The researchers at the MIT AI Lab belittled his accomplishments. After all, they were building systems that would soon have capabilities matching those of humans, and Engelbart was showing off a computer editing system that seemed to do little more than sort grocery lists.

  At the time Winograd was very much within the mainstream of computing, and as the zeitgeist pointed toward artificial intelligence, he followed. Most believed that it wouldn’t be long before machines would see, hear, speak, move, and otherwise perform humanlike tasks. Winograd was soon encouraged to pursue linguistic research by Minsky, who was eager to prove that his students could do as well or better at “language” than Chomsky’s. That challenge was fine with Winograd, who was interested in studying how language worked by using computing as a simulation tool.

  As a teenager growing up in Colorado, Winograd, like many of his generation, had discovered Mad magazine. The irreverent—and frequently immature—satire journal would play a small role in naming SHRDLU, a program he wrote as a graduate student at MIT in the late 1960s that “understood” natural language and responded to commands. It has remained one of the most influential artificial intelligence programs.

  Winograd had set out to build a system that could respond to typed commands in natural language and perform useful tasks in response. By this time there had already been an initial wave of experiments in building conversational programs. Eliza, written by MIT computer scientist Joseph Weizenbaum in 1964 and 1965, was named after Eliza Doolittle, who learned proper English in Shaw’s Pygmalion and the musical My Fair Lady. Eliza had been a groundbreaking experiment in the study of human interaction with machines: it was one of the first programs to let users hold a humanlike conversation with a computer. To skirt the need for real-world knowledge, Eliza mimicked a Rogerian therapist and frequently reframed users’ statements as questions. The conversation was mostly one-sided because Eliza was programmed simply to respond to certain key words and phrases, an approach that led to wild non sequiturs and bizarre detours. For example, Eliza would respond to a user’s statement about their mother with: “You say your mother?” Weizenbaum later said that he was stunned to discover that Eliza users became deeply engrossed in conversations with the program, even revealing intimate personal details. It was a remarkable insight not into the nature of machines but rather into human nature. Humans, it turns out, have a propensity to find humanity in almost everything they interact with, ranging from inanimate objects to software programs that offer the illusion of human intelligence.
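  The underlying mechanism can be suggested with a short sketch in Python (a modern, hypothetical illustration, not Weizenbaum’s code, which was written in the MAD-SLIP language and included pronoun swapping and ranked keywords; the rules and templates below are invented for illustration):

    import re

    # Each rule pairs a keyword pattern with a template that reflects the
    # user's own words back as a question, in the manner of a Rogerian
    # therapist.
    RULES = [
        (re.compile(r"my mother", re.I), "You say your mother?"),
        (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
        (re.compile(r"\bi feel (.+)", re.I), "Tell me more about feeling {0}."),
    ]
    DEFAULT = "Please go on."  # fallback when no keyword matches

    def respond(statement: str) -> str:
        """Return a canned reframing of the user's statement."""
        for pattern, template in RULES:
            match = pattern.search(statement)
            if match:
                return template.format(*match.groups())
        return DEFAULT

    print(respond("I am unhappy"))    # -> Why do you say you are unhappy?
    print(respond("Tell me a joke"))  # -> Please go on.

  Because the program matches surface patterns and nothing more, anything outside its keyword list falls through to a stock reply, which is exactly the behavior that produced Eliza’s non sequiturs.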

  Was it possible that in the cyber-future, humans, increasingly isolated from each other, would remain in contact with some surrogate computer intelligence? What kind of world did that foretell? Perhaps it was the one described in the movie Her, released in 2013, in which a shy guy connects with a female AI. Today, however, it is still unclear whether the emergence of cyberspace heralds the huge step forward for humanity described by cyber-utopians such as Grateful Dead lyricist John Perry Barlow in his 1996 Wired manifesto, “A Declaration of the Independence of Cyberspace,” or the much bleaker world described by Sherry Turkle in her book Alone Together: Why We Expect More from Technology and Less from Each Other. For Barlow, cyberspace would become a utopian world free from the crime and degradation of “meatspace.” In contrast, Turkle describes a world in which computer networks increasingly drive a wedge between humans, leaving them lonely and isolated. For Weizenbaum, computing systems risked fundamentally diminishing the human experience. In very much the same vein that Marxist philosopher Herbert Marcuse attacked advanced industrial society, Weizenbaum worried that the approaching Information Age might bring about a “One-Dimensional Man.”

  In the wake of the creation of Eliza, a group of MIT scientists, including information theory pioneer Claude Shannon, met in Concord, Massachusetts, to discuss the social implications of the phenomenon.8 The seductive quality of the interactions with Eliza concerned Weizenbaum, who believed that an obsessive reliance on technology was indicative of a moral failing in society, an observation rooted in his experiences as a child growing up in Nazi Germany. In 1976, he sketched out a humanist critique of computer technology in his book Computer Power and Human Reason: From Judgment to Calculation. The book did not argue against the possibility of artificial intelligence but rather was a passionate indictment of computerized systems that substituted automated decision-making for the human mind. In it, he argued that computing served as a conservative force in society, propping up bureaucracies and redefining the world as a narrower, more sterile place by restricting the potential of human relationships.

  Weizenbaum’s criticism largely fell on deaf ears in the United States, where the new computing technologies were taking root and optimism about artificial intelligence ran high. Years later his ideas would receive a more positive reception in Europe, where he moved at the end of his life.

  In the late 1960s as a graduate student, Winograd was immersed in the hothouse world of the MIT AI Lab, the birthplace of the computing hacker culture, which would lead both to personal computing and to the “information wants to be free” ideology that would later become the foundation of the open-source computing movement of the 1990s. Many at the lab staked their careers on the faith that cooperative and autonomous intelligent machines would soon be a reality. Eliza, and then several years later Winograd’s SHRDLU, were the direct predecessors of the more sophisticated computerized personal assistants that would follow in the coming decades. There had been earlier efforts at MIT to build microworlds or “block worlds”: restricted, simulated environments in which AI researchers would create programs capable of reasoning about their surroundings and planning. Some of those environments had used real robot arms and blocks. When Winograd began working on his project, another student was already building a system that could book airline reservations, but that was less interesting to Winograd. Instead, he set out to build a constrained world to explore and reason about, and he chose to create his system in the form of a virtual computer world.

  He built a computer simulation of a world populated by colored blocks that could be explored and manipulated via an artificial intelligence programming language named MicroPlanner, based on the work of Carl Hewitt, another MIT graduate student. Given the relatively primitive state of computing at the time, much was left to the imagination. There was no fancy graphical animation of the blocks world. The user simply sat at a Teletype terminal, entered questions at the keyboard, and the computer responded, in natural language. Winograd chose the name SHRDLU because no better one came to mind; he later said the sequence was probably hidden in his subconscious. The letters of “etaoin shrdlu,” the most frequently used in English, run down the left-hand side of the Linotype keyboard, much as “1qaz 2wsx” do on a typewriter, and as a consequence the string showed up occasionally in Mad magazine as an inside joke on its accidental appearance in print. The Linotype, which cast an entire line of type from molten metal, had no backspace. Because it was faster to cast an erroneous line than to correct it, typesetters would “run down” the rest of the line with easy-to-type nonsense, later removing the entire line after it had cooled, or, if they forgot, hoping a proofreader would catch it.9

  He wasn’t concerned at the time about any ethical implications involved in building a natural language processing system that could “understand” and respond in a virtual world. In SHRDLU “understanding” meant that the program analyzed the structure of the typed questions and attempted to answer them and respond to the commands. It was an early effort at disambiguation, a thorny problem for natural language processing even today. For example, in the sentence “he put the glass on the table and it broke,” does “it” refer to the glass or the table? A human resolves the pronoun effortlessly by drawing on knowledge of the world, such as the fact that glasses break more easily than tables; a program must somehow be given that knowledge explicitly.
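  SHRDLU could sidestep some of that ambiguity because its world was closed and fully known to the program. Roughly in that spirit, a pronoun can be resolved by testing which candidate referent is consistent with what the program knows, as in this hypothetical Python sketch (SHRDLU itself was written in Lisp and MicroPlanner, and its representation was far richer; the objects and the “fragile” property below are invented for illustration):

    # A toy world model: each object carries the properties the program
    # knows about it.
    WORLD = {
        "glass": {"fragile": True},
        "table": {"fragile": False},
    }

    def resolve_it(candidates, predicate):
        """Return the candidate referents for 'it' that satisfy the predicate."""
        return [name for name in candidates if predicate(WORLD[name])]

    # "He put the glass on the table and it broke."
    # If breaking implies fragility in this world, only one referent fits.
    print(resolve_it(["glass", "table"], lambda obj: obj["fragile"]))  # ['glass']

  When exactly one candidate survives, the pronoun is resolved; when more than one does, a program can do what the real SHRDLU did and ask the user which one was meant.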

  Winograd’s system used its general knowledge of the microworld to answer and respond to various questions. He illustrated the conversation as follows in his dissertation:

  Person: Pick up a big red block.

  Computer: OK.

  Person: Grasp the pyramid.

  Computer: I don’t understand which pyramid you mean.

  Person (changing their mind): Find a block which is taller than the one you are holding and put it into the box.

  Computer: By “it,” I assume you mean the block which is taller than the one I am holding.

  Computer: OK.

  Person: What does the box contain?

  Computer: The blue pyramid and the blue block.

  Person: What is the pyramid supported by?

  Computer: The box.

  Person: How many blocks are not in the box?

  Computer: Four of them.

  Person: Is at least one of them narrower than the one which I told you to pick up?

  Computer: Yes, the red cube.10
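  Answers like “Four of them” fall directly out of a symbolic model of the blocks world. The following Python fragment is a hypothetical reconstruction of that idea, with invented object names; SHRDLU’s actual machinery included a full parser, a planner, and a simulated robot arm:

    # Toy world state consistent with the dialogue above: two objects in
    # the box, four blocks outside it.
    objects = {
        "big-red-block": {"kind": "block",   "in_box": False},
        "red-cube":      {"kind": "block",   "in_box": False},
        "green-block":   {"kind": "block",   "in_box": False},
        "small-block":   {"kind": "block",   "in_box": False},
        "blue-block":    {"kind": "block",   "in_box": True},
        "blue-pyramid":  {"kind": "pyramid", "in_box": True},
    }

    # "How many blocks are not in the box?"
    outside = [name for name, obj in objects.items()
               if obj["kind"] == "block" and not obj["in_box"]]
    print(len(outside))  # -> 4, matching the program's answer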

  In 1970, when it was completed, SHRDLU proved to be one of the most successful demonstrations of AI’s potential, perhaps too successful. Years later, Winograd’s tour de force would be blamed for helping generate the optimistic view that it would be possible to “scale up” similar programs to deal with real-world complexity. For example, during the 1980s and 1990s the AI research community widely accepted that it would be possible to build a machine with at least the reasoning power of a kindergartener simply by accumulating a vast number of common-sense rules.

  The attack on the AI optimists, however, had begun even before Winograd built SHRDLU. Although Weizenbaum’s critique was about the morality of building intelligent machines, the more heated debate was over whether such machines were even possible. Seymour Papert, Winograd’s thesis advisor, had become engaged in a bitter debate with Hubert Dreyfus, a philosopher and Heidegger acolyte, who, just one decade after McCarthy had coined the term, would ridicule the field in a scathing paper entitled “Alchemy and Artificial Intelligence,” published in 1965 by the RAND Corporation.11 (Years later, in the 2014 movie remake of RoboCop, the fictional U.S. senator who sponsors legislation banning police robots is named Hubert Dreyfus in homage.)

  Dreyfus ran afoul of AI researchers in the early sixties when they showed up in his Heidegger course and belittled philosophers for failing to understand human intelligence after studying it for centuries.12 It was a slight he would not forget. For the next four decades, Dreyfus would be the most pessimistic critic of the possibility that artificial intelligence would ever work as promised, summing up his argument in an attack on two Stanford AI researchers: “Feigenbaum and Feldman claim that tangible progress is indeed being made, and they define progress very carefully as ‘displacement toward the ultimate goal.’ According to this definition, the first man to climb a tree could claim tangible progress toward flight to the moon.”13 Three years later, Papert fired back in “The Artificial Intelligence of Hubert L. Dreyfus: A Budget of Fallacies”: “The perturbing observation is not that Dreyfus imports metaphysics into engineering but that his discussion is irresponsible,” he wrote. “His facts are almost always wrong; his insight into programming is so poor that he classifies as impossible programs a beginner could write; and his logical insensitivity allows him to take his inability to imagine how a particular algorithm can be carried out, as reason to believe no algorithm can achieve the desired purpose.”14

  Winograd would eventually break completely with Papert, but this would not happen for many years. He came to Stanford as a professor in 1973, when his wife, a physician, accepted an offer as a medical resident in the Bay Area. It was just two years after Intel had introduced the 4004, the first commercial microprocessor chip, and trade journalist Don Hoefler had settled on “Silicon Valley U.S.A.” as shorthand for the region in the trade paper Electronic News. Winograd continued to work for several years on the problem of machine understanding of natural language, very much in the original tradition of SHRDLU. Initially he spent almost half his time at Xerox Palo Alto Research Center working with Danny Bobrow, another AI researcher interested in natural language understanding. Xerox had opened a beautiful new building in March 1975 at a location next to Stanford, which gave the “document company” easy access to the best computer scientists. Later Winograd would tell friends, “You know all the famous personal computing technology that was invented at PARC? Well, that’s not what I worked on.”

  Instead he spent his time trying to elaborate and expand on the research he had pursued at MIT, research that would bear fruit almost four decades later. During the 1970s, however, it seemed to present an impossible challenge, and many started to wonder whether science could ever come to understand how humans process language. After spending a half decade on language-related computing, Winograd found himself growing more and more skeptical that real progress in AI would be possible. Besides making little headway, he rejected artificial intelligence in part because of the influence of a new friendship with a Chilean political refugee named Fernando Flores, and in part because of his recent engagement with a group of Berkeley philosophers, led by Dreyfus, intent on stripping away the hype around the new AI industry then emerging. Flores, a bona fide technocrat who had been finance minister in the Allende government, barely escaped his office in the palace when it was bombed during the coup. He spent three years in prison before arriving in the United States, his release coming in response to political pressure from Amnesty International. Stanford had appointed Flores as a visiting scholar in computer science, but he left Palo Alto instead to pursue a Ph.D. at Berkeley under the guidance of a quartet of anti-AI scholars: Hubert and Stuart Dreyfus, John Searle, and Ann Markusen.

  Winograd thought Flores was one of the most impressive intellectuals he had ever met. “We started talking in a casual way, then he handed me a book on philosophy of science and said, ‘You should read this.’ I read it, and we started talking about it, and we decided to write a paper about it, that turned into a monograph, and that turned into a book. It was a gradual process of finding him interesting, and finding the stuff we were talking about intellectually stimulating,” Winograd recalled.15 The conversations with Flores put the young computer scientist “in touch” with the ways in which he was unhappy with what he thought of as the “ideology” of AI. Flores aligned himself with the charismatic Werner Erhard, whose cultlike organization EST (Erhard Seminars Training) had a large following in the Bay Area during the 1970s. (At Stanford Research Institute, Engelbart sent the entire staff of his lab through EST training and joined the board of the organization.)

  Although the computing world was tiny at the time, the tensions between the McCarthy-Minsky AI approach and Engelbart’s IA approach were palpable around Stanford. PARC was inventing the personal computer; the Stanford AI Lab was doing research on everything from robot arms to mobile robots to chess-playing AI systems. At the recently renamed SRI (which had changed its name from Stanford Research Institute in response to student antiwar protests), researchers were working on projects that ranged from Engelbart’s NLS system to Shakey the robot, as well as early speech recognition research and “smart” weapons. Winograd would visit Berkeley for informal lunchtime discussions with Searle and Dreyfus, their grad students, and Fernando Flores. While Hubert Dreyfus objected to the early optimistic predictions of AI researchers, it was John Searle who raised the stakes and asked one of the defining philosophical questions of the twentieth century: Is it possible to build an intelligent machine?

  Searle, a dramatic lecturer with a flair for showmanship, was never one to avoid an argument. Before teaching philosophy he had been a political activist: while at the University of Wisconsin in the 1950s he had been a member of Students Against Joseph McCarthy, and in 1964 he would become the first tenured Berkeley faculty member to join the Free Speech Movement. As a young philosopher Searle had been drawn to the interdisciplinary field of cognitive science. At the time, the core assumption of the field was that the biological mind was analogous to the software that animated machines. If that was the case, then understanding the processes of human thought would merely be a matter of teasing out the program running on the intertwined billions of neurons making up the human brain.

 
