The Innovators


by Walter Isaacson


  * * *

  The teams that built Deep Blue and Watson have adopted this symbiosis approach rather than pursue the objective of the artificial intelligence purists. “The goal is not to replicate human brains,” says John Kelly, the director of IBM Research. Echoing Licklider, he adds, “This isn’t about replacing human thinking with machine thinking. Rather, in the era of cognitive systems, humans and machines will collaborate to produce better results, each bringing their own superior skills to the partnership.”21

  An example of the power of this human-computer symbiosis arose from a realization that struck Kasparov after he was beaten by Deep Blue. Even in a rule-defined game such as chess, he came to believe, “what computers are good at is where humans are weak, and vice versa.” That gave him an idea for an experiment: “What if instead of human versus machine we played as partners?” When he and another grandmaster tried that, it created the symbiosis that Licklider had envisioned. “We could concentrate on strategic planning instead of spending so much time on calculations,” Kasparov said. “Human creativity was even more paramount under these conditions.”

  A tournament along these lines was held in 2005. Players could work in teams with computers of their choice. Many grandmasters entered the fray, as did the most advanced computers. But neither the best grandmaster nor the most powerful computer won. Symbiosis did. “The teams of human plus machine dominated even the strongest computers,” Kasparov noted. “Human strategic guidance combined with the tactical acuity of a computer was overwhelming.” The final winner was not a grandmaster nor a state-of-the-art computer, nor even a combination of both, but two American amateurs who used three computers at the same time and knew how to manage the process of collaborating with their machines. “Their skill at manipulating and coaching their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants,” according to Kasparov.22

  In other words, the future might belong to people who can best partner and collaborate with computers.

  In a similar fashion, IBM decided that the best use of Watson, the Jeopardy!-playing computer, would be for it to collaborate with humans rather than try to top them. One project involved using the machine to work in partnership with doctors on cancer treatment plans. “The Jeopardy! challenge pitted man against machine,” said IBM’s Kelly. “With Watson and medicine, man and machine are taking on a challenge together—and going beyond what either could do on its own.”23 The Watson system was fed more than 2 million pages from medical journals and 600,000 pieces of clinical evidence, and could search up to 1.5 million patient records. When a doctor put in a patient’s symptoms and vital information, the computer provided a list of recommendations ranked in order of its confidence.24

  In order to be useful, the IBM team realized, the machine needed to interact with human doctors in a manner that made collaboration pleasant. David McQueeney, the vice president of software at IBM Research, described programming a pretense of humility into the machine: “Our early experience was with wary physicians who resisted by saying, ‘I’m licensed to practice medicine, and I’m not going to have a computer tell me what to do.’ So we reprogrammed our system to come across as humble and say, ‘Here’s the percentage likelihood that this is useful to you, and here you can look for yourself.’ ” Doctors were delighted, saying that it felt like a conversation with a knowledgeable colleague. “We aim to combine human talents, such as our intuition, with the strengths of a machine, such as its infinite breadth,” said McQueeney. “That combination is magic, because each offers a piece that the other one doesn’t have.”25

  That was one of the aspects of Watson that impressed Ginni Rometty, an engineer with a background in artificial intelligence who took over as CEO of IBM at the beginning of 2012. “I watched Watson interact in a collegial way with the doctors,” she said. “It was the clearest testament of how machines can truly be partners with humans rather than try to replace them. I feel strongly about that.”26 She was so impressed that she decided to launch a new IBM division based on Watson. It was given a $1 billion investment and a new headquarters in the Silicon Alley area near Manhattan’s Greenwich Village. Its mission was to commercialize “cognitive computing,” meaning computing systems that can take data analysis to the next level by teaching themselves to complement the thinking skills of the human brain. Instead of giving the new division a technical name, Rometty simply called it Watson. It was in honor of Thomas Watson Sr., the IBM founder who ran the company for more than forty years, but it also evoked Sherlock Holmes’s companion Dr. John (“Elementary, my dear”) Watson and Alexander Graham Bell’s assistant Thomas (“Come here, I want to see you”) Watson. Thus the name helped to convey that Watson the computer should be seen as a collaborator and companion, not a threat like 2001’s HAL.

  * * *

  Watson was a harbinger of a third wave of computing, one that blurred the line between augmented human intelligence and artificial intelligence. “The first generation of computers were machines that counted and tabulated,” Rometty says, harking back to IBM’s roots in Herman Hollerith’s punch-card tabulators used for the 1890 census. “The second generation involved programmable machines that used the von Neumann architecture. You had to tell them what to do.” Beginning with Ada Lovelace, people wrote algorithms that instructed these computers, step by step, how to perform tasks. “Because of the proliferation of data,” Rometty adds, “there is no choice but to have a third generation, which are systems that are not programmed, they learn.”27

  But even as this occurs, the process could remain one of partnership and symbiosis with humans rather than one designed to relegate humans to the dustbin of history. Larry Norton, a breast cancer specialist at New York’s Memorial Sloan-Kettering Cancer Center, was part of the team that worked with Watson. “Computer science is going to evolve rapidly, and medicine will evolve with it,” he said. “This is coevolution. We’ll help each other.”28

  This vision of machines and humans getting smarter together describes a process that Doug Engelbart called “bootstrapping” and “coevolution.”29 It raises an interesting prospect: perhaps no matter how fast computers progress, artificial intelligence may never outstrip the intelligence of the human-machine partnership.

  Let us assume, for example, that a machine someday exhibits all of the mental capabilities of a human: giving the outward appearance of recognizing patterns, perceiving emotions, appreciating beauty, creating art, having desires, forming moral values, and pursuing goals. Such a machine might be able to pass a Turing Test. It might even pass what we could call the Ada Test, which is that it could appear to “originate” its own thoughts that go beyond what we humans program it to do.

  There would, however, be still another hurdle before we could say that artificial intelligence has triumphed over augmented intelligence. We can call it the Licklider Test. It would go beyond asking whether a machine could replicate all the components of human intelligence to ask whether the machine accomplishes these tasks better when whirring away completely on its own or when working in conjunction with humans. In other words, is it possible that humans and machines working in partnership will be indefinitely more powerful than an artificial intelligence machine working alone?

  If so, then “man-computer symbiosis,” as Licklider called it, will remain triumphant. Artificial intelligence need not be the holy grail of computing. The goal instead could be to find ways to optimize the collaboration between human and machine capabilities—to forge a partnership in which we let the machines do what they do best, and they let us do what we do best.

  SOME LESSONS FROM THE JOURNEY

  Like all historical narratives, the story of the innovations that created the digital age has many strands. So what lessons, in addition to the power of human-machine symbiosis just discussed, might be drawn from the tale?

  First and foremost is that creativity is a collaborative process. Innovation comes from teams more often than from the lightbulb moments of lone geniuses. This was true of every era of creative ferment. The Scientific Revolution, the Enlightenment, and the Industrial Revolution all had their institutions for collaborative work and their networks for sharing ideas. But to an even greater extent, this has been true of the digital age. As brilliant as the many inventors of the Internet and computer were, they achieved most of their advances through teamwork. Like Robert Noyce, some of the best of them tended to resemble Congregational ministers rather than lonely prophets, madrigal singers rather than soloists.

  Twitter, for example, was invented by a team of people who were collaborative but also quite contentious. When one of the cofounders, Jack Dorsey, started taking a lot of the credit in media interviews, another cofounder, Evan Williams, a serial entrepreneur who had previously created Blogger, told him to chill out, according to Nick Bilton of the New York Times. “But I invented Twitter,” Dorsey said.

  “No, you didn’t invent Twitter,” Williams replied. “I didn’t invent Twitter either. Neither did Biz [Stone, another cofounder]. People don’t invent things on the Internet. They simply expand on an idea that already exists.”30

  Therein lies another lesson: the digital age may seem revolutionary, but it was based on expanding the ideas handed down from previous generations. The collaboration was not merely among contemporaries, but also between generations. The best innovators were those who understood the trajectory of technological change and took the baton from innovators who preceded them. Steve Jobs built on the work of Alan Kay, who built on Doug Engelbart, who built on J. C. R. Licklider and Vannevar Bush. When Howard Aiken was devising his digital computer at Harvard, he was inspired by a fragment of Charles Babbage’s Difference Engine that he found, and he made his crew members read Ada Lovelace’s “Notes.”

  The most productive teams were those that brought together people with a wide array of specialties. Bell Labs was a classic example. In its long corridors in suburban New Jersey, there were theoretical physicists, experimentalists, materials scientists, engineers, a few businessmen, and even some telephone-pole climbers with grease under their fingernails. Walter Brattain, an experimentalist, and John Bardeen, a theorist, shared a workspace, like a librettist and a composer sharing a piano bench, so they could perform a call-and-response all day about how to make what became the first transistor.

  Even though the Internet provided a tool for virtual and distant collaborations, another lesson of digital-age innovation is that, now as in the past, physical proximity is beneficial. There is something special, as evidenced at Bell Labs, about meetings in the flesh, which cannot be replicated digitally. The founders of Intel created a sprawling, team-oriented open workspace where employees from Noyce on down all rubbed against one another. It was a model that became common in Silicon Valley. Predictions that digital tools would allow workers to telecommute were never fully realized. One of Marissa Mayer’s first acts as CEO of Yahoo! was to discourage the practice of working from home, rightly pointing out that “people are more collaborative and innovative when they’re together.” When Steve Jobs designed a new headquarters for Pixar, he obsessed over ways to structure the atrium, and even where to locate the bathrooms, so that serendipitous personal encounters would occur. Among his last creations was the plan for Apple’s new signature headquarters, a circle with rings of open workspaces surrounding a central courtyard.

  Throughout history the best leadership has come from teams that combined people with complementary styles. That was the case with the founding of the United States. The leaders included an icon of rectitude, George Washington; brilliant thinkers such as Thomas Jefferson and James Madison; men of vision and passion, including Samuel and John Adams; and a sage conciliator, Benjamin Franklin. Likewise, the founders of the ARPANET included visionaries such as Licklider, crisp decision-making engineers such as Larry Roberts, politically adroit people handlers such as Bob Taylor, and collaborative oarsmen such as Steve Crocker and Vint Cerf.

  Another key to fielding a great team is pairing visionaries, who can generate ideas, with operating managers, who can execute them. Visions without execution are hallucinations.31 Robert Noyce and Gordon Moore were both visionaries, which is why it was important that their first hire at Intel was Andy Grove, who knew how to impose crisp management procedures, force people to focus, and get things done.

  Visionaries who lack such teams around them often go down in history as merely footnotes. There is a lingering historical debate over who most deserves to be dubbed the inventor of the electronic digital computer: John Atanasoff, a professor who worked almost alone at Iowa State, or the team led by John Mauchly and Presper Eckert at the University of Pennsylvania. In this book I give more credit to members of the latter group, partly because they were able to get their machine, ENIAC, up and running and solving problems. They did so with the help of dozens of engineers and mechanics plus a cadre of women who handled programming duties. Atanasoff’s machine, by contrast, never fully worked, partly because there was no team to help him figure out how to make his punch-card burner operate. It ended up being consigned to a basement, then discarded when no one could remember exactly what it was.

  Like the computer, the ARPANET and Internet were designed by collaborative teams. Decisions were made through a process, begun by a deferential graduate student, of sending around proposals as “Requests for Comments.” That led to a weblike packet-switched network, with no central authority or hubs, in which power was fully distributed to every one of the nodes, each having the ability to create and share content and route around attempts to impose controls. A collaborative process thus produced a system designed to facilitate collaboration. The Internet was imprinted with the DNA of its creators.

  The Internet facilitated collaboration not only within teams but also among crowds of people who didn’t know each other. This is the advance that is closest to being revolutionary. Networks for collaboration have existed ever since the Persians and Assyrians invented postal systems. But never before has it been easy to solicit and collate contributions from thousands or millions of unknown collaborators. This led to innovative systems—Google page ranks, Wikipedia entries, the Firefox browser, the GNU/Linux software—based on the collective wisdom of crowds.

  * * *

  There were three ways that teams were put together in the digital age. The first was through government funding and coordination. That’s how the groups that built the original computers (Colossus, ENIAC) and networks (ARPANET) were organized. This reflected the consensus, which was stronger back in the 1950s under President Eisenhower, that the government should undertake projects, such as the space program and interstate highway system, that benefited the common good. It often did so in collaboration with universities and private contractors as part of a government-academic-industrial triangle that Vannevar Bush and others fostered. Talented federal bureaucrats (not always an oxymoron), such as Licklider, Taylor, and Roberts, oversaw the programs and allocated public funds.

  Private enterprise was another way that collaborative teams were formed. This happened at the research centers of big companies, such as Bell Labs and Xerox PARC, and at entrepreneurial new companies, such as Texas Instruments and Intel, Atari and Google, Microsoft and Apple. A key driver was profits, both as a reward for the players and as a way to attract investors. That required a proprietary attitude to innovation that led to patents and intellectual property protections. Digital theorists and hackers often disparaged this approach, but a private enterprise system that financially rewarded invention was a component of a system that led to breathtaking innovation in transistors, chips, computers, phones, devices, and Web services.

  Throughout history, there has been a third way, in addition to government and private enterprises, that collaborative creativity has been organized: through peers freely sharing ideas and making contributions as part of a voluntary common endeavor. Many of the advances that created the Internet and its services occurred in this fashion, which the Harvard scholar Yochai Benkler has labeled “commons-based peer production.”32 The Internet allowed this form of collaboration to be practiced on a much larger scale than before. The building of Wikipedia and the Web were good examples, along with the creation of free and open-source software such as Linux and GNU, OpenOffice and Firefox. As the technology journalist Steven Johnson has noted, “their open architecture allows others to build more easily on top of existing ideas, just as Berners-Lee built the Web on top of the Internet.”33 This commons-based production by peer networks was driven not by financial incentives but by other forms of reward and satisfaction.

  The values of commons-based sharing and of private enterprise often conflict, most notably over the extent to which innovations should be patent-protected. The commons crowd had its roots in the hacker ethic that emanated from the MIT Tech Model Railroad Club and the Homebrew Computer Club. Steve Wozniak was an exemplar. He went to Homebrew meetings to show off the computer circuit he built, and he handed out freely the schematics so that others could use and improve it. But his neighborhood pal Steve Jobs, who began accompanying him to the meetings, convinced him that they should quit sharing the invention and instead build and sell it. Thus Apple was born, and for the subsequent forty years it has been at the forefront of aggressively patenting and profiting from its innovations. The instincts of both Steves were useful in creating the digital age. Innovation is most vibrant in the realms where open-source systems compete with proprietary ones.

  Sometimes people advocate one of these modes of production over the others based on ideological sentiments. They prefer a greater government role, or exalt private enterprise, or romanticize peer sharing. In the 2012 election, President Barack Obama stirred up controversy by saying to people who owned businesses, “You didn’t build that.” His critics saw it as a denigration of the role of private enterprise. Obama’s point was that any business benefits from government and peer-based community support: “If you were successful, somebody along the line gave you some help. There was a great teacher somewhere in your life. Somebody helped to create this unbelievable American system that we have that allowed you to thrive. Somebody invested in roads and bridges.” It was not the most elegant way for him to dispel the fantasy that he was a closet socialist, but it did point to a lesson of modern economics that applies to digital-age innovation: that a combination of all of these ways of organizing production—governmental, market, and peer sharing—is stronger than favoring any one of them.

 
