The Big Nine

by Amy Webb


  Leibniz’s theoretical step reckoner laid the groundwork for further theories, among them the notion that if logical thought could be reduced to symbols and thereby analyzed as a computational system, and if geometric problems could be solved using symbols and numbers, then everything could be reduced to bits—including human behavior. It was a significant split from the earlier philosophers: future machines could replicate human thinking processes without infringing on divine providence. Thinking did not necessarily require perception, senses, or a soul. Leibniz imagined a computer capable of solving general problems, even nonmathematical ones. And he hypothesized that language could be reduced to the atomic concepts of math and science as part of a universal language translator.11

  Do Mind and Machine Simply Follow an Algorithm?

  If Leibniz was correct—that humans were machines with souls and would someday invent soulless machines capable of untold, sophisticated thought—then there could be a binary class of machines on earth: us and them. But the debate had only started.

  In 1738, Jacques de Vaucanson, an artist and inventor, constructed a series of automata for the French Academy of Science that included a complex and lifelike duck. It not only imitated the motions of a live duck, flapping its wings and eating grain, but it could also mimic digestion. This offered the philosophers food for thought: If it looked like a duck, and quacked like a duck, was it really a duck? If we perceive the duck to have a soul of a different kind, would that be enough to prove that the duck was aware of itself and all that implied?

  Scottish philosopher David Hume rejected the idea that acknowledgement of existence was itself proof of awareness. Unlike Descartes, Hume was an empiricist. He developed a new scientific framework based on observable fact and logical argument. While de Vaucanson was showing off his digesting duck—and well before anyone was talking about artificial intelligence—Hume wrote in A Treatise of Human Nature, “Reason is, and ought only to be, the slave of the passions.” In this case, Hume intended “passions” to mean “nonrational motivations”: incentives, not abstract logic, drive our behavior. If impressions are simply our perceptions of things we can see, touch, feel, taste, and smell, and ideas are perceptions of things we don’t come into direct contact with, then our existence and understanding of the world around us, Hume believed, were based on a construct of human perception.

  With advanced work on automata, which were becoming more and more realistic, and more serious thought given to computers as thinking machines, French physician and philosopher Julien Offray de La Mettrie undertook a radical—and scandalous—study of humans, animals, and automata. In a 1747 paper he first published anonymously, La Mettrie argued humans are remarkably similar to animals, and an ape could learn a human language if it “were properly trained.” La Mettrie also concluded that humans and animals are merely machines, driven by instinct and experience. “The human body is a machine which winds its own springs;… the soul is but a principle of motion or a material and sensible part of the brain.”12

  The idea that humans are simply matter-driven machines—cogs and wheels performing a set of functions—implied that we were not special or unique. It also implied that perhaps we were programmable. If this was true, and if we had until this point been capable of creating lifelike ducks and tiny monks, then it should follow that someday, humans could create replicas of themselves—and build a variety of intelligent, thinking machines.

  Could a Thinking Machine Be Built?

  By the 1830s, mathematicians, engineers, and scientists had started tinkering, hoping to build machines capable of doing the same calculations as human “computers.” English scientist Charles Babbage designed a machine called the “Difference Engine” and later, working with mathematician Ada Lovelace, postulated a more advanced “Analytical Engine,” which used a series of predetermined steps to solve mathematical problems. Babbage hadn’t conceived that the machine could do anything beyond calculating numbers. It was Lovelace who, in the footnotes of a scientific paper she was translating, went off on a brilliant tangent speculating that a more powerful version of the Engine could be used in other ways.13 If the machine could manipulate symbols, which themselves could be assigned to different things (such as musical notes), then the Engine could be used to “think” outside of mathematics. While she didn’t believe that a computer would ever be able to create original thought, she did envision a complex system that could follow instructions and thus mimic a lot of what everyday people did. It seemed unremarkable to some at the time, but Ada had written the first complete computer program for a future, powerful machine—decades before the light bulb was invented.

  A hundred miles north of where Lovelace and Babbage were working at Cambridge University, a young self-trained mathematician named George Boole was walking across a field in Doncaster when he had a sudden burst of inspiration and decided to dedicate his life to explaining the logic of human thought.14 That walk produced what we know today as Boolean algebra, a way of simplifying logical expressions (e.g., “and,” “or,” and “not”) by using symbols and numbers. So, for example, computing “true and true” would result in “true,” which would correspond to physical switches and gates in a computer. It would take two decades for Boole to formalize his ideas. And it would take another 100 years for someone to realize that Boolean logic and probability could help computers evolve from automating basic math to more complex thinking machines. There wasn’t a way to build a thinking machine—the processes, materials, and power weren’t yet available—and so the theory couldn’t be tested.
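
  As a rough illustration of the idea (the function names below are mine, not Boole’s notation), his three basic operations can be written as tiny Python functions whose true/false values are exactly what later mapped onto open and closed switches:

```python
# Boole's three basic operations as tiny Python functions.
# True and False play the role of Boole's 1 and 0 -- and, a century
# later, of a switch that is closed or open.

def boole_and(a: bool, b: bool) -> bool:
    return a and b   # "true and true" yields true; anything else is false

def boole_or(a: bool, b: bool) -> bool:
    return a or b    # true when at least one input is true

def boole_not(a: bool) -> bool:
    return not a     # flips the value

# A compound statement such as "A and (not B)" is just a composition:
print(boole_and(True, boole_not(False)))   # -> True
```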

  The leap from theoretical thinking machines to computers that began to mimic human thought happened in the 1930s with the publication of two seminal papers: Claude Shannon’s “A Symbolic Analysis of Relay and Switching Circuits” and Alan Turing’s “On Computable Numbers, with an Application to the Entscheidungsproblem.” As an electrical engineering student at MIT, Shannon took an elective course in philosophy—an unusual diversion. Boole’s An Investigation of the Laws of Thought became the primary reference for Shannon’s thesis. His advisor, Vannevar Bush, encouraged him to map Boolean logic to physical circuits. Bush had built an advanced version of Lovelace and Babbage’s Analytical Engine—his prototype was called the “Differential Analyzer”—and its design was somewhat ad hoc. At that time, there was no systematic theory dictating electrical circuit design. Shannon’s breakthrough was mapping electrical circuits to Boole’s symbolic logic and then explaining how Boolean logic could be used to create a working circuit for adding 1s and 0s. Shannon had figured out that computers had two layers: physical (the container) and logical (the code).
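
  To make that last point concrete, here is a minimal sketch of a one-bit “half adder”; the example is mine, not drawn from Shannon’s thesis, but it shows how two Boolean operations are enough to add a pair of binary digits, with XOR producing the sum and AND producing the carry:

```python
# A one-bit half adder built from Boolean operations: the sum bit is the
# XOR of the inputs, the carry bit is the AND -- gate-level arithmetic of
# the kind Shannon showed could be wired up from relays.

def half_adder(a: int, b: int) -> tuple[int, int]:
    sum_bit = a ^ b   # XOR: 1 when exactly one input is 1
    carry = a & b     # AND: 1 only when both inputs are 1
    return sum_bit, carry

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```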

  While Shannon was working to fuse Boolean logic onto physical circuits, Turing was probing Leibniz’s old dream of a universal language that could represent all mathematical and scientific knowledge. Turing aimed to settle what was called the Entscheidungsproblem, or the “decision problem.” Roughly, the problem asks: could an algorithm exist that determines whether any arbitrary mathematical statement is true or false? Turing proved that the answer is no: no such algorithm can exist. But as a byproduct of that proof, he found a mathematical model of an all-purpose computing machine.15
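
  The flavor of Turing’s argument can be sketched in modern terms through the closely related question of whether programs halt. Everything below is an illustration rather than his original construction; the names `decides` and `contrarian` are invented, and `decides` is a hypothetical procedure that we only pretend could exist:

```python
# Illustration of the self-reference at the heart of Turing's proof.
# `decides` is hypothetical: we pretend it always answers correctly
# whether running `program` on `source` eventually halts.

def decides(program, source) -> bool:
    raise NotImplementedError("Hypothetical halting decider; cannot be built.")

def contrarian(source):
    # Ask the supposed decider about the program described by `source`,
    # run on its own source code...
    if decides(source, source):
        while True:       # ...and then do the opposite: loop forever
            pass
    return "halted"

# Feeding contrarian its own source forces a contradiction: if the decider
# says it halts, it loops; if the decider says it loops, it halts. So no
# general decider exists -- and the machinery Turing built to show this
# became the model of an all-purpose computer.
```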

  And that changed everything. Turing figured out that a program and the data it used could be stored inside a computer—again, this was a radical proposition in the 1930s. Until that point, everyone agreed that the machine, the program, and the data were each independent. For the first time, Turing’s universal machine explained why all three were intertwined. From a mechanical standpoint, the logic that operated circuits and switches could also be encoded into the program and data. Think about the significance of these assertions. The container, the program, and the data were part of a singular entity—not unlike humans. We too are containers (our bodies), programs (autonomous cellular functions), and data (our DNA combined with indirect and direct sensory information).

  Meanwhile, that long tradition of automata, which began 400 years earlier with a tiny walking, praying monk, at last crossed paths with Turing and Shannon’s work. The American manufacturing company Westinghouse built a relay-based robot named Elektro the Moto-Man for the 1939 World’s Fair. It was a crude, gold-colored giant with wheels beneath its feet. It had 48 electrical relays that worked on a telephone relay system. Elektro responded, via prerecorded messages on a record player, to voice commands spoken through a telephone handset. It was an anthropomorphized computer capable of making rudimentary decisions—like what to say—without direct, real-time human involvement.

  Judging by the newspaper headlines, science fiction short stories, and newsreels from that time, it’s clear that people were caught off guard, shocked, and concerned about all of these developments. To them it felt as though “thinking machines” had simply arrived, fully formed, overnight. Science fiction writer Isaac Asimov published “Liar!,” a prescient short story in the May 1941 issue of Astounding Science Fiction. It was a reaction to the research he was seeing on the fringes, and in it he made an argument for his Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

  Later, Asimov added what he called the “Zeroth Law” to govern all others: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

  But Would a Thinking Machine Actually Think?

  In 1943, University of Chicago psychiatry researchers Warren McCulloch and Walter Pitts published their important paper “A Logical Calculus of the Ideas Immanent in Nervous Activity,” which described a new kind of system that modeled biological neurons as a simple neural network architecture for intelligence. If containers, programs, and data were intertwined, as Turing had argued, and if humans were similarly elegantly designed containers capable of processing data, then it followed that a thinking machine might be possible if it were modeled on the part of humans responsible for thinking—our brains. They posited a modern computational theory of mind and brain: a “neural network.” Rather than focusing on the machine as hardware and the program as software, they imagined a new kind of symbiotic system capable of ingesting vast amounts of data, just like we humans do. Computers weren’t yet powerful enough to test this theory—but the paper did inspire others to start working toward a new kind of intelligent computer system.
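
  A rough sketch of the kind of unit McCulloch and Pitts described, using invented names rather than their 1943 notation: a “neuron” that sums weighted binary inputs and fires only when a threshold is reached, which is already enough to behave like the logic gates discussed earlier.

```python
# A McCulloch-Pitts-style threshold unit: binary inputs, fixed weights,
# and an output of 1 ("fire") only when the weighted sum reaches a threshold.

def neuron(inputs: list[int], weights: list[int], threshold: int) -> int:
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds, single units act like logic gates:
def and_gate(a: int, b: int) -> int:
    return neuron([a, b], [1, 1], threshold=2)

def or_gate(a: int, b: int) -> int:
    return neuron([a, b], [1, 1], threshold=1)

print(and_gate(1, 1), and_gate(1, 0))  # 1 0
print(or_gate(0, 1), or_gate(0, 0))    # 1 0
```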

  The link between intelligent computer systems and autonomous decision-making became clearer once John von Neumann, the Hungarian-American polymath with specializations in computer science, physics, and math, published a massive treatise of applied math. Cowritten with Princeton economist Oskar Morgenstern in 1944, the 641-page book explained, in painstaking detail, how the science of game theory revealed the foundation of all economic decisions. It is this work that led to von Neumann’s collaborations with the US Army, which had been working on a new kind of electronic computer called the Electronic Numerical Integrator and Computer, or ENIAC for short. Originally, the instructions powering ENIAC were hardwired into the system, which meant that with each new program, the whole system would have to be rewired. Inspired by Turing, McCulloch, and Pitts, von Neumann developed a way of storing programs on the computer itself. This marked the transition from the first era of computing (tabulation) to a new era of programmable systems.

  Turing himself was now working on a concept for a neural network, made up of computers with stored-program machine architecture. In 1949, The London Times quoted Turing: “I do not see why it (the machine) should not enter any one of the fields normally covered by the human intellect, and eventually compete on equal terms. I do not think you can even draw the line about sonnets, though the comparison is perhaps a little bit unfair because a sonnet written by a machine will be better appreciated by another machine.” A year later, in a paper published in the philosophy journal Mind, Turing addressed the questions raised by Hobbes, Descartes, Hume, and Leibniz. In it, he proposed a thesis and a test: if, someday, a computer was able to answer questions in a manner indistinguishable from humans, then it must be “thinking.” You’ve likely heard of the paper by another name: the Turing test.

  The paper began with a now-famous question, one asked and answered by so many philosophers, theologians, mathematicians, and scientists before him: “Can machines think?” But Turing, sensitive to the centuries-old debate about mind and machine, dismissed the question as too broad to ever yield meaningful discussion. “Machine” and “think” were ambiguous words with too much room for subjective interpretation. (After all, 400 years’ worth of papers and books had already been written about the meaning of those words.)

  The game was built on deception and “won” once a computer successfully passed as a human. The test goes like this: there is a person, a machine, and, in a separate room, an interrogator. The object of the game is for the interrogator to figure out which answers come from the person and which come from the machine. At the beginning of the game, the interrogator is given labels, X and Y, but doesn’t know which one refers to the computer and is only allowed to ask questions like “Will X please tell me whether X plays chess?” At the end of the game, the interrogator has to figure out who was X and who was Y. The job of the other person is to help the interrogator identify the machine, and the job of the machine is to trick the interrogator into believing that it is actually the other person. About the game, Turing wrote: “I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.”16
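
  The structure of the test can also be written down as a toy simulation. Everything below is invented for illustration (Turing’s paper contains no code): a “person” and a “machine” answer identically, so the interrogator is reduced to guessing, and the machine passes roughly half the time.

```python
# A toy framing of the imitation game. The responder functions and their
# canned answers are hypothetical stand-ins, not anything from Turing.
import random

def person(question: str) -> str:
    return "I am the human."          # tries to help the interrogator

def machine(question: str) -> str:
    return "I am the human."          # tries to imitate the person

def imitation_game(questions) -> bool:
    """Return True if the interrogator correctly identifies the machine."""
    # Hide the two players behind the anonymous labels X and Y.
    players = {"X": person, "Y": machine}
    if random.random() < 0.5:
        players = {"X": machine, "Y": person}

    # The interrogator sees only labeled answers, never the players.
    answers = {label: [play(q) for q in questions]
               for label, play in players.items()}
    assert answers["X"] == answers["Y"]   # in this toy, indistinguishable

    # With indistinguishable answers, guessing is all that's left.
    guess = random.choice(["X", "Y"])
    return players[guess] is machine

trials = 1000
caught = sum(imitation_game(["Will X please tell me whether X plays chess?"])
             for _ in range(trials))
print(f"Machine identified in {100 * caught / trials:.0f}% of games")  # ~50%
```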

  But Turing was a scientist, and he knew that his theory could not be proven, at least not within his lifetime. As it happened, the problem wasn’t with Turing’s lack of empirical evidence proving that machines would someday think, and it wasn’t even in the timing—Turing said that it would probably take until the end of the 20th century to ever be able to run his test. “We may hope that machines will eventually compete with men in all purely intellectual fields,” Turing wrote. The real problem was taking the leap necessary to believe that machines might someday see, reason, and remember—and that humans might get in the way of that progress. This would require his fellow researchers to observe cognition without spiritualism and to believe in the plausibility of intelligent machines that, unlike people, would make decisions in a nonconscious way.

  The Summer and Winter of AI

  In 1955, professors Marvin Minsky (mathematics and neurology) and John McCarthy (mathematics), along with Claude Shannon (a mathematician and cryptographer at Bell Labs) and Nathaniel Rochester (a computer scientist at IBM), proposed a two-month workshop to explore Turing’s work and the promise of machine learning. Their theory: if it was possible to describe every feature of human intelligence, then a machine could be taught to simulate it.17 But it was going to take a broad, diverse group of experts in many different fields. They believed that a significant advance could be made by gathering an interdisciplinary group of researchers and working intensively, without any breaks, over the summer.

  Curating the group was critically important. This would become the network of rarified engineers, social scientists, computer scientists, psychologists, mathematicians, physicists, and cognitive specialists who would ask and answer fundamental questions about what it means to “think,” how our “minds” work, and how to teach machines to learn the same way we humans do. The intention was that this diverse network would continue to collaborate on research and on building this new field into the future. Because it would be a new kind of interdisciplinary approach to building machines that think, they needed a new name to describe their activities. They landed on something ambiguous but elegant: artificial intelligence.

  McCarthy created a preliminary list of 47 experts he felt needed to be there to build the network of people and set the foundation for all of the research and prototyping that would follow. It was a tense process, determining all of the key voices who absolutely had to be in the room as AI was being conceptualized and built in earnest. Minsky, especially, was concerned that the meeting would miss two critical voices—Turing, who’d died two years earlier, and von Neumann, who was in the final stages of terminal cancer.18

  Yet for all their efforts in curating a diverse group with the best possible mix of complementary skills, they had a glaring blind spot. Everyone on that list was white, even though there were many brilliant, creative people of color working throughout the very fields McCarthy and Minsky wanted to bring together. Those who made the list hailed from the big tech giants of the time (IBM, Bell Labs) or from a small handful of universities. Even though there were plenty of brilliant women already making significant contributions in engineering, computer science, mathematics, and physics, they were excluded.19 The invitees were all men, save for Marvin Minsky’s wife, Gloria. Without awareness of their own biases, these scientists—hoping to understand how the human mind works, how we think, and how machines might learn from all of humanity—had drastically limited their pool of data to those who looked and sounded just like them.

 
