The Big Nine

by Amy Webb


  Humanity is facing an existential crisis in a very literal sense, because no one is addressing a simple question that has been fundamental to AI since its very inception: What happens to society when we transfer power to a system built by a small group of people that is designed to make decisions for everyone? What happens when those decisions are biased toward market forces or an ambitious political party? The answer is reflected in the future opportunities we have, the ways in which we are denied access, the social conventions within our societies, the rules by which our economies operate, and even the way we relate to other people.

  This is not a book about the usual AI debates. It is both a warning and a blueprint for a better future. It questions our aversion to long-term planning in the US and highlights the lack of AI preparedness within our businesses, schools, and government. It paints a stark picture of China’s interconnected geopolitical, economic, and diplomatic strategies as it marches on toward its grand vision for a new world order. And it asks for heroic leadership under extremely challenging circumstances. Because, as you’re about to find out, our futures need a hero.

  What follows is a call to action written in three parts. In the first, you’ll learn what AI is and the role the Big Nine have played in developing it. We will also take a deep dive into the unique situations faced by the Big Nine’s American members (Google, Microsoft, Amazon, Facebook, IBM, and Apple) and by Baidu, Alibaba, and Tencent in China. In Part II, you’ll see detailed, plausible futures over the next 50 years as AI advances. The three scenarios you’ll read range from optimistic to pragmatic to catastrophic, and they will reveal both opportunity and risk as we advance from artificial narrow intelligence to artificial general intelligence to artificial superintelligence. These scenarios are intense—they are the result of data-driven models, and they will give you a visceral glimpse of how AI might evolve and how our lives will change as a result. In Part III, I will offer tactical and strategic solutions to all the problems identified in the scenarios, along with a concrete plan to reboot the present. Part III is intended to jolt us into action, so there are specific recommendations for our governments, the leaders of the Big Nine, and even for you.

  Every person alive today can play a critical role in the future of artificial intelligence. The decisions we make about AI now—even the seemingly small ones—will forever change the course of human history. As the machines awaken, we may realize that in spite of our hopes and altruistic ambitions, our AI systems turned out to be catastrophically bad for humanity.

  But they don’t have to be.

  The Big Nine aren’t the villains in this story. In fact, they are our best hope for the future.

  Turn the page. We can’t sit around waiting for whatever might come next. AI is already here.

  PART I

  Ghosts in the Machine

  CHAPTER ONE

  MIND AND MACHINE: A VERY BRIEF HISTORY OF AI

  The roots of modern artificial intelligence extend back hundreds of years, long before the Big Nine were building AI agents with names like Siri, Alexa, and their Chinese counterpart Tiān Māo. Throughout that time, AI has never had the kind of single, settled definition that other technologies enjoy. Describing AI concretely isn’t easy, because AI represents many things, and the field continues to grow. What passed as AI in the 1950s—a calculator capable of long division—hardly seems like an advanced piece of technology today. This is what’s known as the “odd paradox”: as soon as new techniques are invented and move into the mainstream, they become invisible to us. We no longer think of that technology as AI.

  In its most basic form, artificial intelligence is a system that makes autonomous decisions. The tasks AI performs duplicate or mimic acts of human intelligence, like recognizing sounds and objects, solving problems, understanding language, and using strategy to meet goals. Some AI systems are enormous and perform millions of computations quickly, while others are narrow and intended for a single task, like catching foul language in emails.
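
  Even the single-task example can be made concrete. Below is a minimal sketch, in Python, of a narrow filter that flags emails containing blocked words. The word list and function name are hypothetical stand-ins; a production filter would be trained on large datasets rather than built from a hand-written list:

    # A minimal sketch of a narrow, single-task system: flagging foul language.
    # The blocked-word list is a hypothetical placeholder, not any real product's data.
    BLOCKED_WORDS = {"darn", "heck"}

    def contains_foul_language(email_text: str) -> bool:
        """Return True if any blocked word appears in the email text."""
        words = email_text.lower().split()
        return any(word.strip(".,!?") in BLOCKED_WORDS for word in words)

    print(contains_foul_language("Well, darn it all!"))  # prints: True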

  We’ve always circled back to the same set of questions: Can machines think? What would it mean for a machine to think? What does it mean for us to think? What is thought? How could we know—definitively, and without question—that we are actually thinking original thoughts? These questions have been with us for centuries, and they are central to both AI’s history and future.

  The problem with investigating how both machines and humans think is that the word “think” is inextricably connected to “mind.” The Merriam-Webster Dictionary defines “think” as “to form or have in the mind,” while the Oxford Dictionary explains that it means to “use one’s mind actively to form connected ideas.” If we look up “mind,” both Merriam-Webster and Oxford define it within the context of “consciousness.” But what is consciousness? According to both, it’s the quality or state of being aware and responsive. Various groups—psychologists, neuroscientists, philosophers, theologians, ethicists, and computer scientists—each approach the concept of thinking through a different lens.

  When you use Alexa to find a table at your favorite restaurant, you and she are both aware and responsive as you discuss eating, even though Alexa has never felt the texture of a crunchy apple against her teeth, the effervescent prickles of sparkling water against her tongue, or the gooey pull of peanut butter against the roof of her mouth. Ask Alexa to describe the qualities of these foods, and she’ll offer you details that mirror your own experiences. Alexa doesn’t have a mouth—so how could she perceive food the way that you do?

  You are a biologically unique person whose salivary glands and taste buds aren’t arranged in exactly the same order as mine. Yet we’ve both learned what an apple is and the general characteristics of how an apple tastes, what its texture is, and how it smells. During our lifetimes, we’ve learned to recognize what an apple is through reinforcement learning—someone taught us what an apple looks like, what its purpose is, and what differentiates it from other fruit. Then, over time and without conscious awareness, our autonomous biological pattern recognition systems got really good at determining that something was an apple, even if we only had a few of the necessary data points. If you see a black-and-white, two-dimensional outline of an apple, you know what it is—even though you’re missing the taste, smell, crunch, and all the other data that signals to your brain this is an apple. The way you and Alexa both learned about apples is more similar than you might realize.
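
  To make the analogy concrete, here is a minimal sketch, in Python, of pattern recognition from just a few data points. The features and example fruits are invented for illustration (real systems learn from millions of examples), but the principle is the same: match new input against the closest pattern already learned.

    # A toy nearest-neighbor classifier: label a fruit by its closest known example.
    # Features are (roundness, redness, crunchiness), scored 0-10; all values are invented.
    KNOWN_FRUIT = [
        ((9, 8, 7), "apple"),
        ((9, 2, 7), "apple"),       # a green apple: round and crunchy, but not red
        ((3, 9, 2), "strawberry"),
        ((8, 1, 1), "peach"),
    ]

    def classify(features):
        """Return the label of the nearest known example (squared Euclidean distance)."""
        def distance(example):
            return sum((a - b) ** 2 for a, b in zip(features, example[0]))
        return min(KNOWN_FRUIT, key=distance)[1]

    # Even with incomplete data (round and crunchy, color ambiguous), the answer is "apple."
    print(classify((9, 5, 8)))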

  Alexa is competent, but is she intelligent? Must her machine perception meet all the qualities of human perception for us to accept her way of “thinking” as an equal mirror to our own? Educational psychologist Dr. Benjamin Bloom spent the bulk of his academic career researching and classifying the states of thinking. In 1956, he published what became known as Bloom’s Taxonomy, which outlined learning objectives and levels of achievement observed in education. The foundational layer is remembering facts and basic concepts, followed in order by understanding ideas; applying knowledge in new situations; analyzing information by experimenting and making connections; evaluating, defending, and judging information; and finally, creating original work. As very young children, we are focused first on remembering and understanding. For example, we first need to learn that a bottle holds milk before we understand that the bottle has a front and a back, even if we can’t see it.

  This hierarchy is present in the way that computers learn, too. In 2017, an AI system called Amper composed and produced original music for an album called I AM AI. The chord structures, instrumentation, and percussion were developed by Amper, which used initial parameters like genre, mood, and length to generate a full-length song in just a few minutes. Taryn Southern, a human artist, collaborated with Amper to create the album—and the result included a moody, soulful ballad called “Break Free” that counted more than 1.6 million YouTube views and was a hit on traditional radio. Before Amper could create that song, it had to first learn the qualitative elements of a big ballad, along with quantitative data, like how to calculate the value of notes and beats and how to recognize thousands of patterns in music (e.g., chord progressions, harmonic sequences, and rhythmic accents).
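
  Amper’s internals are proprietary, but the core idea, learning which musical events tend to follow which, can be sketched as a toy Markov chain in Python. The transition table below is invented for illustration and is not Amper’s actual model:

    import random

    # A hypothetical transition table: which chords tend to follow which in a pop ballad.
    CHORD_TRANSITIONS = {
        "C":  ["G", "Am", "F"],
        "G":  ["Am", "C"],
        "Am": ["F", "C"],
        "F":  ["C", "G"],
    }

    def generate_progression(start="C", length=8):
        """Walk the transition table, choosing each next chord at random."""
        chords = [start]
        for _ in range(length - 1):
            chords.append(random.choice(CHORD_TRANSITIONS[chords[-1]]))
        return chords

    print(" ".join(generate_progression()))  # output varies, e.g., "C Am F G Am C G C"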

  Creativity, the kind demonstrated by Amper, is the pinnacle of Bloom’s Taxonomy, but was it merely a learned mechanical process? Was it an example of humanistic creativity? Or creativity of an entirely different kind? Did Amper think about music the same way that a human composer might? It could be argued that Amper’s “brain”—a neural network using algorithms and data inside a container—is perhaps not so different from Beethoven’s brain, made up of organic neurons using data and recognizing patterns inside the container that is his head. Was Amper’s creative process truly different from Beethoven’s when he composed his Symphony no. 5, which famously begins da-da-da-DUM, da-da-da-DUM in C minor? Beethoven didn’t invent the entire symphony—it wasn’t completely original. Those first four notes are followed by a harmonic sequence, parts of scales, arpeggios, and other common raw ingredients that make up any composition. Listen closely to the scherzo, before the finale, and you’ll hear obvious patterns borrowed from Mozart’s 40th Symphony, written 20 years earlier, in 1788. Mozart was influenced by his rival Antonio Salieri and friend Franz Joseph Haydn, who were themselves influenced by the work of earlier composers like Johann Sebastian Bach, Antonio Vivaldi, and Henry Purcell, who were writing music from the mid-17th to the mid-18th centuries. You can hear threads of even earlier composers from the 1400s to the 1600s, like Jacques Arcadelt, Jean Mouton, and Johannes Ockeghem, in their music. They were influenced by the earliest medieval composers—and we could continue the pattern of influence all the way back to the very first written composition, called the “Seikilos epitaph,” which was engraved on a marble column to mark a Turkish gravesite in the first century. And we could keep going even further back in time, to when the first primitive flutes made out of bone and ivory were likely carved 43,000 years ago. Even before then, researchers believe that our earliest ancestors probably sang before they spoke.1

  Our human wiring is the result of millions of years of evolution. The wiring of modern AI is similarly based on a long evolutionary trail extending back to ancient mathematicians, philosophers, and scientists. While it may seem as though humanity and machinery have been traveling along disparate paths, our evolution has always been intertwined. Homo sapiens learned from their environments, passed down traits to future generations, diversified, and replicated because of the invention of advanced technologies, like agriculture, hunting tools, and penicillin. It took 11,000 years for the world’s 6 million inhabitants during the Neolithic period to propagate into a population of 7 billion today.2 The ecosystem inhabited by AI systems—the inputs for learning, data, algorithms, processors, machines, and neural networks—is improving and iterating at exponential rates. It will take only decades for AI systems to propagate and fuse into every facet of daily life.

  Whether Alexa perceives an apple the same way we do, and whether Amper’s original music is truly “original,” are really questions about how we think about thinking. Present-day artificial intelligence is an amalgam of thousands of years of work by philosophers, mathematicians, scientists, roboticists, artists, and theologians. Their quest—and ours, in this chapter—is to understand the connection between thinking and containers for thought: What is the relationship between the human mind and the machines being built by the Big Nine in China and the United States?

  Is the Mind Inside a Machine?

  The foundational layer of AI can be traced back to ancient Greece and to the origins of philosophy, logic, and math. In many of Plato’s writings, Socrates says, “Know thyself,” and he meant that in order to improve and make the right decisions, you first had to know your own character. Among his other work, Aristotle invented syllogistic logic and our first formal system of deductive reasoning. Around the same time, the Greek mathematician Euclid devised a method for finding the greatest common divisor of two numbers and, as a result, created the first algorithm. Their work was the beginning of two important new ideas: that certain physical systems can operate according to a set of logical rules and that human thinking itself might be a symbolic system. This launched hundreds of years of inquiry among philosophers, theologians, and scientists. Was the body a complex machine? A unified whole made up of hundreds of other systems all working together, just like a grandfather clock? But what of the mind? Was it, too, a complex machine? Or something entirely different? There was no way to prove or disprove a divine algorithm or the connection between the mind and the physical realm.
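
  Euclid’s procedure still runs, essentially unchanged, on any modern computer. Here it is as a short Python function:

    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm: repeatedly replace the pair (a, b)
        with (b, a mod b) until the remainder is zero."""
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd(1071, 462))  # prints: 21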

  In 1560, a Spanish clockmaker named Juanelo Turriano created a tiny mechanical monk as an offering to the church, on behalf of King Philip II of Spain, whose son had miraculously recovered from a head injury.3 This monk had startling powers—it walked across the table, raised a crucifix and rosary, beat its chest in contrition, and moved its lips in prayer. It was the first automaton—a mechanical representation of a living thing. Although the word “robot” didn’t exist yet, the monk was a remarkable little invention, one that must have shocked and confused onlookers. It probably never occurred to anyone that a tiny automaton might someday not just mimic basic movements but stand in for humans on factory floors, in research labs, and in kitchen conversations.

  The tiny monk inspired the first generation of roboticists, whose aim was to create ever more complex machines that mirrored humans: automata were soon capable of writing, dancing, and painting. And this led a group of philosophers to start asking questions about what it means to be human. If it was possible to build automata that mimicked human behavior, then were humans divinely built automata? Or were we complex systems capable of reason and original thought?

  The English political philosopher Thomas Hobbes described human reasoning as computation in De Corpore, part of his great trilogy on natural sciences, psychology, and politics. In 1655, he wrote: “By reasoning, I understand computation. And to compute is to collect the sum of many things added together at the same time, or to know the remainder when one thing has been taken from another. To reason therefore is the same as to add or to subtract.”4 But how would we know whether we had free will during the process?

  While Hobbes was writing the first part of his trilogy, French philosopher René Descartes published Meditations on First Philosophy, asking whether we can know for certain that what we perceive is real. How could we verify our own consciousness? What proof would we need to conclude that our thoughts are our own and that the world around us is real? Descartes was a rationalist, believing that facts could be acquired through deduction. Famously, he put forward a thought experiment. He asked readers to imagine a demon purposely creating an illusion of their world. If the reader’s physical, sensory experience of swimming in a lake was nothing more than the demon’s construct, then she couldn’t really know that she was swimming. But in Descartes’s view, if the reader had self-awareness of her own existence, then she had met the criteria for knowledge. “I am, I exist, whenever it is uttered from me, or conceived by the mind, necessarily is true,” he wrote.5 In other words, the fact of our existence is beyond doubt, even if there is a deceptive demon in our midst. Or, I think, therefore I am.

  Later, in his Traité de l’homme (Treatise of Man) Descartes argued that humans could probably make an automaton—in this case, a small animal—that would be indistinguishable from the real thing. But even if we someday created a mechanized human, it would never pass as real, Descartes argued, because it would lack a mind and therefore a soul. Unlike humans, a machine could never meet the criteria for knowledge—it could never have self-awareness as we do. For Descartes, consciousness occurred internally—the soul was the ghost in the machines that are our bodies.6

  A few decades later, German mathematician and philosopher Gottfried Wilhelm von Leibniz examined the idea that the human soul was itself programmed, arguing that the mind was a container. God created the soul and body to naturally harmonize. The body may be a complex machine, but it is one with a set of divine instructions. Our hands move when we decide to move them, but we did not create or invent all of the mechanisms that allow for the movement. If we are aware of pain or pleasure, those sensations are the result of a preprogrammed system, a continual line of communication between the mind and the body.

  Leibniz developed his own thought experiment to illustrate the point that thought and perception were inextricably tied to being human. Imagine walking into a mill. The building is a container housing machines, raw materials, and workers. It’s a complex system of parts working harmoniously toward a singular goal, but it could never have a mind. “All we would find there are cogs and levers pushing one another, and never anything to account for a perception,” Leibniz wrote. “So perception must be sought in simple substances, and never in composite things like machines.” He was arguing that no matter how advanced the mill, its machinery, or any automata became, humans could never construct a machine capable of thinking or perceiving.7

  Yet Leibniz was fascinated with the notion of replicating facets of thought. A few decades earlier, a little-known English writer named Richard Braithwaite, who wrote a few books about social conduct, had made passing reference to human “computers”: highly trained, fast, accurate people good at making calculations.8 Meanwhile, the French mathematician and inventor Blaise Pascal, who laid the foundation for what we know today as probability theory, concerned himself with automating computational tasks. Pascal watched his father tediously calculating taxes by hand and wanted to make the process easier for him, so he began work on an automatic calculator, one with mechanical wheels and movable dials.9 The calculator worked, and it inspired Leibniz to refine his thinking: machines would never have souls; however, it would someday be possible to build a machine capable of human-level logical thinking. In 1673, Leibniz described his “step reckoner,” a new kind of calculating machine that made decisions using a binary system.10 The machine worked something like a billiards table, with balls, holes, sticks, and canals, and it represented numbers by opening and closing the holes: 1 for open, 0 for closed.
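
  The open and closed holes Leibniz imagined are, in essence, binary digits, the same representation at work inside every modern processor. A minimal Python sketch of the idea:

    def to_binary_holes(n: int) -> str:
        """Express a whole number as Leibniz's open (1) and closed (0) holes."""
        if n == 0:
            return "0"
        digits = []
        while n > 0:
            digits.append(str(n % 2))  # the remainder gives the lowest-order digit
            n //= 2
        return "".join(reversed(digits))

    print(to_binary_holes(13))  # prints: 1101 (open, open, closed, open)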

 
