The Big Nine


by Amy Webb


  With AI, anyone can build a new product or service, but they can’t easily deploy it without the help of the G-MAFIA. They must use Google’s TensorFlow, Amazon’s various recognition algorithms, Microsoft’s Azure for hosting, IBM’s chip technology, or any of the other AI frameworks, tools, and services that make the ecosystem hum. In practice, the future of AI isn’t really dictated by the terms of a truly open market in America.

  There is a reason for this concentration of power: it’s taken several decades of R&D and investment to get AI where it is today. Our government ought to have been funding basic research into AI at much higher levels since the 1980s, and it should have been supporting our universities as they prepared for the third era of computing. Unlike China, the American government hasn’t pushed a top-down AI agenda with hundreds of billions of dollars and coordinated national policies—instead, progress has organically bubbled up from the commercial sector. This means that, implicitly, we have asked and allowed the G-MAFIA to make serious and significant decisions that impact the future of our workforce, our national security, our economic growth, and our individual opportunities.

  Meanwhile, China’s version of communism—market socialism combined with clear standards for social rule—might theoretically encourage harmony and political stability, raise its median income level, and keep a billion people from rising up. In practice, it’s meant heavy-handed rule from the top. For AI, that results in a coordinated effort to collect amazing amounts of citizen data, support the BAT, and spread the Chinese Communist Party’s influence globally.

  It’s difficult to wrap our heads around potential crises and opportunities before they’ve happened, and that’s why we tend to stick to our existing narratives. That’s why we reference killer robots rather than paper cuts. Why we fetishize the future of AI rather than fearing the many algorithms that learn from our data. I’ve only described two warning signs, and there are far more to consider. We have an opportunity to acknowledge both the tremendous benefits and the plausible risks associated with AI’s current developmental track. More importantly, we have an obligation to address warning signs in the present. We do not want to find ourselves having to make excuses and apologies for AI as we did after Flint, the shuttle Challenger, and Fukushima.

  We must actively hunt for warning signs and build alternate stories about AI’s trajectory to help us anticipate risk and—hopefully—avoid catastrophe. At the moment, there is no probabilistic method that can accurately predict the future. That’s because we humans are capricious, we cannot really account for chaos and chance, and at any given time there are ever more data points to consider. As a professional futurist who makes heavy use of quantitative data in my research, I know that while it’s possible to predict the outcome of an event with a discrete set of information (like an election), when it comes to artificial intelligence, there is an incomprehensibly large number of invisible variables to detect. There are too many individual people making decisions in meetings, as they code, and when choosing which algorithms to train on which data sets; too many daily micro-breakthroughs that don’t get published in peer-reviewed journals; too many alliances, acquisitions, and hires made by the Big Nine; too many research projects undertaken at universities. Not even AI could tell us exactly what AI will look like in the far future. While we cannot make predictions about artificial intelligence, we can certainly make connections between warning signs, weak signals, and other information in the present.

  I developed a methodology to model deep uncertainty. It’s a six-step process that surfaces emerging trends, identifies commonalities and connections between them, maps their trajectories over time, describes plausible outcomes, and ultimately builds a strategy to achieve a desired future. The first half of the methodology explains the what, while the second half describes the what-if. That second half, more formally, is called “scenario planning” and develops scenarios about the future using a wide variety of data across numerous sources: statistics, patent filings, academic and archival research, policy briefings, conference papers, structured interviews with lots of people, and even critical design and speculative fiction.

  Scenario planning originated at the start of the Cold War, in the 1950s. Herman Kahn, a futurist at the RAND Corporation, was given the job of researching nuclear warfare, and he knew that raw data alone wouldn’t provide enough context for military leaders. So instead, he created something new, which he called “scenarios.” They would fill in the descriptive detail and narration needed to help those in charge of creating military strategy understand the plausible outcomes—that is, what could happen if a certain set of actions were taken. Simultaneously in France, the futurists Bertrand de Jouvenel and Gaston Berger developed and used scenarios to describe preferred outcomes—what should happen, given the current circumstances. Their work forced the military and our elected leaders into, as Kahn put it, “thinking about the unthinkable” and the aftermath of nuclear war. It was such a successful exercise that other governments and companies around the world adopted their approaches. Royal Dutch Shell popularized scenario planning when it revealed that scenarios had led its managers to anticipate the global energy crises of 1973 and 1979 and the market collapse of 1986, and to mitigate risk ahead of the competition.8 Scenarios are such a powerful tool that Shell still, 45 years later, employs a large team dedicated to researching and writing them.

  I’ve prepared risk and opportunity scenarios for the future of AI across many industries and fields and for a varied group of organizations. Scenarios are a tool to help us cope with a cognitive bias that the behavioral economics and legal scholar Cass Sunstein calls “probability neglect.”9 Our human brains are bad at assessing risk and peril. We assume that common activities are safer than novel or uncommon ones. For example, most of us feel completely safe driving our cars compared to flying on a commercial airline, yet air travel is the safest mode of transportation. Americans have a 1-in-114 lifetime chance of dying in a car crash, compared with a 1-in-9,821 chance of being killed on a plane.10, 11 We’re bad at assessing the risk of driving, which is why so many people text and drink behind the wheel. We’re similarly bad at assessing the risk of AI because we mindlessly use it every single day, as we like and share stories, send emails and texts, speak to machines, and allow ourselves to be nudged. Any risk we’ve imagined comes from science fiction: AI as fantastical androids who hunt humans and disembodied voices that psychologically torture us. We don’t naturally think about the future of AI within the realms of capitalism, geopolitics, and democracy. We don’t imagine our future selves and how autonomous systems might affect our health, relationships, and happiness.
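
  For scale, here is a quick back-of-the-envelope check in Python of the ratio between those two lifetime-odds figures (the odds themselves are simply the ones cited above):

```python
# Lifetime odds cited in the text.
car_crash_odds = 1 / 114      # dying in a car crash
plane_odds = 1 / 9_821        # being killed on a plane

# By this measure, driving is roughly 86 times riskier than flying.
print(round(car_crash_odds / plane_odds))  # ~86
```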

  We need a set of public-facing scenarios that describe all the ways in which AI and the Big Nine could affect us collectively as AI progresses from narrow applications to generally intelligent systems and beyond. We are past the point where inaction is an option. Think of it this way: There is lead in the water. The O-rings are faulty. There are cracks in the reactor shrouds. The current state of AI harbors fundamental problems for which there are warning signs, and we need to address those issues now. If we take the right actions today, there are tremendous opportunities waiting for us in the future.

  In the following chapters, I will detail three scenarios—optimistic, pragmatic, and catastrophic—that I’ve modeled using data and details from the present day. They veer into fiction but are all based in fact. The purpose of these scenarios is to make something that seems distant and fantastical feel more urgent and real. Because we can’t easily see AI in action, we only take notice of outcomes when they’re negative—and by then, everyday people don’t have much recourse.

  The Road from ANI to ASI

  The first part of this book was primarily concerned with artificial narrow intelligence, or ANI, and its automation of millions of everyday tasks—from identifying check fraud to evaluating job candidates to setting the price for airline tickets. But to paraphrase IBM’s famed computer architect Frederick Brooks, you can’t build increasingly complex software programs simply by throwing more people at the problem. Adding more developers tends to put projects further behind.12 At the moment, humans have to architect systems and write code to advance various AI applications, and like any research, there’s a considerable learning curve involved. That’s partially why the rapid advancement to the next stage of AI’s development is so attractive to the Big Nine. Systems that are capable of programming themselves could harness far more data, build and test new models, and self-improve without the need for direct human involvement.

  Artificial intelligence is typically defined using three broad categories: artificial narrow or weak intelligence (ANI), artificial general intelligence (AGI), and artificial superintelligence (ASI). The Big Nine are currently moving swiftly toward building and deploying AGI systems, which they hope will someday be able to reason, solve problems, think in abstraction, and make choices as easily as we can, with equal or better results. Applied AGI would mean exponentially faster research breakthroughs in addition to things like better medical diagnoses and new ways to solve tough engineering problems. Improvements to AGI should, eventually, bring us to the third category: artificial superintelligence. ASI systems range from being slightly more capable at performing human cognitive tasks than we are to AIs that are literally trillions of times generally smarter than humans in every way.

  Getting from where we are today to widespread AGI means making use of “evolutionary algorithms,” a field of research that was inspired by Charles Darwin’s work on natural selection. Darwin observed that the members of a species best adapted to their environment tend to survive and reproduce, and their genetic code goes on to dominate the population. Over time the species becomes better suited to its environment. So it is with artificial intelligence. Initially, a system starts with a very large random or semirandom set of possibilities (we’re talking billions or trillions of inputs) and runs simulations. Since the initial solutions generated are random, they’re not really useful in the real world; however, some might be marginally better than others. The system strips out the weak, keeps the strong, and then creates new combinations. Sometimes two strong candidates are recombined to produce crossover solutions, which are also kept. And sometimes, a random tweak will cause a mutation—which is what happens as any organic species evolves. The evolutionary algorithm will keep generating, discarding, and promoting solutions millions of times, producing thousands or even millions of offspring, until eventually it determines that no more improvement is possible. Evolutionary algorithms with the power to mutate can help advance AI on their own, and that’s a tempting possibility, but one with a cost: how the resulting solution works, and the process used to get there, could be too complex for even our brightest computer scientists to interpret and understand.
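
  To make that loop concrete, here is a minimal sketch of an evolutionary (genetic) algorithm in Python. The bit-string fitness function, population size, and mutation rate are illustrative assumptions, not anything drawn from the Big Nine’s systems; the point is simply the generate, select, crossover, and mutate cycle described above:

```python
import random

# Toy fitness function (illustrative assumption): score a bit string by how many 1s it contains.
def fitness(candidate):
    return sum(candidate)

def evolve(genome_length=20, population_size=100, generations=200, mutation_rate=0.01):
    # Start with a large random set of candidate solutions.
    population = [[random.randint(0, 1) for _ in range(genome_length)]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Rank the candidates and "strip out the weak": keep only the fittest half.
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        # Refill the population with crossover offspring of random survivor pairs.
        children = []
        while len(survivors) + len(children) < population_size:
            parent_a, parent_b = random.sample(survivors, 2)
            cut = random.randint(1, genome_length - 1)
            child = parent_a[:cut] + parent_b[cut:]
            # Occasionally flip a bit: a random mutation.
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best), best)
```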

  This is why it’s important—even though it may seem fantastical—to include machines in any conversation about the evolution of our human species. Until now, we’ve thought about the evolution of life on Earth using a limited scope. Hundreds of millions of years ago, single-celled organisms engulfed other organisms and became new life-forms. The process continued until early humans gained the ability to stand upright, mutated to have broad knee joints, adapted to bipedal walking, grew longer thigh bones, figured out how to make hand axes and control fire, grew bigger brains, and eventually—after millions of rounds of Darwinian natural selection—built the first thinking machines. Like robots, our bodies, too, are mere containers for elaborate algorithms. So we must think about the evolution of life as the evolution of intelligence: human intelligence and AI have been moving along parallel tracks at a pace that has preserved our perch at the top of the intelligence ladder. That’s in spite of the age-old criticism that future generations will become dumber because of technology. I vividly remember my high school calculus teacher raging against the graphing calculator, which had hit the market only five years earlier and which he argued was already making my generation simple-minded and lazy. While we argue that future generations are likely to be dumber because of technology, we never consider that we humans might someday find ourselves dumber than technology. It’s an inflection point we are nearing, and it has to do with our respective evolutionary limitations.

  Most often, human intelligence is measured using a scoring method developed in 1912 by the German psychologist William Stern. You know it as the “intelligence quotient,” or IQ. In Stern’s original formulation, the score is calculated by dividing a person’s mental age (as measured by an intelligence test) by their chronological age and then multiplying the answer by 100. About 2.5% of the population scores above 130, a range considered elite, while 2.5% falls below 70 and is categorized as having learning or other mental disabilities. On the modern scale, with a mean of 100 and a standard deviation of 15 points, roughly two-thirds of the population scores between 85 and 115. And yet, we are quite a bit smarter than we used to be. Since the early 20th century, the average human’s IQ score has been rising at a rate of three points per decade, probably because of improved nutrition, better education, and environmental complexity.13 Humanity’s general level of intelligence has shifted right on the bell curve as a result. If the trend continues, we should have many more geniuses by the end of the century. In the meantime, our biological evolution will have crossed paths with AI’s.
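
  As a back-of-the-envelope illustration in Python, here is Stern’s ratio formula along with the three-points-per-decade trend cited above; the sample ages and the 50-year horizon are illustrative assumptions:

```python
# Stern's ratio IQ: mental age divided by chronological age, times 100.
def ratio_iq(mental_age, chronological_age):
    return mental_age / chronological_age * 100

# Illustrative example: a 10-year-old performing at the level of a typical 12-year-old.
print(ratio_iq(12, 10))  # 120.0

# Projection of the cited trend of roughly 3 IQ points per decade.
def projected_mean_iq(start_iq=100, years=0, points_per_decade=3):
    return start_iq + points_per_decade * years / 10

print(projected_mean_iq(years=50))  # 115.0 -- the 15-point gain discussed later in the chapter
```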

  As our intellectual ability improves, so will AI’s—but we can’t score AI using the IQ scale. Instead, we measure the power of a computer in operations (calculations) per second, or ops, which we can still compare to the human brain. Depending on who you talk to, the maximum number of operations per second our human brains can perform is about one exaflop, which is roughly a billion-billion operations per second, and those ops account for lots of activities that happen without our direct notice: the micro-movements we make when we breathe, the continual visual processing that occurs when our eyes are open, and the like. In 2010, China’s Tianhe-1A became the world’s fastest and most powerful supercomputer, clocking in at a few petaflops. (A petaflop is one thousand trillion operations per second.) That’s fast—but not human-brain fast. Then in June 2018, IBM and the US Department of Energy debuted Summit, which clocked 200 petaflops and was built specifically for AI.14 That means we are getting closer to a thinking machine with more measurable compute power than we have biologically, even if it can’t yet pass the Turing test and fool us into believing it’s human.
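
  A rough sanity check of those units in Python (the one-exaflop brain figure is the loose estimate quoted above, not a measurement):

```python
PETAFLOP = 1e15        # one thousand trillion operations per second
EXAFLOP = 1e18         # a billion-billion operations per second

brain_estimate = 1 * EXAFLOP     # rough upper estimate for the human brain, per the text
summit = 200 * PETAFLOP          # IBM's Summit, June 2018

print(summit / brain_estimate)   # 0.2 -- Summit reaches about a fifth of that estimate
```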

  But speed isn’t the only metric that matters. If we were to speed up the brain of a dog to 10 quadrillion ops, he wouldn’t suddenly be able to sort out differential equations—he’d just run around the yard sniffing and chasing a lot more things. The human brain is built with more complex architecture than a dog’s: we have more connections between our nerve cells, special proteins, and sophisticated cognitive nodes.15 Even so, AI is extensible in ways that humans aren’t without changing the core architecture of our brains. Moore’s law, which holds that the number of components on an integrated circuit doubles roughly every two years as transistors shrink, has continued to prove reliable and tells us that computing power grows exponentially. Ever more data is becoming available, along with new kinds of algorithms, more advanced components, and new ways to connect neural nets. All of this leads to more power. Unlike computers, we can’t easily change the structure of our brains and the architecture of human intelligence. Doing so would require us to (1) completely understand how our brains work, (2) modify the architecture and chemistry of our brains with changes that could be passed down to future generations, and (3) wait the many years it takes for us to produce offspring.
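
  As a rough illustration of that exponential curve, here is the idealized two-year doubling in Python; the starting component count is an arbitrary assumption:

```python
# Idealized Moore's-law growth: component count doubles every two years.
def components_after(years, start_count=1_000_000, doubling_period_years=2):
    return start_count * 2 ** (years / doubling_period_years)

for years in (10, 20, 50):
    print(years, f"{components_after(years):,.0f}")
# 10 -> 32,000,000; 20 -> 1,024,000,000; 50 -> roughly 33.6 trillion
```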

  At our current rate, it will take humans 50 years of evolution to notch 15 points higher on the IQ scale. And to us, 15 points will feel noticeable. The difference between a 119 “high average” brain and a 134 “gifted” brain would mean significantly greater cognitive ability—making connections faster, mastering new concepts more easily, and thinking more efficiently. But within that same timeframe, AI’s cognitive ability will not only surpass ours—it could become wholly unrecognizable to us, because we do not have the biological processing power to understand what it is. For us, encountering a superintelligent machine would be like a chimpanzee sitting in on a city council meeting. The chimp might recognize that there are people in the room and that he can sit down on a chair, but a long-winded argument about whether to add bike lanes to a busy intersection? He wouldn’t have anywhere near the cognitive ability to decipher the language being used, let alone the reasoning and experience to grok why bike lanes are so controversial. In the long evolution of intelligence and our road to ASI, we humans are analogous to the chimpanzee.

  A superintelligent AI isn’t necessarily dangerous, and it doesn’t necessarily obviate the role we play in civilization. However, superintelligent AI would likely make decisions in a nonconscious way using logic that’s alien to us. Oxford University philosopher Nick Bostrom explains the plausible outcomes of ASI using a parable about paperclips. If we asked a superintelligent AI to make paperclips, what would happen next? The outcomes of every AI, including those we have now, are determined by values and goals. It’s possible that an ASI could invent a new, better paperclip that holds a stack of paper together so that even if dropped, the pages would always stay collated in order. It’s possible that if we aren’t capable of explaining how many paperclips we actually want, an ASI could go on making paperclips forever, filling our homes and offices with them as well as our hospitals and schools, rivers and lakes, sewage systems, and on and on until mountains of paperclips covered the planet. Or an ASI using efficiency as its guiding value could decide that humans were getting in the way of paperclips, so it would terraform Earth into a paperclip-making factory, making our kind go extinct in the process.16 Here’s what has so many AI experts, myself included, worried: if ASI’s cognitive abilities are orders of magnitude better than ours (remember, we’re just a few clicks above chimpanzees), then it would be impossible for us to imagine the consequences such powerful machines might have on our civilization.

 
