by Amy Webb
This is why the phrase “intelligence explosion” gets used a lot among AI researchers. It was coined by British mathematician and cryptologist I. J. Good in a 1965 essay: “An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”17
The Big Nine are building frameworks and systems that—they hope—will someday encourage an explosion, making room for entirely new solutions, strategies, concepts, frameworks, and approaches that even our smartest computer scientists never considered. This would lead to ever faster breakthroughs, opportunities, and business growth. In technical terms, this is called “recursive self-improvement”: a cycle in which an AI rapidly makes itself better, faster, and smarter by modifying its own capabilities. This would enable AIs to chart and control their own destiny. The rate of self-improvement could be hourly, or even instantaneous.
The coming “intelligence explosion” describes not just the speed of supercomputers or power of algorithms, but the vast proliferation of smart thinking machines bent on recursive self-improvement. Imagine a world in which systems far more advanced than AlphaGo Zero and NASNet not only make strategic decisions autonomously but also work collaboratively and competitively as part of a global community. A world in which they are asked to evolve, primarily to help us humans out—writing new generations of code, mutating, and self-improving—but at a breakneck pace. The resulting AIs would create new agents, programming them with a purpose and set of tasks, and that cycle would repeat again and again, trillions of times, resulting in both tiny and tremendous changes. The only other time in history we’ve witnessed such an evolutionary cataclysm was approximately 542 million years ago, during the Cambrian period, when the rapid diversification of the biosphere led to all kinds of new complex life-forms and transformed our planet. Former DARPA program manager Gill Pratt argues that we’re in the midst of a Cambrian explosion right now—a period in which AI learns from the experience of all AIs, after which our life on Earth could look dramatically different than it does today.18
This is why the Big Nine, their investors and shareholders, our government agencies and elected officials, researchers in the trenches, and (importantly) you need to recognize the warning signs and to think more critically not just about the ANI that’s being created right now but also about the AGI and ASI on our horizon. The evolution of intelligence is a continuum on which both humans and machines coexist. The Big Nine’s values are already deeply encoded into our existing algorithms, systems, and frameworks. Those values will be passed along to millions of new generations of AIs as they evolve, and soon to generally intelligent thinking machines.
The transition from ANI to ASI will likely span the next 70 years. At the moment, it’s difficult to set exact milestone dates because the rate of progress in AI depends on a number of factors and people: new members admitted to AI’s tribes, strategic decisions made at the Big Nine, trade wars and geopolitical scuffles, not to mention chance and chaotic events. In my own models, I would currently put the advent of AGI in the 2040s. This sounds like the distant future, so let me contextualize. We will have had three or four American presidents in the White House by then. (Barring health issues, Chinese president Xi Jinping will still be in power.) I’ll be 65 once AGI systems start to do their own AI research. My second-grader will be 30, and by then she may be reading a New York Times bestseller written entirely by a machine. My dad will be in his late 90s, and all of his medical specialists (cardiologists, nephrologists, radiologists) will be AGIs, directed and managed by a highly trained general practitioner who is both an MD and a data scientist. The advent of ASI could follow soon after or much later, sometime between the 2040s and the 2060s. That doesn’t mean that by 2070 superintelligent AIs will have crushed all life on Earth under the weight of quintillions of paperclips. But it doesn’t mean they won’t have, either.
The Stories We Must Tell Ourselves
Planning for the futures of AI requires us to build new narratives using data from the real world. If we agree that AI will evolve as it emerges, then we must create scenarios that describe the intersection of the Big Nine, the economic and political forces guiding them, and the ways humanity factors in as AI transitions from narrow applications to generally intelligent and ultimately superintelligent thinking machines.
Because the future hasn’t happened yet, we cannot know for certain all of the possible outcomes of our actions in the present. For that reason, the scenarios that follow in the coming chapters are written using different emotive framings describing the next 50 years. First is an optimistic scenario asking what happens if the Big Nine decide to champion sweeping changes to ensure AI benefits all of us. There’s an important distinction to note: “optimistic” scenarios are not necessarily buoyant or upbeat. They do not always lead to utopia. In an optimistic scenario, we’re assuming that the best possible decisions are made and that any barriers to success are surmounted. For our purposes, this means that the Big Nine shift course on AI, and because they make the best decisions at the right time, we’re all much better off in the future. It’s a scenario I’d be content living in, and it’s a future we can achieve if we work together.
Next is a pragmatic scenario describing how the future would look if the Big Nine make only minor improvements in the short term. We assume that while all of the key stakeholders acknowledge AI is probably not on the right path, there is no collaboration to create lasting, meaningful change. A few universities introduce mandatory ethics classes; the G-MAFIA form industry partnerships to tackle risk but don’t evolve their own company cultures; our elected officials focus on their next election cycles and lose sight of China’s grand plans. A pragmatic scenario doesn’t hope for big changes—it recognizes the ebb and flow of our human drive to improve. It also acknowledges that in business and governing, leaders are all too willing to give short shrift to the future in exchange for near-term gains.
Finally, the catastrophic scenario explains what happens if all of the signals are missed, the warning signs are ignored, we fail to actively plan for the future, and the Big Nine continue to compete among themselves. If we choose to double down on the status quo, where could that take us? What happens if AI continues along its existing track in the United States and China? Creating systemic change—which is what avoiding the catastrophic scenario requires—is difficult, time-consuming work that doesn’t end at a finish line. This is what makes the catastrophic scenario truly frightening, and the detail in it so disturbing. Because at the moment, the catastrophic scenario is the one we seem destined to realize.
I’ve researched, modeled, and written these three scenarios to describe what-if outcomes, beginning with the year 2029. Anchoring the scenarios are a handful of key themes, including economic opportunity and mobility, workforce productivity, improvements to social structures, the power dynamics of the Big Nine, the relationship between the United States and China, and the global retraction/spread of democracy and communism. I show how our social and cultural values might shift as AI matures: how we define creativity, the ways in which we relate to each other, and our thinking on life and death. Because the goal of scenarios is to help us understand what life might look like during our transition from ANI to ASI, I’ve included examples from home, work, education, health care, law enforcement, our cities and towns, local infrastructure, national security, and politics.
One probable near-term outcome of AI and a through-line in all three of the scenarios is the emergence of what I’ll call a “personal data record,” or PDR. This is a single unifying ledger that includes all of the data we create as a result of our digital usage (think internet and mobile phones), but it would also include other sources of information: our school and work histories (diplomas, previous and current employers);
our legal records (marriages, divorces, arrests); our financial records (home mortgages, credit scores, loans, taxes); travel (countries visited, visas); dating history (online apps); health (electronic health records, genetic screening results, exercise habits); and shopping history (online retailers, in-store coupon use). In China, a PDR would also include all the social credit score data described in the last chapter. AIs, created by the Big Nine, would both learn from your personal data record and use it to automatically make decisions and provide you with a host of services. Your PDR would be heritable—a comprehensive record passed down to and used by your children—and it could be temporarily managed, or permanently owned, by one of the Big Nine. PDRs play a featured role in the scenarios you’re about to read.
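To make the idea concrete, here is a minimal sketch of what a PDR might look like as a data structure, written in Python. This is purely illustrative: every class and field name below is a hypothetical stand-in for the categories described above, since no PDR standard actually exists.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# A purely hypothetical sketch of a personal data record (PDR).
# All class and field names are illustrative; no such standard exists.

@dataclass
class EducationRecord:
    institution: str
    credential: str       # e.g., "high school diploma", "B.A."
    year_completed: int

@dataclass
class HealthRecord:
    provider: str
    record_type: str      # e.g., "EHR entry", "genetic screening result"
    summary: str

@dataclass
class PersonalDataRecord:
    owner_id: str          # a stable identifier; today, often an email address
    custodian: str         # which of the Big Nine manages the record
    education: List[EducationRecord] = field(default_factory=list)
    legal_events: List[str] = field(default_factory=list)      # marriages, divorces, arrests
    financial_events: List[str] = field(default_factory=list)  # mortgages, credit scores, taxes
    travel: List[str] = field(default_factory=list)            # countries visited, visas
    dating_history: List[str] = field(default_factory=list)    # online apps
    health: List[HealthRecord] = field(default_factory=list)
    shopping: List[str] = field(default_factory=list)          # online retailers, coupon use
    social_credit_score: Optional[float] = None   # present in the Chinese variant
    inherited_from: Optional[str] = None          # PDRs are heritable: a parent's owner_id
```

The heritable quality of a PDR shows up here as nothing more than a pointer from one record to its predecessor; everything else is a unified ledger of the data sources listed above.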
PDRs don’t yet exist, but from my vantage point there are already signals that point to a future in which all the myriad sources of our personal data are unified under one record provided and maintained by the Big Nine. In fact, you’re already part of that system, and you’re using a proto-PDR now. It’s your email address.
The average person’s email address has been repurposed as a login; their mobile phone number is used to authenticate transactions; and their smartphone is used to locate them in the physical world. If you are a Gmail user, Google—and by extension its AIs—knows you better than your spouse or partner. It knows the names and email addresses of everyone you talk to, along with their demographic information (e.g., age, gender, location). Google knows when you tend to open email and under what circumstances. From your email, it knows your travel itineraries, your financial records, and what you buy. If you take photos with your Android phone, it knows the faces of your friends and family members, and it can detect anomalies to make inferences: for example, sudden new pics of the same person might indicate a new girlfriend (or an affair). It knows all of your meetings, doctor appointments, and plans to hit the gym. It knows whether you observe Ramadan or Rosh Hashanah, whether you’re a churchgoer, or whether you practice no religion at all. It knows where you should be on a given Tuesday afternoon, even if you’re somewhere else. It knows what you search for, using your fingers and your voice, and so it knows whether you’re miscarrying for the first time, learning how to make paella, struggling with your sexual identity or gender assignment, considering giving up meat, or looking for a new job. It cross-links all this data, learning from it and productizing and monetizing it as it nudges you in predetermined directions.
Right now, Google knows all of this information because you’ve voluntarily linked it all to just one record—your Gmail address—which, by the way, you’ve probably also used to buy stuff on Amazon and to log into Facebook. This isn’t a complaint; it’s a fact of modern life. As AI advances, a more robust personal data record will afford greater efficiencies to the Big Nine, and so they will nudge us to accept and adopt PDRs, even if we don’t entirely understand the implications of using them. Of course, in China, PDRs are already being piloted under the auspices of its social credit score.
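To see why an email address already works as a proto-PDR, consider how a single identifier can act as a join key across otherwise unrelated data sets. The sketch below, in Python, uses invented data and names; it only illustrates the cross-linking mechanism described above, not any company’s actual systems.

```python
# Invented data: three unrelated data sets that happen to share one key,
# an email address. Joining on that key yields a unified profile.

purchases = {"ada@example.com": ["running shoes", "paella pan"]}
calendar  = {"ada@example.com": ["cardiologist, Tue 2 p.m.", "gym, Thu 6 a.m."]}
searches  = {"ada@example.com": ["how to make paella", "new jobs near me"]}

def build_profile(email: str) -> dict:
    """Cross-link every data set on the same key: the email address."""
    return {
        "email": email,
        "purchases": purchases.get(email, []),
        "calendar": calendar.get(email, []),
        "searches": searches.get(email, []),
    }

print(build_profile("ada@example.com"))
```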
“We tell ourselves stories in order to live,” Joan Didion wrote in The White Album. “We interpret what we see, select the most workable of the multiple choices.” We all have choices to make about AI. It’s time we use the information we have available to tell ourselves stories—scenarios that describe how we might all live alongside our thinking machines.
CHAPTER FIVE
THRIVING IN THE THIRD AGE OF COMPUTING: THE OPTIMISTIC SCENARIO
It is the year 2023, and we’ve made the best possible decisions about AI—we’ve shifted AI’s developmental track, we are collaborating on the future, and we’re already seeing positive, durable change. AI’s tribes, universities, the Big Nine, government agencies, investors, researchers, and everyday people heeded those early warning signs.
We understand that there is no single change that will fix the problems we’ve already created and that the best strategy now involves adjusting our expectations for the future of AI. We acknowledge that AI isn’t just a product made in Silicon Valley, something to be monetized while the market is hot.
First and foremost, we recognize why China has invested strategically in AI and how AI’s developmental track fits into China’s broader narrative about its future place in the world. China isn’t trying to tweak the trade balance; it is seeking to gain an absolute advantage over the United States in economic power, workforce development, geopolitical influence, military might, social clout, and environmental stewardship. With this realization, our elected officials, with the full support of the G-MAFIA and AI’s tribes, build an international coalition to protect and preserve AI as a public good. That coalition exerts pressure on China and uses economic levers to fight back against AI’s use as a tool of surveillance and an enabler of communism.
With the recognition that China is leveraging AI to fulfill its economic and military goals as it spreads the seeds of communism and tightens its grip on society, the US government dedicates vast federal funding to support AI’s development, which relieves pressure on the G-MAFIA to earn profit fast. Using the 1950s space race as precedent, it’s evident how easily America could be outpaced by other countries without coordination at a national level. It’s also abundantly clear how much influence America can exert in science and technology when we have a coordinated national strategy—we have the federal government to thank for GPS and the internet.
Neither AI nor its funding is politicized, and everyone agrees that regulating the G-MAFIA and AI is the wrong course of action. Heavy-handed, binding regulations would be outdated the moment they went into effect; they would stifle innovation, and they’d be difficult to enforce. With bipartisan support, Americans unite behind increased federal spending on AI across the board using China’s public road map as inspiration. Funding flows to R&D, economic and workforce impact studies, social impact studies, diversity programs, medical and public health initiatives, and infrastructure and to making America’s public education great again, with attractive salaries for teachers and a curriculum that prepares everyone for a more automated future. We stop assuming that the G-MAFIA can serve its DC and Wall Street masters equally and that free markets and our entrepreneurial spirit will produce the best possible outcomes for AI and humanity.
With a national strategy and funding in place, the newly formed G-MAFIA Coalition formalizes itself with multilateral agreements to collaborate on the future of AI. The G-MAFIA Coalition defines and adopts standards that, above all else, prioritize a developmental track for AI that serves the best interests of democracy and society. It agrees to unify AI technologies. Collaboration yields superior chipsets, frameworks, and network architectures rather than competing AI systems and a bifurcated developer community. It also means that researchers can pursue mapping opportunities so that everyone wins.
The G-MAFIA Coalition adopts transparency as a core value, and it radically rewrites service agreements, rules, and workflows in favor of understanding and education. It does this voluntarily and therefore avoids regulation. The data sets, training algorithms, and neural network structures are made transparent in a way that protects only those trade secrets and proprietary information that could, if divulged, cause one of the coalition members economic harm. The G-MAFIA’s individual legal teams don’t spend years looking for and debating loopholes or prolonging the adoption of transparency measures.
Knowing that automation is on the horizon, the G-MAFIA help us think through unemployment scenarios and prepare our workforce for the third era of computing. With their help, we don’t fear AI but rather see it as a huge opportunity for economic growth and individual prosperity. The G-MAFIA’s thought leadership cuts through the hype and shines a light on better approaches to training and education for the emerging jobs of the future.
America’s national strategy and the formation of our G-MAFIA Coalition inspire the leaders of other democracies around the world to support the global development of AI for the good of all. Dartmouth College, in a gathering similar to the one that took place in the summer of 1956, hosts the inaugural intergovernmental forum, with a diverse cross section of leaders from the world’s most advanced economies: secretaries, ministers, prime ministers, and presidents from the United States, United Kingdom, Japan, France, Canada, Germany, Italy, and others from the European Union, as well as AI researchers, sociologists, economists, game theorists, futurists, political scientists, and others. Unlike the homogeneous group of men from similar backgrounds who made up the first Dartmouth workshop, this time around the leaders and experts include a wide spectrum of people and worldviews. Standing on the very same hallowed ground where modern artificial intelligence was born, those leaders agree to facilitate and cooperate on shared AI initiatives and policies. Taking inspiration from Greek mythology and Gaia, the ancestral mother of all life on Earth, they form GAIA: the Global Alliance on Intelligence Augmentation.