
The Big Nine


by Amy Webb


  The GAA eventually forms a coalition with the US government and what remains of its allies. With China’s economic and travel restrictions imposed, there is little money available to come up with a workable solution. A decision is made to develop an AGI that can solve our China problem for us. But the system sees only two possible solutions: give in to China or pare down the human race.

  2069: Digital Annihilation

  While China was focused on long-term planning and a national strategy for AI, the United States was instead concerned with devices and dollars.

  China no longer needs the United States as a trade partner, and it doesn’t need our intellectual property. China has built a network of more than 150 countries that operate under the guiding principles of the Global One China Policy. In return for their obedience, these countries have network access, the ability to trade, and a stable financial system backed by Beijing. Their citizens are free to move throughout One China countries, provided they have earned a high enough social credit score.

  The ability to travel—a freedom Americans used to take for granted—has never been so sorely missed. That’s because America, like many countries, is experiencing a population squeeze. Earth’s population has surpassed 10 billion. We gave birth too often and too quickly, and we insisted on extending our lifespans past 120 years of age.

  Our global population is a problem because we didn’t take action on climate change quickly enough, not even after China took up the mantle of sustainability and environmental protection. We have lost two-thirds of the Earth’s arable land. While we made great efforts to build underground farms in America, we cannot grow food quickly enough to feed our local populations. Global sanctions have blocked trade routes and have cut us and our allies off from food-producing nations, but even China and its One China nations are struggling.

  One day, Apple families suffer from what appears to be a mysterious illness. Their PDRs show an anomaly but offer no detail or specifics. At first, we think that this latest generation of nanobots is defective, so product managers rush to develop patch AGIs. Then the illness hits Google homes—not just in America but in every single home outside the One China border. The mystery illness worsens quickly.

  China has built an ASI, and it has one purpose: to exterminate the populations of America and our allies. One China nations need what’s left of Earth’s resources, and Beijing has calculated that the only way to survive is to take those resources from us.

  What you witness is worse than any bomb ever created. Bombs are immediate and exacting. Annihilation by AI is slow and unstoppable. You sit helpless as your children’s bodies go limp in your arms. You watch your coworkers collapse at their desks. You feel a sharp pain. You are lightheaded. You take your last quick, shallow breath.

  It is the end of America.

  It is the end of America’s allies.

  It is the end of democracy.

  The Réngōng Zhìnéng Dynasty ascends. It is brutal, irreversible, and absolute.

  There are signals in the present pointing to all three scenarios. Now we need to make a choice. You need to make a choice. I am asking you to choose the optimistic scenario and to build a better future for AI and for humanity.

  PART III

  Solving the Problems

  CHAPTER EIGHT

  PEBBLES AND BOULDERS: HOW TO FIX AI’S FUTURE

  The conclusion of the last chapter may sound extreme and unlikely. But there are already signals telling us that unless we embrace a future in which the Big Nine are incentivized to collaborate in the best interests of humanity, we could very well wind up living in a world that resembles the Réngōng Zhìnéng Dynasty.

  I believe that the optimistic scenario—or something close to it—is within our reach. It is possible for artificial intelligence to fulfill its greatest aspirational purpose and potential, benefitting all of AI’s tribes and all of us in the process. As it evolves, AI can absolutely serve the people of both China and the United States, as well as all of our allies. It can help us live healthier lives, shrink economic divides, and make us safer in our cities and homes. AI can empower us to unlock and answer the greatest mysteries of humankind, like where and how life originated. And in the process, AI can dazzle and entertain us, too, creating virtual worlds we’ve never imagined, writing songs that inspire us, and designing new experiences that are fun and fulfilling. But none of that will happen without planning, a commitment to difficult work, and courageous leadership within all of AI’s stakeholder groups.

  Safe, beneficial technology isn’t the result of hope and happenstance. It is the product of courageous leadership and of dedicated, ongoing collaborations. The Big Nine are under intense pressure—from Wall Street in the United States and Beijing in China—to fulfill shortsighted expectations, even at great cost to our futures. We must empower and embolden the Big Nine to shift the trajectory of artificial intelligence, because without a groundswell of support from us, they cannot and will not do it on their own.

  Vint Cerf, who codesigned the early protocols and architecture for our modern internet, uses a parable to explain why courageous leadership is vitally important in the face of emerging technologies like artificial intelligence.1 Imagine that you are living in a tiny community at the base of a valley that’s surrounded by mountains. At the top of a distant mountain is a giant boulder. It’s been there for a long time and has never moved, so as far as your community is concerned, it just blends into the rest of the landscape. Then one day, you notice that the giant boulder looks unstable—that it’s in position to roll down the mountain, gaining speed and power as it moves, and that it will destroy your community and everyone in it. In fact, you realize that perhaps you’ve been blind to its motion your entire life. That giant boulder has always been moving, little by little, but you’ve never had your eyes fully open to the subtle, minute changes happening daily: a tiny shift in the shadow it casts, the visual distance between it and the next mountain over, the nearly imperceptible sound it makes as the ground crunches beneath it. You realize that as just one person, you can’t run up the mountain and stop the giant boulder on your own. You’re too small, and the boulder is too large.

  But then you realize that if you can find a pebble and put it in the right spot, it will slow the boulder’s momentum and divert it just a bit. Just one pebble won’t stop the boulder from destroying the village, so you ask your entire community to join you. Pebbles in hand, every single person ascends the mountain and is prepared for the boulder—there is collaboration, and communication, and a plan to deal with the boulder as it makes its way down. People and their pebbles—not a bigger boulder—make all the difference.

  What follows is a series of pebbles. I’ll begin very broadly by outlining the case for a global commission to oversee AI’s trajectory and our immediate need for norms and standards. Then I’ll explain what specific changes the US and Chinese governments must make. Next, I’ll narrow the aperture further and describe how the Big Nine must reform their practices. I’ll then focus just on AI’s tribes and the universities where they form and will detail exactly what changes must be made right now. Finally, I’ll explain the role that you, personally, can play in shaping AI’s future.

  The future we all want to live in won’t just show up, fully formed. We need to be courageous. We must take responsibility for our actions.

  Worldwide Systemic Change: The Case for Creating GAIA

  In the optimistic scenario, a diverse mix of leaders from the world’s most advanced economies join forces with the G-MAFIA to form the Global Alliance on Intelligence Augmentation, or GAIA. The international body includes AI researchers, sociologists, economists, game theorists, futurists, and political scientists from all member countries. GAIA members reflect socioeconomic, gender, race, religious, political, and sexual diversity. They agree to facilitate and cooperate on shared AI initiatives and policies, and over time they exert enough influence and control that an apocalypse—whether caused by AGI, by ASI, or by China’s use of AI to oppress citizens—is prevented.

  The best way to engineer systemic change is to create GAIA as soon as possible, and it should be physically located on neutral ground near an existing AI hub. The best possible placement for GAIA is Montreal, Canada. First, Montreal is home to a concentration of deep-learning researchers and labs. If we assume that the transition from ANI to AGI will involve deep learning and deep neural nets, it follows that GAIA should be centered where so much of that next-generation work is taking place. Second, under Prime Minister Justin Trudeau, the Canadian government has already committed people and funding to explore the future of AI. During 2017 and 2018, Trudeau didn’t just talk about AI; he positioned Canada to help shape the rules and principles that guide the development of artificial intelligence. Third, Canada is neutral geopolitical territory for AI—it’s far from both Silicon Valley and Beijing.

  It may seem impossible to unite the governments of the world around a central cause given the political rancor and geopolitical uneasiness we’ve experienced in the past few years. But there is precedent. In the aftermath of World War II, when tensions were still high, hundreds of delegates from all Allied nations gathered together in Bretton Woods, New Hampshire, to build the financial structures that enabled the global economy to move forward. That collaboration was human-centered—it resulted in a future where people and nations could rebuild and seek out prosperity. GAIA nations should collaborate on frameworks, standards, and best practices for AI. While it is unlikely that China would join, an invitation should nonetheless be extended to CCP leaders and to the BAT.

  First and foremost, GAIA must establish a way to guarantee basic human rights in an age of AI. When we talk about AI and ethics, we tend to think of Isaac Asimov’s Three Laws of Robotics, which he published in a 1942 short story called “Runaround.”2 It was a story about a humanoid computer, not AI. And yet those laws are what have inspired our thinking on ethics all these years later. As discussed in Chapter 1, Asimov’s rules are: (1) robots must not injure a human being or, through inaction, allow humans to be harmed; (2) robots must obey orders unless the orders conflict with the first law; and (3) robots must protect their own existence unless protecting conflicts with laws one or two. When Asimov later published a collection of short stories in a book called I, Robot, he added a Zeroth Law to precede the first three: (0) robots may not harm humanity. Asimov was a talented, prescient writer—but his laws of robotics are too general to serve as guiding principles for the future of AI.

  Instead, GAIA should create a new social contract between citizens and the Big Nine (defined broadly as the G-MAFIA and BAT, as well as all of their partners, investors, and subsidiaries). It should be based on trust and collaboration. GAIA members should formally agree that AI must empower a maximum number of people around the world. The Big Nine should put our human rights first and should not view us as resources to be mined for either profit or political gain. The economic prosperity AI promises and the Big Nine deliver should broadly benefit everyone.

  It therefore follows that our personal data records should be interoperable and should be owned by us—not by individual companies or conglomerates or nations. GAIA can begin exploring how to do this today, because the PDRs you read about in the scenarios already exist in primordial form right now. They’re called “personally identifiable information,” or PIIs. It’s our individual PIIs that power the apps in our smartphones, the advertising networks on websites, and recommendations that nudge us on our screens. PIIs are fed into systems that are used to identify and locate us. How they are used is entirely up to the whims of the companies and government agencies accessing them.

  Before a new social contract is developed, GAIA must decide how our PDRs can be used to help train machine-learning algorithms, and it must define what constitutes basic values in an age of automation. Clearly defining values is critically important because those values are ultimately encoded into the training data, real-world data, learning systems, and applications that make up the AI ecosystem.

  To catalog our basic values, GAIA should create a Human Values Atlas, which would define our unique values across cultures and countries. This atlas would not, and should not, be static. Because our values change over time, the atlas would need to be updated by member nations. We can look to the field of biology for precedent: the Human Cell Atlas is a global collaboration among the scientific community, which includes thousands of experts in varied fields (including genomics, AI, software engineering, data visualization, medicine, chemistry, and biology).3 The project is cataloging every single cell type in the human body, mapping cell types to their locations, tracing the history of cells as they evolve, and capturing the characteristics of cells during their lifetimes. This effort—expensive, complicated, time-consuming, and perpetual—will make it possible for researchers to make bold advances, and it’s only possible because of a massive, worldwide collaboration. We should create a similar atlas for human values, an effort that would include academics, cultural anthropologists, sociologists, psychologists, and everyday people, too. Creating the Human Values Atlas would be cumbersome, expensive, and challenging—and it would likely be full of contradictions, since what some cultures value would run counter to others. However, without a framework and set of basic standards in place, we are asking the Big Nine and AI’s tribes to do something they simply cannot—that is, consider all of our perspectives and all of the possible outcomes for disparate groups within society and within every country of the world.

  GAIA should consider a framework of rights that balances individual liberties with the greater, global good. It would be better to establish a framework that’s strong on ideals but flexible in interpretation as AI matures. Member organizations would have to demonstrate they are in compliance or face being removed from GAIA. Any framework should include the following principles:

  1. Humanity should always be at the center of AI’s development.

  2. AI systems should be safe and secure. We should be able to independently verify their safety and security.

  3. The Big Nine—including their investors, employees, and the governments they work within—must prioritize safety above speed. Any team working on an AI system—even those outside the Big Nine—must not cut corners in favor of speed. Safety must be demonstrated and discernible by independent observers.

  4. If an AI system causes harm, it should be able to report out what went wrong, and there should be a governance process in place to discuss and mitigate damage.

  5. AI should be explainable. Systems should carry something akin to a nutritional label, detailing the training data used, the processes used for learning, the real-world data being used in applications, and the expected outcomes. For sensitive or proprietary systems, trusted third parties should be able to assess and verify an AI’s transparency.

  6. Everyone in the AI ecosystem—Big Nine employees, managers, leaders, and board members; startups (entrepreneurs and accelerators); investors (venture capitalists, private equity firms, institutional investors, and individual shareholders); teachers and graduate students; and anyone else working on AI—must recognize that they are making ethical decisions all the time. They should be prepared to explain all of the decisions they’ve made during the development, testing, and deployment process.

  7. The Human Values Atlas should be adhered to for all AI projects. Even narrow AI applications should demonstrate that the atlas has been incorporated.

  8. There should be a published, easy-to-find code of conduct governing all people who work on AI and its design, build, and deployment. The code of conduct should also govern investors.

  9. All people should have the right to interrogate AI systems. What an AI’s true purpose is, what data it uses, how it reaches its conclusions, and who sees results should be made fully transparent in a standardized format.

  10. The terms of service for an AI application—or any service that uses AI—should be written in language plain enough that a third grader can comprehend it. It should be available in every language as soon as the application goes live.

  11. PDRs should be opt-in, developed using a standardized format, and interoperable; individual people should retain full ownership and permission rights. Should PDRs become heritable, individual people should be able to decide the permissions and uses of their data.

  12. PDRs should be decentralized as much as possible, ensuring that no one party has complete control. The technical group that designs our PDRs should include legal and nonlegal experts alike: white hat (good) hackers, civil rights leaders, government agents, independent data fiduciaries, ethicists, and other professionals working outside of the Big Nine.

  13. To the extent possible, PDRs should be protected against enabling authoritarian regimes.

  14. There must be a system of public accountability and an easy method for people to receive answers to questions about their data and how it is mined, refined, and used throughout AI systems.

  15. All data should be treated fairly and equally, regardless of nationality, race, religion, sexual identity, gender, political affiliations, or other unique beliefs.

  GAIA members should voluntarily submit to random inspections by other members or by an agency within GAIA to ensure that the framework is being fully observed. All of the details—like what, exactly, a system of public accountability looks like and how it functions in the real world—would be continually revisited and improved, in order to keep pace with developments in AI. This process would most assuredly slow down some progress, and that’s by design.

 
