The Big Nine


by Amy Webb


  We cannot approach the creation of a shared system of values for AI the same way we’d approach writing a company’s code of conduct or the rules for banking regulation. The reason is simple: our human values tend to change in response to technology and to other external factors, like political movements and economic forces. Just take a look at these lines by Alfred, Lord Tennyson, which describe what Victorian England valued in its citizens:

  Man for the field and woman for the hearth;

  for the sword he, and for the needle she;

  Man with the head, and woman with the heart;

  Man to command, and woman to obey;

  All else confusion.

  Our cherished beliefs are in constant flux. In 2018, as I was writing this book, it had become socially acceptable for national leaders to hurl offensive, hate-filled social media posts at each other and for pundits to spew polarizing, incendiary commentary on video, in blog posts, and even in traditional news publications. It’s nearly impossible now to imagine the discretion and respect for privacy during FDR’s presidency, when the press took great care never to mention or show his paralysis.

  Since AI isn’t being taught to make perfect decisions, but rather to optimize, how we respond to changing forces in society matters a lot. Our values are not immutable, and that is what makes the problem of AI’s values so vexing. Building AI means predicting the values of the future, yet our values aren’t static. So how do we teach machines to reflect our values without influencing them?

  Optimizing AI for Humans

  Some members of AI’s tribe believe that a shared set of guiding principles is a worthy goal and that the best way to achieve it is to feed literature, news stories, opinion pieces and editorials, and articles from credible news sources into AI systems to help them learn about us. It’s a crowdsourcing approach, in which AI would learn from the collected wisdom of people. That’s a terrible approach, because it would only offer the system a snapshot in time, and curating which cultural artifacts got included could not, in any meaningful way, represent the sum total of the human condition. If you’ve ever made a time capsule, you’ll immediately know why. The decisions you made then about what to include are probably not the same decisions you’d make today, with hindsight on your side.

  The rules—the algorithm—by which every culture, society, and nation lives, and has ever lived, were always created by just a few people. Democracy, communism, socialism, religion, veganism, nativism, colonialism—these are constructs we’ve developed throughout history to help guide our decisions. Even in the best cases, they aren’t future-proof. Technological, social, and economic forces always intervene and cause us to adapt. The Ten Commandments make up an algorithm intended to create a better society for humans alive more than 3,000 years ago. One of the commandments is to take a full day of rest a week and not to do any work at all that day. In modern times, most people don’t work the exact same days or hours from week to week, so it would be impossible not to break the rule. As a result, people who follow the Ten Commandments as a guiding principle are flexible in their interpretation, given the realities of longer workdays, soccer practice, and email. Adapting is fine—it works really well for us, and for our societies, allowing us to stay on track. Agreeing on a basic set of guidelines allows us to optimize for ourselves.

  There would be no way to create a set of commandments for AI. We couldn’t write out all of the rules to correctly optimize for humanity, because while thinking machines may be fast and powerful, they lack flexibility. There isn’t an easy way to simulate exceptions or to think through every single contingency in advance. Whatever rules might get written, there would always be a circumstance in the future in which some people might want to interpret the rules differently, or to ignore them completely, or to create amendments in order to manage an unforeseen circumstance.

  Knowing that we cannot possibly write a set of strict commandments to follow, should we, instead, focus our attention on the humans building the systems? These people—AI’s tribes—should be asking themselves uncomfortable questions, beginning with:

  • What is our motivation for AI? Is it aligned with the best long-term interests of humanity?

  • What are our own biases? What ideas, experiences, and values have we failed to include in our tribe? Who have we overlooked?

  • Have we included people unlike ourselves for the purpose of making the future of AI better—or have we simply included diversity on our team to meet certain quotas?

  • How can we ensure that our behavior is inclusive?

  • How are the technological, economic, and social implications of AI understood by those involved in its creation?

  • What fundamental rights should we have to interrogate the data sets, algorithms, and processes being used to make decisions on our behalf?

  • Who gets to define the value of human life? Against what is that value being weighed?

  • When and why do those in AI’s tribes feel that it’s their responsibility to address social implications of AI?

  • Does the leadership of our organization and our AI tribes reflect many different kinds of people?

  • What role do those commercializing AI play in addressing the social implications of AI?

  • Should we continue to compare AI to human thinking, or is it better for us to categorize it as something different?

  • Is it OK to build AI that recognizes and responds to human emotion?

  • Is it OK to make AI systems capable of mimicking human emotion, especially if they’re learning from us in real time?

  • At what point are we all OK with AI evolving without humans directly in the loop?

  • Under what circumstances could an AI simulate and experience common human emotions? What about pain, loss, and loneliness? Are we OK causing that suffering?

  • Are we developing AI to seek a deeper understanding of ourselves? Can we use AI to help humanity live a more examined life?

  The G-MAFIA has started to address the problem of guiding principles through various research and study groups. Within Microsoft is a team called FATE—for Fairness, Accountability, Transparency, and Ethics in AI.23 In the wake of the Cambridge Analytica scandal, Facebook launched an ethics team that was developing software to make sure that its AI systems avoided bias. (Notably, Facebook did not go so far as to create an ethics board focused on AI.) DeepMind created an ethics and society team. IBM publishes regularly about ethics and AI. In the wake of a scandal at Baidu—the search engine prioritized misleading medical claims from a military-run hospital, where a treatment resulted in the death of a 21-year-old student—CEO Robin Li admitted that employees had made compromises for the sake of Baidu’s earnings growth and promised to focus on ethics in the future.24 The Big Nine produce ethics studies and white papers, convene experts to discuss ethics, and host panels about ethics—but those efforts are not intertwined enough with the day-to-day operations of the various teams working on AI.

  The Big Nine’s AI systems are increasingly accessing our real-world data to build products that show commercial value. The development cycles are quickening to keep pace with investors’ expectations. We’ve been willing—if unwitting—participants in a future that’s being created hastily and without first answering all those questions. As AI systems advance and more of everyday life gets automated, we have less and less control over the decisions being made about and for us.

  This, in turn, has a compounding effect on the future of many other technologies adjacent to or directly intersecting with AI: autonomous vehicles, CRISPR and genomic editing, precision medicine, home robotics, automated medical diagnoses, green- and geoengineering technologies, space travel, cryptocurrencies and blockchain, smart farms and agricultural technologies, the Internet of Things, autonomous factories, stock-trading algorithms, search engines, facial and voice recognition, banking technologies, fraud and risk detection, policing and judicial technologies… I could make a list that spans dozens of pages. There isn’t a facet of your personal or professional life that won’t be impacted by AI. What if, in a rush to get products to market or to please certain government officials, your values aren’t reflected, not just in AI but in all of the systems it touches? How comfortable are you now, knowing that the BAT and G-MAFIA are making decisions affecting all of our futures?

  The current developmental track of AI prioritizes automation and efficiency, which necessarily means we have less control and choice over thousands of our everyday activities, even those that are seemingly insignificant. If you drive a newer car, your stereo likely adjusts the volume down every time you back up—and there’s no way to override that decision. Human error is the overwhelming cause of car accidents—and there’s no exception for me, even though I’ve never come close to running into or over something when backing into my garage. Even so, I can no longer listen to Soundgarden at full volume when I back into my garage at home. AI’s tribes have overridden my ability to choose, optimizing for what they perceive to be a personal shortcoming.

  What’s not on the table, at the G-MAFIA or BAT, is optimizing for empathy. Take empathy out of the decision-making process, and you take away our humanity. Sometimes what might make no logical sense at all is the best possible choice for us at a particular moment. Like blowing off work to spend time with a sick family member, or helping someone out of a burning car, even if that action puts your own life in jeopardy.

  Our future living with AI begins with a loss of control over the little things: not being able to listen to Chris Cornell screech “Black Hole Sun” as I pull into my garage. Seeing your name appear in an online ad for arrest records. Watching your market value erode just a bit after an embarrassing chatbot mishap. These are the tiny paper cuts that at the moment don’t seem significant, but will, over the next 50 years, amount to a lot of pain. We’re not heading toward a single catastrophe but rather the steady erosion of the humanity we take for granted today.

  It’s time to see what happens as we transition away from artificial narrow intelligence to artificial general intelligence—and what life will look like during the next 50 years as humanity cedes control to thinking machines.

  PART II

  Our Futures

  “The holy man is he who takes your soul and will and makes them his. When you choose your holy man, you surrender your will. You give it to him in utter submission, in full renunciation.”

  —FEODOR DOSTOYEVSKY, THE BROTHERS KARAMAZOV

  CHAPTER FOUR

  FROM HERE TO ARTIFICIAL SUPERINTELLIGENCE: THE WARNING SIGNS

  The evolution of artificial intelligence, from robust systems capable of completing narrow tasks to general thinking machines, is now underway. At this moment in time, AI can recognize patterns and make decisions quickly, find hidden regularities in big data sets, and make accurate predictions. And it’s becoming clear with each new milestone achieved—like AlphaGo Zero’s ability to train itself and win matches using a superior strategy it developed on its own—that we are entering a new phase of AI, one in which theoretical thinking machines become real and approach our human level of cognition. Already AI’s tribes, on behalf of and within the Big Nine, are building conceptual models of reality to help train their systems—models that do not and cannot reflect an accurate picture of the real world. It is upon these models that future decisions will be made: about us, for us, and on behalf of us.1

  Right now, the Big Nine are building the legacy code for all generations of humans to come, and we do not have the benefit of hindsight yet to determine how their work has benefitted or compromised society. Instead, we must project into the future, doing our best to imagine the good, neutral, and ill effects AI might plausibly cause as it evolves from simple programs to complex systems with decision-making authority over the many facets of our everyday life. Mapping out the potential impacts of AI now gives us agency in determining where human society goes from here: we can choose to maximize the good and minimize harm, but we cannot do this in reverse.

  Most often we do our critical thinking after a crisis as we try to reverse-engineer poor decisions, figure out how warning signs were missed, and find people and institutions to blame. That kind of inquiry feeds public anger, indulging our sense of righteous indignation, but it does not change the past. When we learned that officials in Flint, Michigan, knowingly exposed 9,000 children under the age of six to dangerously high levels of lead in the city’s drinking water supply—which will likely result in decreased IQs, learning disabilities, and hearing loss—Americans demanded to know how local government officials had failed. The Space Shuttle Columbia disintegrated during reentry into Earth’s atmosphere in 2003, killing all seven crew members. Once it was discovered that the disaster resulted from known vulnerabilities, we demanded explanations for NASA’s complacency. In the aftermath of the Fukushima Daiichi Nuclear Power Plant meltdown, which killed more than 40 people and forced thousands from their homes in 2011, everyone wanted to know why Japanese officials failed to prevent the disaster.2 In all three cases, there were abundant warning signs in advance.

  With regard to AI, there are now clear warning signs portending future crises, even if those signals are not immediately obvious. While there are several, here are two examples worth your consideration, along with their potential consequences:

  Warning #1: We mistakenly treat artificial intelligence like a digital platform—similar to the internet—with no guiding principles or long-term plans for its growth. We have failed to recognize that AI has become a public good. When economists talk about a “public good,” they use a very strict definition: it must be nonexcludable, meaning no one can practically be prevented from using it, and it must be nonrivalrous, meaning that when one person uses it, another can use it too. Government services, like national defense, fire service, and trash pickup, are public goods. But public goods can also be created in markets, and as time wears on, market-borne public goods can produce unintended consequences. We’re living with one great example of what happens when we generalize technology as a platform: the internet.

  The internet began as a concept—a way to improve communication and work that would ultimately benefit society. Our modern-day web evolved from a 20-year collaboration between many different researchers: in the earliest days as a packet-switching network developed by the Department of Defense and then as a wider academic network for researchers to share their work. Tim Berners-Lee, a software engineer based at CERN, wrote a proposal that expanded the network using a new set of technologies and protocols that would allow others to contribute: the uniform resource locator (URL), hypertext markup language (HTML), and hypertext transfer protocol (HTTP). The World Wide Web began to grow as more people used it; because it was decentralized, it was open to anyone who had access to a computer, and new users didn’t prevent existing users from creating new pages.

  The internet certainly wasn’t imagined as a public good, nor was it originally intended for everyone on the planet to be able to use and abuse like we do today. Since it was never formally defined and adopted as a public good, it was continually subjected to the conflicting demands and desires of for-profit companies, government agencies, universities, military units, news organizations, Hollywood executives, human rights activists, and everyday people all around the world. That, in turn, created both tremendous opportunities and untenable outcomes. This year—2019—is the 50th anniversary of the first two computers sending packets between each other on a wide area network, and in the haze of Russia hacking an American presidential election and Facebook subjecting 700,000 people to psychological experimentation without their knowledge, some of the internet’s original architects are wishing they’d made better decisions decades ago.3 Berners-Lee has issued a call to arms, urging us all to fix the unforeseen problems caused by the internet’s evolution.4

  While plenty of smart people advocate AI for the public good, we are not yet discussing artificial intelligence as a public good. This is a mistake. We are now at the beginning of AI’s modern evolution, and we cannot continue to think of it as a platform built by the Big Nine for digital commerce, communications, and cool apps. Failing to treat AI as a public good—the way we do our breathable air—will result in serious, insurmountable problems. Treating AI as a public good does not preclude the G-MAFIA from earning revenue and growing. It just means shifting our thinking and expectations. Someday, we will not have the luxury of debating and discussing automation within the context of human rights and geopolitics. AI will have become too complex for us to untangle and shape into something we prefer.

  Warning #2: AI is rapidly concentrating power among the few, even as we view AI as an open ecosystem with few barriers. The future of AI is being built by two countries—America and China—with competing geopolitical interests, whose economies are closely intertwined, and whose leaders are often at odds with each other. As a result, the future of AI is a tool of both explicit and soft power, and it—along with AI’s tribes—is being manipulated for economic gain and strategic leverage. The governing frameworks of our respective countries, at least on paper, might initially seem right for the future of thinking machines. In the real world, they create risk.

  America’s open-market philosophy and entrepreneurial spirit don’t always lead to unfettered opportunity and absolute growth. As with every other industry—telecommunications, health care, auto manufacturing—over time, we in the United States wind up with less competition, more consolidation, and fewer choices as an industry’s ecosystem matures. We have two mobile operating system choices: Apple’s iOS, which accounts for 44% of market share in the US, and Google Android, which is at 54% and climbing. (Less than 1% of Americans use Microsoft or BlackBerry devices.)5 Americans do have options when it comes to personal email providers, but 61% of people aged 19–34 use Gmail, while Yahoo and Hotmail account for most of the rest (19% and 14%, respectively).6 We can shop anywhere online we want, yet Amazon accounts for 50% of the entire US e-commerce market. Its closest competitors—Walmart, Best Buy, Macy’s, Costco, and Wayfair—have a combined market share of less than 8%.7

 
