The Big Nine

by Amy Webb


  Member organizations and countries should collaborate and share their findings, including vulnerabilities and security risks. This would help GAIA members keep an advantage over bad actors who might try to develop hazardous AI capabilities, such as autonomous hacking systems. While it may seem unlikely that the Big Nine would be willing to share trade secrets, here too there is precedent: the World Health Organization coordinates global health responses in times of crisis, while a group called the Advanced Cyber Security Center mobilizes law enforcement, university researchers, and government departments around cyberthreats. Collaboration would also allow GAIA members to develop a series of sentinel AIs, which at first would identify whether an AI system is behaving as intended—not just its code, but its use of our data and its interaction with the hardware systems it touches. Sentinel AIs would formally verify that AI systems are performing as intended, and as the AI ecosystem matures toward AGI, any autonomous changes that might alter a system’s existing goals would be reported before any self-improvement could be made. For example, a sentinel AI—a system designed to monitor and report on the other AIs—could review inputs into a generative adversarial network, which was detailed in the earlier scenario chapters, and ensure it is acting as intended. Once we transition from ANI to AGI, sentinel systems would continue to report and verify—but they would not be programmed to act autonomously.
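  To make the sentinel idea concrete, here is a minimal, illustrative sketch in Python of how a monitoring layer might audit another system’s outputs against a declared behavioral specification and gate a proposed self-modification. The class names, the toy specification, and the example scores are assumptions made for illustration only; they are not drawn from the book or from any existing GAIA design.

```python
# Illustrative sketch only: a "sentinel" that verifies another system's
# behavior against a declared specification and reports any deviation
# before a proposed change to that system is allowed. All names here are
# hypothetical; nothing below is an actual GAIA or Big Nine design.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class BehaviorSpec:
    name: str
    check: Callable[[float], bool]  # returns True if a single output is acceptable


@dataclass
class SentinelReport:
    violations: List[str] = field(default_factory=list)

    @property
    def approved(self) -> bool:
        return not self.violations


class Sentinel:
    """Monitors a system's outputs and gates proposed self-modifications."""

    def __init__(self, specs: List[BehaviorSpec]):
        self.specs = specs

    def audit(self, outputs: List[float]) -> SentinelReport:
        report = SentinelReport()
        for spec in self.specs:
            bad = [y for y in outputs if not spec.check(y)]
            if bad:
                report.violations.append(
                    f"{spec.name}: {len(bad)} of {len(outputs)} outputs out of bounds"
                )
        return report

    def review_change(self, outputs_after_change: List[float]) -> bool:
        # Report first; permit the change only if no specification is violated.
        report = self.audit(outputs_after_change)
        for violation in report.violations:
            print("SENTINEL:", violation)
        return report.approved


if __name__ == "__main__":
    # Toy specification: a recommender's scores must stay within [0, 1].
    sentinel = Sentinel([BehaviorSpec("score_range", lambda y: 0.0 <= y <= 1.0)])
    proposed_outputs = [0.2, 0.9, 1.4]  # 1.4 violates the specification
    print("Change approved?", sentinel.review_change(proposed_outputs))
```

  The essential design choice in this sketch is that the sentinel only reports and approves or rejects; it takes no autonomous corrective action of its own, which mirrors the report-and-verify role described above.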

  Once we’re nearing AGI, the Big Nine and all those in the AI ecosystem should agree to constrain AI systems to test environments and to simulate risk before deploying them in the real world. What I’m proposing is vastly different from the current practice of product testing, which mainly looks at whether a system performs its functions as designed. Because we cannot know all of the possible ways in which a technology might evolve or be repurposed in the real world before actually deploying it, we must run both technical simulations and risk mapping to see the economic, geopolitical, and personal liberties implications. AI should be boxed in until we know that the benefits of the research outweigh possible negative outcomes, or until there is a way to mitigate the risks. This means allowing the Big Nine to pursue their research without the constant threat of imminent investor calls and conference presentations.
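  As a hedged sketch of what gating deployment on simulated risk might look like, the toy example below runs a policy through several sandboxed scenarios and refuses deployment when the estimated harm rate crosses a threshold. The scenario names, the harm model, and the 5 percent threshold are placeholders chosen purely for illustration, not figures proposed in the book.

```python
# Illustrative sketch: gate real-world deployment on simulated risk.
# The scenarios, the toy harm model, and the threshold are placeholders.
import random
from statistics import mean


def simulate(policy, scenario: str, trials: int = 200) -> float:
    """Run the policy in a sandboxed scenario and return an estimated
    probability of a harmful outcome (a toy model driven by random draws)."""
    rng = random.Random(scenario)  # deterministic per scenario
    harms = 0
    for _ in range(trials):
        action = policy(scenario)
        # Toy harm model: aggressive actions fail more often outside benign settings.
        harm_prob = 0.02 + (0.20 if action == "aggressive" and scenario != "benign" else 0.0)
        harms += rng.random() < harm_prob
    return harms / trials


def risk_map(policy, scenarios) -> dict:
    """Map each scenario to its simulated harm rate (simulation plus risk mapping)."""
    return {s: simulate(policy, s) for s in scenarios}


def approve_deployment(policy, scenarios, risk_threshold: float = 0.05) -> bool:
    """Approve only if average and worst-case simulated risk stay under the threshold."""
    risks = risk_map(policy, scenarios)
    for scenario, risk in risks.items():
        print(f"{scenario:>20}: estimated harm rate {risk:.3f}")
    return mean(risks.values()) < risk_threshold and max(risks.values()) < 2 * risk_threshold


if __name__ == "__main__":
    # A toy policy that acts aggressively only in the benign scenario.
    policy = lambda scenario: "aggressive" if scenario == "benign" else "cautious"
    scenarios = ["benign", "economic_shock", "adversarial_users"]
    print("Cleared for deployment?", approve_deployment(policy, scenarios))
```

  In practice the scenarios would cover the economic, geopolitical, and personal liberties mappings described above, but the gating logic would look much the same: no real-world deployment until the measured benefits outweigh the simulated risks, or until those risks can be mitigated.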

  Governmental Change: The Case for Reorienting the United States and China

  GAIA must work in partnership with the governments of its member countries. But those national governments must recognize that they can no longer work at the speed of a large bureaucracy. They must engage in collaboration and in long-term planning, and they must be nimble enough to act more quickly in order to confront the future of AI.

  All levels of government—leaders, managers, people who work on budgets, those who write policy—should demonstrate a working knowledge of AI and, ideally, should have technical expertise. In the United States, this means that all three branches of our government should work toward domain expertise on AI. In such varied places as the Department of the Interior, the Social Security Administration, Housing and Urban Affairs, the Senate Foreign Relations Committee, Veterans Affairs, and beyond, there must be AI experts embedded and emboldened to help guide decision-making.

  Because we lack standard organizing principles on artificial intelligence within the US government, there are no fewer than two dozen agencies and offices that are working on AI in silos. In order to drive innovation and advancement at scale, we must build internal capacity for research, testing, and deployment—and we need cohesion across departments. At the moment, AI is outsourced to government contractors and consultancies.

  When that work gets outsourced to others, our government leaders are absolved from rolling up their sleeves and familiarizing themselves with the intricacies of AI. They aren’t able to build up the institutional knowledge required to make good decisions. They just don’t have the lexicon, they don’t know the history, and they aren’t familiar with the key players. This lack of familiarity creates unforgivable knowledge gaps, which I’ve observed in meetings with senior leaders across multiple agencies, among them the Office of Science and Technology Policy, the General Services Administration, the Department of Commerce, the Government Accountability Office, the State Department, the Department of Defense, and the Department of Homeland Security.

  Early in 2018—long after the BAT had announced numerous AI achievements and Xi Jinping made the CCP’s AI plans public—President Trump sent Congress a 2019 budget that called for a 15% cut to science and technology research funding.4 What was left was a mere $13.7 billion, which was intended to cover a lot: outer space warfare, hypersonic technology, electronic warfare, unmanned systems, and also artificial intelligence. At the same time, the Pentagon announced that it would invest $1.7 billion over five years to create a new Joint Artificial Intelligence Center. These are appallingly low numbers that demonstrate a fundamental lack of understanding of what AI promises and truly requires. For perspective, in 2017 alone the G-MAFIA spent a combined $63 billion on R&D—nearly five times the US government’s total science and tech research budget.5 But it also points to a bigger, thornier problem: if our government can’t or won’t fund basic research, then the G-MAFIA is stuck answering to Wall Street. There is no incentive to pursue the kind of research that furthers AI in the public interest or any other research on safety, security, and transparency that isn’t attached to a profit center.

  The United States also lacks clear messaging about our role in the future of artificial intelligence given China’s current positioning. We tend to make announcements about AI after China has revealed its next maneuver. Beijing thinks that Americans only care about yoni eggs and craft beers and Netflix and chilling. We’ve demonstrated that as consumers, we are easily manipulated by advertising and marketing, and we are quick to spend money when we don’t have it. We’ve demonstrated that as voters, we are vulnerable to salacious videos and conspiracy theories and what are clearly made-up news stories—we can’t think critically for ourselves. We repeatedly show that money is all that matters as we prioritize fast growth and steady profit over progress in basic and applied research. These are callous assessments, but they’re difficult to argue with. To Beijing and the outside world, it looks as if we are preoccupied with putting Americans and America first.

  For the past five decades, the US posture on China has oscillated between containment and engagement, and this is how our leaders have framed the debate on AI. Should we cooperate with the BAT and with Beijing? Or box China in through the application of sanctions, cyberwarfare, and other acts of aggression? Choosing between containment and engagement assumes that the United States still has the same amount of power and leverage we did in the 1960s. But in 2019, America simply does not enjoy unilateral power on the global stage. Our G-MAFIA are mighty, but our political influence has waned. China, through the BAT and its government agencies, has made too many deals, invested too much money, and developed too many deep diplomatic ties all around the world: in Latin America, Africa, Southeast Asia, and even in Hollywood and Silicon Valley.

  We must come to terms with a third option for China: the United States must learn to compete. But to compete, we need to take a step back and see the bigger picture of AI, not just as a cool technology or as a potential weapon, but as the third era of computing into which everything else connects. The US needs a cohesive national AI strategy backed by a reasonable budget. We need to develop diplomatic relationships that can outlast our four-year election cycles. We need to get into position to offer a better deal than China to countries all around the world—countries who, just like ours, want their people to live healthy, happy lives.

  Regardless of what happens to Xi—his citizens may revolt and try to topple the CCP, or he may suddenly come down with a terminal illness—big parts of the world now depend on China for technology, manufacturing, and economic development. And China depends on AI for its future survival. China’s economy is growing unbelievably fast, and hundreds of millions of Chinese will soon enter the middle and upper middle classes. There is no playbook for that kind of social and economic mobility at such an immense scale. Beijing understands that AI is the connective tissue between people, data, and algorithms, and that AI can help inculcate the CCP’s values in the masses in order to keep its people in line. It sees AI as a means to the resources it will need in the future, resources that it can obtain through trading with other countries in need of capital and investment.

  So what would possibly compel China to change its developmental track and plans for AI? There’s one very good reason for China to work toward the optimistic scenario from the beginning: basic economics. If upward mobility in China is happening too fast for Beijing to contend with, authoritarian rule isn’t the only realistic strategy. China is poised to become a global leader across many different industries and fields—and not just as a manufacturer and exporter of goods designed elsewhere. If Beijing agreed to transparency, data protection, and addressing human rights, it would be in a position to co-lead GAIA as an equal partner with the US, which could mean a realistic path toward elevating millions of Chinese people out of poverty. Collaboration doesn’t mean sidelining the CCP. It could both preserve the CCP and propel China’s formidable workforce, army of researchers, and geoeconomic might to the forefront of human civilization.

  If Beijing won’t acknowledge an alternate—but positive—future that deviates from its various strategic plans, then we can call on the leaders of the BAT and China’s AI tribe to make better choices. We can ask for courageous leadership from the BAT, who can decide they want a better world for the Chinese people, and for their allies and partners. If the BAT helps preserve the status quo in China, 20 years from now China’s citizens—and the citizens of all the countries that have accepted deals—will be living fearfully under constant surveillance, with no ability to express their individuality. The BAT will enable human suffering. Christians won’t be able to pray together without fear of being reported and punished. Lesbian, gay, and transgender people will be forced into hiding. Ethnic minorities will continue to be rounded up and sent away, never to be heard from again.

  AI demands courageous leadership now. We need our government to make difficult choices. If we instead preserve the status quo in the US, our eventual default position 20 years from now will be antitrust cases, patent lawsuits, and our government trying in vain to make deals with companies who’ve become too big and too important to override. We must allow the G-MAFIA to work at a reasonable pace. We should be comfortable with the G-MAFIA going a few quarters without making a major announcement. If they aren’t cranking out patents and peer-reviewed research at a breakneck pace, we shouldn’t question whether the companies are in trouble or whether all this time we’ve been inflating an AI bubble.

  In the United States, developing a strategy and demonstrating leadership is critical—but that still isn’t enough to guarantee the institutional capacity we’ll need in the future. We therefore should reinstate the Office of Technology Assessment, which was established in 1972 to provide nonpartisan scientific and technical expertise to those writing policy—and which was defunded by a shortsighted Newt Gingrich and the Republican-controlled Congress 20 years later. The OTA’s job was to educate our lawmakers and staff within all three branches of government on the future of science and technology, and it did so using data and evidence and without politicizing its research.6 For the trivial amount of money it saved by closing the OTA, Congress willingly and intentionally dumbed itself down. Vestiges of the OTA’s work still exist in other areas of government. The Congressional Research Service employs lawyers and analysts who specialize in legislative expertise, but none of its five approved research areas specifically covers AI. Instead, the research focuses on issues like mineral production, space exploration, the internet, chemical safety, farm credits, and environmental justice. The Office of Net Assessment is the Pentagon’s secretive, internal think tank—and in my experience, it’s staffed with the brightest and most creative minds in the DoD. But the ONA doesn’t have the budget or workforce it should, and some of its work is handled by contractors.

  The US government needs to build internal capacity. It needs to develop strong, solid muscles for innovation. If reviving the Office of Technology Assessment is too much of a political lightning rod, then it can be renamed the Department of the Future or the Office of Strategic AI Capabilities. It should be well funded, free of political influence, and responsible for basic and applied research. It should aggressively educate the executive, legislative, and judicial branches of the US government.

  Starting a new office will help us plan better for the future, but we need a nonpartisan group of smart people who can mitigate the sudden impacts of AI as they happen. For that, we ought to expand the purview of the CDC and rename it the Center for Disease and Data Control—or the CDDC. As it stands, the CDC is our nation’s health protection agency. We’ve seen it in action during past Ebola crises, when it coordinated quarantine orders with other health agencies and was a primary source for journalists covering outbreaks. When there was a Congolese Ebola outbreak in 2018, border patrol agencies didn’t suddenly staff their own Ebola teams to try to contain the spread of the virus. Instead, they followed standard CDC protocol. So what happens if, a decade from now, we have a recursively self-improving AI that starts to cause problems? What if we inadvertently spread a virus through our data, infecting others? The CDC is the global leader in designing and implementing safety protocols that educate the public and can mobilize disaster responses. Given AI’s very close relationship with health and our health data, it makes sense to leverage the CDC.

  But who would come and work on AI for an OTA or a CDDC when the perks of Silicon Valley are spectacularly more attractive? I’ve had lunch in both the navy’s Executive Dining Facility in the Pentagon and on the G-MAFIA’s campuses. The navy’s dining room is smartly appointed, with insignias on the plates and a trim daily menu of meal options—and, of course, there’s always a chance you could wind up sitting next to a three- or four-star admiral. That being said, enlisted men and women don’t get to eat in the Executive Dining Facility. People who work at the Pentagon have a choice of food courts with a Subway, Panda Express, and Dunkin Donuts.7 I had a toasted panini once at the Center Court Café, which was dry, but edible. The food on the G-MAFIA’s campuses isn’t remotely comparable: organic poke bowls at Google in New York, and seared diver scallops with maitake mushrooms and squid-ink rice at Google’s office in LA. For free. Food isn’t the only perk within the G-MAFIA. Just after Amazon’s Spheres opened in Seattle, a friend took me on a tour of what is essentially an enormous greenhouse/workspace. The Spheres are just marvelous: climate-controlled, glass-enclosed, self-contained ecosystems made up of 40,000 species of plants from 30 different countries.8 The air is clean and fragrant, the temperature is around 72 degrees regardless of what the weather is like outside, and there are comfortable chairs, loungers, and tables all around. There’s even an enormous tree house. Amazon staff are free to work in the Spheres anytime they want. Meanwhile, at Facebook, full-time staff get four months of parental leave, and new parents get $4,000 cash to help them out with supplies.9

  My point is this: it’s really hard to make the case for a talented computer scientist to join the government or military, given what the G-MAFIA offer. We’ve been busy funding and building aircraft carriers rather than spending money on talented people. Rather than learning from the G-MAFIA, we instead mock or chastise their perks. The opportunity cost of civic duty is far too great in the United States to attract our best and brightest to serve the nation.

  Knowing this, we ought to invest in a national service program for AI. Something akin to a Reserve AI Training Corps, or RAITC—like the ROTC, but graduates could go either into the military or into government. Students would enter the program in high school and be offered free college tuition in exchange for working in civil or military service for a few years. They should also be given access to a lifetime of free, practical skills training, offered throughout the year. AI is changing as it matures. Incentivizing young people to commit to a lifetime of training is not only good for them, it helps transition our workforce for the third era of computing. It also directly benefits the companies where they ultimately land jobs—because it means their skill sets are kept current.

  But Washington cannot act alone. The US government must look at the G-MAFIA, and at the tech sector, as strategic partners rather than platform providers. In the 20th century, the relationship between DC and the big technology companies was based on shared research and learning. Now that relationship is transactional at best, but more often adversarial. After two terrorists killed more than a dozen people and wounded nearly two dozen more at a holiday party in San Bernardino, California, the FBI and Apple entered into a heated public debate about encryption. The FBI wanted to crack open the phone to get evidence, but Apple wouldn’t help. So the FBI got a court order demanding that Apple write special software, which Apple then fought not only in court but in the news media and on Twitter.10 That was a reaction to something that had already happened. Now imagine if AI were involved in an ongoing crime spree or started to self-improve in a way that was hurting people. The last thing we want is for the G-MAFIA and the government to argue back and forth under duress. Forgoing a relationship built on mutual respect and trust makes America—and every one of its citizens—vulnerable.

  Lastly, regulations, which might seem like the best solution, are absolutely the wrong choice. Regardless of whether they’re written independently by lawmakers or influenced by lobbyists, a regulatory pursuit will shortchange our future. Politicians and government officials like regulations because they tend to be single, executable plans that are clearly defined. In order for regulations to work, they have to be specific. At the moment, AI progress is happening weekly—which means that any meaningful regulations would be too restrictive and exacting to allow for innovation and progress. We’re in the midst of a very long transition, from artificial narrow intelligence to artificial general intelligence and, very possibly, superintelligent machines. Any regulations created in 2019 would be outdated by the time they went into effect. They might alleviate our concerns for a short while, but ultimately regulations would cause greater damage in the future.

 
