Solomon's Code

by Olaf Groth


  • How can the promotion and training of employees for new employment and personal growth opportunities be integrated into AI-driven automation of production and work processes? If your employer assigned you an AI buddy that constantly nudged you to learn new tricks, experiment, and be more productive, would you stay or would you go?

  • How can effective, continuous exchange between different stakeholders be facilitated through AI? If you gave an AI a monthly budget and told it about your preferences, would you let it handle all your transactions?

  • What type of permanent international institution would best facilitate the debate about and the governance of AI in seeking the greatest human and economic benefit? If an international group of people guided the AI agents that make decisions on your behalf, what would the group have to do to ensure your trust in those systems?

  Of course, neither the think tank nor the convening function of the Congress could take on all these questions at once. We need to focus on some of the highest-value and most urgent issues—staying safe, keeping a job, and having a say over the direction of our lives. Each issue will need a leader, what Wendell Wallach and Gary Marchant call an “issue manager,”‡ who drives the debate and resolution across cultures. After all, these concerns cross borders, not only on the European and American continents but also in Russia, China, and countries with other political and cultural viewpoints. A congress might have to start as a transatlantic endeavor, bringing together countries of like mind, and then bringing in China, Russia, and the other countries we often spar with. It might be that China strikes out on its own, seeking to assert its bolder place in the world order. While not desirable, that could help forge parity between the two primary geopolitical forces for AI development—a balance that might clarify both our differences and our commonalities and, perhaps, provide a platform for more constructive dialogue.

  WHAT WE LEARN FROM THOSE WHO’VE COME BEFORE

  The multinational treaties and governance models of our past might not provide cause for optimism, but they do offer guidance for mechanisms that assure trust, preserve values, and balance power. Getting everyone to agree to a common model is hard enough, but it might prove an even tougher battle to monitor AI development and enforce a new global standard of care. When we asked international governance experts and practitioners for examples of a comparable treaty that worked, most pointed to the same single agreement: the Montreal Protocol on Substances that Deplete the Ozone Layer. The protocol, which phased out chlorofluorocarbons (CFCs) to mitigate depletion of the ozone layer, was the most broadly accepted and most quickly implemented global treaty to tackle a tragedy of the commons problem, they said.

  Jaan Tallinn came to the same conclusion after spending the better part of four years exploring these issues for a research paper on global cooperation. The cofounder of Skype more recently helped launch the Centre for the Study of Existential Risk and the Future of Life Institute, to which he contributes. His paper focuses first on humanity’s experience with the tragedy of the commons—those instances when individuals act in their own self-interest and deplete, spoil, or ignore the necessary maintenance of a shared space or resource. His paper then explores the technological means we might have at our disposal to avoid such fates in the future, and what we might do with those tools as they develop. “There’s no nation that has actively maximized steps to limit deforestation or global warming,” Tallinn says. “It’s a consequence of human activity. So, how do we all think two steps ahead of that?” The sorts of commercial, national, and economic incentives that drive the advancement of AI technologies don’t exist for general AI governance. “The topic of AI safety seems to be a tragedy of the commons issue,” Tallinn says.

  So, what was it about the Montreal Protocol that worked so well? For one, it defined a shared problem—a hole in the ozone layer of the earth’s atmosphere that diminished a protective shield against harmful radiation from the sun and increased rates of skin cancer and other unpleasant health impacts that nobody, regardless of political convictions, liked much. It then addressed this widely accepted problem with a tangible and easily understandable solution, one that had enough popular support to overcome industry objections. Next, it included provisions, among them financial incentives, to assist countries as they phased out use of CFCs. Last but not least, it had teeth, establishing certain trade actions against nations that refused to participate—the types of sanctions that, for a variety of reasons, haven’t worked in many subsequent multinational negotiations. Global agreements to control the use of chemical weapons and landmines, govern the use of nuclear energy, and track the flow of small arms share many of those same attributes. Yet none of them brought together the same unique combination of attributes as the Montreal Protocol—such as the sanctions or the broadly shared acceptance of a critical problem—and none saw quite the same level of success.

  The Paris Agreement on climate change, signed in April 2016, provides some lessons, as well. Once again, participating nations faced a shared problem, and developed countries pledged to help developing states with pollution-mitigation investments. However, political support for the agreement varied across the globe, as environmental and economic interests clashed. The rift opened between countries that had already attained the benefits of industrialization and those at an earlier stage of the process, creating what Todd Stern, President Obama’s special envoy for climate change, called a “firewall division.”§ Unsurprisingly, the latter set of countries wanted the free industrial reins that the former group had enjoyed a century earlier.

  However, unlike previous accords, including the Kyoto Protocol of 1997, the representatives in Paris established that any agreement needed to apply uniformly across both developed and developing economies. It also let countries set their own emission-reduction strategies and goals, so long as those plans supported the joint mission of staying under the two-degree global warming threshold. Critically, countries had to subject themselves to public scrutiny by submitting their targets ahead of the Paris talks. The combination of equal treatment and open oversight helped tear down the “firewall division,” and it helped identify the allowances developing countries needed, which richer nations could help address with private and public aid. As such, the Paris Agreement accomplished what few other accords had before: a bottom-up structure that not only facilitated a global commitment but also addressed anxieties by remaining flexible enough to evolve and to allow different approaches in different nations. Despite President Trump’s announced intention to withdraw the United States from the agreement, more than 175 nations remain active parties to it.

  Unfortunately, artificial intelligence does not share some of the same solidifying factors that helped facilitate these successful climate agreements. With the notable exception of AI’s threat to jobs, most of its risks aren’t immediately tangible. Few people understand how deeply cognitive computing influences their lives already. (The dystopian stories of robot overlords hardly help on this end. In that telling, how much of a threat does AI really present if it’s not Skynet sending a muscle-bound Terminator back from the future?) Add to this the rapid globalization of trade and the shifting nature of multinational competition and cooperation, and it’s clear the guidance of our AI future must take a different shape—and derive its legitimacy from a combination of existing and new authoritative sources.

  Patrick Cottrell studies the legitimacy and successes (or failures) of various international organizations, from the League of Nations to the International Olympic Committee. Cottrell spent the early part of his career in diplomacy, working at the US State Department before switching back to an academic track. He now teaches political science at Linfield College, where he wrote The Evolution and Legitimacy of International Security Institutions. Cottrell explains that most traditional intergovernmental entities, the United Nations in particular, derive much of their credibility through the “symbolic power” they radiate. “They stand for certain universalist principles, like peace and prosperity are good for all,” he says. But these entities—born in a very different post-World War II environment and often formed by sovereign states—aren’t necessarily well equipped to handle the governance challenges of the twenty-first century. Their design cannot easily adapt to transnational threats like forced migration, terrorism, climate change, and cyberattacks, which cross borders and, especially in the case of cyber, operate in an entirely different dimension.

  In response to these developments, Cottrell points to a growing, interdisciplinary body of work on global governance that is beginning to explore efforts to meet these challenges, particularly scholarship on “new governance.” The speed and uncertainty of an AI-pervasive future call for the greater flexibility and wider participation of these models, which often operate separately from established governance bodies, such as the United Nations. As Cottrell notes, this approach recognizes the need for a broad cross-section of participants at the table, the proactive creation of new knowledge, and an iterative problem-solving design that accepts that many successes will have to emerge from trial and error. “We can’t possibly foresee the consequences of some of these things or anticipate them entirely,” he says. “But we can create from a technology perspective, a policy perspective, and an industry perspective a mechanism that says, ‘What sort of guidelines should we use to govern research, the ethics, [or] the dual-use application possibilities of AI?’”

  If legitimacy is a social base of cooperation, as Cottrell argues, the legitimacy of this type of governance body would derive from its inclusiveness and the robustness of the norms and standards it disseminates. It may still work as an oversight body, but one with standards that evolve alongside new developments and mechanisms to disseminate best practices. It doesn’t work if it doesn’t adapt and generate standards from a wide range of participants. Still, Cottrell notes, an alignment with an existing pillar of global governance, such as the United Nations or the World Trade Organization, could help enhance the credibility of a new AI governing body. In this regard, a Cambrian Congress could nest alongside existing transnational organizations, where it could help crystallize shared values and build a consensus that national governments could use to negotiate more inclusive and successful treaties. And yet it could still accommodate individual national governments, which regulate their countries and are gatekeepers to civil action at home. It could also accommodate the necessary participation of an entire network of actors in AI—the separate entities that still share a common interest in governance of these advanced technologies. Only then can we ensure that the opportunity for innovation doesn’t get killed by preemptive fear, overly broad restrictions, or intergovernmental squabbling. “The UN Global Compact is perhaps one example that we might look to today,” Cottrell says. The voluntary initiative calls on CEOs to commit to the United Nations’ sustainability principles. It retains a clear tie to governments but “is very clearly shaped by corporate members, even though it’s housed within the UN. That makes it a good bit more agile.”

  That sort of agility alone doesn’t make a body inclusive of multiple stakeholders, but it matters, especially in digital technology, where transformative shifts in performance and scale materialize at a sprinter’s pace. The slow, marathon speeds of traditional multinational governance could never keep up. Even a networked, solutions-based approach can’t hit the same speeds, but it at least moves rapidly enough and remains flexible enough to maintain an appropriate threshold of monitoring capacity, accountability, and evolving mission in a rapidly changing AI ecosystem. Computer science experts can rotate in and out of different parts of the network, including the Cambrian Congress think tank, signing pledges and confidentiality agreements as they enter. Needless to say, compensation would be an issue because AI talent comes at a premium. But for this challenge, the AI sector might take some ideas from existing strategies. For example, Singapore rotates its civil servants on a regular basis, enabling the public sector and its citizens to learn and share knowledge and skills across multiple industries. It still needs to attract talent by offering salaries that are competitive with the private sector, but with this approach it can maximize scarce resources.

  GOOD PEOPLE FOR BAD POLITICS

  Count René Haug among the skeptics. The former deputy head of the Swiss mission to the United Nations, Haug sees little evidence that governments would play together nicely. They certainly didn’t when it came to the Organization for the Prohibition of Chemical Weapons, an initiative that one might think has rather broad support from the global populace. “If the OPCW is any indication, governments won’t want certain information about the inner workings of smart technologies to be discussed and disseminated,” says Haug, who now owns and runs a vineyard in Northern California. The implementation of a need-to-know system of monitoring, enforcement, and protection of confidential business information is expensive and all but impossible to carry out, even when widely adopted. Governments regularly squeeze through loopholes to sidestep monitoring, enforcement, and protection requirements, rendering confidentiality agreements effectively toothless. Even the OPCW suffers from that fate, Haug admits. For example, he says, the United States and other delegations disputed the need for the organization to install a strong encryption program for its servers, and the five permanent members of the UN Security Council basically requested unrestricted access to all the information compiled in those databases, including confidential business information. Countries also could request that their intelligence officers take staff positions at the OPCW, obliterating any chance the multilateral organization would have to safeguard private-sector confidentiality and competitiveness. The meddling of national interests proved essentially fatal, Haug says: “Lose that safeguard of confidential business information, and you lose the private sector.”

  UN programs to monitor and control the flow of small arms ran into similar challenges, says Edward Laurance, professor emeritus at the Middlebury Institute of International Studies in Monterey, California. Laurance and his peers who worked on the evolution of these programs over the years took three concrete steps that, in many ways, mimicked the elements that made the Montreal Protocol on CFCs so successful. First, he says, they developed enough data-backed evidence (the think tank function) to convince national leaders that small arms proliferation causes tangible societal and economic problems for all countries, not just those with the greatest amount of gun-induced violence. “Everybody started to see themselves as being in the game even if they didn’t have the ball,” Laurance says.

  As a second step, working groups established voluntary standards and certifications for the trade of small arms, and then assigned networks of point people who verified compliance in their countries. They then pushed to establish a treaty—one that, at best, sees imperfect compliance but at least establishes a set of acceptable practices. “We ended up designing a bilateral certificate that was signed by the receiving or buying country and said, ‘You must not sell these weapons across borders or use them on your own people,’” Laurance explains. That created a moral boundary and a framework for exposing noncompliant nations as clear outliers.

  The last and, in some ways, the biggest hurdle is related to the talent and skills needed to carry out the agreements, Laurance says. UN and other government personnel might have the expertise to establish the processes needed for a treaty, but diplomats and agency administrators rarely have the level of technical expertise to carry it out. In the case of small arms, the involvement of military experts helped in many countries, Laurance says, because they know weapons, but monitoring and enforcement rules for advanced technologies would require an even more technically skilled cadre of experts. For that, private consultants and specialized academics would have to take part. “It’s good to have that diversity and not rely on any one resource too much, because governments deemphasize an issue, skew science, or pull out of accords for political reasons,” Laurance says. “Someone needs to stay” and provide the point of contact and a pool of expertise from each country and major stakeholder. Someone needs to keep the doors open for dialogue, debate, and small steps, even in tough times when bigger steps are hard.

  The private sector and a strong coalition of NGOs might play an especially critical role in this regard. While governments and public policy can and often do change, broad participation of these private- and civil-sector entities might lend greater consistency and longer-term consensus. For example, given that many, if not most, American business leaders supported the goals of the Paris Agreement, President Trump’s decision to withdraw from it might have been far less likely if those commercial and civic leaders had been parties to the negotiations, if not signatories to the accord itself. In a country where “big government” and political-intellectual elites are abhorred by many citizens, especially those who voted for Trump, influential private-sector institutions—unions, the American Chamber of Commerce, small and medium business associations, faith-based organizations, and top business leaders—might have swayed sentiment if they’d been stakeholders in the agreement.

  As it stands today, though, few AI and advanced technology issues seem to capture the public’s attention enough to present an obvious pathway to international governance. The one exception might be autonomous weapons. Lethal Autonomous Weapons (LAWs) capture the imagination as killer robots, but the category includes far hazier territory than what Hollywood depicts in its sci-fi blockbusters. These military robots are designed to select and attack military targets without intervention by a human operator. Advances in visual recognition, combined with the broad availability of small, cheap, fast, and low-power computing, make the “autonomy” part of LAWs easy. A teenager could buy a computer motherboard for about $50 and program it to recognize faces. She could mount it on an $800 drone that could handle the weight. And, from there, it’s not a great leap to the troubling realization that drone-based LAWs are plausible outside the control of military protocol.
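
  To see just how plausible, consider a minimal sketch of commodity face detection. The example below is our illustration, not anything specified in the text: it assumes a Python environment with the open-source OpenCV library and an attached camera, the sort of stack that runs on a cheap single-board computer.

    # Minimal face detection with OpenCV (illustrative sketch; assumes
    # `pip install opencv-python` and a camera at device index 0).
    import cv2

    # Pretrained Haar-cascade face detector that ships with OpenCV itself.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    capture = cv2.VideoCapture(0)  # default camera

    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Returns one (x, y, w, h) rectangle per detected face.
        faces = detector.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("faces", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    capture.release()
    cv2.destroyAllWindows()

  Every piece here is off the shelf, including the pretrained detector, which is precisely why the “autonomy” ingredient of such systems has become so cheap.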

 
