Solomon's Code

by Olaf Groth


  EFFECTIVENESS REQUIRES INCLUSIVENESS

  These and so many other initiatives are advancing global awareness of AI values, power, and trust. Yet, we believe each of the current initiatives suffers from at least one of three imperfections. First, many are top-down or driven by a technological, societal, or political elite. No matter how well intentioned, a small, powerful subset of the population can only guide us so far in a field that’s starting to pervade almost every aspect of our lives, often in very personal and identity-shaping ways. If we take one lesson from our past, it ought to be the need for a concerted movement away from the kind of winner-take-all, exploitative power grabs that have led to some of the darkest chapters of world history. Hard work remains to ensure a broadly accepted charter serves all humanity, something we might better accomplish by adopting elements of the WEF model or looking to unrelated fields—perhaps bringing in NGOs that have little to do with technology and everything to do with a deep understanding of human life patterns and the human condition more broadly.

  Second, none of the forums has an explicit mandate to assess, and then help shape, the social impact of AI on the world’s socioeconomically underprivileged classes. We might yet develop ways to use advanced technologies to bring more citizens of every standing into the decision-making process. Can we give them a voice in the cognitive power game to help define where we should focus our efforts?

  Third, Western representation and thinking tend to dominate almost all the leading organizations to date. AI cuts across geographic boundaries and sectors of society, so a new charter must eventually solicit active representation from all societies and their diverse interests—whether technical, philosophical, theological, socioeconomic, commercial, or cultural. The voices of those outside the technology mainstream, such as rural residents in South America and Africa, need to be heard. These regions are the new economic growth centers from which locally and culturally appropriate innovation will spring, and their value systems can help inform global approaches, as the Rwandan drone example suggests.

  As a necessary first step in filling these three gaps, the government of each country should consider and explicitly state where it stands on the coming disruption and the voice its people should have in guiding it. Such an important threshold in human development cannot be left to chance, and only those with a clear point of view can contribute meaningfully to a collective future vision. This need not be strictly sequential, because a multistakeholder forum can inform national positions. But as national positions take shape, we can begin the international coordination and negotiation of a modern digital charter. Countries might even appoint AI ambassadors with clear lines to national leaders and to one another, providing direct links for quicker reactions to rapid changes and emerging issues. (These networks might even include a few key private-sector representatives, given that the general user agreements on Facebook and other huge platforms already represent their own sort of social contract for a quasi-state.) The ambassadors could and should bring a range of interests and capabilities, including traditional industry skills, commercial interests, social and humanitarian concerns, and computer science expertise. They could bring the unique capabilities of their home countries to the global digital domain, helping broker solutions to problems in the most disrupted spaces of global society. Ideally, the principles governing these efforts would be outlined in a widely adopted and mutually enforced accord, one embodied in a formal document with a supporting organization to coordinate, monitor, and enforce as necessary.

  As a first order of business, the Congress would need a think-tank function to serve as its diagnostic engine and prioritization funnel. The staff could track and analyze global developments in AI, assess their impact on societal systems, and schedule deliberations about them in public plenary sessions. To that end, it would work up case studies in priority areas of AI deployment, identifying the issues, second-order societal effects, and incentive schemes for the actors involved in each case. This focus could prevent an overly broad dialogue that meanders around amorphous philosophical terms and never leads to specific actions or responsibilities. (The last thing this world needs is more endless discussion without action or prescription, but we also need to avoid over-action and over-prescription that can kill innovation and hinder much-needed breakthroughs.)

  This might sound similar to global governance organizations already in place, such as the United Nations, the International Telecommunication Union, the World Trade Organization, and the International Monetary Fund. Yet, it differs in a material way. To embed good-faith collaboration on something so commercially dominated and so broadly pervasive in our lives, we need to directly involve and grant authority to private-sector representatives, such as the social entrepreneurs creating the world of tomorrow; governments of countries in which AI development lags; and the wide spectrum of NGOs that can represent disadvantaged or voiceless populations in different domains of social life. Neither the institutions of the Bretton Woods system nor individual NGOs such as ICANN, which manages domain names and IP addresses, have the proper design or the right competencies to deal with all the anthropological, legal, ethical, political, and economic dimensions these complex sociotechnical systems present.

  It’s a tricky balance to strike, though, as Terry Kramer’s experience at the World Conference on International Telecommunications (WCIT) suggests (see Chapter 5). The head of the conference, which sought to develop a set of globally accepted rules for the Internet, shut down much of the discussion that might have led to some common ground, Kramer says. Governments have a hard enough time trying to agree on major structures, and they have even less chance of agreement if they can’t first establish a mutual understanding upon which to build, he says. When the dust had settled, the United States and about fifty-five other countries in the minority opted not to sign the WCIT treaty.

  Of course, plenty of other agreements fall victim to the whims of politics and long-term neglect as well. From the Paris Agreement to combat climate change to the Iran nuclear deal, from Brexit to the United States’ threats to pull out of NAFTA, all kinds of multinational treaties become more difficult to sustain in the face of new developments, disappointed expectations, eroded trust, and populist or isolationist rhetoric. By 2018, anxiety and anger about economic and political relationships had gone from a simmer to a full boil, with a growing cross section of the world population feeling that the global economy had been hijacked by a small, powerful elite, leaving everyone else with no input or voice.

  A broadly inclusive, respectful, bottom-up approach could help restore trust and, hopefully, avoid the fate of so many other global accords. But this approach, too, faces significant barriers—not the least of which are the more autocratic political regimes across much of the world, most notably in China but also across large parts of Africa, the Middle East, Latin America, and the Caucasus. Certainly, government representatives are indispensable, but governments alone lack the agility and expertise needed to facilitate profitable and responsible innovation. A coalition of private-sector and civil-society actors can track developments on the scientific and social fronts with much greater attention to important details, since they work where most of the functional expertise resides. All are needed in balance. The only way to avoid overregulation and the ceiling it can put on breakthrough innovation is to invite the private- and civil-sector actors into the fold. Conversely, the only way to establish effective authority is to keep government entities involved.

  It might make sense to build a coalition through smaller, established government interactions and expand from there. Already, Canadian and French officials are working to put AI on the agenda for the next G7 summit, building off the 2018 meeting in Canada and ramping up to the scheduled 2019 meeting in France. This might allow coordination across parallel tracks, getting buy-in from elite policy makers and corporate entities while also activating other, less powerful but equally important stakeholders. The United Kingdom, with its strong commercial and academic presence in AI development and its skepticism of the EU’s data regulations, more closely resembles the US approach. A tighter philosophical alignment has begun to emerge among Berlin, Paris, Brussels, and, to some extent, Toronto and the Nordic countries. For people primarily concerned with the supremacy of humans, humanism, and democracy, the nascent alliance between these nations and the Vatican offers a more hopeful pathway forward. Whether the United States or the United Kingdom will join their ranks—and whether this tenuous coalition can muster enough global leverage to convince other democracies, much less nondemocratic regimes, to join—remains to be seen. What’s already clear is the global variation in political and philosophical perspectives about the roles of humans and machines.

  As all these national governments join with large multinationals and other powerful entities to develop a working arrangement among themselves, they should also work with global foundations, NGOs, educational and scientific organizations, and small-business representatives to activate a broad range of vital stakeholders who might never get a seat at the table otherwise. In fact, much of this work can be led by global foundations, such as the Bosch and Rockefeller foundations and their peers. These organizations already have deep insight into global society through their extensive human-development programs, and they can use their considerable endowments to invest in the computing power and other resources needed to impartially test programs and convene a progressive set of stakeholders.

  NO EASY SOLUTION

  How do we make progress in the wickedly complex environment of global artificial intelligence, which is so heavily dependent on values and has such high stakes? After all, even efforts to combat practices widely accepted as abhorrent often fail to reach optimum outcomes across the international community. The Organisation for the Prohibition of Chemical Weapons (OPCW) has helped limit the development and deployment of such weapons, but no accord is 100 percent effective. If the OPCW were, the international community might have recognized the development of chemical weapons in Syria and prevented their use in the civil war. If the Nuclear Non-Proliferation Treaty had a perfect track record, we would not face current tensions over the burgeoning nuclear programs in Iran and North Korea.

  Yet, in the domain of AI, two key factors give us hope for progress. First, as the world seeks to confront the complex challenges of an increasingly fluid global order, new or experimental forms of governance are emerging. While the nature of governance networks varies by context, in general we might be seeing a move away from traditional command-and-control models that attempt to enforce rigid and uniform rules. New, more flexible models are emerging that promote greater participatory access, accommodate changing realities and situations as they emerge, and encourage a continuous dialogue between nations and other stakeholders. Because these new models facilitate ongoing interaction across a broad range of participants, they can derive a more organic form of legitimacy, and they hold promise for the regulation of the rapidly changing and deeply pervasive field of AI.

  Second, because of this multitude of players, the complex interplay between them leaves ample room for alliances both within and across borders. These partnerships can keep lines between countries from hardening and open up more creative space for finding solutions. Country, private-sector, and NGO representatives can identify priorities and develop clusters of like minds to support them, establishing beachheads for key values and goals even where those are not globally accepted. For example, on the jobs front, one might imagine an alliance between labor unions protecting their members; robotics firms concerned about a backlash; national political parties concerned about votes; the International Labour Organization upholding workers’ rights; universities training students for the jobs of the future; and companies seeking to retain a productive and satisfied workforce. Such a cluster would lend itself to a holistic, design-thinking approach that could generate new solutions that rigid political stances might otherwise obstruct.

  These models still raise questions about accountability and enforcement. Fortunately, because participatory regimes rely on sharing best practices and developing integrated solutions that work on the ground, they don’t require the same hard-and-fast negotiating positions or the huge, monolithic stakeholders needed to sustain them. If one big country or corporation drops away, others can help fill the vacuum—although this works only if dropouts pay a price, whether in geopolitical interactions or in the global marketplace. One current example is the UN Global Compact, a voluntary corporate responsibility initiative that includes about 12,000 corporations and other institutions from about 145 countries. Its members accept a charter of ten guiding principles on human rights, labor, corruption, and the environment. The compact still faces a steady stream of criticism about accountability among its members, but the organization has started addressing some of those issues—including through one effort that invites representatives of harmed or weakened civil society groups to join and challenge violators in a public discourse about necessary changes. The initiative also includes a process that obliges members to report on their progress and thus opens the gates for public shaming when members fall short. Violators and laggards might find themselves reported to national institutions with teeth, whether courts or regulatory agencies, which can formally audit and penalize them for misdeeds.

  It’s not an airtight solution in a world where Chinese, American, European, and other governments and companies have vastly different views on governance and regulation. No single solution exists. But even an assurance of an ongoing negotiating process carries weight. Rather than tackling one monolithic issue, competing interests might make inroads on partial issues and set up more sweeping agreements for the future. Already, mutual concerns have emerged worldwide about AI-driven cybersecurity, the safety of autonomous systems, and job losses due to automation. Common ground exists across all these issues, and many more, despite the sharp political and economic differences from country to country and company to company. Shared agreement can begin to shape the platform on which mechanisms for accountability and enforcement can stand.

  A BROAD RANGE OF PERSPECTIVES ON TRUST, VALUES, AND POWER

  The task ahead might feel Sisyphean, but let’s not forget the social benefit that advanced technologies can provide. Even the most disadvantaged citizens can and should have a voice in their society and its values, and we can use AI technologies to empower them. Platforms on mobile phones could solicit and deliver feedback on proposed codes of conduct and their impact assessments—from the rural farmer in sub-Saharan Africa as readily as from the homemaker in Mumbai or the broker on Wall Street. The same AI-powered systems whose development we seek to govern can help in our quest, synthesizing millions of data points into a representative picture of value sets, human choices, and development potential. This doesn’t happen immediately, of course—it starts with a diverse collection of existing interests—but any successful global initiative will need a clearly stated plan to gather the broadest collection of public sentiment, and a mechanism to ensure the integration of that feedback.
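  To make that synthesis step a little more concrete, here is a minimal, purely illustrative sketch—not drawn from the book—of how free-text public feedback might be grouped into rough value profiles. The sample responses, the choice of three clusters, and the TF-IDF-plus-k-means approach are all assumptions for illustration; a real platform would contend with multilingual input, representativeness checks, and far more capable models.

    # Illustrative only: cluster short, free-text survey responses about
    # AI governance into rough "value profiles." The responses below are
    # invented placeholders, not real survey data.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    responses = [
        "My insurer should never see my health choices without consent",
        "Job retraining matters more to me than faster automation",
        "Elections must be protected from machine-generated propaganda",
        "I want control over how my personal data trains these systems",
        "Automation is fine if displaced workers share in the gains",
        "Foreign interference in our votes is my biggest fear",
    ]

    # Turn free text into weighted term vectors, dropping common stopwords.
    vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)

    # Group the responses into three tentative value clusters (roughly:
    # privacy, work, and civic integrity in this toy example).
    model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vectors)

    for label, text in sorted(zip(model.labels_, responses)):
        print(f"cluster {label}: {text}")

  With only six short responses the clusters are merely suggestive; the point is the shape of the pipeline—collect, vectorize, group, inspect—rather than any particular output.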

  The effort to form a governance mechanism should solicit a range of views about how the integration of AI will influence the trust, values, and power that guide our lives:

  • How should we balance individual freedom of choice and societal interests in the use of AI? If you choose to opt out of an AI agent’s recommended cancer-prevention program, should your insurance company know? Should it have the option to raise your rates?

  • How should we deal with people who decide against the use of AI applications? If you refuse to sit through an AI-based video interview, what chances will you have to get the job you want, the job most suited to your talents, or, potentially, any job at all?

  • To what extent can AI support sociopolitical processes—such as elections, opinion formation, education, and upbringing—and how can we prevent harmful uses? If countries use social media as a battlefield, how will you know what is true, who is real, and whether your vote counts?

  • How can we effectively counter the corruption of data sets and the potential discrimination against individuals or groups hidden in data sets? If cybersecurity systems can’t protect your data, will your autonomous car stay on the road? And would a police officer treat you fairly if it didn’t?

  • To what extent should policies and guidelines define an AI system’s respect for humans and nature? If an AI can solve food crises or climate change disasters, will you change your diet or your vacation plans to conform?

  • How much importance should be placed upon social and societal benefits in the research, development, promotion, and evaluation of AI projects? If a computer or robot takes your job and the government pays you to do nothing, would you fight to stay employed? Would you look for a new job that gives you purpose?

 
