
Solomon's Code


by Olaf Groth


  In November 2017, UC Berkeley professor and AI expert Stuart Russell testified before the UN Convention on Certain Conventional Weapons in Geneva. He argued that autonomous weapons can easily become weapons of mass destruction because they’re cheap to the point of being disposable, effective at finding specific targets, and, when deployed in arbitrarily large numbers, they can “take down half a city—the bad half.” The video dramatization Russell shared as part of his testimony depicted micro-drones that fit in the palm of your hand, seeking targets via face recognition and killing them with small explosives.

  The specter of small, inexpensive “slaughterbots” lends itself to shock and awe. The video quickly went viral, playing on the dystopian fears so many people hold about artificial intelligence. But it’s certainly worth the attention of citizens and governments. Several groups have been campaigning since 2015 for an outright ban on LAWs. As of this writing, US policy forbids LAWs from firing autonomously, in part because it remains extremely difficult to capture in computer code all the factors that ought to go into determining whom to kill, and when. More promisingly, the regulation of LAWs might eventually provide a useful hook on which to hang some initial, broadly accepted governance. And perhaps that sets the stage for the tougher deliberations to follow.

  THE MACHINE CAN MAKE US BETTER HUMANS

  As these preceding global treaties suggest, we cannot build sufficient trust in artificial intelligence without the broadest possible engagement in the contentious debates that distill common values and strike an appropriate balance of power, especially as political and commercial forces ebb and flow. As a global society, we don’t have a great track record of cooperation on governance issues, but we stand at a moment when such a collaboration might be more critical than ever. Not only do we need to mitigate the serious risks AI might pose to humanity; we also need to capitalize on this unique chance to establish a fruitful ecosystem for advanced technologies, one in which powerful artificial intelligences enhance humanity and the world in ways we can’t yet imagine. More than any other time in our history, we can curb the worst of human nature and capitalize on the best of humanity’s creativity, imagination, and sense of discovery. Given our collective global capabilities, from scientific examination to entrepreneurial zeal, we can expand our frontiers and unleash our potential in this emerging cognitive revolution.

  We face daunting obstacles, many of which challenge what it means to be human. The very idea of a powerful intelligence that mimics but, in certain ways, surpasses ours both fascinates and scares us. We gauge it against ourselves to determine whether it’s a threat, and most people in Western countries view it as exactly that. This alone could keep us from tapping its full power and building a fruitful symbiotic partnership between human, artificial, and other intelligences. Our human spirit and our sense of purpose, combined with greater intellect and greater power, have carried us to heights unattained by other lifeforms on earth. But we have not always used that advantage responsibly. When we do, and when we employ the technologies we create, we accomplish amazing things—500 million people lifted out of poverty over the last century, an additional twenty years on the average lifespan, and humans walking on the surface of the moon (and maybe, soon, on Mars as well).

  What drives us to these peaks? We aspire to them, wanting our lives to matter and wanting to achieve for ourselves and for our children and grandchildren. Some of us interpret these things in totalitarian or aggressive ways, but most of us don’t. We strive and we struggle, relying on each other to advance human and environmental well-being in small and large ways. Most of us don’t leave others behind. We might forge ahead and sail to new shores, and our actions might seem egotistical and self-indulgent at times, but we usually return to show others the way across the expanse. Sometimes it’s the entrepreneurs with the thickest skulls who produce the greatest impacts on our lives. Sometimes it’s the awkward or antisocial scientists who chart the way to the moon, map the ocean floors, and scale the proverbial mountaintops on their insatiable journey to the frontiers of our universe, our minds, and our souls. Great human thinkers don’t rest unless we advance and grow, and we grow in partnership with the people around us.

  We thrive on empathy, imagination, and creativity—attributes rare or nonexistent in even the most powerful AI systems, yet abundant in humans whatever their nationality, ethnicity, socioeconomic class, or education levels. Our thoughts collide and spark ideas in a mesmerizing display of creative friction. And yet, every time we move closer to another human being to share a new thought or a new emotion, we blend that creativity with the depth of imagination and empathy that allows us to collaborate, innovate, dream, love, and build.

  We always build, and we always destroy. We create the world of the next moment, the next month, the next year, the next century. We construct cold, hard tools to forge warm social and emotional bonds. We build houses and families to live in them. We build roads to connect our neighborhoods, and we build the types of relationships that make us neighbors. We create new technologies and economies, connecting across the globe to expand wealth and knowledge for tomorrow’s generations.

  We built globalization based on the Anglo-American principle of the free flow of goods, services, and capital, and it led to imbalances that many societies found hard to bear. Now we are deconstructing that model, even as a new type of globalization emerges—one that features a global data economy spearheaded not by Wall Street and the City (i.e., London as a financial center), but by the entrepreneurs of the digital realm. That too will lead to missteps and crises. Over time, we will tear down some elements and forge ahead with others. But make no mistake, the global data economy with its autonomous machines is here to stay.

  That’s because the new machines can build, too. They can create models of the future, often in better and more insightful ways than humans can. They can generate insights out of massively complex data sets and the subtle patterns that human brains can’t process. AI systems already create alongside us, but they do so differently. They can’t share the experience of human existence and the millions of years of biological evolution that’s encoded in our brains and that serves as a base memory for the human condition. This shared experience forges a certain kind of common bond that machines can’t join. Many of us express those connections through various religious or philosophical beliefs, almost all of which strive for righteousness, justice, care, and love. Do unto others as you would have them do unto you. Cognitive machines can and should emulate all these things. They can copy and empower us. But they can’t be us, because they have never crawled out of primordial slime through the synthesis of cellular trial and error, success and failure, joy and pain, satisfaction and frustration, to stand on top of Mount Everest or on the surface of the moon, raising their arms and crying out to all humanity. Their learning is not reinforced by emotion. Their bodies are not designed to feed the mind with the myriad internal and external sensations that enhance a human’s ability to relate to the environment and to survive and evolve within it.

  As we forge deeper symbio-intelligent relationships, we can’t forget that humans are unique—not necessarily better or worse, but unique nonetheless. Yes, we are one species among many, and we may need to grapple with the possibility that machines could evolve into another species alongside us. But as we do that, we also need to avoid the urge to think of the brain as little more than a computer analog, as Alan Jasanoff writes in his recent book The Biological Mind. The “cerebral mystique” belies the complex interplay between physical sensations, emotions, and cognition that comprises the whole of a human being, Jasanoff writes.¶ Someday, we might replicate human brains on silicon or some other substrate, but human cognition still requires a complex array of interactions with our bodies and environments. Human ingenuity might one day come up with a “replicant” that connects an intelligent brain to a humanoid body and taps the many stimuli of the physical environment around it. But until then—and that’s a far, far cry from the level of artificial intelligence development we have today—machines will remain socially and spiritually stunted.

  And that’s OK, for we must also avoid the trap of equating human and machine, pitting the two against each other as rivals in an existential race to the top of the intellectual pyramid. The complementary power of machine and human intelligence working together offers far too much promise to give way to an unduly competitive mindset. Our “wetware” brains are remarkably inefficient in certain cognitive methods—messy and bubbly and wrangled, easily distracted and prone to meandering. And yet they’re wonderfully experimental. We transpose concepts from one field to another, although we might often do this in ways that have our friends furrowing their brows or rolling their eyes. We waste millions of instructions per second (MIPS) by staring into a sunset or watching the grass grow, making wild connections to far-off concepts that spark near-genius moments. Seemingly random blips in our mental constitutions have us bursting out into laughter, anchoring some crazy thought with an emotion of delight that won’t reveal its funky self until years later, when the most random occurrence provokes it.

  Cognitive machines with their neural networking power can help us become more effective sensors and faster processors, so we strike more connections and make better decisions. They can help us build greater scientific constructs, human communities, or ecological resiliencies. And they can help us strategize by simulating incredibly complicated situations and running outcome scenarios, enhancing our ability to choose the best paths forward. We will need AI agents to help solve our worst and toughest problems—climate change, health care, peaceful coexistence—but they will need our vision, our spirit, our purpose, our inspiration, our humor, and our imagination. Paired with AI’s analytical and diagnostic power, we can soar to new heights, understand nature more deeply and holistically, explore farther into the cosmos, and establish our place in the universe in more satisfying ways than we ever experienced before. Perhaps for the first time, we might achieve our aspirations for humanity while increasing environmental stewardship, stepping out of a zero-sum game and optimizing across so many more of the variables in our everyday existence.

  Combining the unique contributions of these sensing, feeling, and thinking beings we call human with the sheer cognitive power of the artificially intelligent machine will create a symbio-intelligent partnership with the potential to lift us and the world to new heights. Some things will go wrong. We will second-guess ourselves at almost every turn. The pendulum will swing between great optimism and deep concern. We will take two steps forward and one step back, many times over.

  But that is why it is so essential that we engage now and begin the dialogue and debate that, out of our rich human diversity, will establish a common ground of trust, values, and power. This is the foundation upon which we build the future of humanity in a world of thinking machines.

  *Wendell Wallach and Gary Marchant, “An Agile Ethical/Legal Model for the International Governance of AI and Robotics,” Association for the Advancement of Artificial Intelligence (2018).

  †Sarah Salinas, “The Most Important Delivery Breakthrough Since Amazon Prime,” CNBC, May 22, 2018.

  ‡Wendell Wallach and Gary E. Marchant, “An Agile Ethical/Legal Model for the International and National Governance of AI and Robotics,” Association for the Advancement of Artificial Intelligence (2018).

  §Todd Stern, “Why the Paris Agreement Works,” The Brookings Institution, June 7, 2017.

  ¶Alan Jasanoff, The Biological Mind (New York: Basic Books, 2018).

  AFTERWORD

  by Laura D. Tyson and John Zysman

  Intelligent tools and systems are diffusing through economies and societies around the globe, affecting how we work, earn, learn, and live. Our daily lives are already powerfully shaped by digital platforms such as Amazon, on which we buy goods and services; Facebook, through which we track our friends, even as we are tracked; and Google, through which we access a world of information. And the press is replete with tales of automated factories run by robots.

  Accelerating the growth in the power of platforms and automated systems, as Solomon’s Code makes clear, is the emerging flood of artificial intelligence tools. The functionalities and applications of these tools are diverse, but as Groth and Nitzberg observe, “At their core, all the various types of AI technologies share a common goal—to procure, process, and learn from data, the exponential growth of which enables increasingly powerful AI breakthroughs.”

  At least in theory, in the long run AI systems with advanced capacities for reasoning and abstraction could perform all human intellectual tasks at or above the human level. Groth and Nitzberg refer to this state as “Artificial General Intelligence.” Others refer to it as the Singularity. Fears about the possible domination of humans by machines embodying artificial general intelligence are stoked by news stories, fiction, and movies. The specter of artificial general intelligence is raising profound questions about what it means to be human.

  Whether or not we ever arrive at artificial general intelligence, narrow AI tools that imitate human intelligence in specific applications are developing rapidly, resulting in what the press calls the “appearance of intelligent behavior in machines and systems.” As Groth and Nitzberg’s deep dive into the sweep of these new tools makes clear, there is an array of possibilities to capture as well as myriad challenges and concerns to address.

  Many of the impacts of such tools are already evident. Consider the ability of platforms like Facebook to target advertisements and information to particular groups, even individuals; to affect political discourse and outcomes; and to provide new ways for people around the globe to communicate. We have information at our fingertips and remarkable capacities to communicate, but the information about us is widely available and the capacities to communicate often absorb a disproportionate amount of our time. Suddenly, it is not just a war of information and misinformation that is evident. There are also disturbing signs of the makings of a surveillance society, as well as evidence that even the smartest algorithms might systemize rather than counter human biases and flaws in judgment.

  Intelligent tools and systems are spreading rapidly, their power continuously expanding with the growth of AI tools, transforming how goods and services are created, produced, and distributed. Companies will need to adjust their processes, products, and services to the new technological possibilities to sustain or gain competitive advantage. But will the resulting benefits accrue to their workforces? Will we see massive increases in productivity as our societies become increasingly rich, but increasing inequality as the gains are shared ever more unequally? Will we see rapid displacement of work and workers, technological unemployment, and mounting inequality within societies and between economies?

  Optimists proclaim that the future is ours to create. Easy to say, but the difficulty is that there is great uncertainty about the possibilities and challenges in a world of increasingly sophisticated AI tools and applications. Consider the impact of AI-driven automation on work and jobs, which is the focus of an interdisciplinary faculty group at UC Berkeley called Work in an Era of Intelligent Tools and Systems (WITS.berkeley.edu). There is broad agreement in research by McKinsey Global Institute,* the OECD,† the World Economic Forum,‡,§ and individual scholars including Kenney and Zysman,¶ all finding that some work will be eliminated, other work will be created, and most work—as well as the terms of market competition among firms—will be transformed. There is also broad agreement that intelligent tools and systems will not result in technological unemployment—the number of new jobs created will offset the number of old jobs destroyed—but the new jobs will differ from those that are displaced in terms of skills, occupations, and wages. Moreover, it appears likely that automation will continue to be skill-biased, with the greatest risk of technological displacement and job loss falling on low-skill workers. A critical question, then, is how the new tasks and jobs enabled by intelligent tools and systems will affect the quality of jobs. Even if
most workers remain employed, will their jobs support their livelihoods?

  Although there is widespread agreement that AI-enabled automation will cause significant dislocation in tasks and jobs, there is considerable uncertainty and debate about the magnitude and timing of such changes. Many routine tasks will be displaced or altered; but how many jobs will be displaced entirely? Will long-distance truck drivers become a thing of the past? Or in a future enabled by AI and driverless cars, will truck drivers be able to perform critical management tasks at the beginning and end of their journeys, sleep on the job as their trucks move across structured highways, and make deliveries along the way? Some Japanese firms, confronted with outright skill shortages, are already turning to AI, machine learning, and digital platform systems to permit less experienced, not less skilled, workers to take on more difficult tasks. More generally, the performance of routine tasks currently performed by humans might be converted into human tasks to monitor and assess routine functions performed by machines. What sort of new jobs or tasks will be created? What are the skill and wage differences between the jobs that disappear and the jobs that are created? Whether digital platforms, such as Uber, create a flood of gig work or new sorts of transportation firms with imaginative new arrangements for work will depend as much on the politics of labor and on labor market laws and rules—features that differ dramatically across nations—as on the technologies themselves.

  For firms, communities, economies, and societies, adjusting to new transformational technologies is never simple. One need only look at the economic and social upheavals during the first industrial revolution. Crucial questions driving our research are whether and how technological trajectories supporting high-quality employment and skilled work can be created and sustained. Can we do so quickly enough to match, or ideally stay ahead of, the pace of technological change that is already upon us? How do we incentivize corporations and civil society groups to engage in the definition of future skill sets and jobs? These are also the questions raised in Solomon’s Code: How can AI-enabled technologies be shaped to support human purpose and well-being across diverse countries and regions with different political and cultural objectives?

 
