
Solomon's Code


by Olaf Groth


  Yet, for all the attention to the soft features and hard science that went into PARO, the depth of its effectiveness stems from something far more human: our ability to suspend our disbelief. Shibata and his team made PARO feel close to real without falling into the “uncanny valley”—that odd place where a lifelike robot begins to feel a little too creepy. So, while they built PARO to weigh about six pounds, around the weight of a baby, they also deliberately made it a seal to avoid the existing associations people have with common pets like dogs and cats.§§§§ We know the seals aren’t real, but they’re just real enough in our heads to make us happy and to interact with them on a deeper level.

  Jonathan Gratch, a professor of computer science and psychology at the University of Southern California, spends much of his time researching the extent to which people treat machines as social beings. In other words, he thinks deeply about what one might have to add to a robotic seal to make people relate to it more intensely and treat it more like a real seal. What he’s found is that people initially tend to treat interactive robots or other AI systems like social entities, granting them a certain measure of empathy. And when the machines provide cues that suggest an emotional connection—the more they appear to emote, for example—they can provoke an even stronger response in their users. Developers of AI and robotic systems often program them to convey these humanlike attributes or emotions, which can foster a tighter bond with the machine. However, over time, those emotional cues need to have meaning behind them, and many of these designs eventually run into what Gratch calls “false affordances.”

  For example, consider a robot or chatbot that appears more humanlike because it can direct one’s attention to a specific object or an idea. Research shows that sort of attention deepens interactions with humans. But when the system tries to direct a person’s attention to an irrelevant or confusing target, people quickly lose faith in it, Gratch explains. The same holds true for attempts at empathy and shallow apologies. So, while people typically over-trust the machine at the outset, they perceive it as an outright betrayal when it doesn’t fulfill its function or satisfy its promise. “If a machine does recognize that and improve,” Gratch says, “it’s very powerful because it’s following ‘the rules.’” In essence, it sustains, and perhaps even heightens, our suspension of disbelief.

  This prompts some concerns among ethicists—initial reactions to PARO included worries that it misled and manipulated patients, for example—but our lives offer any number of harmless examples of this suspension of disbelief. We tell our kids about Santa Claus and encourage them to make believe with their teddy bears and toys, and we revel in the creativity they express when doing so. We take deep pleasure from movies, theater, and novels that inherently require our own imagination and a grain of salt. And we readily employ the same suspension in our interactions with technology, including in ways that accelerate the development cycle. We’re willing to accept the fact that our smartphones have “planned obsolescence,” meaning that they are never going to be “done” or “perfect,” with the next iteration coming along soon enough. The changes that occur, whether to our benefit or detriment, often are subtle and go unnoticed, but they are just enough to please us and bridge the time to “the next big thing.”

  This accelerates and amplifies development of machine cognition even more, because these AI systems regularly address some of our most fundamental, immediate, tangible, and common needs. This makes it far easier to “suspend our disbelief” about imperfections, so to speak, in exchange for the benefits we reap. We’re willing to turn over more and more of our cognitive workload to something far from perfect. Eventually, when we step back and look, the pervasiveness and degree of disruption and possibility in our lives will surprise us. We have allowed the machine to get under our skin and into our heads, and from there it can exert more subtle power over our lives. That could provide great benefit, helping alleviate stress among elderly citizens, but we risk conceding more control than we intend.

  BENEFITS AND SIDE EFFECTS

  The promise and threat of AI applications feel more urgent because of the unknowns inherent in such a rapidly developing technology, but we humans have a remarkable ability to accept tradeoffs that don’t always favor our best interests. Nuclear power in the 1970s provided a cheap and clean source of electricity. The victor’s narrative of shortening World War II had cleansed US nuclear power of some destructive moral baggage, and officials heralded it as an alternative to energy shortages and fossil fuels. Iconic scientists like Albert Einstein and Niels Bohr helped establish nuclear power’s reputation as the energy source of the future, and it began to take a place in the lineup of other innovative technologies that ruled the day: the automobile, the television, and the telephone. A shiny future had arrived. We saw the promise and we suspended our disbelief about the awesome power of atomic technology.

  For Americans and many others around the world, that flirtation ended abruptly on March 28, 1979, when Unit Two at Three Mile Island (TMI) in Pennsylvania partially melted down. Nuclear power’s negative side effects became immediately and starkly clear: It could kill large numbers of people in one major accident. The incident in Pennsylvania brought back the fears that attended Hiroshima and Nagasaki, anxieties that have been heightened in the decades since with accidents at Chernobyl and Fukushima. Today, there is nothing subtle about nuclear weapons or nuclear power.

  While generally considered far less controversial and catastrophic, the automobile has gone through a similar cycle. Its substantial social and economic effects were visible from the outset. It brought freedom to individuals and families, and it remade community and economic development—sparking suburban growth as people became more mobile, while also creating a vast automaker ecosystem with tiers of suppliers, distribution channels, and ancillary services. Today, most developed nations consider auto industries too large, too important, and too strategic to fail.

  When we step back and consider the evidence, though, the automobile has caused greater harm to the environment and our lives over the past century than nuclear power. Of the 6,587 million metric tons of carbon dioxide and equivalents emitted by the United States in 2015, more than a quarter came from transportation, second only to electricity production, according to the Environmental Protection Agency (EPA).¶¶¶¶ With the noise, stress, and lost productivity, the ancillary costs of the car add up, and its status as a symbol of independence and freedom wanes. Of course, the direct human cost is even more dire. Cars and trucks and their fallible human drivers kill almost 1.3 million people annually around the world—the ninth-leading cause of death, sandwiched between diarrheal diseases and tuberculosis—with an additional 20 million to 50 million people injured or disabled, according to the Association for Safe International Road Travel.#### In the United States alone, more than 35,000 people are killed each year, creating an additional financial cost of $230.6 billion a year, the association says.

  There is nothing subtle about the concurrent promise and threat that accompany automobiles and nuclear power. And the basic premises of both have remained essentially unchanged for decades. In early 2018, about 450 nuclear power plants provided about 11 percent of the world’s total electricity generation, according to the World Nuclear Association,***** and an estimated 94.5 million new light vehicles were sold worldwide in 2017, according to research firm IHS Markit.††††† And we pump hundreds of millions of dollars into research to make both technologies safe and ensure that most of us never directly experience that potential for physical or psychological damage.

  Unlike nukes and cars, artificial intelligence and its cousin technologies don’t display obvious, visible effects on our everyday lives. Even when they do, the side effects of those technologies in our lives are far less clear—and in that sense are more akin to pharmaceuticals. We saved hundreds of millions of people from certain death by drastically reducing the incidence of polio and whooping cough and arresting the fatality rates of HIV/AIDS. We fed millions more with high-protein, high-carbohydrate engineered diets so they could grow stronger, live longer, or enjoy enhanced well-being. Seen from the perspective of an average day in our lives, every decade since World War II has brought advances in medicine and agriculture that have made those days richer and better. Scale economies of industrial agriculture brought greater affordability in food and more conveniences in preparing it. Restaurants and equally scaled industrial-style restaurant chains have made food more accessible, affordable, and, in many cases, more pleasurable.

  Yet, these advances have sent subtler and more-pervasive ripples through our health and our societies, as well. As the use of antibiotics in humans and livestock exploded, more types of bacterial infections mutated and adapted to the drugs, sparking an arms race between pharmaceutical companies and Mother Nature. Every time we go to the next round of drugs we risk launching successive waves of infection by new mutations of anything from flu strains to Strep bacteria. Medications also spun off a range of addiction crises as opioids, antidepressants, steroids, and other drugs became more commonplace in our treatment plans. As we tried to optimize our health, a multibillion-dollar supplement industry emerged despite the absence of scientific proof of efficacy. In so many cases, we don’t truly understand what all this added biochemistry does to our bodies, so the consequences remain hidden and we largely ignore them in our daily choices.

  The widespread use of artificial intelligence will enhance our humanity, our well-being, and our lives in so many ways, but we need to consider its potential side effects in the same way we think about pharmaceuticals, not the histories of automotive and nuclear power. The latter two technologies have had very visible and tangible effects, good and bad. By contrast, we can neither see nor understand the often-complex algorithms and neural networks any more than we understand the tiniest and deepest machinations of biological bacteria and viruses and the chemical interactions in our medications and bodies.

  Those unseen elements can deliver invaluable progress and prosperity. A 2014 study by the Centers for Disease Control and Prevention (CDC) estimated that vaccines prevented more than 21 million hospitalizations and saved 732,000 lives among children born in the preceding twenty years, and that was in the United States alone. With data as its fuel, AI is poised to become the engine of a new and fruitful autonomy economy. But to harness that potential—to use AI to cure the ills of modern society and avoid the worst of the possible side effects—we need to bend the power of these systems toward the benefit of humanity. And to build the necessary guardrails, we need to understand the subtle, unobservable, and tremendously influential control these thinking machines will exert on us as they seep into our lives.

  SEVEN DEGREES OF POWER

  Shaoping Ma reads some of the same artificial intelligence hype and prophecy in China as his peers do in the United States and Europe. Usually, the Tsinghua University computer science professor says, it’s the media trumpeting concerns of self-driving cars endangering pedestrians or malfunctioning robots injuring human workers. The mania rarely seeps very deeply into the broader consciousness, Ma says through an interpreter, but it spreads through WeChat and other social media apps. “The media wants exciting news,” he says, “and that’s true in China as well.” The human mind can make tremendous imaginative leaps when it comes to technology, going further than even high-tech roadmaps. So, while there’s little evidence of an emerging general superintelligence to date, Ma says, many Chinese residents worry about AI’s development because they know so little about the real technology itself. Like anywhere else, most people in China don’t realize how deeply AI-powered technologies have already permeated their lives.‡‡‡‡‡

  Ma sees it at the university, in his role as vice chairman of the Chinese Association for Artificial Intelligence, and in his joint work with Sogou.com, the country’s second-largest search engine, trailing only Baidu. Even with the government’s announcement of a massive commitment to AI research and development in the coming years, the intricate ways thinking machines infiltrate and influence Chinese lives will remain subtle. In his work with Sogou.com, for example, Ma and his colleagues study search engine challenges, including the idea that people often want to discover something a bit different from what their actual search terms suggest. Like most top search engines, they developed their algorithms with “result diversity” as one of the major metrics, using it to satisfy different information needs according to different users’ search intentions.

  Your search for “wings” might ordinarily return a list of bird-related items that interest you, but it also might include references to the Detroit hockey team and Paul McCartney’s rock band. Although your interest as a birder is known to the system, this variety of results assures a more consistent experience across users, reducing the effect of pure clustering. Result diversity and other subtle adaptations enabled by AI seep into most of what we do online with our computers and smartphones today. In fact, some core AI models, including those that categorize similar shoppers, film buffs, and news readers to improve recommendations and boost sales, have drifted toward commodity status, a basic building block for virtually every digital interaction we’ll have in the years to come. Potential applications range from medicine to construction, and will spread far wider as other more-analog domains grow increasingly digital.
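  The intuition behind result diversity can be sketched as a greedy re-ranking that trades raw relevance against redundancy with results already chosen. This is an illustrative, maximal-marginal-relevance-style toy, not Sogou’s or any real engine’s algorithm; the documents, topics, and weights are invented for the “wings” example above.

```python
# Greedy diversified re-ranking: repeatedly pick the result that best
# balances its own relevance against similarity to results already chosen.
# (Illustrative MMR-style sketch; not any search engine's actual scoring.)

def diversify(results, similarity, top_k=3, trade_off=0.7):
    """results: list of (doc, relevance); similarity(a, b) -> 0.0..1.0."""
    chosen, pool = [], list(results)
    while pool and len(chosen) < top_k:
        def score(item):
            doc, rel = item
            # Penalty = similarity to the closest already-selected result.
            redundancy = max((similarity(doc, c) for c, _ in chosen), default=0.0)
            return trade_off * rel - (1 - trade_off) * redundancy
        best = max(pool, key=score)
        chosen.append(best)
        pool.remove(best)
    return [doc for doc, _ in chosen]

# Toy "wings" query: topic-tagged documents; same topic counts as similar.
docs = [("sparrow wings", 0.90), ("eagle wings", 0.85),
        ("Red Wings hockey", 0.60), ("Wings (band)", 0.50)]
topic = {"sparrow wings": "birds", "eagle wings": "birds",
         "Red Wings hockey": "sports", "Wings (band)": "music"}
sim = lambda a, b: 1.0 if topic[a] == topic[b] else 0.0

print(diversify(docs, sim))  # top three results span three different topics
```

  Without the redundancy penalty, the two bird documents would crowd out everything else; with it, the hockey team and the band surface even though their raw relevance is lower.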

  For venture capitalists like Aaron Jacobson, a partner at NEA in Silicon Valley, the products researchers develop on top of commoditized AI platforms had become the more attractive investments by 2017. That follows the broader move toward a new “platform economy” built around and dominated by corporations, most based in the United States and China. These AI titans—Alibaba, Amazon, Apple, Baidu, Facebook, and Tencent—have harnessed the power of massive amounts of user profiles, transactions, and communications. They roll over every analog “brick and mortar” space, extending their reach and power to everything from our office layouts to our home thermostats. The sheer power of algorithms reveals itself in the sheer volume of people who use and subscribe to these global platforms. Facebook has more than 2 billion users. According to published reports, Amazon shipped more than 5 billion items through its Prime memberships alone in 2017. The scale in China is even more astounding, especially considering that these companies have started to approach or exceed similar numbers despite less global expansion than their US counterparts.

  The power of these companies lies in the fact that they get to know their customers very well and become “sticky” in their lives. Once a platform’s targeting of user needs reaches the right number of connections and the right strokes of satisfaction, people rarely delete their accounts and move elsewhere. Despite all the criticism that Facebook and Twitter took following the 2016 presidential race in the United States, they had little trouble sustaining their domestic user base. The fact that only one or two companies dominate social media, search, or e-commerce channels only helps lock in users all the more. This has led to quasi-monopoly status of some companies, whose subtle and powerful AI-driven platforms deliver more of what we want to heighten our reliance on their content and recommendations.

  In fact, the stickiness of these Internet giants’ algorithms is exactly what makes them so valuable across all their service offerings. AI-powered analyses of someone’s search history shed light on their interest in the next item to buy (hello, advertisers!), or the next person with whom to share their day. This kind of social-fabric weaving and reweaving makes these titans such potent and helpful agents in the design of our lives. But the already commanding power they wield versus the rest of the economy and society keeps growing with every click and every new bit and byte that comes in. These companies can subtly and subconsciously nudge our decisions in invisible, irresistible, and often more profound ways than we care to recognize. Theoretically, we remain free to change our minds and our values—and we often do as our circumstances and environments change—but machine learning algorithms can play into our existing biases and reaffirm them without our conscious realization. We are subject to skillful manipulation that the average, non-technical person might never understand. Political operatives and skilled hackers can place fake news and advertisements into these social networks and subtly manipulate our perceptions of current news and emerging trends, as Americans on Facebook and Twitter discovered in the 2016 presidential election cycle. AI will then help them place their well-cloaked content directly into the streams of unsuspecting users, heightening perceptions of certain news and exacerbating social tensions—even political outcomes.

  Cognition is power, and the development of increasingly capable thinking machines represents a new source of power, a potent new intelligence that humans have created and with which they must contend. But in the end, we remain thinking beings, too. In the end, we interpret the media we consume, and we still cast our own ballots.

  COGNITIVE POWER REMIXED

  Unanimous AI launched in 2014 with the bold idea of creating “swarm intelligence,” organizing a group of individuals into a sort of “real-time AI system” that combines and enhances individual intelligence and optimizes collective knowledge. The company’s underlying algorithms help coordinate the diverse thinking in groups of people, combining their knowledge, insight, and intuition into “a single emergent intelligence,” according to its website. Founder and CEO Louis Rosenberg likens it to the social interactions of bees, which resolve complex questions of hive location and food sources despite having fewer than a million neurons in their individual heads. But the far more complex machinery of the human brain will reach its capacity limits eventually, Rosenberg said at the South by Southwest Festival (SXSW) in March 2018. So, why not continue to build intelligence by forming connections with other minds?

  The process has produced better predictions and insights on everything from the Kentucky Derby to the Oscars. In fact, in early 2018, Rosenberg and his colleagues applied the idea to the Oscars, creating an intelligent swarm of fifty everyday film fans and asking them to predict the winners. Individually, they were correct about 60 percent of the time. Collectively, they hit on 94 percent of their predictions, better than any individual expert who published predictions. Even more impressive, none of the group had seen all the nominated movies, and most of them had seen only about two of the films, Rosenberg told the SXSW panel attendees.
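  The statistical intuition behind that jump from 60 percent to 94 percent is worth making concrete. Unanimous AI’s real-time swarm is more elaborate than a simple vote, but even the crudest aggregation, majority voting among independent predictors, shows why a group of mediocre forecasters can beat any single expert. The sketch below is a textbook illustration (the Condorcet jury theorem), not the company’s algorithm:

```python
# Why fifty fans who are each right 60% of the time can, collectively,
# be right far more often: if their errors are independent, a simple
# majority vote amplifies the individual edge over chance (the Condorcet
# jury theorem). Unanimous AI's interactive "swarm" goes beyond voting,
# but the underlying statistics point the same direction.
import math

def majority_correct(n_voters, p_correct):
    """P(a strict majority of n independent voters is right),
    where each voter is independently right with probability p."""
    need = n_voters // 2 + 1  # votes required for a strict majority
    return sum(math.comb(n_voters, k)
               * p_correct**k * (1 - p_correct)**(n_voters - k)
               for k in range(need, n_voters + 1))

print(f"one fan alone:       {0.60:.0%}")
print(f"fifty-fan majority:  {majority_correct(50, 0.60):.0%}")
```

  With fifty independent 60-percent voters, the majority is right roughly nine times out of ten; correlated errors (everyone seeing the same trailers, say) shrink that gain, which is part of why real systems add coordination on top of raw aggregation.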

 
