
Solomon's Code


by Olaf Groth


  Increasingly, we will run into situations where the data generated in the off-line sphere of our lives will not be counted by those who primarily focus on the types of digital information they can readily and efficiently process into “actionable insights.” That’s just fine for those who don’t want to be captured, but it limits their ability to determine their standing in groups, communities, and societies. This evolving balance of power between data serfs and data lords hints at a new digital feudalism, in which those who provide the least digital value find themselves left with the fewest options. It’s a transaction that favors the willing customer, but especially the owners and designers of the platforms, including the Digital Barons.

  The feudal metaphor extends to the workplace. Companies that drive these relationships often enjoy a less costly and more flexible workforce. Yet it also goes even deeper, to a more fundamental shift in the labor and consumer economies. The concept of a job as one coherent employment relationship to which one brings a specialized set of skills is being disaggregated, with the pieces offered to the awaiting online crowd, which picks over them and fits them into their own lifestyles or financial pictures. This works well for the well-educated or the young, who have the in-demand skills companies need or the time and resources to accommodate changes in demand for their talents. It works less well for those who need predictability to support their family’s minimum viable livelihood.

  The ironic part is that the data lords often appear much like the Wizard of Oz, powerful in large part because of their ability to pull the levers and push the buttons as “the man behind the curtain.” We see it in the varied approaches to autonomous cars. Many of today’s so-called self-driving cars and trucks have remote drivers at the ready, sitting in control cockpits to take over when a vehicle encounters a novel situation, such as inclement weather, debris on the road, or a construction site with a flag-man waving vehicles over to drive between the cones on the wrong side of the highway. This is the strategy Roadstar.ai employs for its robo-taxis in Shenzhen, and that Los Angeles-based Starsky Robotics uses for its robo-trucks.

  It’s not just the self-driving cars that use the man behind the curtain, either. Data labeling and remote human participation help train many AI systems today. Cloud services that perform visual recognition have long had humans in the loop to identify the images and videos the machine labels incorrectly, in some cases employing thousands of workers overseas. This gives customers better service while building up the already-huge corpus of human-labeled data. In the case of medical imaging, providers must rely on the “tool” model, where the system merely suggests interpretations or notes areas of potential interest in images, the diagnosis left up to human radiologists. Education systems can incorporate a human touch in the interaction by letting a real teacher listen in on a lesson and change the responses in real time, much like Erudite AI does in its current iteration. This is used later to train the system to give the same responses in similar situations and reduce the need for human participation, potentially freeing teachers to engage in even richer interactions with students. In such specific settings, researchers believe realistic, humanlike interaction is achievable, but only in narrow forms.
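
  For readers who want to picture that “man behind the curtain” workflow, the sketch below is a minimal, hypothetical illustration in Python: a model labels what it can, routes low-confidence items to a human reviewer, and folds the corrected labels back into the training corpus for the next round of training. The class names, threshold, and placeholder prediction are invented for this example; it stands in for the general pattern described above, not for any particular company’s system.

```python
# Minimal human-in-the-loop sketch (hypothetical): the model labels what it
# can, low-confidence items go to a human reviewer, and corrected labels are
# appended to the training corpus used for the next retraining cycle.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Item:
    data: str                       # e.g., a path to an image
    label: str | None = None
    confidence: float = 0.0

@dataclass
class HumanInTheLoopPipeline:
    threshold: float = 0.9          # below this, defer to a human
    review_queue: list[Item] = field(default_factory=list)
    training_corpus: list[Item] = field(default_factory=list)

    def model_predict(self, item: Item) -> Item:
        # Placeholder for a real model call (a vision API, a tutoring model, etc.).
        item.label, item.confidence = "cat", 0.62
        return item

    def process(self, item: Item) -> Item:
        item = self.model_predict(item)
        if item.confidence < self.threshold:
            self.review_queue.append(item)      # a human will relabel this later
        return item

    def apply_human_correction(self, item: Item, corrected_label: str) -> None:
        item.label, item.confidence = corrected_label, 1.0
        self.training_corpus.append(item)       # grows the human-labeled corpus

pipeline = HumanInTheLoopPipeline()
pipeline.process(Item("photo_001.jpg"))
if pipeline.review_queue:
    pipeline.apply_human_correction(pipeline.review_queue.pop(), "dog")
```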

  Like the Wizard of Oz himself, developers might ask us to pay no attention to the man behind the curtain, but the concept of a human in the loop will remain indispensable for decades to come for certain systems, especially when it comes to ethical and moral decisions. The US Department of Defense still requires a human to make decisions on the use of lethal force, typically thinking about automated processes in terms of decision trees, says Matt Hummer, the director of analytics and advisory services at Govini. The question, then, is where on that decision tree does the human reside? Does a human pull the trigger or press the shiny red button, or does that person merely monitor the actions of the system to ensure proper function? Hummer can imagine times when the military could rely on machines to make automated decisions involving lethal force, particularly in defensive scenarios when time is critical. The Department of Defense has invested heavily in virtual reality and other battle simulation systems to help train systems, Hummer says, but most believe a human will remain in the loop and AI will “do a lot of training that will create those decision trees for us in the future.”

  Humans recognize a significant difference between a mission-driven machine making a fatal mistake and a person making a similarly faulty decision. We allow for errors when a person makes a decision—nobody’s perfect, after all—but we expect the machines to work right every time. What happens when innocents are illegally killed? Military leaders can’t court-martial a machine, especially one that can’t explain how it made its decision. Do they prosecute the developer, or the monitor, or the soldiers who called in the AI-driven system from the field? And how do they ensure that the consequences of military applications remain confined to the battlefield? That’s where the training needs to be impeccable, says Hummer, with safeguards in place if the machine encounters an uncertain situation. For example, an AI-assisted weapon should recognize a nonmilitary situation and refuse to fire. “But even then, we can have faulty AI and have a bad situation,” he says.

  Military use pushes these questions to an extreme edge, albeit an important one to consider given the billions of dollars flowing into AI-powered defense applications around the world. To address the common-yet-critical ethical scenarios people might encounter on an everyday basis, Peter Haas and his colleagues at Brown University’s Humanity Centered Robotics Initiative have taken a novel approach to the human in the loop concept. One piece of their cross-disciplinary approach puts a human and a machine together in a virtual reality simulation, allowing the machine to learn from the human as he or she makes decisions and performs actions. The system works for basic understanding and manipulation, but Haas, the associate director of the center, says the strategy also works in a broader initiative to tie morals to scenes and objects.

  He explains: “In a specific scene, a certain set of objects determines a certain behavioral pattern. So, you see a scene and it has desks, a blackboard, and there’s a bunch of children sitting at the desks. You’re going to expect this is some sort of school scenario. We’re trying to understand, if that’s the expectation, are there certain objects that would change the expectations of the scene? If you see a gun in that scene, then you see danger or a security guard or something like that. You’re looking for other objects to figure out what the expectation for behavior is.”

  Currently, they perform the research in a virtual reality setting. The goal is to prepare more capable robots for interaction within societal environments, basing system norms on human behaviors and ensuring their relevance to the objects and context of their surroundings, Haas says. It doesn’t take too active an imagination to see how various AI systems might come together in that situation. In the schoolroom scenario, for example, facial recognition might kick in to instantly identify the gun-toting person as a police officer who’s visiting the class. “The advantage that we have for robots and AI agents is that we have the ability to draw on large databases of information that humans might not have access to immediately,” Haas says. A human security officer simply could not make that identification as quickly or as reliably in a live threat situation. “Robots and AI agents can quickly leverage big data to solve problems that humans couldn’t, but AI agents don’t have any of the moral competency that humans possess.”
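
  Haas’s description suggests a simple mental model: a base set of objects establishes a default expectation for a scene, and certain additional objects shift that expectation. The toy sketch below expresses that lookup in Python; the object lists and norms are invented purely for illustration and are not the Brown initiative’s actual system.

```python
# Toy sketch (hypothetical): objects in a scene set a default behavioral
# expectation, and certain added objects change what behavior is expected.
SCENE_NORMS = {
    frozenset({"desks", "blackboard", "children"}):
        "classroom: quiet, instructional behavior expected",
}
MODIFIERS = {
    "gun": "possible threat: check for a uniform, badge, or known-officer match",
    "security_badge": "authorized visitor: routine behavior expected",
}

def expected_behavior(objects: set[str]) -> list[str]:
    expectations = []
    for scene_objects, norm in SCENE_NORMS.items():
        if scene_objects <= objects:       # base scene recognized
            expectations.append(norm)
    for obj, shift in MODIFIERS.items():
        if obj in objects:                 # object that changes the expectation
            expectations.append(shift)
    return expectations or ["unknown scene: defer to a human"]

print(expected_behavior({"desks", "blackboard", "children", "gun"}))
```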

  They’re even further from understanding the variation of norms from one situation to another, let alone one culture to the next. One of Haas’s colleagues has researched cultural norms in Japan and the United States to identify commonalities around which they can begin developing moralistic behaviors in AI agents. Over the coming decade or two, he and his colleagues imagine widespread global participation, with augmented and virtual reality systems helping gather human reactions and behaviors that can help create a moral and ethical library of actions and reactions from which robots and other AI systems can draw.

  What makes the center’s approach so intriguing is its aim to incorporate as wide a set of human inputs as possible. AI developers will have to put many of our grayest ethical debates into black-and-white code. Whose norms do they choose—and are those norms representative of human diversity? Algorithms have expanded beyond mere tools to optimize processing power in computer chips, match advertisements with receptive audiences, or find a closely matched romantic partner. Now, code influences far less obvious decisions with far more ambiguous trade-offs, including our voting preferences, whether we want our car to swerve off the road to avoid a dog, or whether a visitor in a classroom presents a threat to our children.

  A DIRECT LINK INTO PHYSICAL HEARTS AND MINDS

  These examples already raise critical questions about digital democracy or digital feudalism, but what happens when the machine connects directly into the human brain and/or body? How do we ensure trust and a balance of power when direct computer-brain interfaces could understand neural processes, read intentions, and stimulate neurons that enable hearing, vision, tactile sensations, and movement? Phillip Alvelda remembers the first time a DARPA-supported haptics technology allowed an injured man to feel an object with a prosthetic arm and fingers. The guy joked about becoming right-handed again, and the crowd laughed through the tears, says Alvelda, a former program manager at DARPA’s Biological Technologies Office. Now, scientists can induce feelings of touch, pressure, pain, and about 250 other sensations. “The medical treatments are legion,” he says now, speaking in his post-DARPA capacity. “We can build artificial systems to replace and compensate for parts of the brain that were damaged.”

  These cortical chip implants already can help alleviate tremors in Parkinson’s patients, restore sight, and help overcome damage from strokes or other brain traumas, and researchers already have the fundamental understanding necessary to dramatically expand their capabilities. Neuroscientists already can identify the brain’s abstraction of various ideas—for example, they can track the regions that light up in response to very specific concepts encoded in language, Alvelda explains. “If I can understand the core piece, we don’t need the language,” he says. “We can interface not at the level of words, but at the level of concepts and ideas. There’s no reason it should be limited to seeing or sensing. There’s no reason we couldn’t communicate at the level of feelings or emotions.”

  At the South by Southwest Festival in March 2017, Bryan Johnson pointed out just how inefficient it was to have hundreds of people sitting quietly in the same room and listening to just three or four panelists for an hour. Johnson, the founder of Kernel, a company working on brain implant technologies, marveled at the aggregate brainpower gathered in the hotel ballroom that afternoon. “What if we could have a conversation between all of us at the same time?” he asked. That sort of hive communication, supplemented by an AI to help separate signal from noise, might allow a crowd to instantly share and process all their collective emotions, concepts, and knowledge, perhaps gaining efficiency by activating rich neurological processes or communicating in images instead of words.

  That depth of communication, if even possible, remains many technological breakthroughs away from feasibility. For example, such a system would have to cut through enough noise to ensure human brains can process vast streams of incoming information in a room with lots of participants. And given our reliance on so many visual, tonal, and gestural cues to enhance our communication and understanding, we might lose meaning in the interactions that don’t include those signals. But this is not the stuff of fantastical science fiction anymore; researchers have already created neural implants that supplement or replace lost brain function. “With the progress we’ve made in recent years, these are things we could start building now,” Alvelda says. “This is not a speculative thing. We know which part of the brain does it. We’re beginning to understand the coding. We can implant the devices.” It will take time to get regulatory approvals for brain surgeries, so the initial cases will involve injured patients with few other options. And some technical challenges remain, including work to expand the bandwidth of data feeds in and out of the chip, but researchers don’t need a grand breakthrough to make all this work, Alvelda says. “Today we’re making devices that can perform the functions for animals, and we’re a year or two at most from doing these experiments in humans with full bandwidth and wireless connection,” he says. The technology and procedure will still need to go through the FDA protocol, but the timeline won’t be measured in decades. “The [time] between the moment when we write the first synthetic messaging into a human, to the point where we get it commercial—where a blind person has an artificial vision system implanted in the skull—that’s seven or eight years from where we are today,” in 2018.

  That’s not a lot of time to figure out the thorny ethical problems that arise with direct links to the human brain. The mere idea of neural implants raises enough red flags, concerns that Alvelda and his colleagues, peers, and predecessors at DARPA readily acknowledge. On the most basic level, there are questions about the morality and ethics of performing invasive brain surgery to implant such chips. Beyond that, DARPA identified two other areas of principal concern. First, neural implants could open a new avenue for hacks that use a radically more direct form of control—give me what I want, and I’ll let you have your brain back. Second, if security issues were addressed and the potential to restore or augment human senses was made available, would it become something only the affluent could afford? If rich parents can augment their kids and poor parents can’t, we risk driving an even greater wedge into society. The concepts of viruses and firewalls take on a whole new level of meaning in this environment, Alvelda says, and they’re not issues confined to a far-off future.

  Unforeseen quandaries will arise as neural implants gain more power and wider use, too. By 2035, as more communication occurs directly from one brain to another, we might need a whole new method of verifying spoken word inferences. Meanings might change, as an official “spoken” word suggests one idea, but the concept transmitted to a smaller set of people suggests another. It will get harder to know whom or what to trust. The science-fiction tropes about thought control might also come into play. Just imagine a criminal trial in which a witness is asked to testify about communications received from a defendant’s neural implant. Which thoughts are private? Which can be subpoenaed? If every thought ends up being fair game, then we may end up losing our freedom to think and reflect as we choose. That is a recipe for oppression: the end of our freedom to change our opinions and of the leeway to classify our own thoughts as tentative, inappropriate, self-censoring, or self-adjusting.

  As Alvelda notes, it’s already possible to pass information between silicon and neuron. Does every thought and interaction that moves across that threshold become fair game to investigation, depriving people of the freedom to think and reflect as they choose? If uploads, downloads, and lateral transmissions to others become recordable, analyzable, and judgeable, then the nature of thought itself has changed from a fleeting artifact of brain activity to a firm transaction. One person’s internal reflection might become subject to editing, manipulation, and misappropriation by others.

  Any progression toward more powerful neural implants and communications must include, at a minimum, a bright-line legal protection of thought. This will be a necessary first step to maintain the privacy of our cognitive and conscious minds, and it will justify and encourage the development of technologies that help shield our private inner selves from others.

  NASTY, BRUTISH, AND SHORT

  The concepts of fairness and justice can be hard to define in the context of AI systems, especially when we try to account for what’s fair for an individual versus what’s fair for a group. “We found there are a lot of nuances just to what the notion of ‘fairness’ means,” says Jeannette Wing, the head of Columbia University’s Data Sciences Institute. “In some cases, we might have distinct reasonable notions of fairness, but put them together and they’re in conflict.” For example, what might be fair for a group of people might not be fair for an individual in that group. When interviewing candidates for a job, a hiring manager could deliberately choose to interview unqualified female candidates simply to satisfy statistical parity. After all, granting interviews to the same percentage of male and female applicants would suggest fair treatment of females as a group. Clearly, though, by deliberately choosing to interview the unqualified female applicants, the manager would be acting unfairly toward qualified women who applied but weren’t granted interviews.
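
  The tension Wing describes can be made concrete with a small, hypothetical example in Python. A manager interviews the same share of male and female applicants, so the statistical parity gap is zero and the group-level metric looks fair, yet the qualified women in the pool are the ones passed over. All names and numbers below are invented for illustration.

```python
# Hypothetical illustration: group fairness (statistical parity) can be
# satisfied while individual fairness is violated.
def interview_rate(applicants, interviewed):
    return len(interviewed) / len(applicants) if applicants else 0.0

# Applicant pools as (name, qualified?) pairs -- invented data.
male_applicants   = [("M1", True), ("M2", True), ("M3", False), ("M4", True)]
female_applicants = [("F1", True), ("F2", False), ("F3", True), ("F4", False)]

# The manager interviews 50% of each group, but picks the unqualified women.
interviewed_men   = [("M1", True), ("M2", True)]
interviewed_women = [("F2", False), ("F4", False)]

parity_gap = abs(interview_rate(male_applicants, interviewed_men)
                 - interview_rate(female_applicants, interviewed_women))
print(f"statistical parity gap: {parity_gap:.2f}")  # 0.00 -> looks fair for the group

qualified_women_skipped = [name for name, qualified in female_applicants
                           if qualified and (name, qualified) not in interviewed_women]
print(f"qualified women not interviewed: {qualified_women_skipped}")  # unfair to individuals
```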

  The individual-group divergence matters because so many AI agents draw individual conclusions based on patterns recognized in a group of similar individuals. Netflix recommends movies based on the ways an individual’s past patterns mirror those of like-minded viewers. Typically, systems can draw blunter, but more accurate, predictions from a larger group. Narrowing in on a smaller group for finer recommendations raises the likelihood of error. That’s not a problem when trying to pick the evening’s entertainment; it’s a major problem when trying to define the terms of probation or a jail sentence for a defendant. As judges supplement their own expertise with statistical models based on recidivism, they risk losing the nuance of an individual defendant’s circumstances. In theory, such algorithms eventually could help judges counterbalance their biases and perhaps close racial disparities in sentencing, but outliers will always exist and these systems, as of 2018, are not as failsafe as we expect objective machines to be.

 
