Solomon's Code


by Olaf Groth


  When she set out to study the LAPD’s use of its intelligence-driven policing, Brayne, the University of Texas sociologist, saw ample evidence of retroactive data surveillance. It naturally occurred as investigators went to the various databases to research potential suspects and construct an argument for search and arrest warrants. But in some cases, she says, that information never appeared in affidavits or in evidence. It was, she says, “rendered invisible when it [was] submitted to the courts.” She would see it happen in the field, but then not see evidence of it in the courtroom.§§§

  Yet, what surprised her even more was the skepticism many law enforcement officers expressed about the systems, which could integrate a range of officer-monitoring technologies, such as GPS tracking of their patrol cars. As a scholar of the criminal justice system, Brayne expected officers to embrace the “information is power” aspect of data intelligence, but many saw it as an entrenchment of managerial control. In fact, the Los Angeles Police Protective League, the union of LAPD officers, has resisted the use of a range of monitoring technologies, many of which, while available, aren’t used.

  Few Los Angeles and New York City residents can mount the same measure of resistance against these technologies, in part because neither the LAPD’s nor the NYPD’s predictive policing systems were proactively communicated to the communities, much less debated in a wider public forum.¶¶¶ That shouldn’t come as a complete surprise; police departments playing catch-up with tech-savvy criminals don’t want to tip them off about enhanced capabilities. As Jonathan Feldman of the California Police Chiefs Association put it in a public hearing before the California State Assembly: The broad public, including criminals, already has all these technologies, so how are police supposed to keep citizens safe if they can’t use the same or better tools to connect the dots on illicit activity? And if we debate everything the police do publicly, we tell the bad guys how to avoid detection.###

  Yet, there’s an inherent open-or-closed tension within law enforcement itself. Law enforcement officials also want the public to know about mass surveillance, so it can act as a deterrent to criminal activity in general, Brayne notes. “Part of the thing is to communicate to these guys on the street, ‘Hey we’re following you. We know who you are and who your affiliates are and where you hang out and what you’ve been up to. So, don’t do something illegal because we are already on to you,’” she says. “If you don’t ever need to intervene that’s the most effective law-enforcement mechanism.”

  That’s especially critical in an age when ever “smarter” digital technologies offer reach, skill, and anonymity all at once. Anybody with a mobile phone can access advanced digital tools and inflict reputational and material harm on others with very little accountability. In this environment, law enforcement needs the capability not just to track perpetrators, but ideally to prevent them from acting. But they’re damned if they do and damned if they don’t. Americans might despise the idea of unfettered surveillance, but the fact that federal agencies had pertinent evidence yet couldn’t connect the dots or share information across departments to prevent the 9/11 terrorist attacks might bother many of them even more. The 9/11 Commission Report, released almost three years later, found that various US authorities collectively had the information they needed to identify and stop the terrorists but had no systems in place to share it. It recommended the creation of a national intelligence director to help coordinate intelligence across agencies.

  Surely, tourists feel safer traveling through New York City today, now that authorities can better track information, predict potential incidents, and, hopefully, defuse tensions and prevent criminal activity and harm. If their primary purpose is to prevent as much injury and death as possible, dispatching first responders after a crime occurs amounts to a failure of duty and a significant drain on human and economic resources. And that doesn’t even begin to touch on the subsequent human and monetary costs of trials, probationary proceedings, and incarceration. Judges in parts of the United States already use AI systems to help determine eligibility for bail and similar probationary questions. Yet, neither the systems nor their developers can (or will) explain the machine’s reasoning, often citing the need to retain trade secrets in a competitive marketplace. This is a problem in countries purportedly based on the transparent rule of law. If we can’t explain the reasoning, then defense attorneys can’t argue against it, diminishing their ability to provide an alternative narrative for the defendant.

  But beyond questions of law, we might ask deeper questions about how this influences the formation of human judgment. Judges and juries bring their intellectual and analytical skills to bear in the courtroom, but also their notions of social equity and empathy for both victims and offenders. The “jury of peers” is a fundamental strength of the US judicial system, in part because people can contextualize the offending act. At present, developers can’t bake such a full array of human contextualization into machine learning algorithms, which are most often designed by programmers thousands of miles away from a given situation. So, even as the US judicial system seeks ways to remove tedium and bias from some of the tasks required of the bench, how will it digitize our empathy and a sense of our shared, often-conflicted social values?

  SERVICE DELIVERY, OR PUNITIVE ACTION?

  It’s the type of question worth asking, because the upsides and downsides both could transform society. Predictive policing could establish more civility in communities, fostering an atmosphere that’s conducive to business, tourism, customer traffic, and everyday life. By protecting our property and personal well-being, it could help attract greater investments in neighborhoods and social goods, building the types of wealth that secure families and help fund schools and other community amenities. We might even refer to those outcomes of predictive policing as “trickle-up” urban development. But we also need to think about the potential for that same trickle-up development to generate rampant gentrification and the marginalization of people who are “digitally less hygienic” or otherwise disadvantaged because of their socioeconomic status. We need to consider how to integrate AI and other advanced technologies with broader urban- and social-development policies, so these innovations don’t amplify existing problems.

  As Brayne observed, the LAPD’s predictive policing initiative already prompts different reactions to individuals, even for the same types of incidents. Police typically adopted a reactive mindset when responding to calls for domestic violence in neighborhoods with a lot of gang, drug, or other criminal activity. Yet, a similar call from someone with a house in an affluent neighborhood and other positive attributes culled from background databases put officers in a “service delivery” mindset. “In rich communities, a lot of the calls for service would be people threatening to kill themselves,” she says. “In those cases, they would look to see what might be missing here. Do they already have linkages with child and family services? Did they recently get divorced? Did they lose their job? It’s very ‘service delivery,’ rather than ‘incriminating.’ . . . In an area with a lot of gang activity and crime, they look to see if this woman is also involved in a criminal justice system, like maybe she’s out on parole. It is service delivery oriented when there are kids involved, for sure, rather than just punitive. But yes, I don’t know if I saw a lot of examples of it disconfirming biases.”

  While the ACLU and some LA-based organizations have started to push back on AI-based predictive policing, discussions about how to govern these systems and what values the community wants to instill in them remain, at best, preliminary. Broader public awareness isn’t much better. In fact, Chinese citizens might be more broadly aware of and in tune with the social credit system than Americans are with predictive policing activity. So, where will this lead in both China and the United States? What might build trust in one country could erode it in the other. Citizens might start to avoid areas with camera surveillance, shunning targeted neighborhoods and depriving them of commercial or civic activity. The same systems designed to reduce crime might increase bias and social stigma, creating a database that supports the entrenchment of stereotypes about minorities.

  The fact is, predictive analytics, AI, and interactive robotics already have become essential and valuable tools for individuals, governments, and businesses. To enhance our lives and our communities, however, we need to operate these technologies within a shared set of community values, to balance the power of their predictions against the interests of the people affected by them, and to establish a new threshold of trust, lest we forget that trust is the most valuable currency in a society. But unfortunately, if unsurprisingly, the public and political debate about AI’s influence on a variety of societal power balances will advance far more slowly than technology does.

  BACK TO THE FUTURE SHOCK

  While pockets of expertise have sprung up throughout the world, the United States and China have emerged as the undisputed leaders of AI research, development, and deployment. Yet, even these two countries—perhaps especially these two countries—have seen the pace of technological development advance beyond the regulatory, legal, or ethical frameworks needed to govern the role of thinking machines. Humanity has never stopped a sector of technological development cold. While different countries have adopted moratoriums or bans at various times—on cloning or chemical and nuclear weapons—we continue to advance the state of the art in genetic engineering, weapons, and virtually every other technological category. Humans consistently advance. We experiment, fail, and get hurt. And then we shape, optimize, and perfect technologies, mitigating their risks only occasionally, and more slowly, with multinational agreements and monitoring institutions.

  Crops of genetically modified organisms (GMOs), for example, provide significant benefits for broad swaths of the global population. Modified plants engineered in the lab can provide greater pest and drought resilience, more affordable production, and enhanced control of water and other resources, especially in impoverished regions of the globe that are fighting starvation. Secondary economic benefits accrue as well—from industrial-scale job creation in the development, cultivation, and harvesting of crops, to the shareholder returns for retirement funds invested in GMO companies. But many people suspect that GMOs might not be healthy for us or for the natural ecosystems of crops. European citizens feel more strongly about the danger of such organisms, in part because of a general distrust of humanity’s manipulation of nature and of large corporate interests, while people in less-developed nations might see the direct advantages of GMOs as they struggle to support themselves with less-robust natural crops.

  We see similar arguments play out in anything from arms control to the global production and consumption of fossil fuels, with local historical experiences, divergent philosophies about economic growth, and varying degrees of consensus about viable alternatives driving different debates in different places. Europe, for example, suffered through two catastrophic wars and various dictatorships, some of which conducted experiments on human genetics, ingraining a deep belief that interference with nature is morally reprehensible. A more laissez-faire attitude toward corporate innovation and capitalistic pursuits, including GMOs, makes sense in the United States, which never suffered a similar direct trauma. We all innately realize any technology can carry dual purposes—a hammer can pound a nail or crush a skull—but our human imagination can conjure vastly different images of the potential harm to ourselves and our societies.

  Even when we fail, dampening our enthusiasm for growth and giving us a proverbial bloody nose, we keep pushing forward. The dot-com implosion hardly even slowed the web’s increasingly pervasive, influential, and powerful role in our lives. This inexorable drive forward raises critical concerns, particularly in relation to artificial intelligence, where one has no problem finding doomsayers or imagining grim futures. This sort of dystopia has longstanding precedent in science fiction and literature, from Frankenstein to the industry of futurism ignited by Future Shock, the 1970 phenomenon and best-seller from Alvin Toffler, who argued that too much change in too short a time would overwhelm people and societies. Futurists and sociologists have warned us ever since and prescribed ways to prepare for the dangerous effects of accelerating change. The same prophets talking today about the potential for AI to create a feudal structure of workers—split between owners and the gig workers trying to latch on with them—echo Toffler’s concerns about a shredding of the economic fabric and the creation of a temporary-worker underclass.

  These visionaries might very well be right, but such concerns about the dangers rarely translate into proactive decisions to take back control of development and check the power of technology over our lives. Apothecaries made medicines and poisons, eventually producing powerful opiates long before recognizing the potential dangers of abuse. When innovations produce what we regard as harm to individual people, we do little to act. Only when they become major health epidemics—cigarettes causing widespread cancers or opioid addictions gripping more affluent or privileged sectors of US society—do policy makers begin to call for action. One of the few examples of human foresight and proactive guidance of technological development involves the genetic engineering of humans, where UN member countries adopted guidelines condemning it before it was openly practiced.

  AI IS FASTER AND MORE POWERFUL

  In many circles, efforts to guide the development of artificial intelligence generate a similar sense of urgency. For one, its pervasive influence on our lives, values, and relationships could transform human societies, cultures, and economies far more radically than even the Internet has. Then, much as with the Internet, AI-based innovation can occur at incredible speeds, even if it comes in fits and starts. It has taken seventy years and a few chilly “AI winters” to even get to this nascent point where we stand today, but a few breakthroughs in deep-learning techniques, the explosion of large data sets, and the availability of cheaper computing power unleashed a breakneck pace of new AI applications, often surprising experts within the field itself. No one expected AlphaGo to beat Go world champion Lee Sedol in a five-game match in 2016, yet the neural network won four of the games. Then, about eighteen months later, AlphaGo Zero came along and, having taught itself the game, beat its predecessor in a hundred consecutive matches.

  Each instance of these rapid changes that catch us off guard raises difficult questions that need answers not just from technologists, politicians, or regulators, but from society as a whole. As Burkhard Huhnke, vice president of automotive strategy at Synopsys, a Silicon Valley software and microchip developer, and former senior vice president of e-mobility and innovation at VW America, explains, our roadways become far safer if we remove error-prone humans from behind the wheel. But the idea of telling people they can’t drive anymore is a different issue altogether, says Huhnke. That’s “a more social aspect,” he says. “Because it touches the liberty of being a driver yourself. . . . This can’t be fixed by regulators; this has to be figured out in a completely different dimension.”****

  GETTING UNDER OUR SKIN

  The unusually close connection between the PARO robotic seal and its users might begin with its sheer cuteness, but it’s the subtle design touches that really secure the bond. Under its soft white fur, developers placed the robot’s touch sensors in balloons so users wouldn’t feel hard spots. Its big black eyes follow movements around the room. It moves its flippers and gurgles with attention. It changes its body posture in response to human touch, and it knows the difference between being stroked and being rough-handled. Even when recharging, its power cord makes it look like it’s sucking on a pacifier. That adorability belies the serious research and development that made PARO such an effective aid for caregivers serving elderly patients, especially those with dementia and Alzheimer’s. Developed by Takanori Shibata, a chief research scientist at Japan’s National Institute of Advanced Industrial Science and Technology (AIST), the artificial intelligence underlying PARO adjusts its behavior to its interactions and surroundings. It responds to its name and its users’ most common words and, if it goes long enough without affection, it starts to squeal.

  That combination has imbued PARO with a remarkable ability to help caregivers soothe disoriented elderly patients and help them communicate. After a 2008 study determined its effectiveness, the Danish Technology Institute encouraged every Danish nursing home to buy one.†††† Since its introduction in 2004, thousands of PARO seals have been put into use in nursing homes in Japan, Europe, the United States, and elsewhere. In the weeks after an earthquake and massive tsunami wrecked Japan in 2011, a pair of donated PARO seals helped soothe nursing home residents in Fukushima, one of the country’s worst-hit areas.‡‡‡‡

 
