The Formula: How Algorithms Solve All Our Problems... and Create More


by Luke Dormehl


  In other words, the arrival of the Breathalyzer turned a person’s ability to drive after several drinks from an abstract “standard” into a concrete “rule” in the eyes of the law. This issue will become even more pressing as the rise of Ambient Law continues—with technologies having the power not only to regulate behavior but to dictate it as well, sometimes by barring particular courses of action from being taken.

  Several years ago, Google announced that it was working on a fleet of self-driving cars, in which algorithms would handle everything from planning the most efficient routes to changing lanes on the motorway by computing the smoothest path that balances trajectory, speed and a safe distance from nearby obstacles. At the time of writing, these cars have completed upward of 300,000 miles of test drives in a wide range of conditions without any reported accidents—leading to the suggestion that a person is safer in a car driven by an algorithm than in one driven by a human.45 Since cars driven by an algorithm already conform to a series of preprogrammed rules, it is easy to see why specific laws would become just a few more rules to add to the collection. This could lead to a number of ethical challenges, however. As a straightforward example, what would happen if a passenger needed to reach a hospital as a matter of urgency—and doing so meant breaking the speed limit on a largely empty stretch of road? It is one thing if the driver/passenger were ticketed at a later date thanks to the car’s built-in speed tracker. But what if the self-driving car, bound by fixed Ambient Laws, refused to break the regulated speed limit under any circumstances?

  You might not even have to wait for the arrival of self-driving cars for such a scenario to become reality. In 2013, British newspapers reported on road-safety measures being drawn up by EC officials in Brussels that would see all new cars fitted with “Intelligent Speed Adaptation” systems similar to those already installed in many heavy-goods vehicles and buses. Using satellite feeds, or cameras designed to automatically detect and read road signs, such vehicles could be forced to conform to speed limits; any attempt to exceed them would result in the car’s brakes being applied.46
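
  To make the distinction concrete, the following is a minimal sketch, written in Python, of the kind of logic such a speed-limiting system might embody. The function names, units and values are my own illustrative assumptions rather than a description of any real implementation; the point is simply that once the limit becomes a hard rule in code, any exception (an emergency dash to a hospital, say) has to be anticipated and written in ahead of time.

    # Illustrative sketch only: a hard "Ambient Law" speed governor.
    # All names, units and thresholds here are hypothetical assumptions.
    from typing import Optional

    def effective_limit(map_limit_mph: float, sign_limit_mph: Optional[float]) -> float:
        """Prefer a freshly read road sign; otherwise fall back to map data."""
        return sign_limit_mph if sign_limit_mph is not None else map_limit_mph

    def governor_action(current_speed_mph: float,
                        map_limit_mph: float,
                        sign_limit_mph: Optional[float] = None) -> str:
        """Decide what the car does. The rule is absolute: there is no
        emergency override, because none was written into the code."""
        limit = effective_limit(map_limit_mph, sign_limit_mph)
        if current_speed_mph > limit:
            return f"apply brakes until speed <= {limit} mph"
        return "no action"

    # An urgent hospital run on an empty road makes no difference to the rule.
    print(governor_action(current_speed_mph=72, map_limit_mph=60))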

  One Man’s Infrastructure Is Another Man’s Difficulty

  The move toward more rule-based approaches to law will only continue to deepen as more and more laws are written with algorithms and software in mind. The fundamental problem, however, is that while rules and standards do exist as opposite abstract poles in terms of legal reasoning, they are exactly that: abstract concepts. The majority of rules carry a degree of “standard-ness,” while many standards will be “rule-ish” in one way or another. Driving at a maximum of 60 miles per hour along a particular stretch of road might be a rule (as opposed to the standard “don’t drive at unsafe speeds”), but the fact that we might not receive a speeding ticket for driving at, say, 62 miles per hour suggests that these are not hard-and-fast in their rule-ishness. While standards might be open to too much subjectivity, rules may also not be desirable on account of their lack of flexibility in achieving broader social goals.

  As an illustration, consider the idea of a hypothetical sign at the entrance of a public park that states, “No vehicles are allowed in this park.” Such a law could be enforced through the use of CCTV cameras equipped with algorithms designed to recognize moving objects that do not conform to the shape of a human. In the hard, paternalistic “script” version, cameras might be positioned at park entrances and linked to gates that open only when the algorithm is satisfied that every member of a party is on foot. In the “softer” paternalistic version, cameras might be positioned all over the park and could identify the driver of any vehicle seen inside park boundaries by matching their face to a database of ID images, then issue an automated fine sent directly to the offender’s home address.
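
  To see how little judgment survives the translation, here is a minimal sketch of the park’s rule as code, under entirely hypothetical assumptions about what the CCTV classifier outputs (a label per detected object and, in the softer version, a matched identity). Every label, identity and penalty below is invented for illustration.

    # Illustrative sketch of the "no vehicles in the park" rule as code.
    # Labels, identities and the wording of the fine are hypothetical stand-ins
    # for whatever a real CCTV classifier and ID database would produce.
    from typing import Optional

    VEHICLE_LABELS = {"car", "truck", "motorcycle", "bicycle"}  # the code must decide, up front, what counts as a vehicle

    def open_gate(detected_labels: set) -> bool:
        """Hard 'script' version: the gate opens only if every member of the
        party has been classified as a pedestrian."""
        return detected_labels.issubset({"pedestrian"})

    def issue_fine(detected_label: str, matched_identity: Optional[str]) -> Optional[str]:
        """Softer version: identify the offender after the fact and post a fine."""
        if detected_label in VEHICLE_LABELS and matched_identity is not None:
            return f"Automated fine issued to {matched_identity} for bringing a {detected_label} into the park."
        return None

    print(open_gate({"pedestrian", "bicycle"}))        # False: a bicycle is treated exactly like a car
    print(issue_fine("bicycle", "resident #4471"))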

  This might sound fair—particularly if the park is one that has previously experienced problems with people driving their cars through it. But would such a rule also apply to a bicycle? Deductive reasoning may well state that since a bicycle is also a vehicle, any law stating that no vehicles are allowed in the park must apply to bicycles too. However, is that the intention of the law, or is the law in this case designed to stop motor vehicles from entering the park, since they create noise and pollution? Does such a rule also mean barring an ambulance that needs to enter the park to save a person’s life? And if it does not, does this mean that algorithmic laws would have to be written in such a way that they do not apply to certain classes of citizen?

  One more obvious challenge in a world that promises “technology [that] more fundamentally understands you, so you don’t have to understand it” is that people might break laws they are not even aware of. In many parts of the world, it is against the law to be drunk in public. With CCTV cameras increasingly equipped with the kind of facial and even gait-recognition technology (i.e., analyzing the way you walk) that might allow algorithms to predict whether or not a person is drunk, could an individual be ticketed for staggering home on foot after several drinks at the pub? Much the same is true of cycling at night without amber reflectors on the bike’s pedals, which is illegal in the UK—or jaywalking, which is legal in the UK but illegal in the United States, Europe and Australia.47 In both of these cases, facial-recognition technology could be used to identify individuals and charge them. As a leading CCTV industry representative previously explained in an article for trade magazine CCTV Today, “Recognizing aberrant behavior is for a scientist a matter of grouping expected behavior and writing an algorithm that recognizes any deviation from the ‘normal.’”48

  I should hardly have to point out what is wrong with this statement. “Normal” behavior is not an objective measure, but rather a social construct—with all of the human bias that suggests.
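
  To see how thin that notion of “normal” can be once it is written down, consider a minimal sketch of the kind of deviation test the quotation describes. The single behavioral feature, the sample data and the threshold below are all invented; the point is that “normal” is simply whatever population someone chose to measure, and “aberrant” is whatever falls outside a cutoff someone chose to set.

    # A minimal sketch of "deviation from the normal" detection, assuming a
    # single behavioural feature (say, stride regularity from gait analysis).
    # The sample data and the threshold are invented for illustration.
    from statistics import mean, stdev

    def is_aberrant(observation: float, normal_sample: list, threshold: float = 3.0) -> bool:
        """Flag anything more than `threshold` standard deviations from the mean
        of the 'expected' population. Whoever chooses that population and that
        threshold has already decided what counts as normal."""
        mu, sigma = mean(normal_sample), stdev(normal_sample)
        return abs(observation - mu) > threshold * sigma

    sober_strides = [0.92, 0.95, 0.97, 0.93, 0.96, 0.94, 0.95]   # the invented "normal"
    print(is_aberrant(0.60, sober_strides))   # True: flagged, rightly or wrongly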

  Not all uses of algorithmic surveillance are immediately prejudiced, of course. For example, researchers have developed an algorithm for spotting potential suicide jumpers on the London Underground by watching for individuals who wait on the platform for at least ten minutes and miss several available trains during that time.49 If such behavior is detected, the algorithm triggers an alarm. Another potential application helps spot fights breaking out on the street by identifying individuals whose arms and legs are moving back and forth in rapid motion, suggesting that punches and kicks are being thrown. Things become more questionable, however, when an algorithm is used to alert authorities to a gathering crowd in a location where none is expected. Similarly, in countries with overtly discriminatory laws, algorithms could become a means by which to intimidate and marginalize members of the public. In Russia, where the gay community has been targeted by retrograde antigay laws, algorithmic surveillance could be a means of identifying same-sex couples exhibiting affectionate behavior.
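
  The London Underground example is worth pausing on, because the rule it applies is simple enough to write out in full. The ten-minute wait comes from the description above; how many missed trains count as “several,” and the shape of the tracking data, are assumptions of mine.

    # Sketch of the platform-monitoring rule described above. The ten-minute
    # threshold is from the text; "several" missed trains is taken here,
    # arbitrarily, as three, and the data structure is an invented stand-in
    # for whatever a real CCTV tracking feed would provide.
    from dataclasses import dataclass

    @dataclass
    class PlatformTrack:
        minutes_on_platform: float
        trains_missed: int

    def should_raise_alarm(track: PlatformTrack,
                           min_wait_minutes: float = 10.0,
                           min_trains_missed: int = 3) -> bool:
        return (track.minutes_on_platform >= min_wait_minutes
                and track.trains_missed >= min_trains_missed)

    print(should_raise_alarm(PlatformTrack(minutes_on_platform=14.0, trains_missed=4)))  # True

  The same handful of comparisons, pointed at a gathering crowd or an affectionate couple rather than a lone figure on a platform, is what makes the more troubling applications just described so easy to build.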

  In a case like Russia’s, algorithms would function in the opposite way to the one I described in Chapter 1. Where companies like Quantcast and Amazon prize “aberrant” behavior on the basis that it gives them insights into the unique behavior of individual users, algorithms could instead become a way of ensuring that people conform to similar behavior—or else. As the American sociologist Susan Star once phrased it, one person’s infrastructure is another’s difficulty.50

  All of these are (somewhat alarmingly) examples of what would happen if law enforcement algorithms got it right: upholding the rules with the kind of steely determination that would put even the fictional hard-nosed lawman Judge Dredd to shame. What would happen if they got things wrong, on the other hand, is potentially even more frightening. . . .

  The Deadbeat Dad Algorithm

  On April 5, 2011, 41-year-old John Gass received a letter from the Massachusetts Registry of Motor Vehicles. The letter informed Gass that his driver’s license had been revoked and that he should stop driving, effective immediately. The only problem was that, as a conscientious driver who had not received so much as a traffic violation in years, Gass had no idea why it had been sent. After several frantic phone calls, followed up by a hearing with Registry officials, he learned the reason: his image had been automatically flagged by a facial-recognition algorithm designed to scan through a database of millions of state driver’s licenses looking for potential criminal false identities. The algorithm had determined that Gass looked sufficiently like another Massachusetts driver that foul play was likely involved—and the automated letter from the Registry of Motor Vehicles was the end result. The RMV itself was unsympathetic, claiming that it was the accused individual’s “burden” to clear his or her name in the event of any mistakes, and arguing that the pros of protecting the public far outweighed the inconvenience to the wrongly targeted few.51

  John Gass is hardly alone in being a victim of algorithms gone awry. In 2007, a glitch in the California Department of Health Services’ new automated computer system terminated the benefits of thousands of low-income seniors and people with disabilities. Without their premiums paid, Medicare canceled those citizens’ health care coverage.52 Where the previous system had notified people considered no longer eligible for benefits by sending them a letter through the mail, the replacement CalWIN software was designed to cut them off without notice, unless they manually logged in and prevented this from happening. As a result, a large number of those whose premiums were discontinued did not realize what had happened until they started receiving expensive medical bills through the mail. Even then, many lacked the necessary English skills to be able to navigate the online health care system to find out what had gone wrong.53

  Similar faults have seen voters expunged from electoral rolls without notice, small businesses labeled as ineligible for government contracts, and individuals mistakenly identified as “deadbeat” parents. In a notable example of the latter, 56-year-old mechanic Walter Vollmer was incorrectly targeted by the Federal Parent Locator Service and issued a child-support bill for the sum of $206,000. Vollmer’s wife of 32 years became suicidal in the aftermath, believing that her husband had been leading a secret life for much of their marriage.54

  Equally alarming is the possibility that an algorithm may falsely profile an individual as a terrorist: a fate that befalls roughly 1,500 unlucky airline travelers each week.55 Those fingered in the past as the result of data-matching errors include former Army majors, a four-year-old boy, and an American Airlines pilot—who was detained 80 times over the course of a single year.

  Many of these problems are the result of the new roles algorithms play in law enforcement. As slashed budgets lead to staff cuts, automated systems have moved from being simple administrative tools to becoming primary decision-makers. In a number of cases, the problem is less about finding the right algorithm for the job than about the assumption that any and every task can be automated in the first place. Take the use of data-mining to uncover terrorist plots, for instance. With such attacks statistically rare and not conforming to well-defined profiles in the way that, for example, Amazon purchases do, individual travelers end up surrendering large amounts of personal privacy to data-mining algorithms, with little but false alarms to show for it. As renowned computer security expert Bruce Schneier has noted:

  Finding terrorism plots . . . is a needle-in-a-haystack problem, and throwing more hay on the pile doesn’t make that problem any easier. We’d be far better off putting people in charge of investigating potential plots and letting them direct the computers, instead of putting the computers in charge and letting them decide who should be investigated.56
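
  Schneier’s point can be restated as back-of-the-envelope arithmetic. The figures below are purely illustrative assumptions (a vanishingly small number of real plots, an implausibly accurate detector), yet almost every alert the system raises is still a false alarm.

    # Back-of-the-envelope base-rate arithmetic. Every figure here is an
    # illustrative assumption, not a real statistic about any screening system.
    travellers      = 700_000_000   # passenger screenings per year (assumed)
    actual_plotters = 10            # true positives hidden in that haystack (assumed)
    sensitivity     = 0.99          # chance the system flags a real plotter (assumed)
    false_positive  = 0.001         # chance it flags an innocent traveller (assumed)

    true_alarms  = actual_plotters * sensitivity
    false_alarms = (travellers - actual_plotters) * false_positive

    precision = true_alarms / (true_alarms + false_alarms)
    print(f"{false_alarms:,.0f} innocent travellers flagged")            # roughly 700,000
    print(f"Chance that a given alert is a real plot: {precision:.4%}")  # roughly 0.0014%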

  While it is clear why such emotive subjects would be considered ripe for The Formula, the central problem once again comes down to the spectral promise of algorithmic objectivity. “We are all so scared of human bias and inconsistency,” says Danielle Citron, professor of law at the University of Maryland. “At the same time, we are overconfident about what it is that computers can do.” The mistake, Citron suggests, is that we “trust algorithms, because we think of them as objective, whereas the reality is that humans craft those algorithms and can embed in them all sorts of biases and perspectives.” To put it another way, a computer algorithm might be unbiased in its execution, but, as noted, this does not mean that no bias is encoded within it. What the speed-limit algorithm experiment mentioned earlier in this chapter shows more than anything is the degree to which assumptions are built into the code that computer programmers write, even when the problems being solved are relatively mechanical in nature. As technology historian Melvin Kranzberg’s first law of technology states: “Technology is neither good nor bad—nor is it neutral.”

  Implicit or explicit biases might be the work of one or two human programmers, or else come down to technological difficulties. For example, algorithms used in facial-recognition technology have in the past shown higher identification rates for men than for women, and for individuals of non-white origin than for whites. An algorithm might not target an African-American male out of overt prejudice, but the fact that it is more likely to do so than to target a white female means that the end result is no different.57 Biases can also lurk in the abstract patterns hidden within a dataset’s chaos of correlations.
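
  A toy illustration of the first of these sources of bias, mismatched error rates, using invented numbers: two groups passed through the same face-matching system, with very different false-match rates.

    # Toy illustration with invented numbers: the same system, applied to two
    # demographic groups, produces unequal false-match rates.
    groups = {
        # group label: (innocent people scanned, wrongly matched to a watchlist)
        "group A": (100_000, 50),
        "group B": (100_000, 500),
    }

    for name, (scanned, false_matches) in groups.items():
        print(f"{name}: false-match rate {false_matches / scanned:.2%}")  # 0.05% vs 0.50%

  Skews like these can be produced by mismatched training data alone; the example that follows shows the subtler case, in which the bias is carried in the data’s own correlations.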

  Consider the story of African-American Harvard University PhD Latanya Sweeney, for instance. Searching on Google one day, Sweeney was shocked to notice that her search results were accompanied by adverts asking, “Have you ever been arrested?” These ads did not appear for her white colleagues. Sweeney began a study that ultimately demonstrated that the machine-learning tools behind Google’s search were being inadvertently racist, by linking names more commonly given to black people to ads relating to arrest records.58 A similar revelation is the fact that Google Play’s recommender system suggests that users who download Grindr, a location-based social-networking tool for gay men, also download a sex-offender location-tracking app. In both of these cases, are we to assume that the algorithms have made an error, or that they are revealing inbuilt prejudice on the part of their makers? Or, as is more likely, are they revealing distasteful large-scale cultural associations between—in the former case—black people and criminal behavior and—in the latter—homosexuality and predatory behavior?59 Whatever the reason, and however reprehensible these codified links might be, they demonstrate another feature of algorithmic culture. A single human showing explicit bias can only ever affect a finite number of people. An algorithm, on the other hand, has the potential to impact the lives of exponentially more.

  Transparency Issues

  Compounding the problem is the issue of transparency, or the lack thereof. Much like Ambient Law, many of these algorithmic solutions are black-boxed—meaning that people reliant on their decisions have no way of knowing whether the conclusions reached are correct or the result of distorted or biased policy, or even of erroneous facts. Because of the step-by-step process at the heart of algorithms, codifying laws should make it more straightforward to examine audit trails for particular decisions, certainly when compared with dealing with a human. In theory, an algorithm can detail the specific rules that have been applied in each mini-decision leading up to the final, major one. In practice, however, the opacity of many automated systems means that they are shielded from scrutiny. For a variety of reasons, source code is not always released to the public. As a result, citizens are unable to see or debate new rules as they are made, experiencing only the end results of decisions rather than having access to the decision-making process itself. Individuals identified as potential terror suspects may face hours of questioning, or even be forced to miss flights, without ever finding out exactly why the automated system targeted them. This, in turn, means that there is a chance they will be detained each time they attempt to board an airplane. In a Kafka-like situation, it is difficult to argue against a conclusion if you do not know how it has been reached. It is one thing to have a formula theoretically capable of deciding particular laws; it is another entirely for its inner workings to be transparent in a way that the general populace can access.
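
  What such an audit trail might look like, in miniature: the toy screening rules below are invented, but the structure (every mini-decision logged alongside the rule that produced it) is the part that matters, and it is exactly the part that black-boxing withholds.

    # Sketch of the audit trail described above: each mini-decision is recorded
    # next to the rule that produced it. The rules, weights and threshold are
    # hypothetical placeholders, not any real screening system's logic.
    def screen_passenger(name_matches_watchlist: bool,
                         paid_cash: bool,
                         one_way_ticket: bool):
        trail = []
        score = 0
        if name_matches_watchlist:
            score += 5
            trail.append("Rule 1: partial name match with watchlist (+5)")
        if paid_cash:
            score += 2
            trail.append("Rule 2: ticket paid in cash (+2)")
        if one_way_ticket:
            score += 1
            trail.append("Rule 3: one-way ticket (+1)")
        decision = "flag for questioning" if score >= 5 else "clear"
        trail.append(f"Final: score {score} -> {decision}")
        return decision, trail

    decision, trail = screen_passenger(True, False, True)
    print("\n".join(trail))   # the trail exists; whether anyone may see it is the issue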

  While these problems could, as noted, be the result of explicit bias, more often than not they are likely to be created accidentally by programmers with little in the way of legal training. As a result, there is a strong possibility that programmers might change the substance of particular laws when translating them into machine code. This was evidenced in a situation that occurred between September 2004 and April 2007, when programmers brought in from private companies embedded more than 900 incorrect rules within Colorado’s public benefits system. “They got it so wrong that there were hundreds of thousands of incorrect assessments, because the policy embedded in the code was wrong,” says University of Maryland law professor Danielle Citron. “It was all because they interpreted policy without a policy background.”

  The errors made by the coders included denying medical treatment to patients with breast and cervical cancer based upon income, as well as refusing aid to pregnant women. One 60-year-old, who had lost her apartment and was living on the streets, found herself turned down for extra food stamps because she was not a “beggar,” the term the coders had chosen to stand in for “homeless.” In the end, eligibility workers for the Colorado Benefits Management System (CBMS) were forced to use fictitious data to get around the system’s numerous errors.60
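
  The precise Colorado rules are not reproduced above, but the category of error is easy to imagine in miniature. In the invented sketch below, the only detail taken from the text is the “beggar”/“homeless” confusion; the field names and eligibility logic are hypothetical.

    # Invented sketch of a policy-translation error of the kind described above.
    # Only the "beggar"/"homeless" mix-up comes from the text; everything else
    # is a hypothetical placeholder.
    def extra_food_stamps_as_coded(housing_status: str) -> bool:
        # The programmer's category is narrower than the statute's.
        return housing_status == "beggar"

    def extra_food_stamps_as_intended(housing_status: str) -> bool:
        # The policy's intent: anyone without stable housing qualifies.
        return housing_status in {"homeless", "beggar", "living in shelter"}

    applicant = "homeless"   # the 60-year-old living on the streets
    print(extra_food_stamps_as_coded(applicant))      # False: wrongly denied
    print(extra_food_stamps_as_intended(applicant))   # True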

  The Colorado situation went beyond erroneous computer code. The laws the programmers had entered were distorted to such a degree that they had effectively been changed. As Citron points out, had such amendments been made within the framework of the legal system, they would have taken many months to push through. “If administrative law is going to pass a new policy regulation they may be required to put the policy up for comment, to have a notice period when they hear comments from interested advocates and policy-makers,” she says. “They then incorporate those comments, provide an explicit explanation for their new policy, and respond to any comments made. Only then—after this extended notice and comment period, which is highly public in nature—can a new rule be passed. What happened here was a bypassing of democratic process designed to both allow participation and garner expertise from others.” As it was, programmers were handed vast and effectively unreviewable policy-making power, beyond the reach of any kind of judicial review.

 
