—BERNARD STIEGLER, TECHNICS AND TIME, VOL. 2
There are eight million stories in the naked city. This has been one of them.
—NAKED CITY (1958–1963)
Stories make sense as a whole and of a whole: they unfold so that the end makes sense of the beginning. That’s why the very first commandment sworn to by the members of the Detection Club founded in 1931 by Agatha Christie, Dorothy Sayers, and other legends of British mystery writing was, “The criminal must be someone mentioned in the early part of the story.”19 Imagine the outcry if at the end of The Usual Suspects Keyser Söze turned out to be a shop teacher who had not yet been mentioned in the movie. Stories, like strategies, generally work by providing a carefully limited set of possibilities and then narrowing them down to one. In a mystery, the possibilities are the suspects and their sneaky behavior; in a Jane Austen novel, the possibilities are the paths that are open to the hero, framed by the paths that, by convention and character, are not. Stories operate within closed worlds of possibility.
But now more than ever, we feel that we are in an open world. Our ever-growing global network creates new possibilities—and new possibilities for creating new possibilities—every day. The history we are creating together no longer feels much like a story, although we’ll undoubtedly make one up when we’re looking backward.
But the fact that we tell (or listen to) one-hundred-hour narratives and also sniff the air with our lizard tongues, ready to dart in a new direction, is not a contradiction that needs resolution. Because these long narratives occur within constantly connected publics, they have had to take on the lizard’s ability to turn quickly. After all, it cannot be merely an accident that multiseason sagas have arisen at the same time that the “[SPOILER]” label has become a crucial piece of metadata. We need that tag because we now watch these shows together, even when we do not watch them at the same time or in the same place. Talking with friends and strangers about who will be the next person the creators will kill off or who the mysterious stranger will turn out to be helps us make sense of the sprawling plots, keeps us engaged, and gives us a sense of participation in the creation of the work.
This means, though, that with crowds anticipating every move, long narratives have to disrupt the expectations central to traditional stories. Game of Thrones—the books and television series—killed off, without fanfare, popular characters that viewers had assumed were following an arc to the end. The author, George R. R. Martin, has said that he feels a moral obligation not to reinforce the calming notion that some lives are protected in wars because they happen to be the protagonists. Literature should reflect the truth that our lives are equally precarious, and that war is a horrific waste of them. Readers and viewers may have taken to Martin’s work less for its moral stance than because never knowing which character might die keeps the long series surprising. Either way, it changes our notion of how a narrative works, as well as what we should expect from our own story.
So it is a mistake to see our constant distraction and our absorption in long-form stories as a contradiction. Rather, they are opposite ends of the same pool of complexity and randomness. Distractions are at the shallow end; long-form storytelling is at the deep end. Both recognize the overwhelming detail and arbitrariness of the waters we’re in.
Both are affecting the narratives we tell ourselves about our own lives.
The assumption that we embark on careers has been in disrepute for at least a generation. The causes of this change are multiple: economic factors that tip businesses toward hiring temps and freelancers, a business landscape marked by disruption, the globalization of the workforce, the disintermediation of business functions, the availability of platforms that match a global workforce to atomized tasks, and more. Call it the gig economy, Free Agent Nation, or “taskification,” but careers no longer seem like a natural way to organize one’s life and tell one’s story.20
The alternative is not necessarily aimlessly wandering about, our careers emulating Jack Kerouac’s cross-country drives or the caroming of a silver ball in a pinball machine. A better paradigm might be starting a family, the primordial generative activity. Much of its joy—and worry—comes from watching each member step into Heraclitus’s river. If our careers seem less like a discernible, narrow path we’re following and more like the interdependent movement that happens when everything affects everything all at once, at least we have positive models for understanding it. If the business we launch seems less like a carefully crafted timepiece and more like our child in its complex, interdependent generativity, that would not be the worst imaginable way of reframing our understanding.
Stories are a crucial tool but an inadequate architecture for understanding the future. There’s no harm in telling those stories to ourselves. There’s only harm in thinking that they are the whole or highest truth.
Morality
Just as we saw in the coda to chapter 1 that the Kingdom of the Normal and the Kingdom of Accidents are changing their relationship, so are the Land of Is and the Land of Ought.
The Land of Is, with its sinners, slouches, and villains, suffers in comparison to the perfect Land of Ought, where the ruler is wise, the citizenry is noble, and everybody does exactly what they should. And they do so precisely and only because one ought to do what one ought. No self-congratulation or luxuriating in a sense of moral righteousness mars the purity of motives in the Land of Ought. So when we mortals are wondering what is the morally right thing for us to do—a question always present, if in the background—we look up to see what goes on in the Land of Ought. But because we are mere mortals, we don’t always do what we see there—which is why, as we’ve seen, the arc of the moral universe is so long.
In the history of Western philosophy, the question of what goes on in the Land of Ought has often turned into an argument over principles. For example, in the Land of Ought, the citizens follow the principle “Thou shalt not steal.” So should you steal an apple to save your dying grandmother? No, unless there’s a higher principle that says, “Thou shalt sacrifice property rights to save lives.” (Ought’s wise ruler would undoubtedly express it more elegantly.)
This principled approach to moral philosophy is called deontology by the professionals. It has several important competitors; the best known is consequentialism, which looks to the consequences of actions to determine their morality. A consequentialist would very likely feel morally OK about stealing the apple for Grandma, assuming that the theft’s only negative effect is its negligible cost to the grocer.
These days, one particular type of consequentialism has come to dominate our moral thinking. Utilitarianism traces back to the late eighteenth century, when the philosopher Jeremy Bentham looked across the social stratification of English society and proclaimed that the pain and pleasure of an uneducated chimney sweep is as important as that of the finest snuff-sniffing, sherry-slurping lord. So, said Bentham, to determine if an act is moral, we should simply add up the pleasure and pain that would be felt by everyone affected, treating each person’s pain and pleasure equally. Then we should do that which will cause the least aggregate pain or the most aggregate pleasure.
Utilitarianism for a long time felt like a betrayal of morality, for we had assumed that moral action is what you ought to do regardless of the pain or pleasure it brings; we need morality, we thought, precisely because doing what’s right often entails self-sacrifice or pain. Utilitarianism removes everything from the Ought except its calculation of pain and pleasure. In looking solely to outcomes, it obviates much of the moral vocabulary about intentions that we have traditionally employed.
You can see this in the change in how we think about the Trolley Problem, first proposed in a 1967 philosophical article by Philippa Foot.21 In the article, Foot explores a Catholic teaching called the Doctrine of Double Effect that says that it’s permissible to do something otherwise morally wrong in order to support a higher moral principle, but only if the bad side effect is not your intent. To explore this, Foot asks us to imagine the now famous situation: You are a passerby who sees a trolley careening toward five people on its track. You can pull a lever to switch the trolley to a track that has only one person on it, or you can take no action, knowing that it will result in five deaths. Should you pull the lever?
If you say yes on utilitarian grounds, then Foot asks, why shouldn’t a surgeon kill and carve up a healthy patient in order to harvest organs that would save five other patients? The utilitarian calculus is the same: five lives for one. But—and it’s about to get tricky—the Doctrine of Double Effect says that it’s wrong to kill someone as a means to save others, as when the five patients are saved by means of the organs harvested from the one. But the five on the track are saved by your diverting the trolley to the track where there just happens to be one unfortunate person. If you could somehow yank the single person off the track, you still would have saved the five. But there’s no way to save the five patients except by killing the one healthy person; they are saved directly by that person’s death. The distinction between these direct and indirect intentions is essential to the original Trolley Problem argument.
Yes, this now sounds not only confusing but trivial, but that’s the point.22 In the fifty years since Foot towed the Trolley Problem into view, our culture has rapidly migrated to utilitarianism as the default, so we spend less time looking up to the Land of Ought, where intentions count greatly, and more time assessing pure consequences. Intentions, blame, and guilt now feel like interior states and thus distinct from the consequences that need weighing. Principles aren’t entirely gone from our moral conversations, but they can feel archaic or worse: letting five people die to maintain the purity of your intentions can seem self-indulgent.
The decline of principle-based morality has been hastened by our assigning moral decisions to AI systems. Because those systems are not conscious, they don’t themselves have intentions, and thus they don’t make distinctions between direct and indirect intents. The operationalizing of morality—turning it into programming code—is affecting our idea of morality.
Consider Isaac Asimov’s Three Laws of Robotics from a short story he wrote in 1942. (That story was included in the 1950 book I, Robot, on which the 2004 movie was based.)
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These operationalize moral principles by organizing them into a hierarchy that enables the robot to know whether it should steal an apple to save a human—yes, it should—without having to engage in endless arguments about the contradictory mandates of moral principles.
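To make “operationalize” concrete, here is a minimal sketch in Python of the kind of priority-ordered check the Three Laws describe. It is only an illustration under invented assumptions (the Action fields are made up, and the apple example is mapped onto the Second Law by imagining that the grocer has ordered the robot to leave the apple alone); it is not anyone’s actual robot code.

```python
from dataclasses import dataclass

@dataclass
class Action:
    # All fields are invented for this illustration; nothing comes from a real robotics API.
    name: str
    harms_human: bool      # covers harm caused by action or allowed through inaction
    disobeys_order: bool   # here: the grocer has ordered the robot not to take the apple
    endangers_self: bool

# The Three Laws as an ordered list of checks; earlier entries always outrank later ones.
RULES = [
    lambda a: not a.harms_human,     # First Law
    lambda a: not a.disobeys_order,  # Second Law
    lambda a: not a.endangers_self,  # Third Law
]

def choose(actions):
    # Tuples of booleans compare position by position, so satisfying the First Law
    # outweighs any combination of lower-law violations.
    return max(actions, key=lambda a: tuple(rule(a) for rule in RULES))

options = [
    Action("steal the apple to save the human", harms_human=False, disobeys_order=True, endangers_self=False),
    Action("stand by while the human dies", harms_human=True, disobeys_order=False, endangers_self=False),
]
print(choose(options).name)  # -> steal the apple to save the human
```

The point is not the particular predicates but the structure: conflicts are settled by rank order rather than by argument.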
This approach skirts around the problems we humans have with applying principles. For example, we all agree that killing is wrong, but few of us believe that that rule is absolute. That’s why we can’t come to agreement about the merits of capital punishment, abortion, drone strikes, the use of lethal force against unarmed offenders, steering a trolley into five people instead of one, or going back in time to kill baby Hitler. Deciding these cases would require a Laplace’s demon that thoroughly understands human history, psychology, cultural values, personal histories, social norms, and the particularities of each case. Even then we’d probably argue against the demon the way Abraham argued with God to spare Sodom and Gomorrah.
So we can’t expect our machines to be better than we are at applying moral principles to particular cases. We can only instruct them on what outputs should follow from particular inputs. That’s what Asimov’s Three Laws do.23 For example, we’re going to want our system of self-driving cars to lower the number of traffic fatalities compared to today’s rates. If there’s some unexpected event on a highway—a deer leaps the fence, lightning strikes the car ahead of us—rather than giving the autonomous vehicles principles they have to apply, we’re going to instruct them to network with the other autonomous cars on the road to figure out the collective behaviors that would result in the fewest deaths. That’s an engineering problem, not a moral one.
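Stated as code, that instruction is strikingly small. The following is a hypothetical sketch, with invented maneuvers and invented fatality estimates standing in for whatever shared prediction the networked cars would actually compute:

```python
# Hypothetical: each candidate joint maneuver for the networked cars comes with a
# predicted fatality count from some assumed shared simulation. All numbers are invented.
candidate_maneuvers = {
    "all cars brake hard": 0.8,
    "lead car swerves, trailing cars brake": 0.3,
    "maintain current speed and spacing": 2.1,
}

# The "moral" decision reduces to taking the minimum; the value behind it
# (that lives matter) sits with the people who wrote the objective, not in the code.
best = min(candidate_maneuvers, key=candidate_maneuvers.get)
print(best)  # -> lead car swerves, trailing cars brake
```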
Behind the engineering design, there are of course values: we program autonomous cars to minimize fatalities because we value life. But the AI only has the instructions, not the values or principles. It’s like training a junkyard dog to bark when strangers enter the yard: the dog may follow your instructions but is unlikely to know that behind them is your principled commitment to the sanctity of private property.
But here we hit a knotty problem. Operationalizing values means getting as specific and exact as computers require. Deciding on the application of values entails messy, inexact, and never-ending discussions. For example, we talked in the coda to chapter 2 about the need to rein in AI so that in attempting to achieve the consequentialist goals we’ve given it—for example, above all, save lives on the highway, then reduce environmental impacts—these systems don’t override our moral principles, especially of fairness.24 Good. We don’t want AI to repeat or, worse, amplify historic inequities.
But how do we wrangle our values into the precision computers require? For example, if a machine learning system is going through job applications looking for people who should be interviewed, what percentage of women would count as fair? Fifty percent seems like a good starting point, but suppose the pool of women applicants is significantly lower than that because gender bias has minimized their presence in that field. Should we require 50 percent anyway? Should we start out at, say, 30 percent and commit to heading up to 50 percent over time? Perhaps we should start at, say, 70 percent to make up for the historical inequity. What’s the right number? How do we decide?
And it quickly gets far more complex. Machine learning experts are still coming up with variations on fairness that are couched in the operational terms that computers understand. For example, “Equal Opportunity” fairness, as its originator Moritz Hardt calls it, says that it’s not enough that the people a machine learning system recommends be granted loans (for example) represent the general demographic breakdown, or the breakdown of those who applied for a loan. If that’s all that fairness requires, then you could stuff the acceptance pool with randomly chosen demographic members, including people the machine learning system thinks are terrible risks for loans. Instead, Hardt argues, you want to make sure that, among the men and women who would in fact repay their loans, the same percentage of each group is granted one.25 Others have suggested that this doesn’t go far enough: fairness requires that the percentage of men and women who succeed and the percentage who were wrongly denied loans (wrongly because they would have paid them back) be the same for the genders. And from there the conversation gets really complex.
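To see just how operational these definitions are, here is a toy sketch with invented data. It checks the Equal Opportunity idea (among the applicants who would in fact have repaid, is the approval rate the same for men and women?) and also reports the wrongful-denial rate that the stricter variant cares about. It illustrates the kind of check involved; it is not Hardt’s code.

```python
def group_rates(decisions):
    # decisions: a list of (approved, would_repay) pairs for one group
    repayers = [approved for approved, would_repay in decisions if would_repay]
    rightly_approved = sum(repayers) / len(repayers)  # approval rate among true repayers
    wrongly_denied = 1 - rightly_approved             # denial rate among true repayers
    return rightly_approved, wrongly_denied

# Toy, invented data: (was the loan approved?, would the applicant actually have repaid?)
men = [(True, True), (True, True), (False, True), (True, False), (False, False)]
women = [(True, True), (False, True), (False, True), (False, False), (True, False)]

for label, group in (("men", men), ("women", women)):
    approved, denied = group_rates(group)
    print(f"{label}: approved {approved:.0%} of would-be repayers, wrongly denied {denied:.0%}")
# Equal Opportunity holds only if the approval rate among would-be repayers is the same
# for both groups; in this toy data it is 67 percent for men and 33 percent for women.
```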
Those are just some of the types of fairness that machine learning experts are discussing. There are many more. In fact, one talk at a conference on fairness and machine learning was titled “21 Definitions of Fairness and Their Politics,” although it was mischievously overstating the situation.26
Whatever particular flavor of fairness we decide is appropriate in this or that case, computers’ need for precise instructions is forcing us to confront a truth we have generally been able to avoid: we humans are far more clear and certain about what is unfair than what is fair.
That sort of imbalance is far from unusual. The British philosopher J. L. Austin made the same sort of point when he argued against the usefulness of “reality” as a philosophical concept.27 We use the word real mainly when we need to distinguish something from the many ways in which it can be unreal: a real car and not a toy, a counterfeit, a phantasm, a hallucination, a stage prop, a wish, and so many more. We have a large and quite clear vocabulary for the ways in which things can be unreal. But from this we should not conclude that there must be a clear and distinct way in which something can be real.
Similarly, there are many ways a situation can be unfair. We are quite good at spotting them. But that does not mean that the meaning of fair is anywhere near as clear. That’s not to say that fairness is a useless concept. On the contrary. But it plays a different role from unfairness. When we declare something to be unfair, we are not merely stating a fact. We declare unfairness as a way to initiate the sense of outrage that generates solidarity—those who agree are your cohort—and action. On the other hand, we rarely yell, “That’s fair!” Far more often, it’s said with a shrug intended to end a discussion, not to open one.
AI is going to force us to make decisions about fairness at levels of precision that we previously could ignore or gloss over. It will take contentious political and judicial processes to resolve these issues. Operationalized fairness’s demand for precision can make fairness look more like a deal than an ideal. Not that there’s anything wrong with that.
But we may learn another lesson as well, one that further diminishes our impulse to consult the Land of Ought for moral guidance. In her 1982 book, In a Different Voice, Carol Gilligan argues that men tend to look for what is the principled thing to do, while women tend to do that which cares for the person in need; men’s eyes look up to the Land of Ought, while women look into the eyes of the people affected. Gilligan of course knows she’s generalizing, and it’s entirely possible that the generalization holds less well than it did forty years ago. But the distinction is real and goes beyond gender.
The upward glance to moral principles turns away from the concrete particularities of a case, locating moral value in the principle those particularities get subsumed under. In a similar way, when utilitarians sum up the aggregate pleasure and pain an action will bring, they are locating moral goodness in that aggregate, not in the particularities of each case. Now, utilitarians would properly push back that the sum in fact reflects each person’s pleasure or pain, but to do their calculations the utilitarians have to at least momentarily turn away from the individuals they’re quantifying; one of the complaints against Robert McNamara’s leadership of the Department of Defense during the Vietnam War was the use of “body counts” as a metric of success. So both deontologists and utilitarians honor the Ought as something above and beyond the individuals affected, albeit in different ways.