Farsighted


by Steven Johnson


  Other descendants of Bentham’s equation do not rely exclusively on monetary assessments. One heavily mathematical approach goes by the name “linear value modeling” (LVM), and it is employed widely in making astute planning decisions like the one the citizens of New York failed to make with the destruction of Collect Pond. The formula goes something like this: Once you have mapped the decision, explored alternative options, and built a predictive model of outcomes, you then write down a list of the values that are most important to you. Think back to Darwin’s personal choice of whether to marry. His values included freedom, companionship, the clever conversation of men at clubs, having children, and many others. Just as Franklin suggested in his original description of the pros-vs.-cons list, a values model requires that you give each of those values a weight, a measure of their relative importance to you. (Darwin, for instance, seems to have valued the promise of lifelong companionship and children more highly than the clever men in the clubs.) In the most mathematical version of this approach, you give each value a weight somewhere between 0 and 1. If the clever conversation is secondary to you, you give it a .25, while the prospect of having children might get a .90.

  With the values properly weighted, you then turn to the scenarios you’ve developed for each of the options on the table, and you grade each option, on a scale of 0 to 100, according to how well it addresses each of your core values. Remaining a bachelor scores very poorly on the “having children” value, but does better on the clever conversation front. Once you’ve established those grades for each scenario, you then do some elemental math: multiply each grade by the weight of its value and add up the numbers for each scenario. The scenario with the highest score wins. Had Darwin built a values model for his decision, the ledger might have looked like this:

  VALUES                 WEIGHTS   SCENARIO A: UNMARRIED   SCENARIO B: MARRIED
  Lack of quarreling       .25              80                      30
  Children                 .75               0                      70
  Freedom                  .25              80                      10
  Lower expenses           .50             100                      10
  Clever men in clubs      .10              80                      40
  Lifelong companion       .75              10                     100

  Adjusted by weight, the grades for each scenario look like this:

  VALUES                 WEIGHTS   SCENARIO A: UNMARRIED   SCENARIO B: MARRIED
  Lack of quarreling       .25              20                       7.5
  Children                 .75               0                      52.5
  Freedom                  .25              20                       2.5
  Lower expenses           .50              50                       5
  Clever men in clubs      .10               8                       4
  Lifelong companion       .75               7.5                    75

  The result would have been the same as the one that Darwin eventually arrived at: a decisive victory for marriage—146.5 to 105.5—despite the fact that the bachelor option had higher grades for more than half the values.
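
  For readers who want to see the arithmetic spelled out, here is a minimal sketch, in Python, of the weighted-sum calculation behind the tables above. The variable names and dictionary layout are my own, chosen for illustration; the weights and grades are the ones from Darwin’s ledger.

  # Weights (0 to 1) for each value and grades (0 to 100) for each scenario,
  # taken from the Darwin tables above.
  weights = {
      "Lack of quarreling": 0.25, "Children": 0.75, "Freedom": 0.25,
      "Lower expenses": 0.50, "Clever men in clubs": 0.10,
      "Lifelong companion": 0.75,
  }
  grades = {
      "Unmarried": {"Lack of quarreling": 80, "Children": 0, "Freedom": 80,
                    "Lower expenses": 100, "Clever men in clubs": 80,
                    "Lifelong companion": 10},
      "Married": {"Lack of quarreling": 30, "Children": 70, "Freedom": 10,
                  "Lower expenses": 10, "Clever men in clubs": 40,
                  "Lifelong companion": 100},
  }

  def score(scenario):
      # Multiply each grade by the weight of its value, then sum the results.
      return sum(weights[v] * grades[scenario][v] for v in weights)

  for name in grades:
      print(name, score(name))  # Unmarried 105.5, Married 146.5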

  Franklin called his approach “moral algebra,” but values modeling is closer to a moral algorithm: a series of instructions for manipulating data that generates a result, in this case a numerical rating for the various options being considered. I suspect many of us will find this kind of calculation to be too reductive, taking a complex, emotional decision and compressing it down to a mathematical formula. But, of course, the whole process is dependent on the many steps that have preceded it: mapping the decision, imagining scenarios, conducting premortems, and holding charrettes. The weights and grades only work if they’re calculated at the end of a full-spectrum investigation of the choice at hand. Still, the same framework can be applied without actually doing the math: list your core values, think about their relative importance to you, sketch out how each scenario might impact those values, and, based on that more narrative exercise, make your decision.

  In situations where the choice involves more than two options, LVM practitioners often find this approach especially useful as a tool for eliminating weaker scenarios. Something about tallying up the numbers has a tendency to shine a particularly unforgiving light on a choice that fares poorly on almost all the metrics. (In the lingo, these are called “dominated” alternatives.) In the end, you might not make the ultimate choice between the two top rivals based exclusively on the numbers, but the numbers might have helped you pare down the list to just two alternatives worth considering. The values calculation helps you prune, after spending so much time growing alternative branches.
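
  One common formalization of that lingo, not spelled out above, is Pareto dominance: an option is dominated if some rival grades at least as well on every value and strictly better on at least one. A rough sketch of that screen, with hypothetical option names and grades:

  # Hypothetical grades (0 to 100) for three options across the same values.
  grades = {
      "Option A": {"Cost": 80, "Speed": 60, "Safety": 70},
      "Option B": {"Cost": 40, "Speed": 50, "Safety": 70},  # dominated by A
      "Option C": {"Cost": 30, "Speed": 90, "Safety": 80},
  }

  def is_dominated(option):
      # An option is dominated if some rival scores at least as well on
      # every value and strictly better on at least one.
      mine = grades[option]
      return any(
          rival != option
          and all(theirs[v] >= mine[v] for v in mine)
          and any(theirs[v] > mine[v] for v in mine)
          for rival, theirs in grades.items()
      )

  # Only the undominated options are worth carrying into the full weighted scoring.
  shortlist = [o for o in grades if not is_dominated(o)]
  print(shortlist)  # ['Option A', 'Option C']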

  In a way, the value-modeling approach is a descendant of Bentham and Mill’s “greatest happiness for the greatest number,” though at first glance, it might seem to be a more self-centered rendition of their moral calculus. But the values model needn’t be oriented exclusively around one’s personal interests and objectives. To begin with, the decision doesn’t have to be based on a single person’s values. In fact, value modeling turns out to be particularly useful for dealing with a decision where the stakeholders have competing values, because you can do calculations with different weights corresponding to the different perspectives of all the stakeholders. Darwin’s pros-vs.-cons ledger doesn’t easily scale up to the competing demands of a community. But the linear value modeling approach does. And, of course, the values you prioritize don’t have to be self-centered, either. If you give a high weight to “improving the well-being of the growing city of Manhattan by building a park,” by definition you bring a “greater number” into your calculations—greater, at least, than the small circle of your immediate family.
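
  To see how that scales, imagine, purely for illustration, two groups of stakeholders weighting the same civic values differently. The same scoring arithmetic then produces one set of totals per stakeholder, which keeps the competing priorities visible rather than averaging them away. A sketch under those assumptions, with invented names and numbers:

  # Invented stakeholder groups, each weighting the same civic values differently.
  stakeholder_weights = {
      "Neighborhood residents": {"Green space": 0.9, "Tax burden": 0.4, "New housing": 0.3},
      "Local developers":       {"Green space": 0.2, "Tax burden": 0.3, "New housing": 0.9},
  }
  # Invented grades (0 to 100) for two options against those values.
  option_grades = {
      "Build the park":     {"Green space": 90, "Tax burden": 30, "New housing": 10},
      "Sell to developers": {"Green space": 5,  "Tax burden": 70, "New housing": 90},
  }

  def score(option, weights):
      # The same weighted sum as before, just with a caller-supplied weight vector.
      return sum(weights[v] * option_grades[option][v] for v in weights)

  for group, weights in stakeholder_weights.items():
      for option in option_grades:
          print(group, "|", option, "|", score(option, weights))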

  The fact that these sorts of calculations can help us make more farsighted decisions raises one intriguing possibility. If we’re going to use mathematical algorithms in our deliberative process, what happens when we try to run these calculations in a machine whose native language is algorithmic?

  RISK MAGNITUDE

  In May 2012, Google filed patent #8781669 with the US Patent Office. The filing had an unlikely name for a company that had made its name filtering web searches: “Consideration of risks in active sensing for an autonomous vehicle.” The filing turned out to be one of the first public acknowledgments that Google was exploring self-driving cars.

  The patent filing outlines a long series of technical interactions between sensors, supplemented by diagrams of where those sensors are positioned on the car. But at its core, it is a description of how an autonomous vehicle would make difficult decisions. The filing contains a fascinating table, outlining exactly how the software controlling the car would consider risk when confronted with a dangerous situation on the road: A pedestrian jumps into your lane on a two-way street with oncoming traffic. What should the car decide to do?

  At first glance, that kind of choice might seem less pertinent to the subject matter of this book, given that these are the very antithesis of deliberative decisions for humans. At 40 mph, to deliberate for even a half second is effectively not to choose at all, because you will have already collided with the pedestrian before you settle on a path. But computers work at different speeds: faster at some things, slower (or flat-out incompetent) at others. One of those faster things is running through the spatial geometry—and physics—of a system with a moderate number of meaningful variables: a body walking through an intersection; an SUV hurtling toward you. Because those kinds of problems can be solved—though “solved” isn’t quite the right word for it, as we will see—at seemingly miraculous speeds, digital decision-making algorithms can condense some of the techniques that we have explored for farsighted decisions into a few nanoseconds. That’s why the table included in Google’s patent bears a meaningful resemblance to the tables of linear values modeling. Google’s self-driving car can shrink deliberation down to the speed of instinct.

  The table is a list of “bad events.” Some are catastrophic: getting hit by a truck, running over a pedestrian. Some are minor: losing information from a sensor on the car because it’s blocked by some object. Each bad event is scored with two key attributes: risk magnitude and probability. If the car barely crosses the median, there’s a low probability that it will collide with an oncoming car, but that collision itself would have high risk magnitude. If it swerves into the parking lane, the angle might obscure one of the cameras, but the likelihood of a high-magnitude collision might be reduced to zero. From these assessments, the software calculates a “risk penalty” for each action by multiplying a risk magnitude by the probability. Getting hit by an oncoming vehicle might be extremely unlikely (.01 percent), but the magnitude of the risk is so high that the software steers away from options that might lead to that outcome, even if many of the other “bad events” are more than a thousand times more likely to happen.

  BAD EVENT                                      RISK MAGNITUDE   PROBABILITY (%)             RISK PENALTY
  Getting hit by a large truck                        5,000           0.01%                        0.5
  Getting hit by an oncoming vehicle                 20,000           0.01%                        2
  Getting hit from behind by a vehicle
    approaching in the left-hand lane                10,000           0.03%                        3
  Hitting a pedestrian who runs into
    the middle of the road                          100,000           0.001%                       1
  Losing information that is provided by
    the camera in current position                       10          10%                          1
  Losing information that is provided by
    other sensor in current position                      2          25%                          0.5
  Interference with path planning involving
    right turn at traffic light                          50          100% (if turn is planned)    50/0

  As the car confronts a dynamic situation on the road, it rapidly assembles multiple versions of this table, based on the potential actions it can take: swerving left, swerving right, slamming on the brakes, and so on. Each action contains a different set of probabilities for all the potential risks. Swerving right away from oncoming traffic reduces the risk of a head-on collision to almost zero, but still leaves a meaningful possibility that you’ll collide with the pedestrian. The risk magnitude scores are effectively the car’s moral compass, a distant descendant of Bentham’s utilitarian analysis: it is better to interfere with path planning for an upcoming right turn than it is to run over a pedestrian, because the former will lead to greater good for the greater number, particularly that pedestrian. In the Bad Events Table, the moral code is expressed numerically: in this example, the software assumes running over a pedestrian is five times worse than colliding with an oncoming vehicle—presumably because the car is traveling at a speed where the pedestrian would likely die, but the occupants of both cars would survive the collision. At higher speeds, the risk magnitudes would be different.
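
  The patent describes far more machinery than a single table, but the core arithmetic is easy to sketch. The risk magnitudes below come from the Bad Events Table; the candidate actions and their per-action probabilities are invented here for illustration, not taken from the filing.

  # Risk magnitudes from the Bad Events Table above.
  RISK_MAGNITUDE = {
      "hit by large truck": 5_000,
      "hit by oncoming vehicle": 20_000,
      "hit pedestrian": 100_000,
      "camera blocked": 10,
  }

  # Each candidate action carries its own probability (as a fraction) for every
  # bad event it might trigger; these probabilities are invented for illustration.
  candidate_actions = {
      "swerve left":  {"hit by oncoming vehicle": 0.0001, "hit pedestrian": 0.00001},
      "swerve right": {"camera blocked": 0.10, "hit pedestrian": 0.00005},
      "brake hard":   {"hit by large truck": 0.0001, "hit pedestrian": 0.0002},
  }

  def total_risk_penalty(action):
      # Risk penalty for one bad event = magnitude x probability; an action's
      # total is the sum over every bad event it could trigger.
      return sum(RISK_MAGNITUDE[event] * p
                 for event, p in candidate_actions[action].items())

  # The car favors the action with the lowest total risk penalty.
  best = min(candidate_actions, key=total_risk_penalty)
  print(best, total_risk_penalty(best))  # swerve left 3.0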

  The Bad Events Table is a kind of mirror-image version of the values model. Our values model reconstruction of Darwin’s pros-vs.-cons list created weights for all the positive outcomes he wished to achieve in life: clever conversation, a family, companionship. The Google table creates weights for all the negative outcomes, and it modifies those weights with probability assessments. Although it was designed to make split-second decisions, the structure of the Bad Events Table has important lessons for human beings making deliberative decisions about events that might unfold over months or years. For starters, it deliberately includes the kind of probability assessments that were so important to the internal debate over the bin Laden raid. And it forces us to consider not just our objectives and values, but also something we can be too quick to dismiss: the highly unlikely catastrophe. Some outcomes are so disastrous that it’s prudent to avoid them at any cost, even if their likelihood is slim. Taking the time to generate your own Bad Events Table for a complex decision you’re mulling over keeps your mind from focusing exclusively on the upside.

  Uncertainty, as Herbert Simon famously demonstrated, is an inevitable factor in any complex decision, however farsighted the decision-maker might be. If we had perfect clairvoyance about the downstream consequences of our choices, we wouldn’t need all the strategies of premortems and scenario plans to help us imagine the future. But there are ways to mitigate that uncertainty in the act of deciding. The first is to avoid the tendency to focus exclusively on the most likely outcome. When people are lucky enough to hit upon an option that seems likely to generate the best possible results, given all the variables at play, they naturally tend to fixate on that path and not think about the less likely outcomes that reside within the cone of uncertainty. A decision path where there’s a 70 percent chance of arriving at a great outcome but a 30 percent chance of a disastrous one is a very different kind of choice from one where the 30 percent chance is not ideal, but is tolerable. And so part of the art of deciding lies in making peace with the less likely outcome as a fail-safe measure. McRaven and his team had good reason to believe that the Pakistanis would ultimately understand why the Americans felt the need to invade their airspace without warning during the bin Laden raid, but they also recognized that their allies might see the raid as a betrayal and seek some kind of retribution. And so they established the alternate supply route to the troops in Afghanistan as a way of getting comfortable with that potential outcome. But if there’s no way around the second most likely outcome being a catastrophic one, it’s probably time to go back and look for another path forward.
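
  The contrast in that 70/30 example can be made concrete with a few lines of arithmetic. The payoff numbers below are invented placeholders; the point is only that two paths can look identical on their most likely outcome while differing sharply in how livable their unlikely outcome is.

  # Two hypothetical paths: (probability, payoff) pairs, where higher payoff is better.
  paths = {
      "Path A": [(0.7, 100), (0.3, -500)],  # 30% chance of a disastrous outcome
      "Path B": [(0.7, 100), (0.3, 40)],    # 30% chance of a merely tolerable one
  }

  for name, outcomes in paths.items():
      expected = sum(p * payoff for p, payoff in outcomes)
      worst = min(payoff for _, payoff in outcomes)
      print(name, "expected value:", expected, "worst case:", worst)
  # Path A: expected -80, worst case -500.  Path B: expected 82, worst case 40.
  # Judged only by the most likely outcome, the two paths are indistinguishable.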

  Another way of mitigating uncertainty is to favor paths that allow modifications after you’ve embarked on them. Decision paths vary in terms of how much you can tinker with them after you’ve committed to one path over another. A path that promises a great outcome 70 percent of the time but doesn’t allow further iteration once you’ve made the final choice may, in the end, be less desirable than a decision that allows you to modify it after the fact. This is, in a sense, a version of the “minimum viable product” idea that is so fashionable in the tech sector today: Don’t try to ship the perfect product; ship the simplest product that might possibly be useful to your customer, and then refine and improve it once it’s out in the market. Thinking about a decision this way suggests a different variable to add to the linear values model: downstream flexibility. Moving to a new town and buying a house has less downstream flexibility than moving to a new town and renting. The third option that Darwin didn’t dare include on his pros-vs.-cons list—move in with Emma without marrying and see how they get along before tying the knot—has become far more commonplace today, precisely because it gives you more flexibility if things don’t go as planned in the future. If there are paths on the table that allow that kind of downstream flexibility, they might well be the most strategic ones to take, given the uncertainty and complexity of the future. We have a tendency to value the decisive leader, the one who makes a hard choice and sticks with it. But sometimes the most farsighted decisions are the ones that leave room for tinkering down the line.

 
