Rationality: From AI to Zombies


by Eliezer Yudkowsky


  So you explain away the beauty of a flower?

  “No. I explain it. Of course there’s a story behind the beauty of flowers, behind the fact that we find them beautiful. Behind ordered events, one finds ordered stories; and what has no story is the product of random noise, which is hardly any better. If you cannot take joy in things that have stories behind them, your life will be empty indeed. I don’t think I take any less joy in a flower than you do. More so, perhaps, because I take joy in its story as well.”

  Perhaps, as you say, there is no surprise from a causal viewpoint—no disruption of the physical order of the universe. But it still seems to me that, in this creation of humans by evolution, something happened that is precious and marvelous and wonderful. If we cannot call it a physical miracle, then call it a moral miracle.

  “Because it’s only a miracle from the perspective of the morality that was produced, thus explaining away all of the apparent coincidence from a merely causal and physical perspective?”

  Well . . . I suppose you could interpret the term that way, yes. I just meant something that was immensely surprising and wonderful on a moral level, even if it is not surprising on a physical level.

  “I think that’s what I said.”

  But it still seems to me that you, from your own view, drain something of that wonder away.

  “Then you have problems taking joy in the merely real. Love has to begin somehow. It has to enter the universe somewhere. It is like asking how life itself begins—and though you were born of your father and mother, and they arose from their living parents in turn, if you go far and far and far away back, you will finally come to a replicator that arose by pure accident—the border between life and unlife. So too with love.

  “A complex pattern must be explained by a cause that is not already that complex pattern. Not just the event must be explained, but the very shape and form. For love to first enter Time, it must come of something that is not love; if this were not possible, then love could not be.

  “Even as life itself required that first replicator to come about by accident, parentless but still caused: far, far back in the causal chain that led to you: 3.85 billion years ago, in some little tidal pool.

  “Perhaps your children’s children will ask how it is that they are capable of love.

  “And their parents will say: Because we, who also love, created you to love.

  “And your children’s children will ask: But how is it that you love?

  “And their parents will reply: Because our own parents, who also loved, created us to love in turn.

  “Then your children’s children will ask: But where did it all begin? Where does the recursion end?

  “And their parents will say: Once upon a time, long ago and far away, ever so long ago, there were intelligent beings who were not themselves intelligently designed. Once upon a time, there were lovers created by something that did not love.

  “Once upon a time, when all of civilization was a single galaxy and a single star: and a single planet. A place called Earth.

  “Long ago, and far away, ever so long ago.”

  *

  Part W

  Quantified Humanism

  281

  Scope Insensitivity

  Once upon a time, three groups of subjects were asked how much they would pay to save 2,000 / 20,000 / 200,000 migrating birds from drowning in uncovered oil ponds. The three groups answered $80, $78, and $88, respectively.1 This is scope insensitivity or scope neglect: the number of birds saved—the scope of the altruistic action—had little effect on willingness to pay.

  Similar experiments showed that Toronto residents would pay little more to clean up all polluted lakes in Ontario than polluted lakes in a particular region of Ontario,2 or that residents of four western US states would pay only 28% more to protect all 57 wilderness areas in those states than to protect a single area.3 People visualize “a single exhausted bird, its feathers soaked in black oil, unable to escape.”4 This image, or prototype, calls forth some level of emotional arousal that is primarily responsible for willingness-to-pay—and the image is the same in all cases. As for scope, it gets tossed out the window—no human can visualize 2,000 birds at once, let alone 200,000. The usual finding is that exponential increases in scope create linear increases in willingness-to-pay—perhaps corresponding to the linear time for our eyes to glaze over the zeroes; this small amount of affect is added, not multiplied, with the prototype affect. This hypothesis is known as “valuation by prototype.”
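
  To make “valuation by prototype” concrete, here is a minimal Python sketch of the hypothesis; the function and its constants are illustrative inventions, chosen only to roughly bracket the $80 and $88 endpoints above, not fitted to the data:

    import math

    def wtp_prototype(scope, prototype_affect=67.0, scope_weight=4.0):
        # Toy model: a fixed emotional response to the prototype image,
        # plus a small additive term in the order of magnitude of the scope.
        # Both constants are illustrative, not fitted.
        return prototype_affect + scope_weight * math.log10(scope)

    for birds in (2_000, 20_000, 200_000):
        print(f"{birds:>7,} birds -> ${wtp_prototype(birds):.2f}")
    # 2,000 -> $80.20; 20,000 -> $84.20; 200,000 -> $88.20.
    # A 100x increase in scope adds only $8: exponential scope,
    # linear willingness-to-pay.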

  An alternative hypothesis is “purchase of moral satisfaction.” People spend enough money to create a warm glow in themselves, a sense of having done their duty. The level of spending needed to purchase a warm glow depends on personality and financial situation, but it certainly has nothing to do with the number of birds.

  We are insensitive to scope even when human lives are at stake: Increasing the alleged risk of chlorinated drinking water from 0.004 to 2.43 annual deaths per 1,000—a factor of 600—increased willingness-to-pay from $3.78 to $15.23.5 Baron and Greene found no effect from varying lives saved by a factor of 10.6

  A paper entitled “Insensitivity to the value of human life: A study of psychophysical numbing” collected evidence that our perception of human deaths follows Weber’s Law—obeys a logarithmic scale where the “just noticeable difference” is a constant fraction of the whole. A proposed health program to save the lives of Rwandan refugees garnered far higher support when it promised to save 4,500 lives in a camp of 11,000 refugees, rather than 4,500 in a camp of 250,000. A potential disease cure had to promise to save far more lives in order to be judged worthy of funding, if the disease was originally stated to have killed 290,000 rather than 160,000 or 15,000 people per year.7
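
  A minimal sketch of the Weber’s Law model, assuming perceived magnitude grows logarithmically with the death toll (the function and constant are illustrative, not taken from the paper):

    import math

    def perceived_magnitude(deaths, k=1.0):
        # Weber's Law as a toy model: perception is logarithmic, so a
        # "just noticeable difference" is a constant fraction of the toll.
        return k * math.log(deaths)

    for camp in (11_000, 250_000):
        felt = perceived_magnitude(camp) - perceived_magnitude(camp - 4_500)
        print(f"camp of {camp:>7,}: felt improvement = {felt:.3f}")
    # camp of  11,000: felt improvement = 0.527
    # camp of 250,000: felt improvement = 0.018
    # Saving the same 4,500 lives "feels" roughly 30x smaller against
    # the larger baseline.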

  The moral: If you want to be an effective altruist, you have to think it through with the part of your brain that processes those unexciting inky zeroes on paper, not just the part that gets real worked up about that poor struggling oil-soaked bird.

  *

  1. William H. Desvousges et al., Measuring Nonuse Damages Using Contingent Valuation: An Experimental Evaluation of Accuracy, technical report (Research Triangle Park, NC: RTI International, 2010), doi:10.3768/rtipress.2009.bk.0001.1009.

  2. Daniel Kahneman, “Comments by Professor Daniel Kahneman,” in Valuing Environmental Goods: An Assessment of the Contingent Valuation Method, ed. Ronald G. Cummings, David S. Brookshire, and William D. Schulze, vol. 1.B, Experimental Methods for Assessing Environmental Benefits (Totowa, NJ: Rowman & Allanheld, 1986), 226–235, http://yosemite.epa.gov/ee/epa/eerm.nsf/vwAN/EE-0280B-04.pdf/$file/EE-0280B-04.pdf.

  3. Daniel L. McFadden and Gregory K. Leonard, “Issues in the Contingent Valuation of Environmental Goods: Methodologies for Data Collection and Analysis,” in Contingent Valuation: A Critical Assessment, ed. Jerry A. Hausman, Contributions to Economic Analysis 220 (New York: North-Holland, 1993), 165–215, doi:10.1108/S0573-8555(1993)0000220007.

  4. Kahneman, Ritov, and Schkade, “Economic Preferences or Attitude Expressions?”

  5. Richard T. Carson and Robert Cameron Mitchell, “Sequencing and Nesting in Contingent Valuation Surveys,” Journal of Environmental Economics and Management 28, no. 2 (1995): 155–173, doi:10.1006/jeem.1995.1011.

  6. Jonathan Baron and Joshua D. Greene, “Determinants of Insensitivity to Quantity in Valuation of Public Goods: Contribution, Warm Glow, Budget Constraints, Availability, and Prominence,” Journal of Experimental Psychology: Applied 2, no. 2 (1996): 107–125, doi:10.1037/1076-898X.2.2.107.

  7. David Fetherstonhaugh et al., “Insensitivity to the Value of Human Life: A Study of Psychophysical Numbing,” Journal of Risk and Uncertainty 14, no. 3 (1997): 283–300, doi:10.1023/A:1007744326393.

  282

  One Life Against the World

  Whoever saves a single life, it is as if he had saved the whole world.

  —The Talmud, Sanhedrin 4:5

  It’s a beautiful thought, isn’t it? Feel that warm glow.

  I can testify that helping one person feels just as good as helping the whole world. Once upon a time, when I was burned out for the day and wasting time on the Internet—it’s a bit complicated, but essentially I managed to turn someone’s whole life around by leaving an anonymous blog comment. I wasn’t expecting it to have an effect that large, but it did. When I discovered what I had accomplished, it gave me a tremendous high. The euphoria lasted through that day and into the night, only wearing off somewhat the next morning. It felt just as good (this is the scary part) as the euphoria of a major scientific insight, which had previously been my best referent for what it might feel like to do drugs.

  Saving one life probably does feel just as good as being the first person to realize what makes the stars shine. It probably does feel just as good as saving the entire world.

  But if you ever have a choice, dear reader, between saving a single life and saving the whole world—then save the world. Please. Because beyond that warm glow is one heck of a gigantic difference. For some people, the notion that saving the world is significantly better than saving one human life will be obvious, like saying that six billion dollars is worth more than one dollar, or that six cubic kilometers of gold weighs more than one cubic meter of gold. (And never mind the expected value of posterity.) Why might it not be obvious? Well, suppose there’s a qualitative duty to save what lives you can—then someone who saves the world, and someone who saves one human life, are just fulfilling the same duty. Or suppose that we follow the Greek conception of personal virtue, rather than consequentialism; someone who saves the world is virtuous, but not six billion times as virtuous as someone who saves one human life. Or perhaps the value of one human life is already too great to comprehend—so that the passing grief we experience at funerals is an infinitesimal underestimate of what is lost—and thus passing to the entire world changes little.

  I agree that one human life is of unimaginably high value. I also hold that two human lives are twice as unimaginably valuable. Or to put it another way: Whoever saves one life, it is as if they had saved the whole world; whoever saves ten lives, it is as if they had saved ten worlds. Whoever actually saves the whole world—not to be confused with pretend rhetorical saving the world—it is as if they had saved an intergalactic civilization.

  Two deaf children are sleeping on the railroad tracks, the train speeding down; you see this, but you are too far away to save the children. I’m nearby, within reach, so I leap forward and drag one child off the railroad tracks—and then stop, calmly sipping a Diet Pepsi as the train bears down on the second child. “Quick!” you scream to me. “Do something!” But (I call back) I already saved one child from the train tracks, and thus I am “unimaginably” far ahead on points. Whether I save the second child, or not, I will still be credited with an “unimaginably” good deed. Thus, I have no further motive to act. Doesn’t sound right, does it?

  Why should it be any different if a philanthropist spends $10 million on curing a rare but spectacularly fatal disease which afflicts only a hundred people planetwide, when the same money has an equal probability of producing a cure for a less spectacular disease that kills 10% of 100,000 people? I don’t think it is different. When human lives are at stake, we have a duty to maximize, not satisfice; and this duty has the same strength as the original duty to save lives. Whoever knowingly chooses to save one life, when they could have saved two—to say nothing of a thousand lives, or a world—they have damned themselves as thoroughly as any murderer.
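
  A minimal sketch of that expected-value comparison (the success probability p and the function name are assumptions for illustration; since p is the same for both programs, its exact value cancels out of the comparison):

    def expected_lives_saved(p_success, lives_if_cured):
        # Expected value: probability of a cure times lives saved given a cure.
        return p_success * lives_if_cured

    p = 0.1  # assumed equal for both programs; any value gives the same ratio

    rare   = expected_lives_saved(p, 100)             # a hundred people planetwide
    common = expected_lives_saved(p, 0.10 * 100_000)  # 10% of 100,000 people

    print(rare, common)  # 10.0 vs. 1000.0: a hundredfold difference in expected
                         # lives saved for the same $10 million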

  It’s not cognitively easy to spend money to save lives, since cliché methods that instantly leap to mind don’t work or are counterproductive. (I will write later on why this tends to be so.) Stuart Armstrong also points out that if we are to disdain the philanthropist who spends life-saving money inefficiently, we should be consistent and disdain more those who could spend money to save lives but don’t.

  *

  283

  The Allais Paradox

  Choose between the following two options:

  1A. $24,000, with certainty.

  1B. 33/34 chance of winning $27,000, and 1/34 chance of winning nothing.

  Which seems more intuitively appealing? And which one would you choose in real life?

  Now which of these two options would you intuitively prefer, and which would you choose in real life?

  2A. 34% chance of winning $24,000, and 66% chance of winning nothing.

  2B. 33% chance of winning $27,000, and 67% chance of winning nothing.

  The Allais Paradox—as Allais called it, though it’s not really a paradox—was one of the first conflicts between decision theory and human reasoning to be experimentally exposed, in 1953.1 I’ve modified it slightly for ease of math, but the essential problem is the same: Most people prefer 1A to 1B, and most people prefer 2B to 2A. Indeed, in within-subject comparisons, a majority of subjects express both preferences simultaneously.

  This is a problem because the 2s are equal to a 34% chance of playing the 1s. That is, 2A is equivalent to playing gamble 1A with 34% probability, and 2B is equivalent to playing 1B with 34% probability.
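
  You can verify the equivalence with exact arithmetic. A minimal Python sketch, representing each gamble as a map from dollar outcomes to probabilities (the representation is a choice of convenience, not anything from the literature):

    from fractions import Fraction

    gamble_1A = {24_000: Fraction(1)}
    gamble_1B = {27_000: Fraction(33, 34), 0: Fraction(1, 34)}

    def play_with_probability(gamble, p):
        # Play `gamble` with probability p; otherwise win nothing.
        compound = {0: 1 - p}
        for outcome, prob in gamble.items():
            compound[outcome] = compound.get(outcome, Fraction(0)) + p * prob
        return compound

    p = Fraction(34, 100)
    print(play_with_probability(gamble_1A, p))  # {0: 66%, 24000: 34%} -- exactly 2A
    print(play_with_probability(gamble_1B, p))  # {0: 67%, 27000: 33%} -- exactly 2B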

  Among the axioms used to prove that “consistent” decisionmakers can be viewed as maximizing expected utility is the Axiom of Independence: If X is strictly preferred to Y, then a probability P of X and (1 - P) of Z should be strictly preferred to P chance of Y and (1 - P) chance of Z. In the gambles above, take X to be 1A, Y to be 1B, Z to be winning nothing, and P to be 34%: then 2A and 2B are precisely the two compound gambles the axiom compares, so preferring 1A to 1B while preferring 2B to 2A violates Independence.

  All the axioms are consequences, as well as antecedents, of a consistent utility function. So it must be possible to prove that the experimental subjects above can’t have a consistent utility function over outcomes. And indeed, you can’t simultaneously have:

  U($24,000) > (33/34) × U($27,000) + (1/34) × U($0)

  0.34 × U($24,000) + 0.66 × U($0) < 0.33 × U($27,000) + 0.67 × U($0).

  These two inequalities are algebraically inconsistent, regardless of U, so the Allais Paradox has nothing to do with the diminishing marginal utility of money.
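
  To see the inconsistency explicitly, multiply the first inequality through by 0.34 and add 0.66 × U($0) to both sides:

  0.34 × U($24,000) + 0.66 × U($0) > 0.33 × U($27,000) + 0.67 × U($0).

  This is exactly the reverse of the second inequality, so no assignment of utilities can satisfy both preferences at once.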

  Maurice Allais initially defended the revealed preferences of the experimental subjects—he saw the experiment as exposing a flaw in the conventional ideas of utility, rather than exposing a flaw in human psychology. This was 1953, after all, and the heuristics-and-biases movement wouldn’t really get started for another two decades. Allais thought his experiment just showed that the Axiom of Independence clearly wasn’t a good idea in real life.

  (How naive, how foolish, how simplistic is Bayesian decision theory . . .)

  Surely the certainty of having $24,000 should count for something. You can feel the difference, right? The solid reassurance?

  (I’m starting to think of this as “naive philosophical realism”—supposing that our intuitions directly expose truths about which strategies are wiser, as though it were a directly perceived fact that “1A is superior to 1B.” Intuitions directly expose truths about human cognitive functions, and only indirectly expose (after we reflect on the cognitive functions themselves) truths about rationality.)

  “But come now,” you say, “is it really such a terrible thing to depart from Bayesian beauty?” Okay, so the subjects didn’t follow the neat little “independence axiom” espoused by the likes of von Neumann and Morgenstern. Yet who says that things must be neat and tidy?

  Why fret about elegance, if it makes us take risks we don’t want? Expected utility tells us that we ought to assign some kind of number to an outcome, and then multiply that value by the outcome’s probability, add them up, etc. Okay, but why do we have to do that? Why not make up more palatable rules instead?
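
  Here is what that procedure amounts to, as a minimal Python sketch. The risk-neutral utility U($x) = x is purely an illustrative assumption—a sufficiently risk-averse U could justify preferring 1A alone, but, as shown above, no U can justify both 1A and 2B:

    def expected_utility(gamble, utility):
        # Weight each outcome's utility by its probability, then sum.
        return sum(p * utility(outcome) for outcome, p in gamble.items())

    u = lambda dollars: dollars  # risk-neutral utility, for illustration only

    print(expected_utility({24_000: 1.0}, u))             # 24000.0
    print(expected_utility({27_000: 33/34, 0: 1/34}, u))  # ~26205.88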

  There is always a price for leaving the Bayesian Way. That’s what coherence and uniqueness theorems are all about.

  In this case, if an agent prefers 1A to 1B, and 2B to 2A, it introduces a form of preference reversal—a dynamic inconsistency in the agent’s planning. You become a money pump.

  Suppose that at 12:00 p.m. I roll a hundred-sided die. If the die shows a number greater than 34, the game terminates. Otherwise, at 12:05 p.m. I consult a switch with two settings, A and B. If the setting is A, I pay you $24,000. If the setting is B, I roll a 34-sided die and pay you $27,000 unless the die shows “34,” in which case I pay you nothing.

  Let’s say you prefer 1A over 1B, and 2B over 2A, and you would pay a single penny to indulge each preference. The switch starts in state A. Before 12:00 p.m., you pay me a penny to throw the switch to B. The die comes up 12. After 12:00 p.m. and before 12:05 p.m., you pay me a penny to throw the switch to A.

  I have taken your two cents on the subject. If you indulge your intuitions, and dismiss mere elegance as a pointless obsession with neatness, then don’t be surprised when your pennies get taken from you . . .
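
  A minimal simulation of the game above, assuming an agent that pays a penny before noon to indulge its 2B-over-2A preference and, if the gamble goes live, another penny to indulge its 1A-over-1B preference:

    import random

    def pennies_extracted():
        # Before noon the setup looks like 2A vs. 2B, so the agent pays
        # a penny to have the switch thrown from A to B.
        pennies = 1
        if random.randint(1, 100) > 34:
            return pennies  # the game terminates; the penny is already spent
        # The die came up 34 or less: the choice now looks like 1A vs. 1B,
        # so the agent pays another penny to throw the switch back to A.
        pennies += 1
        return pennies

    random.seed(0)
    rounds = 100_000
    print(sum(pennies_extracted() for _ in range(rounds)) / rounds)
    # ~1.34 pennies per round, on average, paid to leave the switch
    # exactly where it started whenever the gamble is actually played.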

  (I think the same failure to proportionally devalue the emotional impact of small probabilities is responsible for the lottery.)

  *

  1. Maurice Allais, “Le Comportement de l’Homme Rationnel devant le Risque: Critique des Postulats et Axiomes de l’Ecole Americaine,” Econometrica 21, no. 4 (1953): 503–546, doi:10.2307/1907921; Daniel Kahneman and Amos Tversky, “Prospect Theory: An Analysis of Decision Under Risk,” Econometrica 47, no. 2 (1979): 263–291.

  284

  Zut Allais!

  Huh! I was not expecting so many commenters to defend the preference reversal. Looks like I ran into an inferential distance.

  It probably helps in interpreting the Allais Paradox to have absorbed more of the gestalt of the field of heuristics and biases, such as:

  Experimental subjects tend to defend incoherent preferences even when they’re really silly.

  People put very high values on small shifts in probability away from 0 or 1 (the certainty effect).

  Let’s start with the issue of incoherent preferences—preference reversals, dynamic inconsistency, money pumps, that sort of thing.
