Rationality: From AI to Zombies


by Eliezer Yudkowsky


  Hence the old saying: “Money makes the world go ‘round, love barely keeps it from blowing up.”

  Now, we do have the problem of akrasia—of not being able to do what we’ve decided to do—which is a part of the art of rationality that I hope someone else will develop; I specialize more in the impossible questions business. And yes, spending money is more painful than volunteering, because you can see the bank account number go down, whereas the remaining hours of our span are not visibly numbered. But when it comes time to feed yourself, do you think, “Hm, maybe I should try raising my own cattle, that’s less painful than spending money on beef?” Not everything can get done without invoking Ricardo’s Law; and on the other end of that trade are people who feel just the same pain at the thought of having less money.

  It does seem to me offhand that there ought to be things doable to diminish the pain of losing hit points, and to increase the felt strength of the connection from donating money to “I did a good thing!” Some of that I am trying to accomplish right now, by emphasizing the true nature and power of money; and by inveighing against the poisonous meme saying that someone who gives mere money must not care enough to get personally involved. This is a mere reflection of a mind that doesn’t understand the post-hunter-gatherer concept of a market economy. The act of donating money is not the momentary act of writing the check; it is the act of every hour you spent to earn the money to write that check—just as though you worked at the charity itself in your professional capacity, at maximum, grownup efficiency.

  If the lawyer needs to work an hour at the soup kitchen to keep themselves motivated and remind themselves why they’re doing what they’re doing, that’s fine. But they should also be donating some of the hours they worked at the office, because that is the power of professional specialization. One might consider the check as buying the right to volunteer at the soup kitchen, or validating the time spent at the soup kitchen. More on this later.

  To a first approximation, money is the unit of caring up to a positive scalar factor—the unit of relative caring. Some people are frugal and spend less money on everything; but if you would, in fact, spend $5 on a burrito, then whatever you will not spend $5 on, you care about less than you care about the burrito. If you don’t spend two months’ salary on a diamond ring, it doesn’t mean you don’t love your Significant Other. (“De Beers: It’s Just A Rock.”) But conversely, if you’re always reluctant to spend any money on your Significant Other, and yet seem to have no emotional problems with spending $1,000 on a flat-screen TV, then yes, this does say something about your relative values.

  Yes, frugality is a virtue. Yes, spending money hurts. But in the end, if you are never willing to spend any units of caring, it means you don’t care.

  *

  325

  Purchase Fuzzies and Utilons Separately

  Previously:

  There is this very, very old puzzle/observation in economics about the lawyer who spends an hour volunteering at the soup kitchen, instead of working an extra hour and donating the money to hire someone . . .

  If the lawyer needs to work an hour at the soup kitchen to keep themselves motivated and remind themselves why they’re doing what they’re doing, that’s fine. But they should also be donating some of the hours they worked at the office, because that is the power of professional specialization. One might consider the check as buying the right to volunteer at the soup kitchen, or validating the time spent at the soup kitchen. More on this later.

  I hold open doors for little old ladies. I can’t actually remember the last time this happened literally (though I’m sure it has, sometime in the last year or so). But within the last month, say, I was out on a walk and discovered a station wagon parked in a driveway with its trunk completely open, giving full access to the car’s interior. I looked in to see if there were packages being taken out, but this was not so. I looked around to see if anyone was doing anything with the car. And finally I went up to the house and knocked, then rang the bell. And yes, the trunk had been accidentally left open.

  Under other circumstances, this would be a simple act of altruism, which might signify true concern for another’s welfare, or fear of guilt for inaction, or a desire to signal trustworthiness to oneself or others, or finding altruism pleasurable. I think that these are all perfectly legitimate motives, by the way; I might give bonus points for the first, but I wouldn’t deduct any penalty points for the others. Just so long as people get helped.

  But in my own case, since I already work in the nonprofit sector, the further question arises as to whether I could have better employed the same sixty seconds in a more specialized way, to bring greater benefit to others. That is: can I really defend this as the best use of my time, given the other things I claim to believe?

  The obvious defense—or, perhaps, obvious rationalization—is that an act of altruism like this one acts as a willpower restorer, much more efficiently than, say, listening to music. I also mistrust my ability to be an altruist only in theory; I suspect that if I walk past problems, my altruism will start to fade. I’ve never pushed that far enough to test it; it doesn’t seem worth the risk.

  But if that’s the defense, then my act can’t be defended as a good deed, can it? For these are self-directed benefits that I list.

  Well—who said that I was defending the act as a selfless good deed? It’s a selfish good deed. If it restores my willpower, or if it keeps me altruistic, then there are indirect other-directed benefits from that (or so I believe). You could, of course, reply that you don’t trust selfish acts that are supposed to be other-benefiting as an “ulterior motive”; but then I could just as easily respond that, by the same principle, you should just look directly at the original good deed rather than its supposed ulterior motive.

  Can I get away with that? That is, can I really get away with calling it a “selfish good deed,” and still derive willpower restoration therefrom, rather than feeling guilt about its being selfish? Apparently I can. I’m surprised it works out that way, but it does. So long as I knock to tell them about the open trunk, and so long as the one says “Thank you!,” my brain feels like it’s done its wonderful good deed for the day.

  Your mileage may vary, of course. The problem with trying to work out an art of willpower restoration is that different things seem to work for different people. (That is: We’re probing around on the level of surface phenomena without understanding the deeper rules that would also predict the variations.)

  But if you find that you are like me in this aspect—that selfish good deeds still work—then I recommend that you purchase warm fuzzies and utilons separately. Not at the same time. Trying to do both at the same time just means that neither ends up done well. If status matters to you, purchase status separately too!

  If I had to give advice to some new-minted billionaire entering the realm of charity, my advice would go something like this:

  To purchase warm fuzzies, find some hard-working but poverty-stricken woman who’s about to drop out of state college after her husband’s hours were cut back, and personally, but anonymously, give her a cashier’s check for $10,000. Repeat as desired.

  To purchase status among your friends, donate $100,000 to the current sexiest X-Prize, or whatever other charity seems to offer the most stylishness for the least price. Make a big deal out of it, show up for their press events, and brag about it for the next five years.

  Then—with absolute cold-blooded calculation—without scope insensitivity or ambiguity aversion—without concern for status or warm fuzzies—figuring out some common scheme for converting outcomes to utilons, and trying to express uncertainty in percentage probabilities—find the charity that offers the greatest expected utilons per dollar. Donate up to however much money you wanted to give to charity, until their marginal efficiency drops below that of the next charity on the list.
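
  As a rough sketch of that allocation rule—with made-up charities, utilon figures, and a toy model of diminishing returns, none of which are real estimates—the procedure might look something like this in code:

      # Toy greedy allocation: keep funding whichever charity currently offers
      # the most expected utilons per marginal dollar. All names and numbers
      # here are illustrative assumptions, not actual estimates.
      charities = {
          # name: (base utilons per dollar, dollars at which efficiency roughly halves)
          "direct_aid": (5.0, 2_000_000),
          "research": (8.0, 500_000),
          "advocacy": (3.0, 5_000_000),
      }
      given = {name: 0 for name in charities}

      def marginal_efficiency(name):
          """Expected utilons per marginal dollar, declining as funding accumulates."""
          base, half_point = charities[name]
          return base * half_point / (half_point + given[name])

      budget, step = 10_000_000, 10_000  # give in $10k increments
      while budget > 0:
          best = max(charities, key=marginal_efficiency)  # current best marginal buy
          given[best] += step
          budget -= step

      for name, amount in sorted(given.items()):
          print(f"{name}: ${amount:,}")

  The stopping rule in the paragraph above falls out of the loop automatically: a charity stops receiving money the moment its marginal efficiency dips below the next-best option's.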

  I would furthermore advise the billionaire that what they spend on utilons should be at least, say, 20 times what they spend on warm fuzzies—5% overhead on keeping yourself altruistic seems reasonable, and I, your dispassionate judge, would have no trouble validating the warm fuzzies against a multiplier that large. Save that the original fuzzy act really should be helpful rather than actively harmful.

  (Purchasing status seems to me essentially unrelated to altruism. If giving money to the X-Prize gets you more awe from your friends than an equivalently priced speedboat, then there’s really no reason to buy the speedboat. Just put the money under the “impressing friends” column, and be aware that this is not the “altruism” column.)

  But the main lesson is that all three of these things—warm fuzzies, status, and expected utilons—can be bought far more efficiently when you buy separately, optimizing for only one thing at a time. Writing a check for $10,000,000 to a breast-cancer charity—while far more laudable than spending the same $10,000,000 on, I don’t know, parties or something—won’t give you the concentrated euphoria of being present in person when you turn a single human’s life around, probably not anywhere close. It won’t give you as much to talk about at parties as donating to something sexy like an X-Prize—maybe a short nod from the other rich. And if you threw away all concern for warm fuzzies and status, there are probably at least a thousand underserved existing charities that could produce orders of magnitude more utilons with ten million dollars. Trying to optimize for all three criteria in one go only ensures that none of them end up optimized very well—just vague pushes along all three dimensions.

  Of course, if you’re not a millionaire or even a billionaire—then you can’t be quite as efficient about things, can’t so easily purchase in bulk. But I would still say—for warm fuzzies, find a relatively cheap charity with bright, vivid, ideally in-person and direct beneficiaries. Volunteer at a soup kitchen. Or just get your warm fuzzies from holding open doors for little old ladies. Let that be validated by your other efforts to purchase utilons, but don’t confuse it with purchasing utilons. Status is probably cheaper to purchase by buying nice clothes.

  And when it comes to purchasing expected utilons—then, of course, shut up and multiply.

  *

  326

  Bystander Apathy

  The bystander effect, also known as bystander apathy, is that larger groups are less likely to act in emergencies—not just individually, but collectively. Put an experimental subject alone in a room and let smoke start coming up from under the door. Seventy-five percent of the subjects will leave to report it. Now put three subjects in the room—real subjects, none of whom know what’s going on. On only 38% of the occasions will anyone report the smoke. Put the subject with two confederates who ignore the smoke, and they’ll only report it 10% of the time—even staying in the room until it becomes hazy.1
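
  (A back-of-envelope check, not a number from the study: this is a collective failure, not just an individual one. If three subjects each reported the smoke independently at the solo rate of 75%, at least one report would be nearly certain.)

      # Back-of-envelope comparison (not from the study): if three subjects each
      # reported the smoke independently at the solo rate of 75%, the chance that
      # at least one of them would report it is
      p_solo = 0.75
      p_any_of_three = 1 - (1 - p_solo) ** 3
      print(p_any_of_three)  # 0.984375, versus the observed 38%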

  On the standard model, the two primary drivers of bystander apathy are:

  Diffusion of responsibility—everyone hopes that someone else will be first to step up and incur any costs of acting. When no one does act, being part of a crowd provides an excuse and reduces the chance of being held personally responsible for the results.

  Pluralistic ignorance—people try to appear calm while looking for cues, and see . . . that the others appear calm.

  Cialdini:2

  Very often an emergency is not obviously an emergency. Is the man lying in the alley a heart-attack victim or a drunk sleeping one off? . . . In times of such uncertainty, the natural tendency is to look around at the actions of others for clues. We can learn from the way the other witnesses are reacting whether the event is or is not an emergency. What is easy to forget, though, is that everybody else observing the event is likely to be looking for social evidence, too. Because we all prefer to appear poised and unflustered among others, we are likely to search for that evidence placidly, with brief, camouflaged glances at those around us. Therefore everyone is likely to see everyone else looking unruffled and failing to act.

  Cialdini suggests that if you’re ever in emergency need of help, you point to one single bystander and ask them for help—making it very clear to whom you’re referring. Remember that the total group, combined, may have less chance of helping than one individual.

  I’ve mused a bit on the evolutionary psychology of the bystander effect. Suppose that in the ancestral environment, most people in your band were likely to be at least a little related to you—enough to be worth saving, if you were the only one who could do it. But if there are two others present, then the first person to act incurs a cost, while the other two both reap the genetic benefit of a partial relative being saved. Could there have been an arms race for who waited the longest?

  As far as I’ve followed this line of speculation, it doesn’t seem to be a good explanation—at the point where the whole group is failing to act, a gene that helps immediately ought to be able to invade, I would think. The experimental result is not a long wait before helping, but simply failure to help: if it’s a genetic benefit to help when you’re the only person who can do it (as does happen in the experiments) then the group equilibrium should not be no one helping (as happens in the experiments).

  So I don’t think an arms race of delay is a plausible evolutionary explanation. More likely, I think, is that we’re looking at a nonancestral problem. If the experimental subjects actually know the apparent victim, the chances of helping go way up (i.e., we’re not looking at the correlate of helping an actual fellow band member). If I recall correctly, if the experimental subjects know each other, the chances of action also go up.

  Nervousness about public action may also play a role. If Robin Hanson is right about the evolutionary role of “choking,” then being first to act in an emergency might also be taken as a dangerous bid for high status. (Come to think, I can’t actually recall seeing shyness discussed in analyses of the bystander effect, but that’s probably just my poor memory.)

  Can the bystander effect be explained primarily by diffusion of moral responsibility? We could be cynical and suggest that people are mostly interested in not being blamed for not helping, rather than having any positive desire to help—that they mainly wish to escape antiheroism and possible retribution. Something like this may well be a contributor, but two observations that militate against it are (a) the experimental subjects did not report smoke coming in from under the door, even though it could well have represented a strictly selfish threat, and (b) telling people about the bystander effect reduces the bystander effect, even though they’re no more likely to be held publicly responsible thereby.

  In fact, the bystander effect is one of the main cases I recall offhand where telling people about a bias actually seems able to strongly reduce it—maybe because the appropriate way to compensate is so obvious, and it’s not easy to overcompensate (as when you’re trying to e.g. adjust your calibration). So we should be careful not to be too cynical about the implications of the bystander effect and diffusion of responsibility, if we interpret individual action in terms of a cold, calculated attempt to avoid public censure. People seem at least to sometimes hold themselves responsible, once they realize they’re the only ones who know enough about the bystander effect to be likely to act.

  Though I wonder what happens if you know that you’re part of a crowd where everyone has been told about the bystander effect . . .

  *

  1. Bibb Latané and John M. Darley, “Bystander ‘Apathy,’” American Scientist 57, no. 2 (1969): 244–268, http://www.jstor.org/stable/27828530.

  2. Cialdini, Influence.

  327

  Collective Apathy and the Internet

  In the last essay I covered the bystander effect, a.k.a. bystander apathy: given a fixed problem situation, a group of bystanders is actually less likely to act than a single bystander. The standard explanation for this result is in terms of pluralistic ignorance (if it’s not clear whether the situation is an emergency, each person tries to look calm while darting their eyes at the other bystanders, and sees other people looking calm) and diffusion of responsibility (everyone hopes that someone else will be first to act; being part of a crowd diminishes the individual pressure to the point where no one acts).

  Which may be a symptom of our hunter-gatherer coordination mechanisms being defeated by modern conditions. You didn’t usually form task-forces with strangers back in the ancestral environment; it was mostly people you knew. And in fact, when all the subjects know each other, the bystander effect diminishes.

  So I know this is an amazing and revolutionary observation, and I hope that I don’t kill any readers outright from shock by saying this: but people seem to have a hard time reacting constructively to problems encountered over the Internet.

  Perhaps because our innate coordination instincts are not tuned for:

  Being part of a group of strangers. (When all subjects know each other, the bystander effect diminishes.)

  Being part of a group of unknown size, of strangers of unknown identity.

  Not being in physical contact (or visual contact); not being able to exchange meaningful glances.

  Not communicating in real time.

  Not being much beholden to each other for other forms of help; not being codependent on the group you’re in.

  Being shielded from reputational damage, or the fear of reputational damage, by your own apparent anonymity; no one is visibly looking at you, before whom your reputation might suffer from inaction.

  Being part of a large collective of other inactives; no one will single you out to blame.

  Not hearing a voiced plea for help.

 
