In some sense, both dilemmas are equivalent. Seeing this is not easy because it requires going against our intuitive body signals. But from a purely utilitarian perspective, from the motivations and consequences of our actions, the dilemmas are identical. We choose to act in order to save five at the expense of one. Or we choose to let fate take its course because we feel that we do not have the moral right to intervene, condemning someone who was not destined to die.
Yet from another perspective, both dilemmas are very different. In order to exaggerate the contrasts between them, we present an even more far-fetched dilemma.
You are a doctor on an almost deserted island. There are five patients, each with a different failing organ, and each could be restored to perfect health by a single transplant. Without the transplants they will all die. Someone shows up at the hospital with the flu. You know you could anaesthetize them and use their organs to save the other five. No one would ever know; your own conscience would be the only judge.
In this case, the vast majority of people presented with the dilemma would not take out the organs to save the other five, even to the point of considering the possibility aberrant. Only a few very extreme pragmatists choose to kill the patient with the flu. This third case again shares motivations and consequences with the previous dilemmas. The pragmatic doctor works according to a reasonable principle, that of saving five patients when the only option in the universe is one dying or five dying.
What changes in the three dilemmas, making them progressively more unacceptable, is the action one has to take. The first act is turning a wheel; the second, pushing someone; and the third, cutting into people with a surgical knife. Turning the wheel isn’t a direct action on the victim’s body. Furthermore, it seems innocuous and involves a frequent action unconnected to violence. On the other hand, the causal relationship between pushing the man and his death is clearly felt in our eyes and our stomachs. In the case of the wheel, this relationship was only clear to our rational thought. The third takes this principle even further. Slaughtering a person seems and feels completely impermissible.
The first argument (five or one) is utilitarian and rational, and is dictated by a moral premise, maximizing the common good or minimizing the common evil. This part is identical in all three dilemmas. The second argument is visceral and emotional, and is dictated by an absolute consideration: there are certain things that are just not done. They are morally unacceptable. This specific action that needs to be done to save five lives at the expense of one is what distinguishes the three dilemmas. And in each dilemma we can almost feel the setting in motion of a decision-making race between emotional and rational arguments, à la Turing, in our brain. This battle that invariably occurs in the depths of each of us is replicated throughout the history of culture, philosophy, law and morality.
One of the canonical moral positions is deontological–this word derives from the Greek deon, referring to obligation and duty–according to which the morality of actions is defined by their nature and not their consequences. In other words, some actions are intrinsically bad, regardless of the results they produce.
Another moral position is utilitarianism: one must act in a way that maximizes collective wellbeing. The person who turns the wheel, or pushes the man, or slices open the flu sufferer would be acting according to a utilitarian principle. And the person who does not do any of those actions would be acting according to a deontological principle.
Very few people identify with one of those two positions to the extreme. Each person has a different equilibrium point between the two. If the action necessary to save the majority is very horrific, deontology wins out. If the common good becomes more exaggerated–for example, if there are a million people being saved instead of five–utility moves into the foreground. If we see the face, expression or name of the person to be sacrificed for the majority–particularly when it is a child, a relative or someone attractive–deontology again has more weight.
The race between the utilitarian and the emotional is waged in two different nodes of the brain. Emotional arguments are codified in the medial part of the frontal cortex, and evidence in favour of utilitarian considerations is codified in the lateral part of the frontal cortex.
Just as one can alter the part of the brain that allows us to understand another person’s perspective and hack into our ability to use the theory of mind, we can also intervene in those two cerebral systems in order to inhibit our more emotional part and foster our utilitarianism. Great leaders, such as Churchill, usually develop resources and strategies to silence their emotional part and think in the abstract. It turns out that emotional empathy can also lead us to commit all sorts of injustices. From a utilitarian and egalitarian perspective of justice, education and political management, it would be necessary to detach oneself–as Churchill did–from certain emotional considerations. Empathy, a fundamental virtue in concern for our fellow citizens, fails when the goal is to act in the common good without privilege and distinctions.
In everyday life there are very simple ways to give more weight to one system or the other. One of the most spectacular was demonstrated by my Catalan friend Albert Costa. His thesis is that the cognitive effort of speaking a second language puts us in a mode of cerebral functioning that favours control mechanisms. As such, it also favours the lateral part of the frontal cortex that governs the utilitarian and rational system of the brain. According to this premise, we could all change our ethical and moral stance depending on the language we are speaking. And this does, in effect, occur.
Albert Costa showed that native Spanish-speakers are more utilitarian when speaking English. If a Spanish-speaker is presented with the man-on-the-bridge dilemma in a foreign language, in many cases he or she will be more willing to push the man. The same thing happens with English-speakers: they become more pragmatic when evaluating similar dilemmas in Spanish. For a native English-speaker it is easier to push a man in Spanish.
Albert proposed a humorous conclusion to his study, but one which surely has some truth to it. The battle between the utilitarian and the emotional is not exclusive to abstract dilemmas. Actually, these two systems are invariably expressed in almost all of our deliberations. And, many times, in the safety of our homes more ardently than anywhere else. We are in general more aggressive, sometimes violently and mercilessly, with those we love most. This is a strange paradox in love. Within the trust of an unvarnished, unprejudiced relationship with vast expectations, jealousy, fatigue and pain, sometimes irrational rage emerges. The same argument between a couple that seems unbearable when we are living through it becomes insignificant and often ridiculous when seen from a third-person perspective. Why are they fighting over something so stupid? Why doesn’t one or the other simply give in so they can reach an agreement? The answer is that the consideration is not utilitarian but rather capriciously deontological. The deontology threshold drops precipitously and we are not willing to make the slightest effort to resolve something that would alleviate all the tension. Clearly we would be better off being more rational. The question is, how? And Albert, half joking and half serious, suggests that the next time we are fighting with our significant other, we should switch to Spanish (or any other non-native language). This would allow us to bring the argument into a more rational arena, one less burdened by visceral epithets.
The moral balance is complicated. In many cases, acting pragmatically or in a utilitarian manner requires detaching ourselves from strong emotional convictions. And it implies (most often implicitly) assigning a value (or a prize) to issues that, from a deontological perspective, it seems impossible to rationalize and convert into numbers.
Let’s perform a concrete mental experiment to illustrate this point. Imagine that you are going to be late for an important meeting. You are driving, and right after you have crossed a railway line you realize that the warning signs at the level crossing are not working. You feel lucky that no train was passing when you drove across. But you understand that, with the traffic due to get heavier, someone will be hit by a train and most likely die. You then call 999 to inform the emergency services, but at the same time you realize that, if you don’t make the call, the fatal accident will close the streets just behind you and block traffic coming from various parts of the city. And with that, you will make it to your meeting on time. Would you hang up and let someone die to gain a few minutes? Of course not. The question seems absurd.
Now imagine that there are five of you travelling together in the same car. You are the only one who realizes that the warning signals are not working–maybe because as a child you were fascinated by level crossings. Same question and surely the same answer. Even if no one would ever know ‘your sin’, you would make the call and prevent the accident. It does not matter if it is one, five, ten or one million people arriving late; the answer is the same. No number of people arriving late would add up to the value of one life. And this principle seems to be quite general. Most of us have a strong conviction that, regardless of the dilemma, an argument about life and death trumps all other considerations.
However, we may not live up to this conviction. As absurd as the previous dilemma seems, similar considerations are made daily by each driver and by the policy-makers who regulate traffic in major cities. In Great Britain alone, about 1,700 people die each year as a result of road accidents. And even though this is a dramatic decrease from the figures of the 1980s (close to 6,000), these numbers would be far lower if traffic speed were further restricted to, say, 25 mph. But this, of course, has a cost: it would take us twice as long to get to work.*
If we forget the cases in which fast driving actually saves lives, as with ambulances, it becomes clear that we are all making an unconscious and approximate comparison that has time, urgency, production and work on one side of the equation, and life and death on the other.
Establishing rules and principles for morality is a huge subject that is at the heart of our social pact and, obviously, goes far beyond any analysis of how the brain constructs these judgements. Knowing that certain considerations make us more utilitarian can be useful for those who are struggling to behave more in that way, but it has no value in justifying one moral position over another. These dilemmas are only helpful in getting to know ourselves better. They are mirrors that reflect our reasons and our demons so that, eventually, we can have them more at our disposal and not simply silently dictating our actions.
The chemistry and culture of confidence
Ana is seated on a bench in a square. She is going to play a game with another person, chosen at random among the many people in the square. They don’t know each other, and they do not see each other or exchange a single word. It is a game between strangers.
The organizer explains the rules of the game, and gives Ana fifty dollars. It is clearly a gift. Ana has only one option: she can choose how to divide the money with the other person, who will remain unaware of her decision. What will she do?
The choices vary widely, from the altruists who divide the money fairly to the egotists who keep it all. This seemingly mundane game, known as the ‘Dictator Game’, became one of the pillars of the intersection between economics and psychology. The point is that most people do not choose to maximize their earnings; they share some of their tokens even when the game is played in a dark room where there is no record of the decision the dictator makes. How much is offered depends on many variables that define the contours of our predisposition to share.
Just to name a few: women are more generous and share more tokens independently of their monetary value. Men, by contrast, tend to be less generous, and become even less so as the value of the tokens increases. Also, people behave more generously when under the gaze of real human eyes. What is even more surprising is that merely displaying artificial eye-like images on the screen on which players make their choice leads them to share more tokens. This shows that even minimal and implicit social cues can shape our social behaviour. Names also matter. Even when playing with recipients whom they have never met and cannot see, dictators share more of their tokens when the recipient’s name is mentioned. And, on the contrary, more selfish behaviour can be primed if the game is presented using a market frame of buyers and sellers. Last, ethnicity and physical attractiveness also dictate the way people share, but in a more sophisticated manner. In a seminal study conducted in Israel, Uri Gneezy showed that in a trust game involving reciprocal negotiations, participants transferred more money when the recipient was of Ashkenazic origin than when they were of Eastern origin. This was true even when the donor was of Eastern origin, showing that participants discriminated against their own group. However, in the dictator game, players shared similarly with recipients of both origins. Gneezy’s interpretation is that discrimination is the result of ethnic stereotypes (the belief that some groups are less trustworthy) and not a reflection of an intrinsic taste for discrimination (the desire to harm or offer less to some groups per se). Attractiveness also modulates sharing behaviour in a more complicated way. Attractive recipients tend to receive more, but this seems to depend heavily on the specific conditions under which the game is played. One study found that differences based on attractiveness are more marked when the dictators can not only see the recipient but also hear them.
And the list is much longer. The point is that there are a great number of variables, from very sophisticated social and cultural constructs to very elementary artificial features that shape, in a very predictable manner, our sharing behaviour. And, most often, without us knowing anything about this.
Eva takes part in another game. It also begins with a gift, of fifty dollars that can be shared at will with a stranger named Laura. In this game, the organizers will triple what Eva gives Laura. For example, if she decides to give Laura twenty dollars, Eva will be left with thirty and Laura will get sixty. If she decides to give Laura all of it, Eva will have nothing and Laura will get 150. At the same time, when Laura gets her money, she has to decide how she wants to share it with Eva. If the two players can come to an agreement, the best strategy would be for Eva to give her all the money and then Laura would split it equally. That way they would each get seventy-five dollars. The problem is that they don’t know each other and they aren’t given the opportunity to make that negotiation. It is an act of faith. If Eva decides to trust that Laura will reciprocate her gracious action, she should give her all the money. If she thinks that Laura will be stingy, then she shouldn’t give her any. If–like most of us–she thinks a little of both, perhaps she should act Solomonically and keep a little–a sure thing–and invest the rest, accepting the social risk.
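The payoff arithmetic of Eva and Laura’s game can be made explicit in a short sketch. This is only an illustration of the example in the text (the fifty-dollar gift and the tripling rule); the function name and the division into a “send” and a “return” decision are my own framing.

```python
# A minimal sketch of the game described above: whatever Eva sends
# to Laura is tripled by the organizers before Laura receives it,
# and Laura then decides how much of it to send back.

ENDOWMENT = 50   # Eva's initial gift, in dollars
MULTIPLIER = 3   # the organizers triple what Eva sends

def trust_game(sent: int, returned: int) -> tuple[int, int]:
    """Return (Eva's payoff, Laura's payoff).

    `sent` is what Eva transfers (0..ENDOWMENT);
    `returned` is what Laura gives back out of the tripled amount.
    """
    assert 0 <= sent <= ENDOWMENT
    received = sent * MULTIPLIER
    assert 0 <= returned <= received
    eva = ENDOWMENT - sent + returned
    laura = received - returned
    return eva, laura

# The example from the text: Eva sends 20 and Laura keeps everything.
print(trust_game(20, 0))    # (30, 60)
# Full trust with an even split: Eva sends all 50, Laura returns 75.
print(trust_game(50, 75))   # (75, 75)
```

The second call shows why full trust plus reciprocity is the jointly optimal strategy: seventy-five dollars each, against the fifty that Eva starts with.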
This game, called the ‘Trust Game’, directly evokes something we have already covered in the realm of optimism: the benefits and risks of trust. Basically, there are plenty of situations in life in which we would do better by trusting more and cooperating with others. Seen the other way around, distrust is costly, and not only in economic decisions but also in social ones–surely the most emblematic example being in couple relationships.
The advantage of taking this concept to its most minimal version in a game/experiment is that it allows us to exhaustively investigate what makes us trust someone else. We had already guessed some elements. For example, many players in the experiment often find a reasonable balance between trusting and not exposing themselves completely. In fact, the first player usually offers an amount close to half. And trusting the other person depends on the similarities between the players, in terms of accent, facial and racial features, etc. So again we see the nefarious effects of a morality based on superficiality. And what a player offers also depends on how much money is at stake. Someone who may be willing to offer half when playing with ten quid might not do the same when playing with 10,000. Trust has a price.
In another variant of these games, known as the ‘Ultimatum Game’, the first player, as always, must decide how to distribute what they have been given. The second player can accept or reject the proposal. If they reject it, neither of them gets anything. This means that the first player has to find a fair balance point, usually a little above nothing and a little below half. Otherwise, both players lose.
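The rejection rule that makes this game interesting can be sketched in a few lines. The fifty-dollar pot echoes the earlier games in the text; the responder’s acceptance threshold is a hypothetical parameter of my own, not a figure from the studies described here.

```python
# A minimal sketch of the Ultimatum Game: the proposer offers a
# split, and the responder either accepts it or rejects it, in
# which case neither player gets anything.

POT = 50  # amount the first player must divide, in dollars

def ultimatum(offer: int, threshold: int) -> tuple[int, int]:
    """Return (proposer's payoff, responder's payoff).

    The responder accepts any offer at or above their threshold
    (a hypothetical parameter); rejection leaves both with nothing.
    """
    assert 0 <= offer <= POT
    if offer >= threshold:
        return POT - offer, offer
    return 0, 0

# A low-ball offer against a responder who demands at least 15:
print(ultimatum(5, 15))    # (0, 0) -- both players lose
# A fairer offer clears the threshold:
print(ultimatum(20, 15))   # (30, 20)
```

The first call shows the cost of greed: by offering too little, the proposer walks away with nothing, which is why offers tend to settle a little below half.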
Bringing this game to fifteen small communities in remote parts of the planet, and in search of what he called homo economicus, the anthropologist Joseph Henrich discovered that cultural forces establish quite precise rules for this type of decision. For example, in the Au and Gnau peoples of Papua New Guinea, many participants offered more than half of what they received, a generosity rarely observed in other cultures. And, to top it all off, the second player usually rejected the offer. This seems inexplicable until one understands the cultural idiosyncrasy of these Melanesian tribes. According to implicit rules, accepting gifts, even unsolicited ones, implies a strict obligation to repay them at some point in the future. Let’s just say that accepting a gift is like taking on some sort of a mortgage.*
Two large-scale studies, one carried out in Sweden and the other in the United States, using both monozygotic (identical) and dizygotic twins (fraternal twins, whose genomes are as different as those of any other siblings), show that the individual differences in generosity seen in the trust game also have a genetic predisposition. If a twin tends to be very generous, in most cases their identical twin will be too. And the opposite is also true: if one decides to keep all the money, there is a high likelihood that the identical twin will do the same. This relationship is found to a lesser extent in dizygotic twins, which allows us to rule out the possibility that this similarity is merely a result of having grown up together, side by side, in the same home. That, of course, doesn’t contradict what we have already seen and intuited: that social and cultural differences influence cooperative behaviour. It merely shows that they aren’t the only forces that govern generosity.
The Secret Life of the Mind