The Secret Life of the Mind
The outlines of identity are blurred. Or, to put it more precisely, each of us makes up a consortium of identities that are expressed in different, sometimes contradictory, ways in varying circumstances. The dissociation between the various members of the consortium has two clear projections: one that is hedonistic and bold, that ignores the risks and future consequences (the optimist), and another that ponders those risks and consequences (the pessimist). This dynamic is particularly exacerbated in two quite different scenarios: in certain neurological and psychiatric pathologies, and in adolescence.
The predisposition to ignore risk grows with the activation of the nucleus accumbens in the limbic system, which corresponds to the perception of hedonistic pleasure. In fact, in an experiment that shocked some of his colleagues at the Massachusetts Institute of Technology, Dan Ariely recorded this in a detailed, quantitative way with regard to a precise aspect of pleasure: sexual arousal. He found that the more excited people get, the more predisposed they are to doing things that they would otherwise consider aberrant or unacceptable. Such things of course included taking the risk of having unprotected sex with strangers.
Adolescence is a period plagued with excessive optimism and exposure to risky situations. This happens because the brain’s development, like the body’s, is not homogeneous. Some cerebral structures develop very quickly and mature within the first few years of life, while others are still immature when we become teenagers. One popular neuroscientific myth is that adolescence is a time of particular risk because of the immaturity of the prefrontal cortex, a structure that evaluates future consequences and coordinates and inhibits impulses. However, the later development of the control structure in the frontal cortex cannot, on its own, explain the spike in risk predisposition recorded during the teenage years. In fact, children, whose prefrontal cortex is even more immature, expose themselves to less risk. What is characteristic of adolescence is the relative immaturity of the prefrontal cortex–and, as a result, of the ability to inhibit or control certain impulses–combined with the already consolidated development of the nucleus accumbens.
The naïve clumsiness of those teenage years, in a body that is growing faster than its capacity to control itself, can be seen as a reflection of the adolescent cerebral structure. Understanding this, and taking into account the uniqueness of this time in our lives, can help us to empathize and, as a result, engage in dialogue with teenagers more effectively.
This understanding of the brain structure is also relevant for making public decisions. For example, in many countries there is debate surrounding whether teenagers should be allowed to vote. These debates would benefit from an informed view of how reasoning and decision-making develop during adolescence.
The work done by Valerie Reyna and Frank Farley on risk and rationality in teenagers’ decision-making shows that, even when they don’t have good control of their impulses, teenagers are intellectually indistinguishable from adults in terms of rational thought. Which is to say, they are capable of making informed decisions about their future despite the fact that they struggle, more than an adult would, to rein in their impulses in emotionally charged states.
But, of course, we don’t need a biologist to tell us that we alternate between reason and impulse, and that our impulsivity shows up in the heat of the moment even beyond our teenage years. This is expressed in the myth of Odysseus and the Sirens, which also gives us perhaps the most effective solution for dealing with this consortium that comprises our identity. When heading off on his voyage home to Ithaca, Odysseus asks his sailors to tie him to the boat’s mast so that he won’t act on the inevitable temptation to follow the Sirens’ song. Odysseus knows that in the heat of the moment, the craving will be irresistible,* but instead of cancelling his voyage he decides to make a pact with himself, binding together his rational self with his anticipated future impulsive one.
The analogies with our daily life are often much more banal; for many of us, our mobile phones ring out with the contemporary version of the Sirens’ song, virtually impossible to ignore. To such an extent that, although we know the clear risks of answering a text while at the wheel, we do it even when the message is something completely irrelevant. Ignoring the temptation to use our phone while driving seems difficult, but if we leave it somewhere inaccessible–such as in the boot of the car–we, like Odysseus, can force our rational thinking to control our future recklessness.
Flaws in confidence
Our brain has evolved mechanisms to ignore–literally–certain negative aspects of the future. And this recipe for creating optimists is just one of the many ways the brain produces a disproportionate sense of confidence. Studying human decisions in the social and economic problems of daily life, Daniel Kahneman, a psychologist and Nobel Prize laureate in Economics, identified two archetypal flaws in our sense of confidence.
The first is that we tend to confirm that which we already believe. That is to say, we are generally headstrong and stubborn. Once we believe something, we look to nourish that prejudice with reaffirming evidence.
One of the most famous examples of this principle was discovered by the great psychologist Edward Thorndike when he asked a group of military leaders what they thought about various soldiers. The opinions dealt with different aptitudes that included physical traits, leadership abilities, intelligence and personality. Thorndike proved that the evaluation of a person mixes together abilities that, on the face of it, have no relationship to each other. That was why the generals rated the strong soldiers as intelligent and good leaders, although there is no necessary correlation between strength and intelligence.* Which is to say that when we evaluate one aspect of a person, we do so under the influence of our perception of their other traits. And this is called the halo effect.
This flaw of the decision-making mechanism is pertinent not only to daily life but also to education, politics and the justice system. No one is immune to the halo effect. For example, when faced with an identical set of circumstances, judges are more lenient with people who are more attractive. This is an excellent example of the halo effect and the distortions it causes: those who are lovely to look at are viewed as good people. Of course, this same effect weighs on the free and fair mechanism of democratic elections. Alexander Todorov showed that a brief glance at the faces of two candidates allows one to predict the winner with striking accuracy–close to 70 per cent–even without any data on the candidates’ history, thoughts and deeds, or their electoral platforms and promises.
The confirmation bias–the generic principle from which the halo effect derives–cuts reality down so we see only what is coherent with what we already believe to be true. ‘If she looks competent, she’ll be a good senator.’ This inference, which ignores facts pertinent to the assessment and is based entirely on a first impression, turns out to be much more frequent than we realize or will admit to in our day-to-day decisions and beliefs.
In addition to the confirmation bias, a second principle that inflates confidence is the ability to completely ignore the variance of the data. Think about the following problem: a bag holds 10,000 balls; you take out the first one and it’s red; you take out a second one and it’s red too; you take out the third and fourth, and they are red as well. What colour will the fifth one be? Red, of course. Our confidence in that conclusion far outweighs what the statistics warrant. There are still 9,996 balls in the bag. As Woody Allen says, ‘Confidence is what you have before you understand the problem.’ To a certain extent, confidence is ignorance.
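One rough way to put a number on that intuition (a back-of-the-envelope sketch, assuming purely for illustration that every possible mix of colours in the bag is equally likely to begin with) is Laplace’s rule of succession, which, after seeing $k$ red balls in $n$ draws, estimates the chance that the next ball is also red as roughly $(k+1)/(n+2)$:

\[
P(\text{red on the next draw} \mid 4 \text{ reds in } 4 \text{ draws}) \approx \frac{k+1}{n+2} = \frac{4+1}{4+2} = \frac{5}{6} \approx 0.83.
\]

Five chances in six is a respectable bet, but it is a long way from the near-certainty we actually feel after peeking at only four balls out of 10,000.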
Postulating a rule based on just a few cases is both a virtue and a vice of human thought. It is a virtue because it allows us to identify rules and regularities with consummate ease. But it is a vice because it pushes us towards definitive conclusions when we have barely observed a tiny slice of reality. Kahneman proposed the following mental experiment. A survey of 200 people indicates that 60 per cent would vote for a candidate named George. Very shortly after finding out about that survey, the only thing we remember is that 60 per cent would vote for George. The effect is so strong that many people will read that and think that I wrote the same thing twice. The difference is the size of the sample. The first phrasing explicitly states that it is the opinion of only 200 people. In the second, that information has disappeared. This is the second filter that distorts confidence. In formal terms, a survey showing that, out of 30 million people, 50.03 per cent would vote for George would be much more decisive, but the belief system in our brains mostly makes us forget to weigh whether the data come from a massive sample or whether we are dealing with just four balls out of a bag of 10,000. As the recent ‘Brexit’ vote in the UK and the Donald Trump vs Hillary Clinton election showed, in the build-up to an election pollsters often forget this basic rule of statistics and draw firm conclusions from strikingly small and often biased samples.
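A back-of-the-envelope comparison makes the difference concrete (a rough sketch using the standard error of a sample proportion, $\sqrt{p(1-p)/n}$, where $p$ is the reported share and $n$ the number of people surveyed):

\[
n = 200,\; p = 0.60: \quad \sqrt{\frac{0.60 \times 0.40}{200}} \approx 0.035 \quad (\text{a standard error of about } 3.5 \text{ percentage points}),
\]
\[
n = 30{,}000{,}000,\; p = 0.5003: \quad \sqrt{\frac{0.5003 \times 0.4997}{30{,}000{,}000}} \approx 0.00009 \quad (\text{about } 0.01 \text{ of a point}).
\]

The small survey could easily be off by several points; the massive one pins the proportion down almost exactly. That sample size is precisely the piece of information we throw away when all we remember is ‘60 per cent for George’.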
In short, the confirmatory effect and variance blindness are two ubiquitous mechanisms in our minds that allow us to base opinions on just a small, coherent slice of the world while ignoring an entire sea of noise. The direct consequence of these mechanisms is inflated confidence.
A vital question in understanding and improving our decision-making is whether these confidence flaws are native to complex social decisions or whether they appear throughout the vast spectrum of decision-making. Ariel Zylberberg, Pablo Barttfeld and I set out to solve this mystery by studying extremely simple decisions, such as which is the brighter of two points of light. We found that the principles which inflate confidence in social decisions, such as the confirmatory effect or ignoring variance, persist even in the simplest perceptual decisions.
It is a common trait of our brains to generate beliefs that are more optimistic than the actual data suggest. This was confirmed by a series of studies recording the neuronal activity in different parts of the cerebral cortex. It was consistently observed that our brains–and the brains of many other species–are constantly mixing sensory information from the outside world with our own internal hypotheses and conjectures. Even our vision, the brain function we imagine to be most anchored to reality, is filled with illusions. Vision does not work passively, depicting reality like a camera; it works more like an organ that interprets and constructs detailed images based on limited and imprecise information. Even in the first processing station in the visual cortex, neurons respond to a conjunction of information received from the retina and information received from other parts of the brain–parts that codify memory, language, sound–which establish hypotheses and conjectures about what is being seen.
Our perception always involves some imagination. It is more similar to painting than to photography. And, in keeping with the confirmation effect, we blindly trust the reality we construct. This is best witnessed in visual illusions, which we perceive with full confidence, as if there were no doubt that we are portraying reality faithfully. One interesting way of discovering this–in a simple game that can be played at any moment–is the following. Whenever you are with another person, ask them to close their eyes, and start asking questions about what is nearby–not minute details but the most striking elements of the scene. What colour is the wall? Is there a table in the room? Does that man have a beard? You will see that most of us are quite ignorant about what lies around us. That in itself is not so puzzling. The most extraordinary fact is that we completely disregard this ignorance.
Others’ gazes
Both in everyday life and in formal law we judge others’ actions not so much by their consequences as by their determining factors and motivations. Even though the consequence may be the same, it is morally very different to injure a rival on a playing field through an unfortunate, involuntary action than through premeditation. Therefore, in order to decide whether the player acted with bad intentions, just observing the consequences of their actions is not enough. We must put ourselves in their place and see what happened from their perspective. Which is to say, we have to employ what is known as the theory of mind.
Let us consider two fictional situations. Joe picks up a sugar bowl and spoons some of its contents into his friend’s tea. Before he does so, someone has switched the sugar for a poison of the same colour and consistency. Of course, Joe doesn’t know that. His friend drinks the tea and dies. The consequences of Joe’s action are tragic. But was what he did wrong? Is he guilty? Almost all of us would say no. In order to arrive at that conclusion, we put ourselves in his shoes, recognizing what he knows and doesn’t know, and seeing that he had no intention of hurting his friend. Not only that, but in most people’s minds he was not even negligent in any way. Joe is a good guy.
Same sugar bowl, same place. Peter takes the bowl and replaces the sugar with poison because he wants to kill his friend. He spoons the poison into his friend’s tea but it has no effect on him, and his friend walks away alive and kicking. In this case, the consequences of Peter’s action are innocuous. However, we almost all believe that Peter did the wrong thing, that his action is reprehensible. Peter is a bad guy.
The theory of mind is the result of the articulation of a complex cerebral network, with a particularly important node in the right temporoparietal junction. As its name suggests, this region is found in the right hemisphere, between the temporal and parietal cortices, but its location is the least interesting thing about it. Cerebral geography matters less than what it makes possible: knowing where a function sits in the brain gives us a window for inferring the causal relationships in its workings.*
If our right temporoparietal junction were to be temporarily silenced, we would no longer consider Joe and Peter’s intentions when judging their actions. If that region of our brains is not functioning as it should, we would believe that Joe did wrong (because he killed his friend) and that Peter did right (because his friend is in perfect health). We wouldn’t take into consideration that Joe didn’t know what was in the sugar bowl and that Peter had failed to carry out his macabre plan only through chance. These considerations require a precise function, the theory of mind, and without it we lose the mental ability to separate the consequences of an action from its network of intentions, knowledge and motivations.
This example, demonstrated by Rebecca Saxe, is proof of a concept that goes beyond the theory of mind, morality and judgement. It indicates that our decision-making machinery is composed of a combination of pieces, each serving particular functions. And when the biological support for those functions is dismantled, the way we believe, form opinions and judge changes radically.
More generally, it suggests that our notion of justice does not result from pure and formal reasoning, but that instead it is conceived in a particular state of the brain.*
But in fact there is no need for sophisticated brain-stimulating devices to prove this concept. The common saying that justice is ‘what the judge ate for breakfast’ seems to be quite true. The percentage of favourable rulings handed down by judges drops dramatically over the course of the morning, rebounds abruptly after the lunch break, then drops again substantially over the next session. This study of course cannot factor out the many variables that change between breaks, such as glucose, fatigue or accumulated stress. But it shows that simple extraneous factors which condition the state of the judge’s brain have a strong influence on the outcome of court decisions.
The inner battles that make us who we are
Moral dilemmas are hypothetical situations taken to an extreme that help us reflect on the underpinnings of our morality. The most famous of them is the ‘trolley problem’, which goes like this:
You are on a tram without brakes that is travelling along a track where there are five people. You are well acquainted with its functioning and know without a shadow of a doubt that there is no way to stop it, and that it will run over those five people. There is only one option. You can turn the wheel and take another track where only one person will be run over.
Would you turn the wheel? In Brazil, Thailand, Norway, Canada and Argentina, almost everyone–young or old, liberal or conservative–chooses to turn it on the basis of a reasonable, utilitarian calculation. The choice seems simple: five deaths or one? Most people across the world choose to kill one person and save five. Yet experiments show that there is a minority of people who consistently decide not to turn the wheel.
The dilemma consists in doing something that will provoke the death of one person, or doing nothing and letting five people die. Some people might reason that fate had already chosen a path and that they shouldn’t play God by deciding who dies and who lives, even when the maths favours that choice. They reason that we have no right to intervene, bringing about the death of somebody who would have been fine if not for our action. We judge responsibility for action differently from responsibility for inaction: it is a universal moral intuition, one expressed in almost every legal system.
Now, another version of the dilemma:
You are on a bridge from which you see a tram hurtling down a track where there are five people. You are completely sure that there is no way to stop it and that it will run over those five people. There is only one alternative. On the bridge, a large man is sitting on the railing watching the scene. You know for certain that if you push him, he will die, but his body will also derail the tram and save the other five people.
Would you push him? In this case, almost everyone chooses not to. And the difference is perceived in a clear, visceral way, as if it were decided by our bodies. We don’t have the right to deliberately push someone to his death in order to save other lives. This is supported by our penal and social system–both the formal one and the judgements of our peers: neither would consider these two cases to be equal. But let’s forget about that factor. Let’s imagine that we are alone, that the only possible judgement is our own conscience. Who would push the man from the bridge and who would turn the wheel? The results are conclusive and universal: even completely alone, with no one watching, almost all of us would turn the wheel and almost no one would push the man from the bridge.