Kahneman sets out his dual systems model in terms of interactions between two different thinking styles: System 1 and System 2. System 1 thinking is quick, automatic, intuitive and emotional. When we come across a wild lion in the bush, System 1 is in the driver’s seat. We feel fear, and we run or hide without consciously considering our options. System 2 thinking is quite different. It is slow, controlled and deliberative. In situations where cognitive effort is vital, System 2 thinking steps up. When we are in a job interview, sitting an exam or playing chess, System 2 is in control, and we draw on our logical, reasoning capacities.
System 1 thinking requires much less mental energy than System 2. Conversely, System 2 is good at deliberation and carefully assessing different options, but it is lazy and wants to economise on cognitive effort. As Kahneman observed:
most of what you . . . think and do originates in your System 1, but System 2 takes over when things get difficult . . . The division of labor between System 1 and System 2 is highly efficient: it minimizes effort and optimizes performance.4
So Systems 1 and 2 do not operate alone. They act in concert, but the quicker System 1 dominates most of the time. Reason is not irrelevant when we are in danger. Emotion is not irrelevant when we are forced to think deeply. Both will be operating, either in the foreground or the background of our thinking.
Kahneman’s analysis of different thinking styles is useful in our study of copycats and contrarians. It can be applied to capture how our herding and anti-herding choices are motivated by interactions between System 1 and System 2 thinking, connecting the self-interested herding models of the economists with the collective herding models from other social sciences. As we have seen, self-interested herding is about inferring something about what motivated others around us to make their choices. We balance this social information with what we know (our private information) and use logical rules (such as Bayes’ rule) to reconcile discrepancies between private and social information. All this is led by a System 2 style of deliberative thinking. Collective herding is driven by deeper, less conscious influences including emotions, personality, psychological instincts and social pressures. With collective herding, System 1 is in control. Which system dominates will depend on the situations in which we find ourselves. When we need to decide quickly, collective herding is more likely to dominate. When we have more time to reflect, self-interested herding will dominate. Sometimes the two will be operating together, as we shall see from the neuroscientific evidence. Similarly, anti-herding contrarians also sometimes deliberate slowly and carefully, but at other times decide to rebel, triggered by impulsive, instinctive emotional drivers.
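To make the System 2 side of this concrete, here is a minimal sketch of Bayesian weighing of private and social information. The numbers, and the simplifying assumption that each observed choice is an independent noisy signal of the truth, are my own illustration rather than anything taken from the formal herding models:

```python
# A minimal sketch (illustrative, not a model from the book) of how a
# self-interested herder might combine a private signal with observed
# choices using Bayes' rule. It treats each observed choice as an
# independent noisy signal of the truth, ignoring the possibility that
# the predecessors were themselves copying someone else.

def posterior_good(prior, private_correct, herd_correct, n_buy, n_pass):
    """Probability the asset is 'good', given a favourable private signal
    and having watched n_buy people buy and n_pass people decline."""
    like_good = private_correct * herd_correct**n_buy * (1 - herd_correct)**n_pass
    like_bad = (1 - private_correct) * (1 - herd_correct)**n_buy * herd_correct**n_pass
    return prior * like_good / (prior * like_good + (1 - prior) * like_bad)

# A favourable private signal, but three of four predecessors declined to buy:
print(round(posterior_good(prior=0.5, private_correct=0.7,
                           herd_correct=0.6, n_buy=1, n_pass=3), 2))  # about 0.51
```

In this toy case the social information roughly cancels out the favourable private signal; tilt the numbers a little further towards the herd and a Bayesian chooser would rationally follow the crowd in spite of their own information.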
Measuring mimicry
We have explained how dual systems models can reconcile divergent explanations for herding.5 These theories have a lot of power, but they do raise some empirical questions – paralleling those we might ask about Freud’s attribution of our adult behaviour to unconscious drives formed from our childhood experiences. Just as it is difficult to empirically verify a Freudian account, how can we provide evidence about whether System 1 or System 2 is in control? How can we know whether herding and anti-herding reflect careful deliberation, or emotional impulse, or some combination of the two? We cannot necessarily tell which system is driving someone just by observing what they do.
To answer these questions, we need to know more about neuroanatomy and its links with the basic principles underlying modern neuroscientific techniques. Today’s understanding of neuroanatomy builds on the neurological insights of Aelius Galenus, better known to us as Galen (AD 129–c. 199), an impressively prescient physician whose work foreshadows many findings of modern neurology. Galen was born in Pergamon (an ancient Greek city, now part of modern Turkey) into an affluent family. His architect father, Aelius Nicon, had initially pushed his son towards philosophy and politics, but had a dream in which Asclepius, the Greek god of medicine, instructed him to allow his son to study medicine.6 Galen went on to develop a successful medical practice in Rome. He was physician to Marcus Aurelius’ son Commodus and became part of Rome’s intellectual community under a succession of emperors.
Galen’s knowledge of neuroanatomy was enhanced through his work as a surgeon, including a spell tending to the gladiators of Pergamon. He was influenced by Plato and developed the Greek philosopher’s chariot allegory in ways that are pertinent to the idea that our thinking styles might be rooted in our brain structures. Foreshadowing Freud’s id, ego and superego, Galen thought that rational thought is housed in the brain, spirituality in the heart, and appetite in the liver. His medical practice complemented his interests in how our brains work – remarkably, very early on he recognised that the spinal cord is an extension of the brain.7
Many centuries later, Gustave Le Bon, whom we met in the previous chapter, developed insights that were similar to Galen’s. For instance, he postulated that the spinal cord channels the social emotions manifested in mobs whilst the brain guides orderly and rational crowd behaviours.8 Some of Le Bon’s speculations around neuroanatomy – his theories to do with brain size and intellect across genders and races, for example – are discredited in modern neuroscience.9 Nonetheless, Le Bon was on the right track with his ideas about how brain structure links to the psychology of crowds and mobs. Galen’s and Le Bon’s hypotheses would strike many modern neuroscientists as gross oversimplifications, especially Le Bon’s very rough division of the spinal cord from the brain. He did, however, anticipate some findings from modern neuroscience. Neuroscientists have now identified regions deep in our brain associated with more primitive and emotional thinking, linking areas in our brain stem and mid-brain limbic system with our impulsive and/or social behaviours. Areas in our prefrontal cortex (the region at the front of our brain, above our eyes) have been implicated in tasks that require more complex thinking, including mathematical and analytical reasoning, and economic decision-making.
Opening black-box brains
The tools that neuroscientists can use to unravel what is going on in the black boxes of our brains are increasing all the time in range and sophistication. How can they capture the underlying neural processes that drive our choices, including our tendencies towards herding and anti-herding? Some of the early applications of neuroscientific tools were based around lesion patient studies. These studies focus on people who, through either accident or illness, have experienced localised brain damage. Using information about the location of the damage, neuroscientists can make inferences about how those brain areas are implicated in different types of decision-making.
Galen himself conducted some very early lesion patient studies, having been puzzled by the fact that no-one had ‘ever taken the trouble . . . to put a ligature around parts of the living animal in order to learn which function is injured’.10 Galen’s experiments did not go much further, however, as he came up against both religious and scientific constraints. Lesion patient studies resurfaced after the Enlightenment as science started gaining ground over religion. A famous historical lesion patient was Phineas Gage, an American railway worker, who in 1848 suffered a harrowing accident. An explosion drove a tamping iron, a rod used to pack explosives into holes, through the front of his skull and through his brain. Amazingly, Gage seemed to recover well from his accident – at least physically. However, his friends and colleagues started to notice significant changes in his personality. A reliable and industrious worker who had held down a steady job for years, Gage became feckless and unreliable at work, and erratic and difficult socially. His physician Dr John Martyn Harlow was fascinated by these changes in Gage’s personality. He studied Gage and his medical record intensively and concluded that the change in his patient’s behaviour could be explained by the damage sustained to the frontal lobes, the areas of our brains associated with higher levels of cognitive functioning and self-control.11
More than 150 years later, modern neuroscientists are drawing on similar studies extensively. The US-based neuroscientist Antonio Damasio and his colleagues are pioneers in the use of lesion patient studies to study economic and financial choices. They are especially interested in what guides our risky choices, for example in gambling or asset trading. Damasio and his team have presented much evidence about the important role that emotion plays in decision-making, demonstrating that brain lesions in emotional processing areas are associated with severe deteriorations in ordinary functioning, even for patients with no outward evidence of injury. Mirroring Kahneman’s model of dual systems thinking, Damasio argues that emotional influences do not necessarily preclude rational thought.12
Lesion patient studies are relatively simple, if blunt, tools. Neuroscientists cannot directly control the regions available for study (unless they are complicit in significant legal and ethical transgressions, forbidden by modern research ethics committees). Unfortunate accidents and illnesses dictate which areas of the brain are damaged and neuroscientists are confined to studying the lesions as they find them. In the last few decades, however, the technological sophistication of the neuroscientist’s toolbox has rapidly advanced. Improvements to physiological and neuroscientific techniques mean that we can start to observe and understand how our neural circuitry is responding as we make our decisions. Physiologists can monitor heart rate, skin conductance, sweat rate and other physical responses and use this evidence to make inferences about emotional responses. Neuroscientists can measure brain activity by using techniques such as electroencephalography (EEG) to capture electrical impulses on the scalp. They can measure blood flow through the brain using brain-imaging techniques. They can zap areas of the brain temporarily to disable them using a technique called transcranial magnetic stimulation.
Brain imaging is a particularly popular technique. It requires complex machinery, but it gives neuroscientists more control over which areas of the brain they can study. Brain scanning also enables neuroscientists to work with a broader range of healthy people, thus addressing the ethical concerns around experimenting with vulnerable patients. Brain scanning techniques are used to capture how blood flows into localised regions of the brain. When we respond to mental stimuli, specific brain regions are activated, and blood flow in these areas increases relative to blood flows through passive brain regions. This produces changes in magnetic susceptibility, which can be mapped using a magnetic resonance scanner. This scanning technique is known either as Blood Oxygen Level Dependent (BOLD) brain imaging or functional magnetic resonance imaging (fMRI). Brain scanning is far from infallible and is often prohibitively expensive.13 It does, however, allow neuroscientists to focus on what is happening in specific brain areas. With fMRI, neuroscientists can study brain function in a targeted and controlled way, including as people participate in specific activities and tasks. By identifying specific ‘regions of interest’ in the brain, and by separating out the areas usually implicated in emotional, instinctive decision-making from those associated with higher-level cognitive reasoning, fMRI studies can capture whether herding is driven more by our emotional System 1 thinking or our deliberative System 2 thinking, or some combination of the two.
Copycats and contrarians in the brain scanner
In applying some of these techniques to discover more about the thought processes driving copycats and contrarians, we can learn some lessons from other brain imaging studies. One pioneering fMRI study of System 1 versus System 2 thinking was conducted by the neuroscientists Wim De Neys, Oshin Vartanian and Vinod Goel.14 They used imaging techniques to investigate some judgement tasks that Daniel Kahneman and his old friend and colleague Amos Tversky had devised in their early work, specifically to see if these connected with Kahneman’s more recent ideas about dual thinking systems. De Neys and his colleagues used a version of Kahneman and Tversky’s Engineer-Lawyer problem.15 Participants in this experiment were told that a sample of 1,000 people includes 5 engineers and 995 lawyers. The probability that a given person is an engineer is 5 in 1,000; the probability that they are a lawyer is 995 in 1,000. The participants were then asked to estimate the chances that a person drawn from this sample was an engineer rather than a lawyer. Alongside the statistical information, the participants were also given a narrative account – to give them a mental image of the person they were guessing about. They were told that they were estimating the chances that a forty-five-year-old man called Jack was an engineer rather than a lawyer. Jack, the participants were informed, is married and conservative, and enjoys carpentry and mathematical puzzles. Although this information is irrelevant to the statistical likelihood of Jack being an engineer or a lawyer, at least from a ‘frequentist’ probability perspective (i.e. probabilities calculated on the basis of how often an event occurs across a large number of trials), most people were excessively distracted by it. After being told Jack’s story, they overestimated the chances that Jack is an engineer.

De Neys and his colleagues wanted to capture how people were thinking about this Engineer-Lawyer task. They brought thirteen people into their lab and asked them to try the task while in the fMRI scanner. The experiment produced some fascinating results. Areas of the brain usually thought to be associated with System 2 analytical thinking (the areas engaged when people solve a statistical problem) did not dominate. The fMRI evidence picked up stronger activations in the emotional areas, suggesting that the participants were being distracted by the narrative information. They were using more subjective and emotional styles of thinking to resolve what was meant to be a mathematical problem.
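To see why the base rates should have dominated, here is a small illustrative calculation of my own, not part of the study, assuming for the sake of argument that Jack’s description is ten times as likely to fit an engineer as a lawyer:

```python
# Illustrative base-rate arithmetic (mine, not the study's): suppose, generously,
# that Jack's description is ten times as likely to fit an engineer as a lawyer.
prior_engineer = 5 / 1000
prior_lawyer = 995 / 1000
likelihood_ratio = 10  # assumed: the description fits engineers 10x better

posterior_engineer = (prior_engineer * likelihood_ratio) / (
    prior_engineer * likelihood_ratio + prior_lawyer * 1
)
print(round(posterior_engineer, 3))  # about 0.048 -- still below 5 per cent
```

Even with that generous assumption, Bayes’ rule keeps the chances that Jack is an engineer below 5 per cent, far lower than the inflated estimates most participants gave after hearing his story.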
Neuroscientific evidence is growing about the various ways in which our social instincts underlie a wide range of real-world decision-making.16 Can we apply tools and insights similar to those used by De Neys and his colleagues to unravel self-interested herding and collective herding? Helping to answer these questions, neuroscientists and experimental psychologists are joining with economists to advance the new subdiscipline of neuroeconomics.17 The types of neuroeconomic collaborations vary. Sometimes the economists provide the theory, models and analytical structure around which the neuroscientists build their own models. Sometimes the neuroscientists provide the economists with new tools to test innovative theoretical hypotheses, and this is where economics and neuroscience combine in the study of herding.
I first came across neuroeconomics at the American Economic Association annual meeting in Philadelphia in 2005. Before then, in my thinking about what happens when we are copying others I’d struggled, as many economists do, with the problem that the brain is a black box. After attending the session on neuroeconomics it occurred to me that perhaps neuroeconomics could fill a gap in economists’ understanding of herding and anti-herding. After discussions with distinguished neuroscientist Wolfram Schultz and his team, based at the Department of Physiology, Development and Neuroscience at the University of Cambridge, we decided to combine neuroscientific techniques with economic insights to investigate herding.
Schultz was one of the pioneers in what was then the very new science of neuroeconomics. He is interested in how we learn, and particularly in how our reward pathways enable us to learn from the errors we make. His seminal contributions include the theory of reward prediction error.18 This hypothesis links to reinforcement learning: the general idea that we and other animals learn to repeat actions when we associate those actions with reward. Animals learn because it is physiologically rewarding. Reward prediction error develops this idea but with an additional subtlety: animals learn behaviours not because of the direct stimulation they get from a reward, but because of the errors they make in their prediction of a reward. These prediction errors are picked up by neurons emitting the dopamine neurotransmitter (a chemical messenger) into reward-processing regions of the brain. For example, when a monkey randomly presses a lever and is surprised by the reward of a piece of fruit, the dopamine neurons emit a positive signal, encouraging the monkey to repeat the action. As she does so, she is again rewarded but is less surprised by the reward. Her reward prediction errors get smaller and smaller as she learns to predict more accurately the likelihood of a reward. When the prediction errors reach zero, the monkey’s prediction of a reward and the actual rewards she receives have matched up, and learning stops.19
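The logic can be captured in a few lines. The sketch below uses a simple Rescorla-Wagner-style update, my own illustration rather than Schultz’s model, with an arbitrary learning rate; it simply shows the prediction errors shrinking towards zero as the reward becomes fully predicted:

```python
# A simple Rescorla-Wagner-style sketch (mine, not Schultz's own model) of
# reward prediction error learning. The value estimate is nudged by the
# prediction error on each trial, so the errors shrink towards zero as the
# reward becomes fully predicted. Numbers are illustrative.
learning_rate = 0.3
value_estimate = 0.0   # the monkey's predicted reward from pressing the lever
actual_reward = 1.0    # a piece of fruit arrives every time in this toy example

for trial in range(1, 9):
    prediction_error = actual_reward - value_estimate    # positive = pleasant surprise
    value_estimate += learning_rate * prediction_error   # dopamine-like teaching signal
    print(f"trial {trial}: prediction error = {prediction_error:.2f}")
# Once the reward is fully predicted, the prediction error is zero and learning stops.
```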
How can we connect this with our decisions to copy and herd with others? As we have seen, herding can be explained as the product of social learning, and social learning is driven by reward learning too. Together with Christopher Burke and Philippe Tobler (both now at the University of Zurich), Wolfram Schultz and I brought together economic and neuroscientific tools and insights in a neuroeconomic study of herding and social learning.20 When people follow others, their neural reward system is activated, but which neural areas specifically – those more usually associated with logical thinking or those more usually associated with instinctive emotional responses?
For our first experiment, we recruited a group of people comprising students and other adults from the local community around Cambridge. We asked them to decide whether or not to buy a financial share. If they made the right choice, they could earn some money. They were given some information to help them decide. In the first stage of the experiment, we gave the participants some private information in the form of a share price chart. In the second stage, we showed them the decisions of a herd – depicted in an image of four other people’s faces – with a tick or a cross to denote whether the person had decided to buy or not.21 To provide a control for this social condition, we also showed our participants a photo of the faces of four chimps. Why? Generally, scientific experiments are controlled. To get an objective measure of how the experimental conditions are changing behaviour, a controlled experiment needs a baseline – and the control condition serves this purpose. For our fMRI experiments, our control condition needed to be similar to the human herd image, because otherwise any differences in the brain activations we measured when we introduced our experimental participants to the social information about the herd’s choices might have been driven by differences in the visual stimuli, not by social influences (pictures of faces are more stimulating than no picture at all). The chimp faces were as close as we could get to human faces – but we did have to assume that our participants were unlikely to let a herd of chimps dictate their financial choices. Then, using fMRI, we scanned the participants’ brain activity as they were assessing the information and making their choices. We were curious to know what happens in people’s brains when they balance private and social information: what neural mechanisms would be activated, not just for copycats herding but also for contrarians anti-herding?