Copycats and Contrarians

by Michelle Baddeley


  Experts may also tend towards myopic consensus. Agreeing with a crowd may help to build a research career in the short term, but it is less likely to yield the career rewards, in terms of original research and insights, that accrue in the long run. Short-term impact may be particularly pressing for young experts at the start of their careers. In building their own reputations, junior members of a research lab often imitate and follow their mentors – partly reflecting social learning, but also because of social pressures. A young researcher who has just received their PhD is more likely to get a tenure-track job if they flatter their seniors and group leaders by following in their footsteps. This is not necessarily undesirable: juniors may have much to learn from their seniors. Over a whole career, though, there is more to be gained from being genuinely original – but the associated risks are high in the short term.

  In September 2011, the social psychologist Diederik Stapel was suspended by his employer, Tilburg University, for inventing data on the sociology of urban environments. He manufactured evidence that he claimed demonstrated the link between disordered, littered environments and discriminatory behaviour and deprivation. He sustained his academic fraud for some years because those who suspected he had falsified the data felt unable to challenge him. Stapel reportedly responded aggressively when others, especially junior researchers, questioned his data and findings.27 This demonstrates the pressures that most of us feel to agree with a group. A junior researcher who disagrees with their seniors, and the whistleblowers who publicly reveal their concerns about falsified or misleading data and analysis, stand to lose all the personal capital they have invested in their careers and networks. They may be ostracised by their bosses and find that their careers stall without the support of a powerful mentor.

  However, an expert cannot build a good reputation, at least not in the long run, if they are manufacturing evidence. What motivates people like Diederik Stapel to take such extreme risks with their reputation and their careers? For most experts, there are rewards to contrarianism. As we saw in chapter 5, maverick contrarians are more inclined to take extreme risks than conformist copycats. Added to this, the research community values originality particularly highly. A researcher who just agrees with others may start to incur costs in terms of slowed career progression due to their safe but unoriginal research strategy. Experts ambitious about building their reputations may have an incentive to invent startling findings if these can give them a reputation for original thinking, and their junior colleagues may be scared to dissent. For the copycat experts, their susceptibility to group influence can have profoundly negative consequences, especially if group leaders can exploit their juniors’ obedience to authority to manipulate the path of research.

  Experts in equilibrium

  How can we pull all these elements together into a model that captures the social influences on experts, whether they be conformist or contrarian? It can partly be understood as a process of balancing benefits and costs, broadly defined. Most experts, consciously or unconsciously, will focus on the private value of their personal beliefs and opinions. They value truth, but they are also subject to other intrinsic and extrinsic motivations. Researchers may have many friendly chats in the pub with their peers if they are generally in agreement with them. Their senior colleagues may invite them to participate on a research team if they are impressed by their aptitude. There are psychological benefits from conforming to others’ beliefs; being contrarian is a far more isolating strategy. Also, experts may gain strategic advantages from joining a group. This links into what economists call payoff externalities. When an expert contributes to a growing consensus in a particular direction, this accelerates the movement of others in that direction. The rewards for those who join the herd first increase as more experts join, and then decrease once the consensus becomes crowded. When an original, innovative view is taking off and an expert joins a small, elite group who hold it, the value of joining that group increases. Something like a knowledge bubble is generated. As the group grows larger, other rewards kick in. Reputation grows, and conformity with others is satisfying. Strategically, joining a new consensus has career benefits. The consensus grows as experts replicate other researchers’ novel findings, though to individuals the value of replication can be small – a particular problem for academic research. But it is not a linear process. Once the consensus has taken hold and no longer seems novel and original, the returns for joining the consensus start to decrease. Publishing ideas around an established consensus becomes difficult because the findings are no longer original. As the consensus-forming group swells, each new expert joining this group gains less and less. In economists’ language, the marginal returns from joining the consensus group will fall. Eventually, these marginal returns may reduce to zero, for example if supporting a consensus view is deemed unoriginal and judged to contribute little to the development of new research ideas. There may be stagnation, lots of reinventing the wheel and, at best, insignificant and marginal accretions of knowledge. Then, an ambitious researcher will have nothing to gain from joining the consensus.

  The contrarian researcher’s rewards come from the opposite direction: as more experts join the consensus, the more of a pariah the contrarian will seem. The contrarian expert will be the loser from the knowledge bubbles that develop as herds of experts follow and develop a new consensus opinion. The contrarian’s reputation will falter and their career will stagnate. Eventually, though, the balance may shift. As the consensus view starts to seem unoriginal, the rewards for holding the contrarian view may still fall, but at a decreasing rate. They may even start to rise again as everyone gets fed up with the consensus view, more information comes along and a paradigm shift turns the contrarian into a trendsetter. There is a stable point, an equilibrium, when the gains from consensus and contrarian viewpoints are balanced.
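
  This balancing act can be sketched numerically. In the illustrative simulation below, the payoff curves and parameters are invented for the purpose (nothing here comes from the book's own model): conformist returns rise and then fall as the consensus group grows, contrarian returns fall and then recover, and the equilibrium is the group size at which the two strategies' gains balance.

```python
import numpy as np

# A sketch of the consensus/contrarian balance described above. The payoff
# curves are invented: joining a consensus pays more as a novel view takes
# off, with marginal returns that eventually fall away; the contrarian
# payoff falls as the herd grows, then recovers as the consensus goes stale.

n = np.arange(1, 101)  # number of experts who have joined the consensus

conformist_payoff = 10 * np.log(n) - 0.2 * n       # rises, peaks, declines
contrarian_payoff = 30 - 8 * np.log(n) + 0.15 * n  # falls, bottoms out, recovers

# The equilibrium is the point where the gains from the two strategies balance.
equilibrium = n[np.argmin(np.abs(conformist_payoff - contrarian_payoff))]
print(f"payoffs balance at roughly n = {equilibrium} consensus experts")
```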

  Expert bias

  The influences outlined above are largely objective and conscious. More intractable problems emerge, especially in uncertain situations, when experts unconsciously use herding heuristics and other rules of thumb to guide their interpretation of events. Their beliefs coincide with the prior opinions of others, and their private judgements are lost. This is not about individuals pursuing their own self-interest in career or other terms. Instead, unconscious biases are leading experts down the wrong path. Whilst social influences are less benign when experts consciously manipulate them to protect and build their reputations, at least these conscious transgressions can be controlled, for instance via cleverly designed incentive structures, or via sanctions and punishments. If experts’ judgements are distorted without them even realising it, then that is a harder problem to solve.

  As we have seen in previous chapters, in understanding the role played by psychological factors in our decision-making, behavioural scientists are exploring how and why people use quick decision-making rules – heuristics and rules of thumb – when they are faced with complex information. As Daniel Kahneman and Amos Tversky observed, heuristics and rules of thumb lead to bias, including group biases such as groupthink, which emerge when an individual’s beliefs coincide with prior opinions of others around them for reasons that are not objective. This creates herding and path dependency – the future is determined by the past, rather than by a comprehensive assessment of current, up-to-date information or what is new and different. Sociopsychological influences compound these problems – for example, many of us feel more comfortable conforming. A bias towards herding may also reflect work pressures. For example, one study found that around 78 per cent of Spanish doctors treating patients with multiple sclerosis were likely to follow the herd in recommending treatments. The researchers identified mental fatigue in the context of cognitively demanding decision-making as a key factor.28 Related to herding bias is the problem of confirmation bias. Behavioural economists and psychologists have shown that people tend to interpret evidence to support their own world view. For example, if a person is a climate-change denier, then they will tend to interpret evidence about the slowdown in global warming as supporting their prior beliefs – that is, as a sign that climate change is a myth. Confirmation bias will affect people’s opinions of experts and expert evidence, and so group beliefs and herd opinions will persist.

  Researchers have explored the extent to which this sort of phenomenon operates in scientific research too. One example is the Sokal hoax. In 1996, the physics professor Alan Sokal decided to test the refereeing process for academic journals. He submitted a nonsensical research paper – ‘Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity’ – to the research journal Social Text, structuring his fabricated nonsense around prevailing opinions in the social sciences. His contrived paper was accepted by the journal and its referees. According to Sokal, this was because it fitted well with the journal reviewers’ and editors’ preconceptions. It confirmed their world view and so they were willing to accept it.29 Experts do of course make genuine mistakes. But they may check for errors more carefully if their initial findings conflict with their prior opinions than if they do not – giving an additional foothold for confirmation bias. Shortcomings in research methodology can be downplayed. When researchers are prone to unconscious bias, they may genuinely believe that their evidence has a strong objective basis when it does not.

  Another behavioural bias relevant to herding and social influences reflects ‘anchoring and adjustment’ heuristics which, as explained in chapter 3, were identified by Kahneman and Tversky.30 Behavioural economists and economic psychologists have shown that many of our decisions are made around reference points: we anchor and adjust our decisions relative to the status quo. Social influences are important in this because many of our reference points are socially determined – we are naturally biased towards popular existing opinions. Another insight from the literature on heuristics and bias that may have relevance is a problem we explored in earlier chapters, that of loss aversion, as also identified by Kahneman, Tversky and others. The psychic and practical losses to reputation from disagreeing with the consensus are potentially disproportionately large relative to the gains from conforming, and, in a world in which we are more prone to worry about losses than gains, we are more likely to see experts avoiding the reputational risks they would be taking by dissenting.
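
  A toy calculation makes the loss-aversion point concrete. The sketch below uses the prospect-theory value function with Tversky and Kahneman's 1992 parameter estimates (roughly alpha = 0.88 and lambda = 2.25); the reputational payoffs attached to conforming and dissenting are invented for illustration.

```python
# A toy calculation with the prospect-theory value function. The parameters
# alpha = 0.88 and lambda = 2.25 are Tversky and Kahneman's 1992 estimates;
# the career payoffs below are invented for illustration.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain or loss: losses loom larger than gains."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# Dissenting: an even chance of a large reputational gain or an equal loss.
dissent = 0.5 * value(10) + 0.5 * value(-10)
# Conforming: a small but safe reputational gain.
conform = value(2)

print(f"subjective value of dissenting: {dissent:.2f}")  # negative overall
print(f"subjective value of conforming: {conform:.2f}")  # small but positive
```

  Even though the dissenting gamble breaks even in expected terms, its subjective value is negative once losses are weighted more heavily than gains, so the safe, conformist option wins.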

  The personalities of scientists also determine their tendencies to be copycats or contrarians. Strong personalities may be more likely to hold strong convictions – but are such people less likely to herd because of those convictions or because of their strong personalities? How do we unravel the two in our search for truth? In the experimental sciences, we often imagine that careful design of clean experiments and/or the robust application of statistical principles and the scientific method can limit the chances of blind groupthink. To an extent they can, if a researcher has insight and self-awareness. But statistics can be manipulated to persuade, and confirmation bias is hard to overcome, even amongst the most insightful researchers.

  Experts’ herding externalities

  These influences on individual researchers have wider impacts beyond the individual expert. All of us want to do well for ourselves, even if we moderate this with philanthropic inclinations. The problem is that experts’ judgements, by their very nature, have implications for other people. These are a type of ‘externality’ – the term that economists use to describe the costs or benefits an individual imposes on others around them, when these others have no control over the individual’s choice or decision. Specifically, groupthink and herding may help a lone expert but generate negative externalities for scientific communities and society at large. As copycat experts follow each other, they are effectively discarding their private knowledge, and society suffers as a consequence.

  In chapter 1, we made the point that the negative consequences from herding are not just about whether the herd is going in the right or wrong direction, but about the fact that private information and judgements are lost. We can illustrate this point more clearly in the context of experts. Herding externalities can be a serious problem for scientific research if it means that experts are less likely to discover something new. In the context of experts’ opinions, missing new insights can reflect the excessive weight assigned to a theory that is popular. An individual researcher may find evidence that contradicts the consensus and, for a range of reasons, may discard it. A financial analyst assessing the prospects of an investment in the subprime mortgage market, for example, may have a hunch that these assets are toxic, but they see others around them continuing to invest in them. They weight this evidence more strongly than their own private judgement about the risks of investing in these assets. A speculative housing asset bubble grows, with devastating consequences for people and economies across the world.
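
  This mechanism, in which rational experts discard their private signals, can be captured in a few lines of simulation. The sketch below follows the spirit of the classic information-cascade models (the counting rule associated with Bikhchandani, Hirshleifer and Welch, not anything specific to this book); the signal accuracy and the number of analysts are arbitrary choices.

```python
import random

# A minimal information-cascade sketch in the spirit of the classic
# Bikhchandani-Hirshleifer-Welch counting rule (not the author's own model).
# Each analyst gets a noisy private signal about an asset, sees every
# predecessor's choice, and follows the majority unless the tally is tied,
# so once early choices tilt one way, private signals are discarded.

random.seed(42)
TRUE_STATE = "toxic"
P_CORRECT = 0.7  # each private signal matches the true state 70% of the time

def private_signal():
    return TRUE_STATE if random.random() < P_CORRECT else "safe"

def choose(prior_choices, signal):
    votes_toxic = prior_choices.count("toxic") + (signal == "toxic")
    votes_safe = prior_choices.count("safe") + (signal == "safe")
    if votes_toxic == votes_safe:
        return signal  # a tie: fall back on one's own private signal
    return "toxic" if votes_toxic > votes_safe else "safe"

choices = []
for _ in range(20):
    choices.append(choose(choices, private_signal()))

print(choices)  # after an early imbalance, every later analyst just herds
```

  Note that once the tally tips two choices one way, every subsequent analyst's decision is uninformative: the herd can lock in on the wrong answer even though most private signals point the other way.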

  Experts in the crowd

  Developing this theme leads us into some ideas from economists about the disconnect between what is best for the individual and what is best for the group. Knowledge and evidence can and should share many of the characteristics of what economists call public goods. In their purest form, public goods are fully accessible to everyone. Individuals are not excluded from consuming them. There are no barriers to entry. One person’s consumption of them does not diminish the potential for others to consume them. The stock of public goods in their purest form does not deplete, and the marginal cost of one more person using them is zero. From an individualistic perspective, the problem with public goods is that there is no market incentive to provide them, given that it is difficult to charge people for something you cannot easily stop them from consuming. And if you cannot charge people, you cannot make a profit. So, who pays for public goods?

  From a societal perspective, amassing knowledge is a collective effort, and institutions other than markets have evolved to support this, though market institutions have also evolved to make a profit from it. Most controversially, in academia, profits are made by effectively privatising knowledge via scientific journals’ expensive paywalls and/or financing arrangements in which the academic researchers themselves are charged for publishing their own research. Specifically in the context of copycat experts, the collective nature of research and knowledge accumulation makes it hard to separate consensual beliefs that are well grounded from consensual beliefs that lack proper foundation. If accumulating knowledge is a collective effort by large numbers of experts, then no single expert can be held responsible for errors.

  As we noted above, reputation is affected as the balance favouring copycat experts shifts in favour of contrarian experts. When copycats’ reputations are more robust, consensual beliefs will generate over-consensus and group bias. Empirical philosopher Michael Weisberg and his colleagues have explored the idea that consensual beliefs have negative impacts at an aggregate scale. Using computational modelling methods, Weisberg and his team artificially generated two types of population, one dominated by copycat ‘followers’ and the other by contrarian ‘mavericks’. They created visual maps to capture how much of a knowledge landscape was explored by either group. Their simulations showed that substantially more ground was explored by mavericks than by followers. Followers explore less because they are sticking with the crowd. Mavericks explore more because they venture into territories where others haven’t yet been. The implication for experts is that if an expert community is dominated by large groups of followers, then the knowledge landscape is not fully explored. Experts who are followers learn much less when they are all copying each other. In epistemic terms, essentially, they are just retreading ground already well trodden by others. With a good proportion of mavericks in a population of experts, the outcome is reversed. The knowledge landscape is more likely to be fully explored. Experts are more likely to discover more when they focus less on what their predecessors have explored. So, Weisberg advocates incentives for risk-taking in research – to overcome the welfare loss from too many copycats just imitating each other.31 Weisberg’s study shows that contrarians are essential. We need contrarians to shepherd herds of experts away from a path dominated by social influences, towards fresh perspectives and new interpretations of data and evidence. There are no easy answers, though, because social influences can be valuable too – for example, replicating results is an essential but neglected aspect of scientific research. If a hypothesis has genuinely been verified across a range of different studies then that may be because it is a more plausible and probable hypothesis than the alternatives.
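
  A stripped-down version of this kind of simulation is easy to sketch. The grid, the movement rules and the parameters below are simplifications invented here, not Weisberg's actual model, but they reproduce the qualitative finding: agents who seek out well-trodden cells cover far less of the knowledge landscape than agents who avoid them.

```python
import random

# A stripped-down epistemic-landscape sketch in the spirit of Weisberg's
# simulations; the grid, movement rules and parameters are simplifications
# invented here, not the original model. Followers step onto cells others
# have already visited; mavericks step onto untouched cells.

random.seed(0)
SIZE, AGENTS, STEPS = 20, 10, 200

def explore(prefer_visited):
    visited = set()
    agents = [(random.randrange(SIZE), random.randrange(SIZE))
              for _ in range(AGENTS)]
    for _ in range(STEPS):
        for i, (x, y) in enumerate(agents):
            neighbours = [((x + dx) % SIZE, (y + dy) % SIZE)
                          for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                          if (dx, dy) != (0, 0)]
            preferred = [c for c in neighbours
                         if (c in visited) == prefer_visited]
            agents[i] = random.choice(preferred or neighbours)
            visited.add(agents[i])
    return len(visited) / SIZE ** 2

print(f"followers explored {explore(True):.0%} of the landscape")
print(f"mavericks explored {explore(False):.0%} of the landscape")
```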

  As we have seen, economists’ models of herding show why we might logically follow others if we believe they have better information than we do. By extension, supporting consensus views does not necessarily mean that those views are wrong. It may be logical to ignore what little we know already if we can do better for ourselves by following others. This is true for experts too. The problem is that, at a macro level, it leads to path dependency. This insight can be simplified to the observation that if more experts support a theory then, all things being equal, perhaps it is more likely to be true. That does not mean that it is a definite truth. Academic research is not generally about absolute proof. Imagine two competing hypotheses, both initially novel and with no ‘tribal’ support. If one comes to be widely supported by many experts, it is reasonable to believe that it is the more likely of the two to be true. The chances of a large number of experts supporting a false hypothesis seem smaller than the chances of a large number supporting a true one, especially when experts have good, objective reasons for agreeing with each other.
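
  The reasoning can be made explicit with Bayes' rule. In the back-of-the-envelope sketch below, the numbers are invented: each expert is assumed to endorse a true hypothesis with probability 0.6 and a false one with probability 0.4, independently of every other expert. It is precisely this independence that herding undermines.

```python
# A back-of-the-envelope Bayesian version of the argument, with invented
# numbers: each expert endorses a true hypothesis with probability 0.6 and
# a false one with probability 0.4, independently of the others. Herding
# breaks exactly this independence, which is why a large consensus of
# copycats carries less evidential weight than it appears to.

def posterior_true(n, p_if_true=0.6, p_if_false=0.4, prior=0.5):
    """P(hypothesis is true | n independent expert endorsements)."""
    like_true = p_if_true ** n * prior
    like_false = p_if_false ** n * (1 - prior)
    return like_true / (like_true + like_false)

for n in (1, 5, 10, 20):
    print(f"{n:2d} independent endorsements -> P(true) = {posterior_true(n):.3f}")
```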

 
