
Copycats and Contrarians


by Michelle Baddeley


  We live in an age when many are sceptical about experts and their opinions. Experts are often vulnerable to fierce, critical attack – especially as modern news is so dominated by unreliable tabloid journalism and social media. But our disenchantment with experts is not new. Nor is it necessarily a bad thing. Medical quackery,14 once favouring lobotomies and cold-water cures, now encompasses unorthodox medical, surgical and nutritional fads. With the right celebrity endorsement, the consequence can be iatroepidemics – epidemics of treatment-caused diseases – in which an element of faith is strong. Social influences take hold when they lead people to adopt a treatment just because they trust the others advocating it.15 Almost by definition, if we are amateurs, we should sometimes have faith in experts.

  There is plenty of evidence that our attitudes and responses towards expert opinions are at best confused, and this confusion is often magnified by sloppy standards of journalistic reporting of science ‘news’. We want opposing things from our experts: we want them to be original and innovative, but at the same time we are reassured by high levels of expert agreement. Forecasters of everything from the economy to the weather are often vilified for deviating from a common judgement – and they are judged only with the benefit of hindsight.

  Michael Gove MP is famous in some circles and infamous in others for his opinion that ‘People have had enough of experts’. He made the remark in a Sky News broadcast in the run-up to the UK’s 2016 EU referendum, during which he dismissed the opinions expressed by most economists, including experts from the very reputable Office for National Statistics and the Institute for Fiscal Studies, that leaving the EU would have serious negative economic consequences for the UK. Gove’s quote was perhaps unfairly truncated from his original statement: ‘The people of this country have had enough of experts from organisations with acronyms, saying that they know what is best, and getting it consistently wrong’.16 Nonetheless, his anti-expert views were clear enough, in this interview and in his subsequent broadcasting, social media and print media appearances. In the build-up to the June vote, Michael Deacon, the parliamentary sketch-writer for the Daily Telegraph, published a clever satirical piece that put the specious nature of Gove’s arguments in sharp focus. Do we really need doctors? Or pilots? Or maths teachers?

  The mathematical establishment have done very nicely, thank you, out of the notion that 2 + 2 = 4. Dare to suggest that 2 + 2 = 5, and you’ll be instantly shouted down. The level of groupthink in the arithmetical community is really quite disturbing. The ordinary pupils of Britain, quite frankly, are tired of this kind of mathematical correctness.17

  Gove’s comment is easy to satirise, but that does not mean that he was wholly wrong. Scepticism about experts has been growing, not helped by the contradictory précis of experts’ health and lifestyle advice we read all the time in the popular press.18

  Experts do not help themselves, though, with their poor communication styles. The public does not realise, and perhaps is not encouraged by modern media to realise, that experts are not astrologers. Experts do not claim perfect foresight. They form their judgements in an uncertain world in which the future is unknown and sometimes unknowable. Sometimes we forget that most of the time we want an expert’s opinion on something precisely because no-one knows the truth. The evidence is uncertain and unclear. The essence of very uncertain phenomena is that good data are scarce and difficult to interpret. It can be hard to predict future trends in complex phenomena – be they storms, stock market fluctuations, oil reserves or the spread of epidemics. We turn to experts for an answer, forgetting that experts are fallible humans and sometimes have no reliable way of identifying the truth or forecasting future events. In an uncertain world, experts themselves are unsure, and should admit that they are unsure.

  Given all this uncertainty, experts present us with the chances of one event or another. Their predictions are not much more than informed probabilistic guesses. Scientific experts properly acknowledge the chance element in their predictions – they would not get published in any reputable journals if they did not. But these caveats are often lost in the translation to popular media, especially social media, where experts’ research and judgements are condensed into tweets of 280 characters or fewer.

  Another bias is that we don’t always give expert opinions the extra weighting they deserve given that experts are people with a deep, specialist knowledge of their subject. Even the best broadcasting organisations, including the BBC, have been criticised for giving equal time to amateur and expert opinions, on the implicit assumption that both are equally well informed. Is the implication that years of education and research count for nothing because everyone’s opinions should be weighted equally? This trend was particularly controversial in debates between scientists and climate-change deniers. In 2014, the BBC Trust undertook a review that reiterated that not all opinions are equal: scientific evidence and experts’ opinions should be weighted more strongly than those of amateurs not grounded in a comprehensive knowledge of a subject.19 Modern technologies may be to blame for our disenchantment with experts because they enable quick dissemination of unsubstantiated opinion as if it were fact. The consequence is that when scientific research falls into disrepute, funding trickles away too.20 So, we need to understand better where the pitfalls lie. When experts present information to us in an authoritative way, using esoteric and technical language, we need to remember that they too are susceptible to herding and social influences. These influences might affect experts consciously or unconsciously, and sometimes malignly.

  If you were to ask an expert what their goal is, they would (hopefully) answer that it is to find some objective truth via a balanced assessment of existing evidence. An academic expert would add that they aim to develop the existing research and uncover facts, following a robust and balanced scientific method. All of this presumes that experts are essentially machine-like information processors. We expect them to absorb some data, process it and churn out the best objective judgement they can. If their judgement is wrong, then we conclude that they must be mad, bad or stupid – or maybe some combination of the three. We forget that experts are social animals, just like the rest of us.

  Sociable experts in an uncertain world

  Social influences have more traction in an uncertain world. How do we unravel all these influences in assessing experts, given that often we have no absolute, objective benchmark of truth against which we can judge the quality of an expert’s opinion? As we have seen in previous chapters, people are more likely to follow a crowd if their own information is muddy. Subjective social influences have more traction when the objective truth is very hard to find. In an uncertain world, experts do not deliver facts, they interpret data. For an economist to predict what might happen in the next year to house prices, oil prices or government deficits is really an enormous task. In these situations, the expert opinion is often just that – an opinion, not a statement of fact. Housing markets, for instance, are driven by so many unpredictable and complex factors that it is not surprising that economic forecasts have such a bad reputation for unreliability. Admitting they are unsure is sometimes the expert’s most honest answer, and their best course of action is to collect more information so that the uncertainty diminishes. The problem is that woolly answers about what an expert does not know are not newsworthy. People do not want to hear that even an expert cannot really be that sure.21

  What has this to do with copycats and contrarians? We have seen that when information is fuzzy and facts are unclear, social influences can be strongest. Herding takes a powerful hold over opinion, judgement and belief. It magnifies the difficulties inherent in interpreting complex data and evidence. In general, the evolution of knowledge is a social process. Learning about others’ research happens in social contexts – at conferences, symposia and seminars. Research is mostly collaborative, and good research builds on what has gone before. As Isaac Newton observed, borrowing a metaphor attributed to the French philosopher Bernard of Chartres, ‘If I have seen further, it is by standing on the shoulders of giants.’ Given a leg-up by the pioneering thinkers who came before us, we can see further and understand better. And, given certain assumptions, a collective judgement may be more accurate than that of any one individual.22 As captured by Condorcet’s wisdom of crowds postulate, introduced in chapter 2, if many experts pool their beliefs, then the collective knowledge outcome may be more powerful than one expert’s opinion alone – but only if the individuals’ beliefs start off as independent and uncorrelated. Contrary evidence may be richer and more informative than evidence that just confirms what we already know. Contrarians play an important role in discovering new, surprising knowledge and upsetting the herd consensus. Beryl Lieff Benderly has observed that new ideas are not always welcomed within the scientific culture.23 So the novel ideas on which progress depends do not always find their way easily into the light.
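
  To see why independence matters so much, consider a minimal simulation sketch (the code below is illustrative, not from the book; the experts’ accuracy, the copying mechanism and all parameters are assumptions). Twenty-five experts who are each right 60 per cent of the time deliver a very accurate majority verdict when they vote independently, but the advantage evaporates as they increasingly echo a shared signal:

```python
import numpy as np

rng = np.random.default_rng(0)

def majority_accuracy(n_experts, p_correct, rho, trials=10_000):
    """Share of trials in which a simple majority of experts is right."""
    # One shared signal per trial, right with probability p_correct.
    shared = rng.random((trials, 1)) < p_correct
    # Private signals: each expert is independently right with p_correct.
    private = rng.random((trials, n_experts)) < p_correct
    # With probability rho an expert echoes the shared signal instead of
    # using their own private signal -- this is what correlates the votes.
    echo = rng.random((trials, n_experts)) < rho
    votes = np.where(echo, shared, private)
    return (votes.mean(axis=1) > 0.5).mean()

for rho in (0.0, 0.5, 0.9):
    acc = majority_accuracy(n_experts=25, p_correct=0.6, rho=rho)
    print(f"copying rate {rho:.1f}: majority right {acc:.0%} of the time")
```

  With fully independent votes the majority is right roughly 85 per cent of the time; even moderate copying drags the crowd back to barely better than a single expert, which is exactly the caveat in Condorcet’s postulate.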

  We can look at the problems from two complementary perspectives, reflecting the underlying theme in this book: self-interested herding, which can be explained using economic theory, and collective herding, driven by sociopsychological influences. The former involves the expert’s promotion of their own individual self-interest, and can be explored through the incentives that motivate and mould the individual experts’ pronouncements. The latter is more complex, particularly in terms of quantifying its impacts.

  Self-interested experts

  In judging expert opinion, we are trying to disentangle not just whether experts are right or wrong, but also what motivates them to disagree with one another. Is it a genuine opinion, based on the interpretation of solid evidence? Are contrarian experts mendacious curmudgeons, primarily motivated by their quest for fame? Are conformist experts obsequiously courting established authorities as a way to promote their own careers and publication records? To unravel some of these complexities we can explore the various reasons that experts might have for promulgating a consensus or a contrarian opinion.

  Let’s start by looking at some of the incentives driving self-interested experts. In the context of copycat experts, the economic models of self-interested herding that we introduced in chapter 1 assume that individuals are genuinely trying to discover the truth about a situation. This assumption is not unrealistic. Most researchers and scientists are keen to promote the development of knowledge. But what if experts face incentives that create a dissonance between what is best for them as individuals and what is best for society at large? What motivates the selfish expert? Identifying the truth in expert opinions becomes even more complex when we allow that incentives do not always necessarily align with testing the robustness of other scientists’ results. When it comes to herding, problems emerge when experts follow a consensus opinion or judgement for reasons that have less to do with the objective pursuit of truth and more to do with their individual motivations, both intrinsic (reflecting personal satisfactions, as we shall see) and extrinsic (primarily the standard economic incentive of money).

  Information distortions

  Essentially, expertise is about information. A key problem with expertise is not only the absence of clear information but also the distortion of information. Information is often not evenly distributed. We don’t all know the same things, and often people, including experts, have an incentive to deceive. When we go to experts it is because we are ignorant in some way – we are vulnerable when experts exploit their specialist knowledge. Many economists have explored the issue of asymmetric information and the problems that emerge when experts exploit their expertise for personal gain. In a more general context, another economics Nobel laureate, George Akerlof, explored some of the consequences of this asymmetry in developing his principle of adverse selection, which explains how adverse outcomes and outputs come to dominate a market. Akerlof gave the example of the market in second-hand cars. Because most of us have very limited mechanical knowledge, a used car dealer may exploit our ignorance to sell us a ‘lemon’ (a dodgy used car). A problem emerges. Not all used car dealers sell lemons. Some sell ‘plums’ (high-quality used cars). But, because we as buyers cannot tell the difference between a good car and a bad car, we are unwilling to pay full plum prices; the most we will offer is a price reflecting the average quality of the cars on offer. This is great for those selling lemons, but not so great for those selling plums. From the plum-sellers’ perspective, there is not much reason to keep their cars in a market where they cannot get a fair price. They withdraw their plums from the market; average quality declines and prices fall with it, so more good cars are withdrawn, and quality and price fall again, and so on. This type of market selects adverse outcomes – that is, the market floods with lemons.24
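
  The unravelling logic can be made concrete with a small simulation (an illustrative sketch, not Akerlof’s own model; the uniform quality distribution and the sellers’ 80 per cent reserve prices are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Each seller knows their own car's quality; buyers only observe the
# average quality of the cars still on offer, and will pay at most that.
qualities = rng.uniform(0.0, 1.0, 1_000)   # true quality of each used car
reserves = 0.8 * qualities                 # sellers accept 80% of true quality
on_market = np.ones(qualities.size, dtype=bool)

for round_no in range(8):
    if not on_market.any():                # the market has collapsed entirely
        break
    price = qualities[on_market].mean()    # buyers offer the average quality
    print(f"round {round_no}: price {price:.2f}, cars on offer {on_market.sum()}")
    # Sellers whose reserve exceeds the going price withdraw: plums leave first.
    on_market &= reserves <= price
```

  Each round, the withdrawal of the best remaining cars drags the average quality – and with it the price buyers will offer – down further, until little is left but lemons.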

  What has this to do with expert opinion? Particularly when the popular press is involved, the quality of expert opinion may be driven down in a similar way. Even if an expert can come up with an accurate judgement, how can we tell the difference between the genuinely knowledgeable expert and the self-promoting expert who is mainly interested in getting their soundbites quoted to advance their own career prospects? It’s not easy, and much of the time it’s about more than being right or wrong. The truth is that we often cannot tell the difference between a reliable expert who takes great care to research a topic thoroughly and analyse evidence using rigorous methods, and an unreliable expert who might be sloppy in analysing and reporting their data. If the public cannot tell the difference, then each expert may be as likely as the other to get airtime and interviews. So, there are fewer incentives to be reliable, and the quality of expertise declines.

  Another type of asymmetric information that experts might opportunistically exploit is moral hazard. This problem captures the fact that the incentives of what social scientists call a principal (someone who wants to delegate a task to another) do not always align with the incentives of an agent (someone to whom the task is delegated). The idea is applied across a wide range of economic contexts, including labour markets, insurance markets and financial markets. It can be applied to experts too. Whereas adverse selection is about choices we make before signing a contract, moral hazard is a post-contractual problem: when a principal hires an agent to deliver goods or services, they cannot be sure that the agent is not shirking their responsibilities. Agents may have incentives to behave in opportunistic, amoral ways. In the context of experts, we indirectly hire experts and researchers as our agents in the search for knowledge. We, as the experts’ principals, cannot easily observe or judge the quality of our experts’ output. This creates problems if the experts’ incentives do not match ours – for example, if they can acquire personal benefits from promulgating eye-catching and newsworthy scientific results. As our hired experts have superior information and it is costly and difficult, if not impossible, for us to monitor their output effectively, we may be hoodwinked. Expert financial consultants illustrate the problem. Their job is to provide expert financial advice, but their personal incentives may instead encourage them to promote particular financial products. Their principals are the recipients of their advice – people who need help with selecting pensions, insurance plans, mortgages or loans – and they will not have the time or expertise to judge the advice they are being given. We may be encouraged to buy a financial product that is poor value or does not suit us, because we trust an expert even when we cannot judge their expertise.

  Moral hazard and adverse selection also apply to experts in other ways, reflecting the fact that experts can conceal the quality of their research findings. Whilst deliberate fraud is rare, there are a few examples of experts who have exploited others’ ignorance for their own advantage. One example is Andrew Wakefield, a medical doctor who was first lauded and then vilified for his expert opinions on the combined measles, mumps and rubella (MMR) vaccine. In an article in the esteemed medical journal The Lancet, Wakefield claimed that MMR vaccine uptake was implicated in the development of autism and gastrointestinal disease. His opinions hit the headlines and rapidly spread widely, with the consequence that many parents were scared to immunise their children with the MMR vaccine. The problem was not only that these individual children were now susceptible to serious infectious diseases, but also that whole communities became vulnerable to them. Herd immunity – when everyone in a population is protected from infectious disease because a large proportion are immune – was threatened. As with the instability in financial markets that we explored in the previous chapter, the actions of a lone individual spread quickly and widely through complex social systems, generating instability, which is exacerbated by herding. Other researchers tried to replicate Wakefield’s findings but they could not. His peers concluded that his paper about the consequences of MMR vaccines had been based on falsified evidence. The Lancet retracted his paper, and Wakefield was later struck off the UK’s medical register. Why would he have taken this risk with his career? The British journalist Brian Deer investigated the case for an article in The Sunday Times, later published in the British Medical Journal. Deer claimed that Wakefield had been motivated by his own interests – he had allegedly been hired by lawyers in a lawsuit against the MMR vaccine’s manufacturers.25 If this is true then financial incentives and Wakefield’s own self-interest had overwhelmed the moral principles that we expect our medical doctors to uphold, but this was only possible because of asymmetric information.
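
  The arithmetic behind that threat is stark. A textbook approximation (standard epidemiology, not a figure from the book) puts the herd-immunity threshold at 1 − 1/R0, where R0 is the number of people one infectious person would infect in a fully susceptible population; measles is commonly cited at an R0 of roughly 12 to 18:

```python
# Textbook herd-immunity threshold under homogeneous mixing: 1 - 1/R0.
# The measles R0 values are commonly cited ranges, not from the book.
for r0 in (12, 15, 18):
    print(f"R0 = {r0}: about {1 - 1 / r0:.0%} of the population must be immune")
```

  At measles levels of contagiousness, immunisation coverage has to stay above roughly 92–95 per cent, which is why even a modest vaccine scare can put whole communities at risk.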

  Reputation

  Economists Matthias Effinger and Mattias Polborn explore how herding and anti-herding both reflect an investment in reputation by experts. Experts realise that they can significantly build their reputations if they are the only ‘smart’ expert – the only one who gets it right. The herd may be right, or it may be wrong. The point is that, if the herd turns out to be wrong, then being the only smart expert who is right can reap large rewards in terms of money and/or reputation, whereas the benefits of being correct alongside others are smaller. Anti-herding is therefore more likely when there are large rewards from being the lone smart expert. Experts then have an interest in contradicting the opinions of other experts. However, reputation can also be susceptible to herding. As we’ve seen in previous chapters, in many circumstances our reputations survive better if we agree with the group. We are less likely to be contrarian because we face disproportionate losses if we are dissenters. We take fewer risks with our reputation if we conform, a point introduced in the context of self-interested herding in chapter 1. If an expert has invested years of their career in a specific theory or position, it is not surprising that they resist change or dissent.26 As illustrated by the Squier case described earlier in this chapter, there are large costs in terms of career and reputation for experts who disagree with a consensus.
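
  A stylised payoff calculation makes the incentive structure explicit (the numbers are invented for illustration and do not come from Effinger and Polborn’s model):

```python
# Invented payoffs: a lone contrarian who calls it right earns a large
# reputational reward (+10); a contrarian who is wrong bears a large
# loss (-5); a conformist gains or loses little (+1/-1) because credit
# and blame are shared with the herd.
def expected_payoff(p_herd_right, conform):
    if conform:
        return p_herd_right * 1.0 + (1 - p_herd_right) * (-1.0)
    return p_herd_right * (-5.0) + (1 - p_herd_right) * 10.0

for p in (0.5, 0.7, 0.9):
    print(f"P(herd right) = {p:.1f}: "
          f"conform {expected_payoff(p, True):+.1f}, "
          f"dissent {expected_payoff(p, False):+.1f}")
```

  Under these assumed payoffs, dissent is attractive only while the herd is reasonably likely to be wrong; as confidence in the consensus grows, the safe conformist strategy dominates – which is why contrarian experts are rarer than the rewards for lone correctness might suggest.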

 
