Alchemy


by Rory Sutherland


  In most corporate settings, if you suddenly asked ‘Why do people clean their teeth?’ you would be looked at as a lunatic, and quite possibly unsafe. There is after all an official, approved, logical reason why we clean our teeth: to preserve dental health and reduce cavities or decay. Move on. Nothing to see here. But, as I will explain later in this book, I don’t think that’s the real reason. For instance, if it is, why are 95 per cent of all toothpastes flavoured with mint?

  Human behaviour is an enigma. Learn to crack the code.

  My assertion is that large parts of human behaviour are like a cryptic crossword clue: there is always a plausible surface meaning, but there is also a deeper answer hidden beneath the surface.

  5 Across: Does perhaps rush around (4)

  To someone who is unfamiliar with cryptic crosswords it will seem almost insane that the correct answer to this clue is ‘deer’, because there is no hint of the animal in the surface meaning of the clue. A simple crossword would have a clue like ‘Sylvan ruminants (4)’. But to a cryptic crossword aficionado, solving this clue is relatively simple – provided you accept that nothing is as it appears. The ‘surface’ of the clue has misled you to see ‘does’ and ‘rush’ as verbs, while both are actually nouns. ‘Does’ is here the plural of doe.* Rush is a reed. Reed ‘around’ – i.e. spelled backwards – is ‘deer’.*

  This insight is only possible once you know not to take the clue literally, and human behaviour is often cryptic in a similar sense; there is an ostensible, rational, self-declared reason why we do things, and there is also a cryptic or hidden purpose. Learning how to disentangle the literal from the lateral meaning is essential to solving cryptic crosswords, and it is also essential to understanding human behaviour.

  To avoid stupid mistakes, learn to be slightly silly.

  Most people spend their time at work trying to look intelligent, and for the last fifty years or more, people have tried to look intelligent by trying to look like scientists; if you ask someone to explain why something happened, they will generally give you a plausible-sounding answer that makes them seem intelligent, rational or scientific but that may or may not be the real answer. The problem here is that real life is not a conventional science – the tools which work so well when designing a Boeing 787, say, will not work so well when designing a customer experience or a tax programme. People are not nearly as pliable or predictable as carbon fibre or metal alloys, and we should not pretend that they are.

  Adam Smith, the father of economics, identified this problem in the late eighteenth century,* but it is a lesson which many economists have been ignoring ever since. If you want to look like a scientist, it pays to cultivate an air of certainty, but the problem with attachment to certainty is that it causes people completely to misrepresent the nature of the problem being examined, as if it were a simple physics problem rather than a psychological one. There is hence an ever-present temptation to pretend things are more ‘logical’ than they really are.

  Introducing Psycho-Logic

  This book is intended as a provocation, and is only accidentally a work of philosophy. It is about how you and other humans make decisions, and why these decisions may differ from what might be considered ‘rationality’. My word to describe the way we make decisions – to distinguish it from the artificial concepts of ‘logic’ and ‘rationality’ – is ‘psycho-logic’. It often diverges dramatically from the kind of logic you’ll have been taught in high school maths lessons or in Economics 101. Rather than being designed to be optimal, it has evolved to be useful.

  Logic is what makes a successful engineer or mathematician, but psycho-logic is what has made us a successful breed of monkey that has survived and flourished over time. This alternative logic emerges from a parallel operating system within the human mind, which often operates unconsciously, and is far more powerful and pervasive than you realise. Rather like gravity, it is a force that nobody noticed until someone put a name to it.

  I have chosen psycho-logic as a neutral and non-judgemental term. I have done this for a reason. When we do put a name to non-rational behaviour, it is usually a word like ‘emotion’, which makes it sound like logic’s evil twin. ‘You’re being emotional’ is used as code for ‘you’re being an idiot’. If you went into most boardrooms and announced that you had rejected a merger on ‘emotional grounds’, you would likely be shown the door. Yet we experience emotions for a reason – often a good reason for which we don’t have the words.

  Robert Zion, the social psychologist, once described cognitive psychology as ‘social psychology with all the interesting variables set to zero’. The point he was making is that humans are a deeply social species (which may mean that research into human behaviour or choices in artificial experiments where there is no social context isn’t really all that useful). In the real world, social context is absolutely critical. For instance, as the anthropologist Pierre Bourdieu observes, gift giving is viewed as a good thing in most human societies, but it only takes a very small change in context to make a gift an insult rather than a blessing; returning a present to the person who has given it to you, for example, is one of the rudest things you can do. Similarly, offering people money when they do something you like makes perfect sense according to economic theory and is called an incentive, but this does not mean you should try to pay your spouse for sex.*

  The alchemy of this book’s title is the science of knowing what economists are wrong about. The trick to being an alchemist lies not in understanding universal laws, but in spotting the many instances where those laws do not apply. It lies not in narrow logic, but in the equally important skill of knowing when and how to abandon it. This is why alchemy is more valuable today than ever.

  Illustration by Greg Stevenson

  Not everything that makes sense works, and not everything that works makes sense. The top-right section of this graph is populated with the very real and significant advances made in pure science, where achievements can be made by improving on human perception and psychology. In the other quadrants, ‘wonky’ human perception and emotionality are integral to any workable solution.

  The bicycle may seem a strange inclusion here; however, although humans can learn how to ride bicycles quite easily, physicists still cannot fully understand how bicycles work. Seriously. The bicycle evolved by trial and error more than by intentional design.

  Some Things Are Dishwasher-Proof, Others Are Reason-Proof

  Here’s a simple (if expensive) lifestyle hack. If you would like everything in your kitchen to be dishwasher-proof, simply treat everything in your kitchen as though it was; after a year or so, anything that isn’t dishwasher-proof will have been either destroyed or rendered unusable. Bingo – everything you have left will now be dishwasher-proof! Think of it as a kind of kitchen-utensil Darwinism.

  Similarly, if you expose every one of the world’s problems to ostensibly logical solutions, those that can easily be solved by logic will rapidly disappear, and all that will be left are the ones that are logic-proof – those where, for whatever reason, the logical answer does not work. Most political, business, foreign policy and, I strongly suspect, marital problems seem to be of this type.

  This isn’t the Middle Ages, which had too many alchemists and not enough scientists. Now it’s the other way around; people who are very good at deploying and displaying conventional, deductive logic are everywhere, and they’re usually busily engaged in trying to apply some theory or model to something in order to optimise it. Much of the time, this is a good thing. I don’t want a conceptual artist in charge of air-traffic control, for instance. However, we now unfortunately fetishise logic to such an extent that we are increasingly blind to its failings.

  For instance, the victorious Brexit campaign in Britain and the election of Donald Trump in the United States have both been routinely blamed on the clueless and emotional behaviour of undereducated voters, but you could make equally strong cases that the Remain campaign in Britain and Hillary Clinton's bid for the American presidency failed because of the clueless, hyper-rational behaviour of overeducated advisors, who threw away huge natural advantages. At one point we in Britain were even warned that 'a vote to leave the EU might result in rising labour costs' – by a highly astute businessman* who was so enraptured with models of economic efficiency that he was clearly unaware most voters would understand a 'rise in labour costs' as meaning a 'pay rise'.

  Perhaps most startlingly of all, every single one of the Remain campaign’s arguments resorted to economic logic, yet the EU is patently a political project, which served to make them seem greedy rather than principled, especially as the most vocal Remain supporters came from a class of people who had done very nicely out of globalisation. Notice that Winston Churchill did not urge us to fight the Second World War ‘in order to regain access to key export markets’.

  More data leads to better decisions. Except when it doesn’t.

  Across the Atlantic, meanwhile, the Clinton campaign was dominated by a strategist called Robby Mook, who had become so enamoured of data and mathematical modelling that he refused to use anything else. He derided Bill Clinton for suggesting he should connect the campaign with white working-class voters in the Midwest, mimicking a ‘Grampa Simpson’ voice to mock the former president* and dismissing another suggestion with the smug ‘my data disagree with your anecdotes’.

  Yet perhaps the anecdotal evidence was right, because the data was clearly wrong. Clinton did not visit Wisconsin once in the entire campaign, wrongly assuming that she would win there easily. Some in her team suggested that she should visit in the last days before the election, but the data told her to go to Arizona instead. Now I’m British, and have only been to Arizona four or five times, and Wisconsin twice. But even I would have said, ‘that decision sounds weird to me’. After all, nothing I have ever seen in Wisconsin suggested that it was a state that would never vote for Donald Trump, and it has always had a strong streak of political eccentricity.

  The need to rely on data can also blind you to important facts that lie outside your model. It was surely relevant that Trump was filling sports halls wherever he campaigned, while Clinton was drawing sparse crowds. It's important to remember that big data all comes from the same place – the past. A new campaigning style, a single rogue variable or a 'black swan' event can throw the most perfectly calibrated model into chaos. However, the losing sides in both these campaigns have never once considered that their reliance on logic might have been the cause of their defeats, and the blame was pinned on anyone from 'Russians' to 'Facebook'. Maybe they were blameworthy in part, but no one has spent enough time asking whether an overreliance on mathematical models of decision-making might be to blame for the fact that in each case the clear favourite blew it.

  In theory, you can’t be too logical, but in practice, you can. Yet we never seem to believe that it is possible for logical solutions to fail. After all, if it makes sense, how can it possibly be wrong?

  To solve logic-proof problems requires intelligent, logical people to admit the possibility that they might be wrong about something, but these people’s minds are often most resistant to change – perhaps because their status is deeply entwined with their capacity for reason. Highly educated people don’t merely use logic; it is part of their identity. When I told one economist that you can often increase the sales of a product by increasing its price, the reaction was one not of curiosity but of anger. It was as though I had insulted his dog or his favourite football team.

  Imagine if it were impossible to get a well-paid job, or to hold political office, unless you supported the New York Yankees or Chelsea Football Club. We would regard such partisanship as absurd, yet devoted fans of logic control the levers of power everywhere. The Nobel Prize-winning behavioural scientist Richard Thaler said, ‘As a general rule the US Government is run by lawyers who occasionally take advice from economists. Others interested in helping the lawyers out need not apply.’

  Today it sometimes seems impossible to get a job without first demonstrating that you are in thrall to logic. We flatter such people through our education system, we promote them to positions of power and are subjected every day to their opinions in the newspapers. Our business consultants, accountants, policy-makers and think-tank pundits are all selected and rewarded for their ability to display impressive flights of reason.

  This book is not an attack on the many healthy uses of logic or reason, but it is an attack on a dangerous kind of logical overreach, which demands that every solution should have a convincing rationale before it can even be considered or attempted. If this book provides you with nothing else, I hope it gives you permission to suggest slightly silly things from time to time. To fail a little more often. To think unlike an economist. There are many problems which are logic-proof, and which will never be solved by the kind of people who aspire to go to the World Economic Forum at Davos.* Remember the story of those envelopes.

  We could never have evolved to be rational – it makes you weak.

  Now, as reasonable people, you’re going to hate me saying this, and I don’t feel good saying it myself. But, for all the man’s faults, I think Donald Trump can solve many problems that the more rational Hillary Clinton simply wouldn’t have been able to address. I don’t admire him, but he is a decision maker from a different mould. For example, both candidates wanted manufacturing jobs to return to the United States. Hillary’s solution was logical – engagement in tripartite trade negotiations with Mexico and Canada. But Donald simply said, ‘We’re going to build a wall, and the Mexicans are going to pay.’

  ‘Ah,’ you say. ‘But he’s never going to build that wall.’ And I agree with you – I think it highly unlikely that a wall will be built, and even less likely that the unlucky Mexicans will agree to pay for it. But here’s the thing: he may not need to build the wall to achieve his trade ambitions – he just needs people to believe that he might. Similarly, he doesn’t need to repeal the North American Free Trade Agreement – he just needs to raise it as a possibility. Irrational people are much more powerful than rational people, because their threats are so much more convincing.

  For perhaps thirty years, the prevailing economic consensus meant that no American carmaker felt they owed any patriotic duty to workers in their home country; had you suggested such a thing in any of their board meetings, you would have been viewed as a dinosaur. So pervasive was the belief in untrammelled free trade – on both sides of the American political divide – that manufacturing was shifted overseas without any consideration of whether there might be a risk of losing the support of government or public opinion. All Trump needed to do was to signal that this assumption was no longer safe. No tariffs (or walls) are actually needed: the threat of them alone is enough.*

  A rational leader suggests changing course to avoid a storm. An irrational one can change the weather.

  Being slightly bonkers can be a good negotiating strategy: being rational means you are predictable, and being predictable makes you weak. Hillary thinks like an economist, while Donald is a game theorist, and is able to achieve with one tweet what would take Clinton four years of congressional infighting. That’s alchemy; you may hate it, but it works.

  Some scientists believe that driverless cars will not work unless they learn to be irrational. If such cars stop reliably whenever a pedestrian appears in front of them, pedestrian crossings will be unnecessary and jaywalkers will be able to march into the road, forcing the driverless car to stop suddenly, at great discomfort to its occupants. To prevent this, driverless cars may have to learn to be ‘angry’, and to occasionally maliciously fail to stop in time and strike the pedestrian on the shins.

  If you are wholly predictable, people learn to hack you.

  Crime, Fiction and Post-Rationalism: Or Why Reality Isn’t Nearly as Logical as We Think

  Think of life as like a criminal investigation: a beautifully linear and logical narrative when viewed in retrospect, but a fiendishly random, messy and wasteful process when experienced in real time. Crime fiction would be unreadably boring if it accurately depicted events, because the vast majority of it would involve enquiries that led nowhere. And that's how it's supposed to be – the single worst thing that can happen in a criminal investigation is for everyone involved to become fixated on the same theory, because one false assumption shared by everyone can undermine the entire investigation. There's a name for this – it's called 'privileging the hypothesis'.

  A recent example of this phenomenon emerged during the bizarre trial of Amanda Knox and Raffaele Sollecito for the murder of Meredith Kercher in Perugia, Italy. It became impossible for the investigator and his team to see beyond their initial suspicion that, after Kercher had been killed, the perpetrator had staged a break-in to ‘make it look like a burglary gone wrong’. Since no burglar from outside would need to stage a break-in, their only conclusion was that the staging took place to divert attention from the other flatmates and to disguise the fact that it was an inside job. Unfortunately, the initial suspicion was incorrect.

  I sympathise a little with their attachment to the theory. After all, the break-in did, at first glance, look as though it might have been faked: there was some broken glass outside the window and an absence of footprints. But the theory of an inside job staged to look like a botched burglary was so doggedly held that all subsequent contradictory evidence was either suppressed or not shared with the press, and the result was a nonsense.

 
