5.2: (I Can’t Get No) Satisficing
In the 1950s, the economist and political scientist Herbert Simon coined the term ‘satisficing’, combining as it does the words ‘satisfy’ and ‘suffice’. It is often used in contrast with the word ‘maximising’, which is an approach to problem-solving where you obtain, or pretend to obtain, a single optimally right answer to a particular question.
As Wikipedia* helpfully explains, ‘Simon used satisficing to explain the behaviour of decision makers under circumstances in which an optimal solution cannot be determined. He maintained that many natural problems are characterised by computational intractability or a lack of information, both of which preclude the use of mathematical optimisation procedures. Consequently, as he observed in his speech on winning the 1978 Nobel Prize, “decision makers can satisfice either by finding optimum solutions for a simplified world, or by finding satisfactory solutions for a more realistic world. Neither approach, in general, dominates the other, and both have continued to co-exist in the world of management science.”’ Since then, I aver, the balance has shifted. The former approach – creating a simplified model of the world and applying a logical approach – is in danger of overpowering the other, more nuanced approach, sometimes with potentially dangerous consequences: the 2008 financial crisis arose after people placed unquestioning faith in mathematically neat models of an artificially simple reality.
Big data carries with it the promise of certainty, but in truth it usually provides a huge amount of information about a narrow field of knowledge. Supermarkets may know every single item that their customers buy from them, but they don’t know what these people are buying elsewhere. And, perhaps more importantly, they don’t know why these people are buying these things from them.
A company pursuing only profit, without considering the impact of its profit-seeking upon customer satisfaction, trust or long-term resilience, could do very well in the short term, but its long-term future may be rather perilous.* To take a trivial example, if we all bought cars using only acceleration and fuel economy as measures, we probably wouldn’t do badly for the first few years, but over time car manufacturers would take advantage of the system, producing ugly, unsafe, uncomfortable and unreliable vehicles that did fabulously on those two quantified dimensions.
There is a parallel in the behaviour of bees, which do not make the most of the system they have evolved to collect nectar and pollen. Although they have an efficient way of communicating about the direction of reliable food sources, the waggle dance, a significant proportion of the hive seems to ignore it altogether and journeys off at random. In the short term, the hive would be better off if all bees slavishly followed the waggle dance, and for a time this random behaviour baffled scientists, who wondered why 20 million years of bee evolution had not enforced a greater level of behavioural compliance. However, what they discovered was fascinating: without these rogue bees, the hive would get stuck in what complexity theorists call ‘a local maximum’; they would be so efficient at collecting food from known sources that, once these existing sources of food dried up, they wouldn’t know where to go next and the hive would starve to death. So the rogue bees are, in a sense, the hive’s research and development function, and their inefficiency pays off handsomely when they discover a fresh source of food. It is precisely because they do not concentrate exclusively on short-term efficiency that bees have survived so many million years.
If you optimise something in one direction, you may be creating a weakness somewhere else. Intriguingly, this very approach is now being considered in the treatment of cancers. I recently spoke to someone working at the cutting edge of cancer treatment. Cancer cells mutate, and therefore evolve, quickly. Trying to kill them with a single poison tends to create new mutations which are highly resistant to it. The solution being developed is to target cancer cells with a chemical that causes them to develop immunity to it, at the expense of their immunity to other things; at that point you hit them with a different chemical, designed to attack the Achilles heel that you have created, wiping them out the second time around rather than the first. There is a lesson here.*
In any complex system, an overemphasis on the importance of some metrics will lead to weaknesses developing in other, overlooked ones. I prefer Simon’s second type of satisficing; it’s surely better to find satisfactory solutions for a realistic world than perfect solutions for an unrealistic one. It is all too easy, however, to portray satisficing as ‘irrational’. But just because something is irrational, that doesn’t mean it isn’t right.
5.3: We Buy Brands to Satisfice
Joel Raphaelson and his wife Marikay worked as copywriters for David Ogilvy in the 1960s. We recently ate dinner at Gibson’s Steakhouse at the Doubletree Hotel near O’Hare Airport in Chicago,* and talked about Joel’s 50-year-old theory concerning brand preference. The idea, most simply expressed, is this: ‘People do not choose Brand A over Brand B because they think Brand A is better, but because they are more certain that it is good.’* This insight is vitally important, but equally important is the realisation that we do not do it consciously. When making a decision, we assume that we must be weighting and scoring various attributes, but we think that only because this is the kind of calculation that the conscious brain understands. Although it suits the argumentative hypothesis to believe something is ‘the best’, our real behaviour shows relatively few signs of our operating in this way.
Someone choosing Brand A over Brand B would say that they thought Brand A was ‘better’, even if they really meant something quite different. They may unconsciously be deciding that they prefer Brand A because the odds of its being disastrously bad are only 1 per cent, whereas the risk with Brand B might be 2.8 per cent. This distinction matters a great deal, and it is borne out in many fields of decision science. We will pay a disproportionately high premium for the elimination of a small degree of uncertainty, and this is why the distinction matters so much: it finally explains the brand premium that consumers pay. While a brand name is rarely a reliable guarantee that a product is the best you can buy, it is generally a reliable indicator that the product is not terrible. As explained earlier, someone with a great deal of upfront reputational investment in their name has far more to lose from selling a dud product than someone you’ve never heard of, so, as a guarantee of non-crapness, a brand works. This is essentially a heuristic – a rule of thumb: the more reputational capital a seller stands to lose, the more confident I am in their quality control. When people snarkily criticise brand preference with the phrase ‘you’re just paying for the name’, it seems perfectly reasonable to reply, ‘Yes, and what’s wrong with that?’
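To make the distinction concrete, here is a minimal sketch of the two decision rules in Python. The quality scores are invented, and the disaster odds simply echo the 1 per cent and 2.8 per cent figures above:

```python
# Two ways of choosing between brands. The quality scores and disaster
# odds are invented for illustration; only the comparison rule matters.
brands = {
    "Brand A": {"expected_quality": 7.0, "p_disaster": 0.010},
    "Brand B": {"expected_quality": 7.5, "p_disaster": 0.028},
}

# A maximiser buys whichever brand scores best on average...
maximiser = max(brands, key=lambda b: brands[b]["expected_quality"])
# ...while a satisficer buys whichever is least likely to be terrible.
satisficer = min(brands, key=lambda b: brands[b]["p_disaster"])

print(f"Maximiser buys {maximiser}; satisficer buys {satisficer}")
# -> Maximiser buys Brand B; satisficer buys Brand A
```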
Imagine you’re looking at two televisions. Both seem to be equal in size, picture quality and functionality. One is manufactured by Samsung, while the other is manufactured by a brand you’ve never heard of – let’s call it Wangwei – and costs £200 less. Ideally you would like to buy the best television you can, but avoiding buying a television that turns out to be terrible is more important. It is for the second quality and not the first that Samsung earns its £200, and you are absolutely right to pay for the name in this case.
In contrast to a known brand, Wangwei has very little to lose from selling a bad television. They can’t command a price premium for their name, and so their name is worthless. If a manufacturing error had caused them to produce 20,000 dud televisions, their best strategy would be to offload them on unsuspecting buyers. However, had Samsung produced 20,000 sub-par sets, they would be faced with a much greater dilemma: the reputational damage from selling the bad televisions would spill over and damage the sales of every product carrying the Samsung name, which would cost them significantly more than they would gain from the sales. Samsung would have two choices: either destroy the televisions or sell them on to someone else who was less reputationally committed. It might even sell them to Wangwei, though never with its own name attached. So what’s wrong with paying for that name?
The primary reason why we have evolved to satisfice in our particularly human way is that we are making decisions in a world of uncertainty, and the rules for making decisions under uncertainty are completely different from those that apply when you have complete and perfect information. If you need to calculate the hypotenuse of a right-angled triangle and you know the lengths of the other two sides, you can be perfectly correct, and many problems in mathematics, engineering, physics and chemistry can achieve this level of certainty. However, this is not appropriate to most of the decisions we have to make. Such questions as whom to marry, where to live, where to work, whether to buy a Toyota or a Jaguar or what to wear to a conference don’t submit to any mathematical solution. There are too many future unknowns and too many variables, many of which are not mathematically expressible or measurable. Another good example of a decision that has to be taken subjectively is whether to buy an economical or a high-performance car. In general there is a trade-off between these two attributes. Do you sacrifice economy for performance or performance for economy?*
Imagine you’re living in the wild. You notice some extremely attractive cherries high up in a tree, but you know that, delicious and nutritious as they would be, there is a small risk that in attempting to pick them you will fall to your death. Let’s say the risk is one in 1,000, or 0.1 per cent. A crude mathematical model would suggest that this risk only reduces the utility of the cherries by a tenth of a per cent,* but this would be a foolish model to use in real life – if we routinely exposed ourselves to risks of this kind, we would be dead within a year. You’d only take this risk if you were very hungry – if there were a correspondingly high risk that you would die of starvation if you were not to eat the cherries, climbing the tree might make sense. However, if you weren’t starving and you knew that perfectly nourishing, if less tasty, foods were available elsewhere at a lower risk of fatality, you’d wander off and find a safer source of nourishment.* Remember, making decisions under uncertainty is like travelling to Gatwick Airport: you have to consider two things – not only the expected average outcome, but also the worst-case scenario. It is no good judging things on their average expectation without considering the possible level of variance.
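A back-of-envelope check shows why the crude model fails: small risks compound viciously with repetition. Here is a minimal sketch, keeping the 0.1 per cent figure from the example and assuming each attempt is independent (the exposure rates are my own):

```python
# How a 0.1% chance of death per attempt compounds over a year,
# assuming independent attempts at various (invented) daily rates.
risk_per_attempt = 0.001

for attempts_per_day in (1, 3, 10):
    survival = (1 - risk_per_attempt) ** (attempts_per_day * 365)
    print(f"{attempts_per_day:>2} attempts/day: "
          f"{survival:.1%} chance of surviving the year")
# 1/day -> ~69%; 3/day -> ~33%; 10/day -> ~3%
```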
Evidence that similar mental mechanisms also apply to human purchasing decisions can be found by looking at the data on eBay. In a simplistically logical world, a seller with an approval rating of 95 per cent* would nonetheless be able to sell goods perfectly successfully if they were 10 per cent cheaper than goods offered by people with 100 per cent approval ratings. However, a quick glance at the data shows this is not the case. People with approval ratings below 97 per cent can barely sell equivalent goods for half the price of sellers with a track record of 100 per cent satisfaction.
Logically you might think we should accept a 5 per cent risk of our goods not arriving in exchange for a 15 per cent reduction in cost, but the lesson proved by these statistics is that we don’t: once the possibility moves beyond a certain threshold, we seem unable to take the risk at any price. If Amazon were to try to operate in a country where 10 per cent of all posted goods were stolen or went missing, virtually no discount would be high enough for them to sell anything at all.
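The ‘logical’ arithmetic is easy enough to write down, which is what makes the real data so striking. A sketch with invented prices; the 95 per cent figure is from the eBay example:

```python
# The risk-neutral valuation that real buyers refuse to follow.
value = 100.0      # what the item is worth to you if it arrives (assumed)
p_arrival = 0.95   # reading the seller's approval rating as delivery odds

expected_value = p_arrival * value
print(f"A 'logical' buyer should pay up to £{expected_value:.2f}")
# -> £95.00: any discount deeper than 5 per cent 'should' win the sale,
#    yet the eBay data shows such sellers struggle even at half price.
```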
This example illustrates that, when we make decisions, we look not only for the expected average outcome – we also seek to minimise the possible variance, which makes sense in an uncertain world. In some ways, this explains why McDonald’s is still the most popular restaurant in the world. The average quality might be low, compared to a Michelin-listed restaurant, but so is the level of variance – we know exactly what we’re going to get, and we always get it. No one would say that a meal they had had at McDonald’s was among the most spectacular culinary experiences of their lives, but you’re never disappointed, you’re never overcharged and you never get ill. A Michelin three-star restaurant might provide an experience that you will cherish for the rest of your life, but the risk of disappointment, and indeed illness, is also much higher.*
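The restaurant trade-off can be sketched the same way. All the scores and spreads below are invented; the point is only that a satisficer who minimises the odds of a bad night can rationally pick the option with the lower average:

```python
import random
random.seed(0)

# Two 'restaurants': one modest but consistent, one brilliant but
# variable. Scores out of ten; all the numbers are invented.
N = 100_000
mcdonalds = [random.gauss(6.0, 0.5) for _ in range(N)]  # low mean, low variance
michelin  = [random.gauss(8.0, 3.0) for _ in range(N)]  # high mean, high variance

for name, scores in (("McDonald's", mcdonalds), ("Michelin", michelin)):
    mean = sum(scores) / N
    p_bad = sum(s < 3.0 for s in scores) / N   # chance of a truly bad meal
    print(f"{name:10s}: mean {mean:.1f}/10, P(disaster) {p_bad:.1%}")
# The maximiser goes by the means; the satisficer goes by the
# disaster rates.
```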
In a world of perfect information and infinite calculating power, it might be slightly suboptimal to use these heuristics, or rules of thumb, to make decisions, but in the real world, where trustworthy data, time and calculating power are all limited, the heuristic approach is better than any alternative.
For instance, a cricketer catching a high-flying ball does not calculate its trajectory using quadratic equations, but instead uses a rule of thumb known as the ‘angle of gaze’ heuristic: looking upwards at the ball and moving towards it in such a way that the upward angle of his gaze remains constant. In this way, though he may pay the price of moving in a slight arc rather than in a straight line, he will hopefully position himself at the point on the ground where the ball is likely to land. There are several reasons why we use a heuristic of this kind. A fielder would, of course, have no time to perform mathematical calculations, even if a calculator were available; moreover, even if he had enough time and calculating power, he simply wouldn’t have enough data – without knowing the velocity or the angle at which the ball was hit, he couldn’t calculate its trajectory to any level of accuracy. The batsman who hit the ball probably wouldn’t know, either.*
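For the curious, here is a toy simulation of the fielder’s problem. I have used a close cousin of the constant-angle rule that is easier to simulate, known in the literature as optical acceleration cancellation: if the ball’s image is climbing ever faster in your view, it will carry over your head, so retreat; if the climb is slowing, run in. All the speeds and distances are invented:

```python
import math

# Toy model of the fielder's rule of thumb: he never computes a
# trajectory, only reacts to whether the ball's image in his view is
# speeding up (retreat) or slowing down (run in).
g, dt = 9.81, 0.01
ball_x, ball_h = 0.0, 0.01        # ball position (m), just off the bat
vel_x, vel_h = 12.0, 22.0         # ball velocity (m/s), hit at the fielder
fielder_x, run_speed = 60.0, 6.0  # fielder 60 m away, runs at 6 m/s

prev_tan = prev_rate = None
while vel_h > 0 or ball_h > 2.0:             # until the ball drops to 2 m
    ball_x += vel_x * dt                     # ball flies on
    vel_h -= g * dt
    ball_h += vel_h * dt
    tan = ball_h / (fielder_x - ball_x)      # 'height' of the ball in view
    if prev_tan is not None:
        rate = (tan - prev_tan) / dt         # how fast the image climbs
        if prev_rate is not None:
            # image speeding up -> retreat; slowing down -> run in
            fielder_x += run_speed * dt if rate > prev_rate else -run_speed * dt
        prev_rate = rate
    prev_tan = tan

landing_x = ball_x + vel_x * (ball_h / -vel_h)  # rough touch-down point
print(f"ball comes down near {landing_x:.1f} m; fielder is at {fielder_x:.1f} m")
```

Note that the simulated fielder, like the real one, responds step by step to a single visual signal and still ends up close to where the ball comes down.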
5.4: He’s Not Stupid, He’s Satisficing
On 15 January 2009, in an incident now known as the ‘Miracle on the Hudson’, Captain Chesley Sullenberger demonstrated the value of heuristics when, after his aircraft had both its engines disabled by a bird strike, he reacted quickly and safely landed on the Hudson River. It is possible to listen to Sullenberger’s conversations with air traffic control on YouTube: between attempts to restart the engines, he communicates with the departure airport. Having immediately rejected the possibility of returning to LaGuardia, correctly as it turns out, he is offered the possibility of landing at Teterboro Airport, which is in New Jersey over to his starboard side. In a little more than 20 seconds he decides that this option is also impossible – again, a decision made with a heuristic rather than a calculation. He did not retrieve a scientific calculator from his briefcase, input the flight speed, altitude and rate of descent and then calculate the likely distance to runway one at Teterboro, but instead did something far quicker, easier and more reliable.
A former US Air Force fighter pilot, Sullenberger was a glider pilot in his spare time, and all glider pilots learn a simple instinctive rule which enables them to tell whether a possible landing site on the ground is within their reach. They simply place the glider in the shallowest possible rate of descent and look through the windshield: any place which appears to be moving downwards in the field of view is somewhere they can safely land, while anywhere on the ground that appears to be moving upwards is too far away. It was by deploying this rule that he was able to decide within seconds that the Hudson River was the only feasible landing site.
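The geometry behind the rule is simple enough to sketch. This is my own formulation with invented numbers, not anything from a pilot’s manual: in a steady glide the flight path sits at a fixed angle below the horizon, and a spot on the ground is within reach precisely when you look down at it more steeply than that, which is also the condition for it to drift downwards in the windshield:

```python
import math

# A spot is reachable iff the angle you look down at it exceeds the
# glide angle, atan(1 / glide_ratio). The glide ratio and distances
# below are assumed figures for illustration.
def reachable(altitude_m: float, distance_m: float, glide_ratio: float) -> bool:
    glide_angle = math.atan2(1.0, glide_ratio)        # flight path angle
    sight_angle = math.atan2(altitude_m, distance_m)  # angle down to spot
    return sight_angle > glide_angle

# From 900 m with an assumed 15:1 glide ratio, reach is about 13.5 km:
print(reachable(900, 10_000, 15.0))   # True  - drifts down in the view
print(reachable(900, 15_000, 15.0))   # False - drifts up, out of reach
```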
In the event his decision could not have been bettered. There were no fatalities, and only a few minor injuries. It’s true that, had he successfully landed at Teterboro, he might have saved the aircraft, but had he tried and failed to land there, it is doubtful that anyone could have survived.
It isn’t always clear which heuristic rules are learned and which are innate, but everyday life would be impossible without them. A truck driver reversing an articulated lorry into a narrow driveway achieves what seems like a spectacular feat of judgement through the use of heuristics, not by calculation. We drive our cars heuristically, we choose our houses heuristically – and we probably also choose our partners heuristically.* Even when a solution might be calculable, heuristics are easy, quick and well-aligned with our perceptual equipment, and on the many occasions when the right solution is incalculable, they are all we’ve got.
Heuristics look second-best to people who think all decisions should be optimal. In a world where satisficing is necessary, they are often not only the easiest option but the best.
5.5: Satisficing: Lessons from Sport
I have always been intrigued by the scoring systems in different sports, and by the degree to which they contribute to the enjoyment of any game. As a friend of mine once remarked, had tennis been given the same scoring system as basketball it would be tedious to play and even worse to watch: if you glanced at your TV and saw Djokovic leading Murray ‘by 57 points to 31’, you would shrug and change channels to something more exciting.*
Tennis scoring isn’t quite socialist – one player can demolish another – but, in such uneven cases, the contest is over in a mercifully short time. There is, however, a kind of social security system in the sport’s scoring system, which means that for the duration of any match, the losing player feels he might still be in with a chance. It’s frankly genius.
The system of watertight games and sets means that there is no difference between winning a game to love or after several deuces. A 6–0 set counts as a set, just as a 7–5 win does. This means that the losing player is never faced with an insurmountable mountain to climb. The scoring system also ensures variation in how much is at stake throughout the game: a point served at 30–0 is a relatively low-engagement moment, while a crucial break point has everyone on the edge of their seats. This varies the pitch of excitement, and consequently makes the game more enjoyable for players and spectators alike.*
Another feature of the scoring systems of many compelling games is that aiming for the highest score comes with a high degree of concomitant risk. Shove ha’penny works in this way, as does bar billiards, where the highest-scoring pot sits behind an unstable black mushroom (technically a ‘skittle’), which wipes your entire score if it is knocked over. This jeopardy may explain why darts is an enjoyable spectator sport, while archery isn’t. In archery the scoring is concentric: you simply aim for the bullseye, which scores 10, and if you narrowly miss you get 9. Miss the 9 and you get 8, and so on. The only strategy of the game is to aim for the 10 and hope – it is a perfectly logical scoring system, but it doesn’t make for great television. The dartboard, by contrast, is not remotely logical, but it’s somehow brilliant. The 20-point sector sits between the dismal scores of 5 and 1.
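A quick Monte Carlo sketch shows the design at work. This is my own toy model: a single dart, angular error only, no doubles or trebles:

```python
import random
random.seed(0)

# The sectors of a standard dartboard, clockwise from the top.
SECTORS = [20, 1, 18, 4, 13, 6, 10, 15, 2, 17, 3, 19, 7, 16, 8,
           11, 14, 9, 12, 5]

def throw(target: int, sigma_deg: float) -> int:
    idx = SECTORS.index(target)
    err = random.gauss(0.0, sigma_deg)              # angular error, degrees
    return SECTORS[(idx + round(err / 18.0)) % 20]  # 18 degrees per sector

N = 200_000
for sigma in (5.0, 30.0):                # a steady arm vs a wayward one
    for target in (20, 19):
        mean = sum(throw(target, sigma) for _ in range(N)) / N
        print(f"sigma {sigma:4.0f} deg, aiming at {target}: mean {mean:5.2f}")
# With a steady arm, aim at 20; with a wayward one, the 19 (flanked by
# the kinder 7 and 3) quietly scores better - jeopardy by design.
```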