Gladiators, Pirates and Games of Trust


by Haim Shapira


  What’s the logic here? Why should a winning bidder pay less than he’d ultimately offered? And why should the auction house not collect the best price?

  I believe that one reason the Vickrey auction is used is that we know that many people are irrational and may mistakenly bid very high, believing they’ll never actually have to pay that price. An example would be me bidding $20,000 for a copy of the first edition of In Search of Lost Time signed by Marcel Proust, though in truth I’m only willing to spend $10,000 – well, after all it is Proust and I do love buying books. What’s mistaken about this? The ploy, I believe, will guarantee my victory and eventually I’ll pay the second highest bid, which will certainly be more logical than mine. The problem is that Mr Edgar Clinton from Boston has the very same idea, and so he offers $19,000. This means that I win, but end up paying $9,000 more than I actually intended. After all, it’s only one book, not a bookstore.

  Perhaps we should offer our realistic evaluations?

  The answer to this conundrum is simple and surprising and will take us to the aspect that has made the Vickrey auction so important: second-price auctions encourage bidders to offer the (real) highest price that they are willing to pay.

  Let us rephrase this more precisely. In Vickrey auctions the bidders’ dominant strategy is to bid the item’s real value to them (strategic dominance occurs when one strategy is better for a player than any other strategy, no matter how the other players play the game). In this case, honesty is the best policy. We don’t need to resort to mathematics to prove this. All you need to do is consider what happens to a bidder who offers either more or less than what the auctioned item is worth to that individual, and you’ll see that in both cases the benefit is no greater than offering the real value.
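  To make that concrete, here is a minimal sketch in Python – my own illustration, not anything from the book. It plays a sealed-bid second-price auction against randomly drawn rival bids and checks that bidding one’s true value (the $10,000 the Proust is really worth to me) never does worse than the $20,000 ‘ploy’ bid or a cautious underbid; the specific numbers and the helper function are assumptions made purely for illustration.

```python
import random

def second_price_payoff(my_bid, my_value, rival_bids):
    """Payoff in a sealed-bid second-price (Vickrey) auction:
    the highest bidder wins but pays only the best rival bid."""
    best_rival = max(rival_bids)
    if my_bid > best_rival:
        return my_value - best_rival   # win: pay the second-highest bid
    return 0.0                         # lose: pay nothing, gain nothing

random.seed(1)
my_value = 10_000                      # what the book is really worth to me
for trial in range(10_000):
    rivals = [random.uniform(0, 20_000) for _ in range(3)]
    truthful = second_price_payoff(my_value, my_value, rivals)
    overbid = second_price_payoff(20_000, my_value, rivals)   # the 'Proust ploy'
    underbid = second_price_payoff(5_000, my_value, rivals)
    # Truthful bidding is never beaten by either deviation.
    assert truthful >= overbid and truthful >= underbid
print("truthful bidding was never worse in 10,000 simulated auctions")
```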

  This was first analysed by Vickrey back in 1961, but he had to wait till 1996 for the Nobel Prize. Sadly, William Vickrey never made it to the ceremony at Stockholm’s Konserthus: he passed away three days after receiving notice that he’d been selected for the honour.

  Intermezzo

  THE NEWCOMB PARADOX

  A famous experiment, closely tied to probabilities and psychology, is the so-called ‘Newcomb Paradox’, named after UCLA physicist William Newcomb.

  This thought experiment, unlike many others, deserves to be called a paradox. It goes like this:

  We are presented with two boxes. One is transparent, and we can see that it contains $1,000; the other is opaque and may or may not contain $1 million – we don’t know. The player is presented with two choices – to take the contents of both boxes, or to take just the opaque box. At first glance, taking both boxes seems obviously better. The thing is that this experiment is conducted by a Predictor, a person with superpowers who can read minds and knows our choice even before we know it ourselves! Should the Predictor intuit that we’re about to take the opaque box, he’ll fill it with a million green ones; but should he foretell that we’ll be taking both boxes, he’ll leave the opaque box empty.

  Now, let’s suppose that 999 people have already taken part in the experiment, and we know that whenever a player took both boxes, the opaque box was found to be empty; but when players opted for the opaque box alone, they became millionaires. What would you decide?

  Decision Theory includes two principles that seem to be offering us conflicting suggestions: the principle of reasonability, according to which we should take only the opaque box, because we’ve seen what has happened before; and the principle of dominance, according to which we should take both boxes, because they are there, and if the opaque box contains a million, we’ll have it; or if not, we’ll have at least a thousand. The two principles conflict with each other and give us two totally different suggestions.
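  As a back-of-an-envelope illustration – mine, not the author’s – here is how the two principles trade off if we model the Predictor as being right with some probability q. The value of q is an assumption; the paradox itself does not specify one.

```python
def expected_take(one_box: bool, q: float) -> float:
    """Expected winnings if the Predictor guesses our choice correctly
    with probability q (an assumed parameter, not part of the paradox)."""
    if one_box:
        # The opaque box holds the million only if the Predictor foresaw one-boxing.
        return q * 1_000_000
    # Two-boxing: the $1,000 is certain; the million turns up only if the
    # Predictor wrongly expected us to take the opaque box alone.
    return 1_000 + (1 - q) * 1_000_000

for q in (0.5, 0.9, 0.999):
    print(f"q={q}: one box {expected_take(True, q):,.0f}, "
          f"both boxes {expected_take(False, q):,.0f}")
# A Predictor who is right much more than half the time makes one-boxing
# the better bet in expectation; a coin-flipping 'Predictor' does not.
```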

  This most famous experiment has been discussed by many excellent people, including Harvard University philosopher Robert Nozick, and Martin Gardner, the mathematical editor of Scientific American and a famous interpreter of Alice in Wonderland. Both similarly decided that they would take both boxes, but cited very different reasons.

  If I were confronted with that experiment – given that I believe in prediction (not in prophecy, because I’m a rational scientist) and have seen 999 cases with recurring results – I’d take the opaque box and (probably) collect $1 million. Still, the question is widely debated. Gardner felt there was no paradox here, because no one can predict human behaviour with such accuracy. If, however, you have seen someone who can predict human behaviour with such accuracy, then this is a logical paradox. So what do we do? Take both boxes or just the opaque one?

  You decide.

  Chapter 9

  THE CHICKEN GAME AND THE CUBAN MISSILE CRISIS

  In this chapter we’ll encounter the Chicken Game, which has two pure Nash equilibria – making the outcome extremely difficult to predict. This game is strongly associated with the art of brinkmanship.

  An easy-to-understand version of the two-player game known as the Chicken Game goes like this. Two motorists drive right at each other (preferably in stolen cars, if we’re making a movie), and the first to turn the wheel and swing out of trouble loses the game and is forever called ‘chicken’. The driver who didn’t flinch wins the game and becomes the toast of the town. If neither of the drivers swerves, both may die in the crash. The game was popularized in the James Dean era and has featured in quite a few films (readers of my age may remember the 1955 film Rebel Without a Cause, starring James Dean and Natalie Wood).

  Naturally, each player wants the other to be chicken, which would make him both brave and a winner. If, however, both players decide they want to be brave, the resulting collision between their vehicles is the worst possible result for the two of them. As with many other dangerous games, my personal choice is the strategy of avoidance: I steer clear. I suppose we all know a few games that are best left not played. But what if we have no choice and are forced to play?

  Imagine the following scenario: I’m standing next to my car, looking down the road; my rival does the same some distance away, looking back towards me; and somewhere in the crowd is a lady I wish to impress, and I somehow feel that she wouldn’t appreciate the mature, sound-minded choice of simply walking away. What am I to do?

  Our two players (appropriately named A and B) can choose one of two very different strategies: bold or chicken. If both choose chicken, nothing is lost and nothing’s gained. If A chooses to be bold and B plays chicken, A wins 10 fun-units (whatever your definition of fun), while B loses one fun-unit. Player A will be cheered by the crowd (which is fun) and player B will be booed (which isn’t). Should the two decide to be bold at the same time and collide, both will lose 100 fun-units, not to mention their wasted recovery time and expensive bodywork for the cars.

                    B: Bold           B: Chicken

  A: Bold           (-100, -100)      (10, -1)

  A: Chicken        (-1, 10)          (0, 0)

  (Each cell lists A’s fun-units first, then B’s.)

  What is the Nash Equilibrium point in this game? Do we have Nash Equilibrium points? Naturally, if both players choose the chicken strategy, this is not the Nash Equilibrium, because if A makes the chicken choice, playing chicken too is not in B’s best interests. B had better be bold and daring, and win 10 fun-units. Yet, if both players choose the bold strategy, that wouldn’t be the Nash Equilibrium either, because if they are both bold, they’ll both lose 100 fun-units, which is the worst possible result, and both players will regret it.

  It should be noted that if A knows for certain that B has chosen the bold strategy, he should play chicken, because he’d lose less than he would if he played bold too.

  What about the other two options? Suppose A chooses the bold strategy and B goes for the chicken strategy. If A opts for bold, he’ll win 10 units. He shouldn’t change his strategy, because he’d gain nothing by playing chicken. If A plays bold and B plays chicken, B will lose a unit, but B should stay put too because if he too should choose to play bold (like A), he’d lose 100 fun-units (that is, 99 units more).

  Hence, if A decides to play bold and B plays chicken, this is (quite surprisingly) the Nash Equilibrium – a situation none would forgo. The problem is that the exact opposite is also true. That is, should they choose to reverse their roles (B becomes bold and A plays chicken), it’s the Nash Equilibrium for the very same reasons. When a game has two Nash Equilibrium points, problems start, because there’s no way of knowing how the game will end. After all, if both opt for their favourite Nash Equilibrium point and both choose to be bold, they’ll both end the game in a rather poor state. But then, perhaps understanding that would make both play chicken? Thus, even though this game may appear simple at first glance, it’s actually quite complicated – not to mention what might happen when sentiments are factored in.
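  The same check can be run mechanically. Here is a minimal sketch of my own, using the fun-unit payoffs from the table above, which scans every cell of the matrix for the ‘no player gains by deviating alone’ property:

```python
# Payoffs from the table above: (A's fun-units, B's fun-units) for each pair of moves.
payoffs = {
    ("bold", "bold"): (-100, -100),
    ("bold", "chicken"): (10, -1),
    ("chicken", "bold"): (-1, 10),
    ("chicken", "chicken"): (0, 0),
}
moves = ("bold", "chicken")

def is_nash(a, b):
    """A cell is a pure Nash Equilibrium if neither player can gain
    by changing only his own move."""
    a_pay, b_pay = payoffs[(a, b)]
    a_happy = all(a_pay >= payoffs[(other, b)][0] for other in moves)
    b_happy = all(b_pay >= payoffs[(a, other)][1] for other in moves)
    return a_happy and b_happy

print([cell for cell in payoffs if is_nash(*cell)])
# -> [('bold', 'chicken'), ('chicken', 'bold')]: exactly the two equilibria above.
```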

  Suppose one of the players wants to impress someone in the crowd. Should he lose, losing a fun-unit would be the easy part. He might lose that spectator’s affection, which might be worth more than the cost of colliding with the other car. Besides, no one enjoys watching others win, and many are really pained by such scenarios.

  Given the complexity of all this, how should this game be played, and how will it end? Naturally, it’s impossible to dictate a winning strategy, but one does exist, and it can be seen in many movies. It’s known as the ‘madman’s strategy’, which goes like this. One of the players arrives dead drunk. Though everyone can see that, he emphasizes his situation by tossing out a few empty bottles from his car upon arrival at the site. To make his point even clearer, he puts on very dark shades, and now it’s clear that he can’t see the road. The mad player may even go all the way, unscrew the steering wheel from its seating, and throw it out the window while driving. That would be the clearest signal, really.

  The mad player thus declares: ‘Playing chicken isn’t an option for me. I can only play bold, bolder, boldest.’ At this stage, the other player gets the point. He knows now that the first player will play bold and, theoretically at least, he himself should play chicken, because logically and mathematically that would be the better option for him. Still, we need to remember that people tend to make irrational choices, and there’s also the worst-case scenario to consider: what happens if both players choose the madman’s strategy? What if they both show up dead drunk, put on dark shades, and toss out their steering wheels?

  They are deadlocked again (pun intended). Once again we can see that a game that appeared rather simple at first is, actually, very complicated.

  Cited in almost every book on Game Theory, the most famous example of the Chicken Game is the Cuban missile crisis. On 15 October 1962, Soviet leader Nikita Sergeyevich Khrushchev announced that the Russians intended to site missiles with nuclear warheads in Cuba, less than 200km from US shores. Khrushchev thus signalled to US President John Fitzgerald Kennedy: ‘Here I am, driving my car right at you. I have my dark glasses on, I’m a bit drunk, and soon I’ll have no steering wheel. Whatcha gonna do?’

  Kennedy summoned his team of advisers and they gave him the following five-option list:

  1 Do nothing.

  2 Call the UN and file a complaint (which is very much like option 1, but 1 is better, since 2 reveals that you know something is happening and yet still you do nothing).

  3 Impose a blockade.

  4 Issue an ultimatum to the Russians: ‘Either you remove your missiles, or the USA will launch a nuclear war against you’ (which I believe is the silliest option: ‘When you have to shoot ... Shoot! Don’t talk’).

  5 Launch a nuclear war against the USSR.

  On 22 October, Kennedy decided to impose a blockade on Cuba, choosing the third of the five options.

  Choosing option 3 was rather risky because it signalled that Kennedy too was drunk, had dark shades in his pocket, and might lose his wheel – thus placing the two states on a collision course. Later Kennedy related that he’d personally estimated that the chances of a nuclear war were between one-third and a half. That’s a rather high probability, considering that it could have meant the end of the world.

  The crisis ended peacefully in the end. Many believe that outcome was thanks to letters that the famous British philosopher and mathematician Bertrand Russell wrote to Khrushchev and found a way to deliver. In any event, Khrushchev stood down, which came as a surprise, because the Soviet leader had constantly signalled to the West that he might assume the madman’s strategy. Russell realized that, unlike in the ordinary version of the Chicken Game, the Cuban crisis was asymmetrical, because Khrushchev had the advantage of a supervised press in his country, which gave him the opportunity to back down. And this is how the absence of free media coverage in the USSR helped to save the Earth from nuclear war. When the press is controlled, defeat can be presented as victory, which is precisely how the Russian papers interpreted it. Khrushchev and Kennedy found an honourable solution, agreeing that the Russians would remove their missiles from Cuba and that the USA would, one day, dismantle the missiles it had placed in Turkey.

  VOLUNTEERING: A DILEMMA

  The game known as the ‘Volunteer’s Dilemma’ is an interesting extension of the Chicken Game. We discussed the penguin version earlier (see page 98). In the Chicken Game, a volunteer would be welcome – volunteering to simply steer the car away from the expected collision could do both players a world of good.

  A typical Volunteer’s Dilemma game includes a number of players, of whom at least one has to volunteer and do something at his or her own risk or cost, so that all the players gain; but if no one volunteers, they all lose.

  In his book Prisoner’s Dilemma, William Poundstone presents several examples of the Volunteer’s Dilemma. For example, there’s a power outage in a high-rise block of apartments, and one of the tenants has to volunteer and call the electric company. This is a small act of volunteering, and it’s most likely that someone will act to ensure that light is restored to the entire building. But then Poundstone presents a bigger problem. Suppose that group of tenants lives in an igloo that has no phone. That means that the volunteer will have to trudge 5km through the snow, in sub-zero temperatures, to get help. Who volunteers? How is that problem solved?

  Of course, in some instances volunteers pay quite dearly. In 2006, Israel Defense Forces Captain Roi Klein deliberately fell on a hand grenade that was thrown at his platoon. He was killed on the spot, but saved his men. Several such incidents are listed in American and British war stories. Interestingly, the US Army code includes an instruction for such a situation: soldiers must volunteer and fall on an incoming grenade at once. It’s a rather weird instruction. In a group of soldiers it’s clear that someone should make the sacrifice, but finding which one is a different matter (if there’s only one soldier and he falls on the grenade, that would be the weirdest thing really). It seems that the assumption is that even if such an instruction exists, it wouldn’t be followed by everyone, but someone should obey it, and someone will.

  Poundstone’s book carries another example. In a very rough boarding school, a group of students steals the school bell. The headmaster summons the entire school and says to everyone: ‘If you give me the thief or thieves, I’ll give them an F for one semester while the rest of you will go unpunished. Should none of you step forward, each and every one of you will get an F for the entire year, not just one semester.’

  Rationally, someone should volunteer, because if no one does, everyone fails with an F all year. Theoretically, even the thief could gain here and have an F for one semester instead of losing an entire year of studies. If the students in the story were rational theoreticians, someone (not necessarily the thief) would volunteer, take the small blow and free his comrades. But then that person might conclude that everyone else thinks the same – and no one would volunteer. The result would be absurd, of course: everyone fails.

  Indeed, it isn’t entirely clear how this game should be played. There is, however, a simple mathematical model for the Volunteer’s Dilemma. Imagine a room with n people in it: they can all win the big prize if at least one of them volunteers, but the volunteer gets a lesser prize.

  Clearly, there are no pure symmetrical Nash strategies to follow here, because if everyone else volunteers, why should I? After all, if I don’t take the risk, and someone else does, I’ll still have the full prize. Abstaining is not a Nash strategy either, because if no one volunteers, no one gains anything, which is why I should volunteer and receive the prize minus my risk (the assumption is that the cost of the risk is smaller than the value of the prize), which is more than nothing. Yet if no pure Nash strategy exists, a mixed one can be found. That strategy requires that players volunteer with certain probabilities, which can be calculated mathematically and are related to the number of participants and to the gap between the prize and the risk.
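  For the curious, here is that calculation in the standard symmetric model – a sketch of my own, using the usual textbook indifference argument rather than any formula given in the book. A player who volunteers must do exactly as well as a player who abstains and hopes that one of the other n − 1 players steps in; the prize and cost values below are purely illustrative.

```python
def volunteer_probability(n: int, prize: float, cost: float) -> float:
    """Symmetric mixed-strategy equilibrium of the Volunteer's Dilemma.
    Indifference condition: prize - cost = prize * (1 - (1 - p)**(n - 1)),
    i.e. volunteering pays the same as hoping one of the other n - 1
    players volunteers. Solving for p gives the line below."""
    return 1 - (cost / prize) ** (1 / (n - 1))

for n in (2, 5, 20, 100):
    p = volunteer_probability(n, prize=100, cost=10)
    print(f"{n:>3} players: each volunteers with probability {p:.3f}")
# The individual probability of volunteering falls as the group grows,
# and falls further as the cost rises towards the prize.
```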

  The higher the risk relative to the prize, the less likely people are to volunteer. That’s an expected result. Another valid conclusion is that the larger the number of players, the weaker each player’s desire to volunteer, because the expectation that someone else will step forward grows stronger.

  We can find here the roots of the social phenomenon known as the ‘bystander effect’.

  Yet thinking that ‘someone else will do that’ might lead to horrendous results. One of the most famous examples of such a situation, where everyone expects others to step forward, is the story of Catherine Genovese. In 1964 she was murdered outside her New York home. Dozens of her neighbours witnessed the crime, yet not only did no one intervene to help her (volunteers might pay dearly), no one even called the police (volunteering at almost no cost at all). It’s hard to understand what those neighbours were thinking, but the fact is that sometimes no one volunteers to do even something as simple as calling the police. Such cases are explained better by sociology and psychology than by mathematical models. We may assume that people’s willingness to volunteer depends on the level of solidarity in their community or society, and on their own social values.

 
