Where the economy is led forward by thee into a rate of growth higher than six per cent (real)—
Into that heaven of liberalization, my Father, let my country awake.
APPENDIX 1
Yes, Rats Are Utility Maximizers

That lemming-like investors may well be utility maximizers, and hence rational, found reinforcement when certain laboratory experiments showed rats to be surprisingly rational in the sense assumed by finance theory. In a laboratory, rats were made to scurry through a series of parallel paths to get to their food. On each path, the rodents could expect to receive a certain quantum of food that was a random variable. In other words, on a given path, over several runs, a rat might expect to find, on an average, say, 10 grams of food at the end of the run, while in any one run the amount of food could be more than or less than 10 grams. This is not unlike an investor expecting to earn, on an average, say, a 10 per cent annual return on a security, though in any given year the return may be more or less than 10 per cent.
Different paths held different expected quanta of food. For example, on path one, on an average, the rat could expect to find 10 grams of food; on path two, an average of 15 grams; on path three, an average of 20 grams; and so on, though on any one run the quantum of food could vary from the average. However, there is never a free lunch, not even for darting laboratory rats. Each pathway carried a certain voltage of electric shock that the rodent had to endure if it wanted to reach the food at the other end. Before long, the rats showed behaviour that clearly indicated that for the same expected quantum of food at the other end, they preferred the route with the lower voltage of shock; that for a given voltage of shock, they preferred the path that held the higher expected quantum of food; and finally, that for every unit of increased shock (say from 10 to 11 volts) they took that path only if it promised at least as large an increase in the expected quantum of food as the previous unit of voltage increase (that is, from 9 to 10 volts) had entailed. In short, the rodents were perfect utility maximizers, or perfectly rational. They knew what was best for them. There is then no reason to believe that human beings, a far more evolved species, should be any less rational.1
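The rats' trade-off can be phrased as a simple expected-utility comparison. Here is a minimal sketch in Python, assuming illustrative path data and a linear utility of food minus shock; none of these numbers or the functional form come from the original experiment:

```python
# Sketch of a rat choosing among paths by expected utility.
# Path data and the linear utility form are illustrative assumptions.
paths = [
    {"name": "path 1", "expected_food_g": 10, "shock_volts": 9},
    {"name": "path 2", "expected_food_g": 15, "shock_volts": 12},
    {"name": "path 3", "expected_food_g": 20, "shock_volts": 18},
]

FOOD_UTILS_PER_GRAM = 1.0   # hypothetical value of a gram of food
SHOCK_COST_PER_VOLT = 0.8   # hypothetical disutility of a volt of shock

def utility(path):
    # Expected utility = value of expected food minus cost of the shock.
    return (FOOD_UTILS_PER_GRAM * path["expected_food_g"]
            - SHOCK_COST_PER_VOLT * path["shock_volts"])

best = max(paths, key=utility)
print(best["name"], utility(best))  # the path with the best food-shock trade-off
```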
APPENDIX 2
Pseudo Dilemmas

A few years ago, a law teacher came across a student who was willing to learn but unable to pay the necessary fees.
The student struck a deal with the teacher saying, ‘I will pay you your fee the day I win my first case in the court.’
The teacher agreed and proceeded with the law course.
Several months after the course ended, the teacher started pressing the student to pay the fee. The student reminded him that he was yet to win his first case and kept postponing the payment.
Fed up with this, the teacher decided to sue the student in the court of law and both of them decided to argue the case for themselves.
The teacher put forward his argument: ‘If I win this case, then as per the court’s ruling the student has to pay me, since the case is about his non-payment of dues. And if I lose the case, the student will still have to pay me, because he would then have won his first case. So either way, I get my money.’
The student, no less brilliant, countered: ‘If I win the case, then as per the court’s ruling I don’t have to pay the teacher anything, since the case is about my non-payment of dues. And if I lose the case, I don’t have to pay him because I haven’t yet won my first case. So either way, I pay the teacher nothing!’
WHAT ARE PSEUDO DILEMMAS?
To understand the significance of the prisoner’s dilemma (PD) better, it may be useful to consider some alternative ‘dilemmas’ that mimic it closely. First, let us recall from Chapter 4 that for a prisoner’s dilemma to hold, two conditions must be satisfied, namely:
1) T > R > P > S, and
2) (T+S)/2 < R
where T, R, P and S are temptation, reward, punishment and sucker’s payoff respectively.
We may recall that the first condition merely ensures that no matter what one player does, it is better for the other to defect. The second condition ensures that even if the two players locked themselves into an arrangement where one month one defected and the other cooperated, and the next month they switched roles, alternating forever, neither would do better. In fact, both would do worse than if they both cooperated every month.
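Both conditions are mechanical to check for any set of payoffs. A minimal sketch (the function name is mine; the payoffs are the jail terms, as negative numbers, from Chapter 4 and from the PPD discussed below):

```python
def is_prisoners_dilemma(T, R, P, S):
    # Condition 1: T > R > P > S; Condition 2: (T + S)/2 < R.
    return T > R > P > S and (T + S) / 2 < R

# Original PD: the sucker gets five years (S = -5); both conditions hold.
print(is_prisoners_dilemma(T=0, R=-2, P=-4, S=-5))  # True
# PPD below: the sucker gets only three years (S = -3); both conditions fail.
print(is_prisoners_dilemma(T=0, R=-2, P=-4, S=-3))  # False
```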
Now, consider a twist to the prisoner’s dilemma, which we shall refer to as the pseudo prisoner’s dilemma (PPD), whose payoff matrix is as shown in Figure A.1. In PPD, if you squeal when the accomplice does not, you walk free while the sucker gets three years instead of five as in the original prisoner’s dilemma.
In the above payoff matrix of PPD, neither of the above two conditions for prisoner’s dilemma is satisfied:
T = 0, R = –2, P = –4 and S = –3, so that
1) T > R > S > P, violating the requirement that P > S, and
2) (T + S)/2 = –1.5 > R = –2, violating the requirement that (T + S)/2 < R.
Now why should that make any difference to the dilemma the prisoners face? At first glance, the situation seems no different from before, except that the sucker’s payoff has improved from five years behind bars to only three. Is it still better for you to squeal (that is, defect) no matter what I do? Of course, we are assuming once again that our only interest is to minimize our own jail terms. No other sentiment is at work.
Suppose I squeal; what is your best course of action? If you squeal as well, we both get four years each. If, on the other hand, you do not squeal, you get away with only three years. So, as one who is out only to minimize one’s own sentence, it is in your interest not to squeal, given that I have squealed. And what if I do not squeal? In that case it is in your interest to squeal, since that lets you walk away without any prison sentence. So there is no unique course of action that guides you, unlike in the PD situation, where whatever I did, it was better for you to squeal. Of course, the same symmetric reasoning holds for me as well. When faced with this one-time dilemma, you may as well toss a coin and decide whether to squeal based on whether you get a head or a tail, since you have no way of knowing whether or not I am going to squeal. And by symmetry, the same holds for me! Adopting this strategy, since each of the four scenarios (S–S, S–DS, DS–S and DS–DS) is equally likely, our expected prison sentence will be 2.25 years [(2 + 3 + 0 + 4)/4, that is, –(T + R + P + S)/4]. Maybe a tad worse than getting away with two years each if both of us chose not to squeal, but certainly not terribly worse, considering that the strategy of tossing a coin ensures that 25 per cent of the time you get away scot-free and another 25 per cent of the time you get two years—the same as you would get for doing the ‘right thing’, that is, not squealing on each other (a DS–DS outcome).
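The 2.25-year figure is easy to verify by enumerating the four equally likely outcomes; a quick sketch using the Figure A.1 jail terms:

```python
# Your jail term (years) in the PPD, indexed by (you_squeal, i_squeal).
# If we both toss fair coins, all four outcomes are equally likely.
term = {
    (True, True): 4,    # S-S: both squeal (P)
    (True, False): 0,   # you squeal, I don't: you walk free (T)
    (False, True): 3,   # you don't squeal, I do: sucker's payoff (S)
    (False, False): 2,  # DS-DS: neither squeals (R)
}
print(sum(term.values()) / len(term))  # 2.25 years
```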
This version of the dilemma acquires an added dimension when we consider an iterative situation with the payoff matrix shown in Figure A.2, obtained by adding four to each number in Figure A.1.
Figure A.2 The Payoff Matrix for a Pseudo PD-Like Iterative Trade Agreement
Let us consider the goat and cheque trade agreement discussed in Chapter 3, with the payoff matrix indicated in Figure A.2. It will be obvious to both of us within a few transactions that it is mutually beneficial to tacitly lock ourselves into an arrangement where one month I defect and you cooperate, and the next month I cooperate and you defect, so that on an average our payoff is 2.5 each [(1 + 4)/2, or (T + S)/2] instead of a mere 2 each if we both cooperate. This strategy, which arises from the violation of the second necessary condition of the prisoner’s dilemma, proves superior to tossing a coin every time to decide whether to cooperate or defect. It is easily seen that if both of us tossed a coin every time, all four scenarios (namely C–C, D–C, C–D and D–D) would be equally likely, so that we would earn an average of only 1.75 points each per transaction!
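A short simulation using the Figure A.2 payoffs confirms that tacit alternation beats coin-tossing in the iterated game (the simulation itself is my sketch):

```python
import random

# Figure A.2 payoffs per transaction: T=4, R=2, P=0, S=1.
# Keys are (my_move, your_move); values are my points.
PAYOFF = {("D", "C"): 4, ("C", "C"): 2, ("D", "D"): 0, ("C", "D"): 1}
ROUNDS = 100_000

# Tacit alternation: I defect in even months, you in odd months.
alt = sum(PAYOFF[("D", "C") if t % 2 == 0 else ("C", "D")]
          for t in range(ROUNDS)) / ROUNDS

# Coin tossing: each of us cooperates or defects at random every month.
coin = sum(PAYOFF[(random.choice("CD"), random.choice("CD"))]
           for _ in range(ROUNDS)) / ROUNDS

print(f"alternation: {alt:.2f} points per transaction")   # 2.50
print(f"coin toss:  ~{coin:.2f} points per transaction")  # about 1.75
```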
WOLF’S DILEMMA
Here is yet another variation that departs from prisoner’s dilemma, called Wolf’s Dilemma. Consider the payoff matrix in Figure A.3. In this case:
1) T > R > P > S, and
2) (T+S)/2 > R
In other words, the first condition of the prisoner’s dilemma is satisfied, but the second is not. Clearly, the incentive to defect is high and, in a one-time play, the ‘logical’ reasoning invariably leads to a D–D kind of decision. However, in an iterative situation, as in the PPD, Wolf’s Dilemma also leads both of us to learn soon enough that we are better off locked into an arrangement where one month I defect and you cooperate and the next month I cooperate and you defect, so that our average payoff is 25 points each instead of a mere 5 each when we both cooperate every time. This is a situation akin to a cartel-like arrangement in business. I am sure you will want to work it out for yourself; a sketch follows Figure A.3.
Figure A.3 The Payoff Matrix for One-time Wolf’s Dilemma
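To work it out, here is a sketch with payoffs assumed to be consistent with the numbers quoted above (Figure A.3’s actual values may differ; only (T + S)/2 = 25 and R = 5 are taken from the text):

```python
# Assumed Wolf's Dilemma payoffs: T > R > P > S holds,
# but (T + S)/2 = 25 > R = 5, so alternation beats mutual cooperation.
T, R, P, S = 50, 5, 1, 0

print("both cooperate every time:", R)          # 5 points each per round
print("alternate defecting:", (T + S) / 2)      # 25 points each on average
```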
ENDNOTES
Prologue
1 Douglas Hofstadter. ‘Prisoner’s dilemma, computer tournaments and the evolution of cooperation’. Scientific American, May 1983.
Chapter 1: Why Are We the Way We Are?
1 I am aware that the old Hindu growth rate of 2 per cent has been replaced by what we proudly call a 7 per cent growth rate today. But we conveniently forget that our population growth rate is a vigorous 2 per cent, so that the net real growth rate per capita today is only about 5 per cent. But even this increase in productivity growth from 2 per cent to 5 per cent over about half a century is itself a display of the Hindu rate of growth (of productivity) in action, so that our per capita economic base remains among the lowest in the world.
2 Excerpted from my article ‘The idiot savants of India’, The Economic Times 31 October 2005.
3 V.S. Naipaul. 1977. India: A Wounded Civilization. Picador.
Chapter 4: Iterative Prisoner’s Dilemma and We the Squealers!
1 In fact, Axelrod was investigating whether cooperation could emerge out of competition. This chapter and others borrow liberally from his findings.
2 Hofstadter tells the story of this experiment delightfully in ‘Prisoner’s dilemma, computer tournaments and the evolution of cooperation’, Scientific American (May 1983).
Chapter 5: Can Competition Lead to Cooperation?
1 Excerpted from Robert Axelrod’s The Evolution of Cooperation, where he applies the standard prisoner’s dilemma framework to explain the First World War phenomenon.
2 Amartya Sen. 1977. ‘Rational fools: A critique of the behavioral foundations of economic theory’. Philosophy and Public Affairs 6.
Chapter 6: Self-regulation, Fairness and Us
1 Richard H. Thaler. 1994. The Winner’s Curse: Paradoxes and Anomalies of Economic Life. Princeton University Press. p. 3.
2 Daniel Kahneman, Jack L. Knetsch and Richard H. Thaler. 1991. ‘Anomalies: The endowment effect, loss aversion, and status quo bias’. Journal of Economic Perspectives 5 (1): 193–206.
3 ‘Zahira denies filing affidavit in apex court’. Times of India, 4 January 2005.
Chapter 7: Are We the World’s Biggest Free Riders?
1 Hardin explains the Tragedy of the Commons [Science 162 (1968)] along the following lines. Consider a pasture that can sustain only ten cows optimally. Ten cowherds, owning one cow each, graze their cows on this pasture to fatten them for maximum yield of milk. One of the cowherds is tempted to add one more cow to the pasture. While he figures that adding another cow may mean less fodder for each of the cows, and that this may reduce the yield of milk per cow somewhat, he is tempted nevertheless to add a cow, since the cost of the additional cow will actually be shared by the other nine cowherds as well. But what is smart thinking for our cowherd is smart thinking for the other cowherds as well, and each tries to exploit the pasture more and more by adding more cows. And before you know it, the pasture, the cows and the cowherds are all losers.
2 Richard H. Thaler. 1994. The Winner’s Curse: Paradoxes and Anomalies of Economic Life. Princeton University Press.
3 Richard H. Thaler. 1988. ‘The ultimatum game’. Journal of Economic Perspectives 2: 195–206; Oliver Kim and Mark Walker. 1984. ‘The free rider problem: Experimental evidence’. Public Choice 43: 3–24; R. Mark Isaac, James M. Walker and Susan H. Thomas. 1984. ‘Divergent evidence on free riding: An experimental examination of possible explanations’. Public Choice 43: 113–49; and Jack Hirshleifer. 1985. ‘The expanding domain of economics’. American Economic Review 75 (6): 53–70.
4 Leon Felkins. ‘Examples of social dilemma’. http://perspicuity.net/sd/sd-exam.html
5 F. de Zwart. 1994. The Bureaucratic Merry-go-round: Manipulating the Transfer of Indian Civil Servants. Amsterdam University Press.
Chapter 9: Veerappan Dilemma: The Poser Answered
1 The computation of the probability as 38 per cent is as follows:
For any one of the twenty aspirants, the probability that he will draw a specific predetermined number, and hence write, is 1/20; the probability that he will not draw the predetermined number, and hence not write, is 19/20.
Thus, the conditional probability that the other nineteen will not draw their predetermined numbers, and hence not write, given that one particular aspirant draws his predetermined number and hence writes, is:
(19/20)^19 ≈ 0.377
As there are twenty aspirants in all, the sum total of the probabilities that exactly one aspirant will end up drawing his predetermined number, and none else, is:
20 × (1/20) × (19/20)^19 = (19/20)^19 ≈ 0.38, that is, about 38 per cent.
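The arithmetic can be verified directly (a sketch of the computation only):

```python
# Probability that exactly one of the twenty aspirants draws his
# predetermined number: 20 * (1/20) * (19/20)**19 = (19/20)**19.
p = 20 * (1 / 20) * (19 / 20) ** 19
print(round(p, 4))  # ~0.3774, i.e., about 38 per cent
```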
2 In the world of quantum physics, the ordinary laws of physics break down, as all events are governed by probabilities rather than certainties. For example, over a given interval a radioactive atom might decay, emitting an electron, or it might not, each outcome carrying a certain probability. Schrödinger, much like Einstein, did not believe that God plays dice, and in order to show the ‘absurdness’ of the quantum mechanical implications, conceived the following thought experiment.
Imagine a closed carton that contains a live cat and a vial of poison, in addition to some radioactive material, arranged in such a manner that if radioactive decay occurs, it will smash the poison vial, killing the cat. Now if the experiment is set up so that there is exactly a 50:50 chance of radioactive decay (which it is possible to arrange), one could then say there is a 50:50 chance that the cat will be killed. If so, one could equally safely state, even without looking inside the box, that the cat is both dead and alive, each possibility having a 50 per cent chance. So far so good.
But there our intuitive world ends. According to quantum mechanics, neither of the two possibilities for the decaying atom, and hence for the cat whose life is linked to the decay, has any reality unless the decay (or the state of the cat) is observed. The atom has neither decayed nor not decayed, so that the cat is neither dead nor alive, till we take a peek inside the box! Believers in quantum mechanics consider the cat to be in a kind of indeterminate limbo until then.
Appendix 1
1 A colleague made a passing reference to this experiment to me several years ago (in the mid-1980s), and though I have tried hard, I have not been able to find the exact reference for it. I suspect, however, that it is the work of John Kagel.