The Apprentice Economist


by Filip Palda


  If each firm is given this sort of choice, will all firms ultimately be led by the government payoff scheme to produce the correct competitive quantity? This is a question of the Nash equilibrium that would emerge. If Baal Telephone believes all the others are going to pretend they have high costs, that is just fine by Baal, because it reduces what they get to produce and leaves Baal a large segment of the market to milk indirectly by reporting truthfully and collecting government subsidies. The others see the danger and thus try to win more output by being more truthful themselves. Everyone being truthful is a Nash equilibrium because no one could deviate from it, given the strategies of the other players, and come out ahead. The final quantity that results from this game is efficient in the sense that it exhausts all possibilities for furthering productive exchanges between producers and consumers. It turns out that this is also a Pareto-optimal outcome.

  Vickrey’s mechanism does not work in all cases of asymmetric information. It only makes sense to look for a Vickrey mechanism when there is some sort of alignment of interests. In other words, the game in question may be non-cooperative but must not be zero-sum. The essence of the Vickrey mechanism is that one player is paying off another, albeit indirectly, in order to get a better outcome. You can only pay someone off and come out ahead yourself if there is a concordance of interests. In the Holmes-Moriarty death chase, no bargaining is possible because one man’s gain is precisely the other’s loss. This is why a zero-sum interaction precludes the implicit bargaining behind the Vickrey formula. The concordance of interests in the telephony example was between Baal Telephone and the government acting on behalf of consumers. The government’s truth revelation mechanism was motivated by the creation of wealth in society that such a mechanism might induce.

  Mechanism design and the size of government

  VICKREY MADE A case that government is needed to prevent rapacious oligopolists from milking consumers of a private good, and that mechanism design was the way to reverse engineer the Bayesian game in favour of economic efficiency. Government achieves this objective by devising a reward structure for truth. This structure is characterized by “incentive constraints”. As Myerson explains, “These incentive constraints express the basic fact that individuals will not share private information or exert hidden efforts without appropriate incentives” (2009, 587). The constraint is devised to be “incentive compatible”, a phrase signifying that being honest and obedient is compatible with your private, selfish objectives. Ten years after Vickrey, a swarm of economists used his insights to argue that mechanism design might also be fruitfully applied to government spending.

  Suppose government must decide between building a hospital or creating a national park, each of which will cost the same amount of money. How does it know which one to finance? It could ask voters how much they are willing to pay, above the per-person cost, to see their preferred project go through. The project for which people are willing to pay the most would be the one creating the most wealth in society. But if you expected to pay only the per-person cost, you would be tempted to overstate the value of your choice, because by doing so you could skew the government’s decision towards your desired position without a concomitant increase in your personal cost. Here we have a commonality of interest, in that the government is seeking to create the most value in society. There is also the problem of asymmetric information, because government has no way of knowing whether people are telling it the truth about their valuations. This is a Bayesian game ripe for conversion to a truth-telling equilibrium through the application of an incentive-compatible mechanism. The revelation principle at work.

  The mechanism is similar to Vickrey’s and is generally called a Vickrey-Clarke-Groves mechanism. It works by asking each person what his or her net benefit is, above the per-person cost, from the preferred alternative. You add up the dollar votes for the hospital, and if the sum is greater than that for the park, the hospital gets built. But there is a catch. As well as being charged the per-person cost of building the hospital, any voter who was “pivotal” in forcing the decision pays an extra cost equal to the net benefit lost by the other voters who did not see the park get created. “Pivotal” means that by announcing a high valuation on a certain outcome, it was you who tipped the political balance in its favour.
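  For readers who like to see the arithmetic, here is a minimal sketch in Python of the pivot rule just described. The three voters, their dollar figures, and the clarke_pivot helper are invented for illustration and are not drawn from the text; the sketch assumes only two projects and reports expressed as net benefits above the per-person cost.

```python
# A minimal sketch of the Clarke "pivot" rule described above, using made-up
# valuations. Each voter reports a net benefit (value above his or her share
# of the cost) for the hospital and for the park. The project with the larger
# sum of reported net benefits wins, and any voter whose report tips the
# outcome pays a tax equal to the net benefit the others lose as a result.

from typing import Dict, Tuple

# Hypothetical reported net benefits, in dollars, per voter.
reports = {
    "Ann":   {"hospital": 40.0, "park": 10.0},
    "Bob":   {"hospital": 5.0,  "park": 30.0},
    "Carol": {"hospital": 25.0, "park": 15.0},
}

def clarke_pivot(reports: Dict[str, Dict[str, float]]) -> Tuple[str, Dict[str, float]]:
    """Pick the project with the largest reported net benefit and charge
    each pivotal voter the Clarke tax."""
    projects = ["hospital", "park"]
    totals = {p: sum(r[p] for r in reports.values()) for p in projects}
    winner = max(projects, key=totals.get)

    taxes = {}
    for voter, r in reports.items():
        # Recompute the outcome as if this voter had not reported at all.
        others = {p: totals[p] - r[p] for p in projects}
        winner_without = max(projects, key=others.get)
        if winner_without != winner:
            # Pivotal: charge the net benefit the others lose because the
            # outcome flipped in this voter's favour.
            taxes[voter] = others[winner_without] - others[winner]
        else:
            taxes[voter] = 0.0
    return winner, taxes

winner, taxes = clarke_pivot(reports)
print(winner, taxes)   # hospital wins; only the pivotal voter pays a positive tax
```

  With these particular numbers the hospital wins, 70 dollars of reported net benefit to 55, and only Ann is pivotal: without her report the park would have been chosen, so she pays the 15 dollars of net benefit the other two lose when the hospital goes ahead instead.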

  As Tideman and Tullock explained, “A nontruthful response cannot benefit the respondent, and it carries a risk of making him worse off than he would have been with the truth. If he understates his value, he may pass up an opportunity to obtain the result he desires at an attractive price. If he overstates his value, he may wind up paying more than it is worth to him to have his choice” (1976, 1148).

  Correctly revealing your preferences is a Nash equilibrium because, when everyone else is telling the truth, no single person profits by deviating from the truth. If we lie to get our way while others are telling the truth, we will be punished with an extra tax. If we lie by understating our preferences while others are honest, the compensation they pay will be proportional to our understated loss and thus not enough to compensate us for the true loss of not seeing our preferred alternative go through.
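  A quick way to convince yourself of the Tideman and Tullock point is to fix what everyone else reports and sweep over possible misreports by a single voter. The sketch below does this for the hypothetical voter Ann from the earlier example; the other voters’ totals and the grid of possible lies are again invented numbers, not anything from the text.

```python
# A small check, in the spirit of the Tideman-Tullock argument above, that a
# voter cannot gain by misreporting under the pivot rule. Ann's true net
# benefits are fixed; we sweep over possible misreports and compare her
# realized payoff (true benefit of the chosen project minus any Clarke tax)
# with what she gets by telling the truth. All numbers are made up.

from itertools import product

others = {"hospital": 30.0, "park": 45.0}     # sum of everyone else's reports
true_ann = {"hospital": 40.0, "park": 10.0}   # Ann's true net benefits

def ann_payoff(report):
    """Ann's realized payoff given her report and the others' fixed reports."""
    totals = {p: others[p] + report[p] for p in others}
    winner = max(totals, key=totals.get)
    winner_without = max(others, key=others.get)
    tax = (others[winner_without] - others[winner]) if winner_without != winner else 0.0
    return true_ann[winner] - tax

truthful = ann_payoff(true_ann)
best_lie = max(
    ann_payoff({"hospital": h, "park": p})
    for h, p in product(range(0, 101, 5), repeat=2)
)
print(truthful, best_lie)   # no misreport on the grid does better than the truth
```

  Both numbers printed are the same. As long as the hospital still wins, Ann’s tax does not depend on what she says, so exaggeration buys her nothing; and if an understated report flips the outcome to the park, she simply ends up with the alternative she values less.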

  Centralized (Vickrey-Clarke-Groves) vs. decentralized (Spence) mechanisms

  ENTHUSIASM FOR MECHANISMS that reveal voter preferences was quite high among researchers during the 1970s, but it eventually waned. The catch with the Vickrey-Clarke-Groves scheme is that the money collected by the government has to be literally destroyed. If you give the money back to voters through a transfer scheme, then the impact of the user-fee revelation scheme on truth-telling is blunted. Even if the money is used to pay off the national debt, or to build a road somewhere, people in general will still feel that some of it is coming back to them through the lower future taxes needed to pay off the debt, or through increased services from the use of roads. There are similar revelation schemes that pay voters to tell the truth, but these can rack up large deficits, as Groves and Ledyard (1977) proved.

  Enthusiasm also waned because the schemes being proposed were not only bewildering but also impractical. The difficulty and impracticality of Vickrey-Clarke-Groves mechanisms stemmed from trying to elicit truth-telling through complicated reward schemes and the use of a multi-stage arbitration process run by some impartial government figure. Mechanism designs of this type relied on a central authority to make them work, because individuals had no way of proving by themselves that they were telling the truth. Only a straitjacket of rewards or punishments designed by the government could squeeze the truth out of them. In contrast, the signaling games that Spence discovered allowed individuals to make private decisions to invest in, say, education, to send a signal to employers that was credible because it was costly. The employer did have a role in eliciting this signal, through the salary or, more generally, some reward. The ability of the other parties to send signals gave Spence’s theory a degree of latitude that made it simpler to understand and to see at work in the real world.

  Compare, for example, the contortions required by Vickrey-Clarke-Groves mechanisms to elicit truth from voters with the way politics actually seems to be done: through costly signals. Voters who want to honestly express their preferences may do so by bearing costs. Campaign contributions are a signal of preferences, as are lobbying efforts and the work of those who organize rallies and letter-writing campaigns.

  As in the case of job-market signaling, politicians, like employers, do not want to over-reward investment in signals. Too strong a reward dilutes the value of the signal by encouraging everyone to invest in sending it. Employers then lose the means of divining the type of person they are considering for a job, and politicians have trouble deciding which interest groups should get a grant. This is why laws that restrict campaign contributions and award start-up money to budding interest groups may actually interfere with the workings of a democracy. For without credible signals from the people, politicians have little to go on in deciding how to spend money.


  In societies where voters are not able to send credible signals by their own efforts, some attempt to coax the truth from them must be made. Vickrey-Clarke-Groves mechanisms are one such attempt. They do not rely on signals but rather on enticements to elicit the truth about voter preferences. As such, they have to be imposed from the top. In sum, if we do not allow mechanisms whereby individuals may send costly signals to politicians, we may need to rely on schemes that politicians concoct to bribe the truth out of us.

  The mechanism zoo

  BY GETTING PEOPLE to reveal the truth and act honestly, Vickrey-Clarke-Groves schemes and signaling games lead people to coordinate their actions in a mutually fruitful manner.

  Each mechanism took a different path. Vickrey-Clarke-Groves was in a centralist tradition; Spence offered a decentralized solution to lying and cheating. Of mechanism designs in the Vickrey-Clarke-Groves approach, Nobelist Eric Maskin noted that, “The theory of mechanism design can be thought of as the ‘engineering’ side of economic theory. Much theoretical work, of course, focuses on existing economic institutions. The theorist wants to explain or forecast the economic or social outcomes that these institutions generate. But in mechanism design theory the direction of inquiry is reversed. We begin by identifying our desired outcome or social goal. We then ask whether or not an appropriate institution (mechanism) could be designed to attain that goal” (2008, 567). True. But there were other mechanism designs that worked in a decentralized manner. Spence’s signaling model was one such example. There were others.

  In fact there was an entire zoo of mechanisms for getting people to behave and coordinate their actions towards a mutually profitable outcome. Game theorists Claude d’Aspremont and Louis-André Gérard-Varet explained that, “game theory can … propose a solution: some cooperative transformation may be introduced creating a new game with equilibria having better welfare properties. Such a transformation can come about through a ‘regulation,’ a ‘mediation’ or an ‘audit.’ It may be obtained by ‘repeating’ the game, by adding a ‘communication scheme,’ or by ‘contractually’ modifying the original payoff structure” (1995). To most people, the term “cooperative transformation” would be gobbledygook, but having come this far we are in a position to understand it effortlessly. The cooperative transformation these authors speak of is a means of converting liars and cheats into honest, obedient folk. Vickrey-Clarke-Groves did this through a “mediation”. Spence did this through a “communication scheme”. One mechanism we have not explored was discovered by Roger Myerson and Mark Satterthwaite (1983), who focused on how expected payoffs could be modified by manipulating the probabilities of success. There is a proliferation of other schemes, but to round out our survey we can be satisfied with looking at the “repeating the game” phrase from the quotation above. What does a mechanism like that look like?

  Repeated interaction is perhaps the only game-theoretic idea that non-economists seem intuitively comfortable with, thanks in part to the 2005 French historical film Joyeux Noël. In the first Christmas of the European conflict of 1914-1918, soldiers in opposing trenches dialled down their aggressive acts, knowing that a night raid or a mortar lobbed onto the heads of lunching enemies would provoke an outraged retaliation. Those making these decisions were low-level people, junior and non-commissioned officers on opposing sides of a line across which higher-ups tolerated no communication except the roar of guns and the thrust of bayonets. Yet using primitive signals, soldiers managed, through repeated interactions in no-man’s land, to agree implicitly to a neutral posture. So complete was this informal truce that on Christmas Day 1914, French, Scottish, and German soldiers came out of their trenches to play football and exchange trinkets.

  The emergence of peace from war is captured in the repeated play of a game called the “prisoner’s dilemma”. It is one of those rare games where you do not need a Nash solution concept to see where matters are heading, because one strategy clearly dominates all others. To stay with the warfare analogy, imagine two soldiers on either side of the barbed wire, armed with grenades. If the French soldier throws and the German does not, then the German dies, and vice versa. There are two other possibilities. If both soldiers throw simultaneously, then the blast of two grenades partially neutralizes their effect, but both soldiers still get some injuries. If neither throws, then neither is injured. What to do? If you are only playing this game once, then there is only one option: throw your grenade. This is a “dominant” strategy because no matter what the other guy does, your best decision is to throw. For if the other throws and you do not, then you die. It is better to throw if you think this is going to happen. If the other does not throw and you do, then he dies. Better to throw in this situation, too. In other words, no matter what the other guy decides, it is better for you to throw. But because he is thinking in the same way, he throws too. What you get is an inferior equilibrium in which both are injured. If only they could have somehow communicated and then committed to not throwing, then both would be unscathed. Repeated play allows both players to communicate, in the sense that if someone throws this time around and you do not, then you will punish him next time by throwing.
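  The grenade story can be written out as a tiny payoff table. The numbers in the sketch below are made up, but they preserve the ranking in the text: mutual restraint beats mutual injury, and being the only one not to throw is fatal. A short check then confirms that throwing does at least as well as holding back no matter what the other soldier does.

```python
# Payoffs (French, German) for each pair of actions in the one-shot grenade
# game; higher is better. The specific numbers are illustrative.
PAYOFFS = {
    ("hold", "hold"):   (0, 0),      # neither throws: both unscathed
    ("hold", "throw"):  (-10, 1),    # only the German throws: the French soldier dies
    ("throw", "hold"):  (1, -10),    # only the French soldier throws: the German dies
    ("throw", "throw"): (-5, -5),    # both throw: both are injured
}

ACTIONS = ["hold", "throw"]

def payoff(player, own, other):
    """Payoff to player 0 (French) or 1 (German) from his own move and the other's."""
    pair = (own, other) if player == 0 else (other, own)
    return PAYOFFS[pair][player]

def dominant_action(player):
    """Return a move that does at least as well against every move of the opponent."""
    for a in ACTIONS:
        if all(
            payoff(player, a, b) >= payoff(player, alt, b)
            for b in ACTIONS
            for alt in ACTIONS
        ):
            return a
    return None

print(dominant_action(0), dominant_action(1))   # throw throw
```

  The dominant strategies lead straight to the mutual-injury outcome, even though both soldiers would prefer the outcome in which neither throws.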

  Computer simulations have shown that this sort of game can converge to a cooperative equilibrium, although game theorist David Kreps (1990) has also shown there are many other possible equilibria to this repeated game. The point for game theorists is that repeated interaction is a way of signaling intentions and coordinating actions in such a way that the game “converges” to a cooperative interaction. We may think of many societies where government is not present as experiencing such convergence between isolated pockets of individuals living far away from a mechanism-minded law-giver.
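  A toy simulation in the same spirit shows how repetition changes matters; it is not a reproduction of the simulations Kreps discusses, and the strategies and payoffs are again invented. A “tit for tat” soldier holds back on the first round and then copies whatever the other side did last time, so a throw is answered with a throw in the following round.

```python
# Repeated play of the grenade game with two simple strategies. The payoff
# table gives the mover's own payoff, indexed by (own action, other's action);
# the numbers are illustrative.
PAYOFFS = {
    ("hold", "hold"): 0, ("hold", "throw"): -10,
    ("throw", "hold"): 1, ("throw", "throw"): -5,
}

def tit_for_tat(opponent_history):
    """Hold back first; afterwards mirror the opponent's last move."""
    return opponent_history[-1] if opponent_history else "hold"

def always_throw(opponent_history):
    return "throw"

def play(strategy_a, strategy_b, rounds=20):
    """Return cumulative payoffs when the two strategies face each other."""
    moves_a, moves_b, total_a, total_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(moves_b), strategy_b(moves_a)
        total_a += PAYOFFS[(a, b)]
        total_b += PAYOFFS[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return total_a, total_b

print(play(tit_for_tat, tit_for_tat))    # (0, 0): the informal truce holds every round
print(play(tit_for_tat, always_throw))   # the aggressor is punished from round two on
```

  When tit for tat meets tit for tat, nobody ever throws; when it meets a soldier who always throws, the aggressor is answered with grenades from the second round onward and ends up far worse off than he would have been under mutual restraint.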

  Are free markets better or worse?

  THUS WE COME to the end of game theory. The first half of the subject consists of learning how people play games. The second half consists of learning how to neutralize these games to prevent lying and cheating. Ex ludis probitas et oboedientia. Out of games, honesty and obedience. We have examined three main categories of neutralization techniques (though others exist): Vickrey-Clarke-Groves mechanisms, signaling mechanisms, and repeated game play. All seek to fruitfully coordinate behavior by enticing people to reveal their personal information. Understanding what leads to communication and coordination between groups of people may seem like thin gruel to those who struggle through the basics of game theory and mechanism design. David Kreps (1990) and Roger Myerson (2008) urge us not to despair. Despite some frustrating features, game theory enables us to understand what forces are at work in some of the really big questions of political economy.

  One of these big questions is whether private markets are more or less efficient at coordinating people than is control by a central authority. In the 1930s this was known as the “socialist calculation debate” because knowing how to calculate what people needed and what factories could produce seemed like the essence of the debate between free marketers and socialists. Louis Makowski and Joseph Ostroy (1992) describe how socialists cleverly turned free-market logic against capitalism. So-called “market socialists” argued that a government that controlled all resources could find an economically efficient way of producing by imitating the free market. The government needed to know the value people attached to consuming some product and at what cost firms could produce. Willingness to pay and ability to produce at low cost are the essence of demand and supply relations. In free markets a price is supposed to emerge that equates consumption and output in such a way as to unite consumers who are willing to pay the most with producers who are able to produce at the lowest cost. A government that knew consumer needs and producer capabilities could manipulate prices until an equilibrium emerged.

  Friedrich Hayek accepted the socialist premise that a central planner could ape the free market provided it had all the relevant information on needs and abilities. But according to him this presumption of knowledge was a “fatal conceit”. In his 1945 article, “The Use of Knowledge in Society”, he argued that each person holds private information about his desires and abilities, information of “time and place” that he or she is either unable or unwilling to share with the central planner. If, instead, the individual may own his or her own property, then through a process of competitive bidding for this property and its fruits, people reveal personal information. The equilibrium price that results is a compression into one number of all the dispersed economic data needed to guide the economy to an efficient equilibrium. Socialists countered that by allowing factory managers to experiment with prices in their local markets, they could also arrive at the “knowledge of time and place” that Hayek said was only available to private individuals.

  The debate became a stalemate that lasted for forty years. It seemed neither side was really speaking the same language as the other. This is not surprising, as the language needed had not yet been invented. What was needed was a better understanding of the informational problems that prevent coordination between people. By fusing game theory and information economics, mechanism design provided the language, or framework, in which both socialists and free marketers could compare the merits of their arguments.

  It seemed that socialism and capitalism were good at different things. Socialism suffered from cheating, or “moral hazard”, more than capitalism because it did not allow company managers to own shares in their own companies. In socialist systems managers would readily sell raw materials needed by their firm on the black market because they had no stake in the ownership of the company. By way of contrast, in capitalist economies, allowing managers to own shares in their firm discouraged them from slacking or from corruptly selling at too low a price. This aspect of private property provided managers with the “incentive constraint” necessary to make them behave honestly. Knowing that the manager would not be pilfering the company stockpiles gave outside investors the confidence to coordinate their financial support with the entrepreneurial drive of the company managers. Of course the Soviets were not blind to the moral hazard problem. They dealt with it by investing in propaganda that would inculcate a sense of public service in managers. Failing that, there was the gulag.

 
