Against the Gods: The Remarkable Story of Risk


by Peter L. Bernstein


  Bernoulli introduced another novel idea that economists today consider a driving force in economic growth-human capital. This idea emerged from his definition of wealth as "anything that can contribute to the adequate satisfaction of any sort of want .... There is then nobody who can be said to possess nothing at all in this sense unless he starves to death."

  What form does most people's wealth take? Bernoulli says that tangible assets and financial claims are less valuable than productive capacity, including even the beggar's talent. He suggests that a man who can earn 10 ducats a year by begging will probably reject an offer of 50 ducats to refrain from begging: after spending the 50 ducats, he would have no way of supporting himself. There must, however, be some amount that he would accept in return for a promise never to beg again. If that amount were, for instance, 100 ducats, "we might say that [the beggar] is possessed of wealth worth one hundred."

  Today, we view the idea of human capital-the sum of education, natural talent, training, and experience that comprise the wellspring of future earnings flows-as fundamental to the understanding of major shifts in the global economy. Human capital plays the same role for an employee as plant and equipment play for the employer. Despite the enormous accretions of tangible wealth since 1738, human capital is still by far the largest income-producing asset for the great majority of people. Why else would so many breadwinners spend their hard-earned money on life-insurance premiums?

  For Bernoulli, games of chance and abstract problems were merely tools with which to fashion his primary case around the desire for wealth and opportunity. His emphasis was on decision-making rather than on the mathematical intricacies of probability theory. He announces at the outset that his aim is to establish "rules [that] would be set up whereby anyone could estimate his prospects from any risky undertaking in light of one's specific financial circumstances." These words are the grist for the mill of every contemporary financial economist, business manager, and investor. Risk is no longer something to be faced; risk has become a set of opportunities open to choice.

  Bernoulli's notion of utility-and his suggestion that the satisfaction derived from a specified increase in wealth would be inversely related to the quantity of goods previously possessed-were sufficiently robust to have a lasting influence on the work of the major thinkers who followed. Utility provided the underpinnings for the Law of Supply and Demand, a striking innovation of Victorian economists that marked the jumping-off point for understanding how markets behave and how buyers and sellers reach agreement on price. Utility was such a powerful concept that over the next two hundred years it formed the foundation for the dominant paradigm that explained human decision-making and theories of choice in areas far beyond financial matters. The theory of games-the innovative twentieth century approach to decision-making in war, politics, and business management-makes utility an integral part of its entire system.

  Utility has had an equally profound influence on psychology and philosophy, for Bernoulli set the standard for defining human rationality. For example, people for whom the utility of wealth rises as they grow richer are considered by most psychologists-and moralists-as neurotic; greed was not part of Bernoulli's vision, nor is it included in most modern definitions of rationality.

  Utility theory requires that a rational person be able to measure utility under all circumstances and to make choices and decisions accordingly-a tall order given the uncertainties we face in the course of a lifetime. The chore is difficult enough even when, as Bernoulli assumed, the facts are the same for everyone. On many occasions the facts are not the same for everyone. Different people have different information; each of us tends to color the information we have in our own fashion. Even the most rational among us will often disagree about what the facts mean.

  Modern as Bernoulli may appear, he was very much a man of his times. His concept of human rationality fitted neatly into the intellectual environment of the Enlightenment. This was a time when writers, artists, composers, and political philosophers embraced the classical ideas of order and form and insisted that through the accumulation of knowledge mankind could penetrate the mysteries of life. In 1738, when Bernoulli's paper appeared, Alexander Pope was at the height of his career, studding his poems with classical allusions, warning that "A little learning is a dangerous thing," and proclaiming that "The proper study of mankind is man." Denis Diderot was soon to start work on a 28-volume encyclopedia, and Samuel Johnson was about to fashion the first dictionary of the English language. Voltaire's unromantic viewpoints on society occupied center stage in intellectual circles. By 1750, Haydn had defined the classical form of the symphony and sonata.

  The Enlightenment's optimistic philosophy of human capabilities would show up in the Declaration of Independence and would help shape the Constitution of the newly formed United States of America. Carried to its violent extreme, the Enlightenment inspired the citizens of France to lop off the head of Louis XVI and to enthrone Reason on the altar of Notre Dame.

  Bernoulli's boldest innovation was the notion that each of us, even the most rational, has a unique set of values and will respond accordingly, but his genius was in recognizing that he had to go further than that. When he formalizes his thesis by asserting that utility is inversely proportionate to the quantity of goods possessed, he opens up a fascinating insight into human behavior and the way we arrive at decisions and choices in the face of risk.

  According to Bernoulli, our decisions have a predictable and systematic structure. In a rational world, we would all rather be rich than poor, but the intensity of the desire to become richer is tempered by how rich we already are. Many years ago, one of my investment counsel clients shook his finger at me during our first meeting and warned me: "Remember this, young man, you don't have to make me rich. I am rich already!"

  The logical consequence of Bernoulli's insight leads to a new and powerful intuition about taking risk. If the satisfaction to be derived from each successive increase in wealth is smaller than the satisfaction derived from the previous increase in wealth, then the disutility caused by a loss will always exceed the positive utility provided by a gain of equal size. That was my client's message to me.

  Think of your wealth as a pile of bricks, with large bricks at the foundation and with the bricks growing smaller and smaller as the height increases. Any brick you remove from the top of the pile will be larger than the next brick you might add to it. The hurt that results from losing a brick is greater than the pleasure that results from gaining a brick.

  Bernoulli provides this example: two men, each worth 100 ducats, decide to play a fair game, like tossing coins, in which there is a 50-50 chance of winning or losing, with no house take or any other deduction from the stakes. Each man bets 50 ducats on the throw, which means that each has an equal chance of ending up worth 150 ducats or of ending up worth only 50 ducats.

  Would a rational person play such a game? The mathematical expectation of each man's wealth after the game has been played with this 50-50 set of alternatives is precisely 100 ducats (150 + 50 divided by 2), which is just what each player started with. The expected value for each is the same as if they had not decided to play the game in the first place.

  Bernoulli's theory of utility reveals an asymmetry that explains why an even-Steven game like this is an unattractive proposition. The 50 ducats that the losing player would drop have greater utility than the 50 ducats that the winner would pocket. Just as with the pile of bricks, losing 50 ducats hurts the loser more than gaining 50 ducats pleases the winner.* In a mathematical sense a zero-sum game is a loser's game when it is valued in terms of utility. The best decision for both is to refuse to play this game.
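The arithmetic behind this asymmetry can be sketched in a few lines. The marginal utility Bernoulli describes, inversely proportional to wealth already held, implies a logarithmic utility function; the figures below (100 ducats of wealth, a 50-ducat stake) are taken from his example, while the choice of the natural logarithm is the standard reading of his assumption.

```python
import math

def log_utility(wealth):
    """Bernoulli-style utility: each extra ducat adds less satisfaction than the last."""
    return math.log(wealth)

wealth = 100   # each player's starting wealth, in ducats
stake = 50     # amount wagered on one fair coin toss

u_now = log_utility(wealth)
# Expected utility of playing: a 50-50 chance of ending with 150 or 50 ducats.
u_play = 0.5 * log_utility(wealth + stake) + 0.5 * log_utility(wealth - stake)

print(f"utility of not playing:     {u_now:.4f}")   # ln(100) ≈ 4.6052
print(f"expected utility of playing: {u_play:.4f}")  # ≈ 4.4613
assert u_play < u_now  # the mathematically fair game is a loser in utility terms
```

Although the expected *wealth* is unchanged at 100 ducats, the expected *utility* of playing is lower than the utility of standing pat, which is exactly why both players should refuse the game.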

  Bernoulli uses his example to warn gamblers that they will suffer a loss of utility even in a fair game. This depressing result, he points out, is:

  Nature's admonition to avoid the dice altogether .... [E]veryone who bets any part of his fortune, however small, on a mathematically fair game of chance acts irrationally .... [T]he imprudence of a gambler will be the greater the larger part of his fortune which he exposes to a game of chance.

  Most of us would agree with Bernoulli that a fair game is a loser's game in utility terms. We are what psychologists and economists call "risk-averse" or "risk averters." The expression has a precise meaning with profound implications.

  Imagine that you were given a choice between a gift of $25 for certain or an opportunity to play a game in which you stood a 50% chance of winning $50 and a 50% chance of winning nothing. The gamble has a mathematical expectation of $25-the same amount as the gift-but that expectation is uncertain. Risk-averse people would choose the gift over the gamble. Different people, however, are risk-averse in different degrees.

  You can test your own degree of risk aversion by determining your "certainty equivalent." How high would the mathematical expectation of the game have to go before you would prefer the gamble to the gift? Thirty dollars from a 50% chance of winning $60 and a 50% chance of winning nothing? Then the $30 expectation from the gamble would be the equivalent of the $25 for certain. But perhaps you would take the gamble for an expectation of only $26. You might even discover that at heart you are a risk-seeker, willing to play the game even when the mathematical expectation of the payoff is less than the certain return of $25. That would be the case, for example, in a game where the payoffs are set so that you would win $40 if you toss tails and zero if you toss heads, for an expected value of only $20. But most of us would prefer a game in which the expected value is something in excess of the $25 in the example. The popularity of lottery games provides an interesting exception to this statement, because the state's skim off the top is so large that most lotteries are egregiously unfair to the players.
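A certainty equivalent can also be computed directly rather than introspected. The sketch below is illustrative, not Bernoulli's own calculation: it assumes a logarithmic utility function and an assumed starting wealth of $100 (log utility requires positive wealth, so some starting figure must be supplied), then finds the sure gain that yields the same utility as the 50-50 gamble for $50 or nothing.

```python
import math

initial_wealth = 100.0  # assumed starting wealth; log utility is undefined at zero
win = 50.0              # the gamble: 50% chance of winning $50, 50% chance of nothing

# Expected utility of taking the gamble under log utility.
expected_utility = 0.5 * math.log(initial_wealth + win) + 0.5 * math.log(initial_wealth)

# The certainty equivalent is the sure gain that delivers the same utility.
certainty_equivalent = math.exp(expected_utility) - initial_wealth
print(f"certainty equivalent: ${certainty_equivalent:.2f}")  # ≈ $22.47
```

Under these assumptions the gamble is worth only about $22.47 for certain, less than its $25 mathematical expectation: the gap between the two is the price this hypothetical decision-maker puts on bearing the risk.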

  A significant principle is at work here. Suppose your stockbroker recommends a mutual fund that invests in a cross section of the smallest stocks listed on the market. Over the past 69 years, the smallest 20% of the stock market has provided a total return of capital appreciation plus dividend income that has averaged 18% a year. That is a generous rate of return. But volatility in this sector has also been high: two-thirds of the returns have fallen between -23% and +59%; negative returns over twelve-month periods have occurred in almost one out of every three years and have averaged 20%. Thus, the outlook for any given year has been extremely uncertain, regardless of the high average rewards generated by these stocks over the long run.

  As an alternative, suppose a different broker recommends a fund that buys and holds the 500 stocks that comprise the Standard & Poor's Composite Index. The average annual return on these stocks over the past 69 years has been about 13%, but two-thirds of the annual returns have fallen within the narrower range of -11% and +36%; negative returns have averaged 13%. Assuming the future will look approximately like the past, but also assuming that you do not have 70 years to find out how well you did, is the higher average expected return on the small-stock fund sufficient to justify its much greater volatility of returns? Which mutual fund would you buy?
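One hedged way to frame the choice between the two funds is a standard mean-variance certainty equivalent, CE = mu - (lambda/2) * sigma^2, where lambda is a risk-aversion coefficient. This framework and the lambda values are illustrative assumptions, not anything in the text; the standard deviations are read off the two-thirds bands quoted above (roughly 41 points for the small-stock fund, 23.5 for the index fund).

```python
# Return statistics implied by the two-thirds (one standard deviation) bands in the text.
small_stock = {"mean": 0.18, "sd": 0.41}   # -23% to +59% around an 18% average
index_fund = {"mean": 0.13, "sd": 0.235}   # -11% to +36% around a 13% average

def certainty_equivalent(mean, sd, risk_aversion):
    """Mean-variance approximation: CE = mu - (lambda / 2) * sigma^2."""
    return mean - 0.5 * risk_aversion * sd ** 2

for lam in (0.0, 1.0, 2.0):
    ce_small = certainty_equivalent(small_stock["mean"], small_stock["sd"], lam)
    ce_index = certainty_equivalent(index_fund["mean"], index_fund["sd"], lam)
    pick = "small-stock fund" if ce_small > ce_index else "index fund"
    print(f"risk aversion {lam}: prefers the {pick}")
```

With no risk aversion (lambda = 0) only the averages matter and the small-stock fund wins; at even moderate risk aversion the index fund's narrower band more than makes up for its lower average. Which fund you would buy depends, as the passage suggests, on how risk-averse you are.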

  Daniel Bernoulli transformed the stage on which the risk-taking drama is played out. His description of how human beings employ both measurement and gut in making decisions when outcomes are uncertain was an impressive achievement. As he himself boasts in his paper, "Since all our propositions harmonize perfectly with experience, it would be wrong to neglect them as abstractions resting upon precarious hypotheses."

  A powerful attack some two hundred years later ultimately revealed that Bernoulli's propositions fell short of harmonizing perfectly with experience, in large part because his hypotheses about human rationality were more precarious than he as a man of the Enlightenment might want to believe. Until that attack was launched, however, the concept of utility flourished in the philosophical debate over rationality that prevailed for nearly two hundred years after Bernoulli's paper was published. Bernoulli could hardly have imagined how long his concept of utility would survive-thanks largely to later writers who came upon it on their own, unaware of his pioneering work.

  One winter night during one of the many German air raids on Moscow in World War II, a distinguished Soviet professor of statistics showed up in his local air-raid shelter. He had never appeared there before. "There are seven million people in Moscow," he used to say. "Why should I expect them to hit me?" His friends were astonished to see him and asked what had happened to change his mind. "Look," he explained, "there are seven million people in Moscow and one elephant. Last night they got the elephant."

  This story is a modern version of the thunderstorm phobias analyzed in the Port-Royal Logic, but it differs at a critical point from the moral of the example cited there. In this case, the individual involved was keenly aware of the mathematical probability of being hit by a bomb. What the professor's experience really illuminates, therefore, is the dual character that runs throughout everything to do with probability: past frequencies can collide with degrees of belief when risky choices must be made.

  The story has more to it than that. It echoes the concerns of Graunt, Petty, and Halley: When complete knowledge of the future, or even of the past, is an impossibility, how representative is the information we have in hand? Which counts for more, the seven million humans or the elephant? How should we evaluate new information and incorporate it into degrees of belief developed from prior information? Is the theory of probability a mathematical toy or a serious instrument for forecasting?

  Probability theory is a serious instrument for forecasting, but the devil, as they say, is in the details-in the quality of information that forms the basis of probability estimates. This chapter describes a sequence of giant steps over the course of the eighteenth century that revolutionized the uses of information and the manner in which probability theory can be applied to decisions and choices in the modern world.

  The first person to consider the linkages between probability and the quality of information was another and older Bernoulli, Daniel's uncle Jacob, who lived from 1654 to 1705.1 Jacob was a child when Pascal and Fermat performed their mathematical feats, and he died when his nephew Daniel was only five years old. Talented like all the Bernoullis, he was a contemporary of Isaac Newton and had sufficient Bernoullian ill temper and hubris to consider himself a rival of that great English scientist.

  Merely raising the questions that Jacob raised was an intellectual feat in itself, quite apart from offering answers as well. Jacob undertook this task, he tells us, after having meditated on it for twenty years; he completed his work only when he was approaching the age of 50, shortly before he died in 1705.

  Jacob was an exceptionally dour Bernoulli, especially toward the end of his life, though he lived during the bawdy and jolly age that followed the restoration of Charles II in 1660.* One of Jacob's more distinguished contemporaries, for example, was John Arbuthnot, Queen Anne's doctor, a Fellow of the Royal Society, and an amateur mathematician with an interest in probability that he pepped up with a generous supply of off-color examples to illustrate his points. In one of Arbuthnot's papers, he considered the odds on whether "a woman of twenty has her maidenhead" or whether "a town-spark of that age 'has not been clap'd.'"2

  Jacob Bernoulli had first put the question of how to develop probabilities from sample data in 1703. In a letter to his friend Leibniz, he commented that he found it strange that we know the odds of throwing a seven instead of an eight with a pair of dice, but we do not know the probability that a man of twenty will outlive a man of sixty. Might we not, he asks, find the answer to this question by examining a large number of pairs of men of each age?

  In responding to Bernoulli, Leibniz took a dim view of this approach. "[N]ature has established patterns originating in the return of events," he wrote, "but only for the most part. New illnesses flood the human race, so that no matter how many experiments you have done on corpses, you have not thereby imposed a limit on the nature of events so that in the future they could not vary."3 Although Leibniz wrote this letter in Latin, he put the expression, "but only for the most part," into Greek: ὡς ἐπὶ τὸ πολύ. Perhaps this was to emphasize his point that a finite number of experiments such as Jacob suggested would inevitably be too small a sample for an exact calculation of nature's intentions.*

  Jacob was not deterred by Leibniz's response, but he did change the manner in which he went about solving the problem. Leibniz's admonition in Greek would not be forgotten.

  Jacob's effort to uncover probabilities from sample data appears in his Ars Conjectandi (The Art of Conjecture), the work that his nephew Nicolaus finally published in 1713, eight years after Jacob's death. His interest was in demonstrating where the art of thinking-objective analysis-ends and the art of conjecture begins. In a sense, conjecture is the process of estimating the whole from the parts.

  Jacob's analysis begins with the observation that probability theory had reached the point where, to arrive at a hypothesis about the likelihood of an event, "it is necessary only to calculate exactly the number of possible cases, and then to determine how much more likely it is that one case will occur than another." The difficulty, as Jacob goes on to point out, is that the uses of probability are limited almost exclusively to games of chance. Up to that point, Pascal's achievement had amounted to little more than an intellectual curiosity.

  For Jacob, this limitation was extremely serious, as he reveals in a passage that echoes Leibniz's concerns:

  But what mortal ... could ascertain the number of diseases, counting all possible cases, that afflict the human body ... and how much more likely one disease is to be fatal than another-plague than dropsy ... or dropsy than fever-and on that basis make a prediction about the relationship between life and death in future generations?

 
