"In fact, I would advance the even stronger claim that the theory of natural selection is, in essence, Adam Smith's economics transferred to nature," Gould wrote. "Individual organisms engaged in the ‘struggle for existence' act as the analog of firms in competition. Reproductive success becomes the analog of profit."16 In other words, as Smith argued, there is no need to design an efficient economy (and in fact, a designer would be a bad idea). The economy designs itself quite well if left alone, so that the individuals within that economy are free to pursue their self-interest. Darwin saw a similar picture in biology: Organisms pursuing their own interest (survival and reproduction) can create, over time, complexities of life that mirror the complexities of an economy. In one passage, Darwin refers specifically to the concept of "division of labor," a favorite topic of Smith's. In his famous example of the pin factory, Smith described how specialization breeds efficiency. It seemed to Darwin quite analogous to the origin of new species in nature.
"No naturalist doubts the advantage of what has been called the ‘physiological division of labour'; hence we may believe that it would be advantageous to a plant to produce stamens alone in one flower or on one whole plant, and pistils alone in another flower or on another plant," Darwin wrote in Origin of Species. Similar advantages of such specialization, he noted, apply to diversity among organisms.
"We may, I think, assume that the modified descendants of any one species will succeed by so much the better as they become more diversified in structure, and are thus enabled to encroach on places occupied by other beings," Darwin commented. "So in the general economy of any land, the more widely and perfectly the animals and plants are diversified for different habits of life, so will a greater number of individuals be capable of there supporting themselves."17
Clearly Darwin's "general economy" of life reflected sentiments similar to those expressed in the "political economy" described by Adam Smith. As Gould summed it up, Smith's ideas may not work so well in economics, but they are perfect for biology. And via Smith's insights, Paley's argument for the necessity of a creator is refuted.18
"The very phenomena that Paley had revered as the most glorious handiwork of God … ‘just happen' as a consequence of causes operating at a lower level among struggling individuals," Gould asserted.19
THE GAME'S AFOOT
In a way, Darwin's Origin of Species represents the third work in a trilogy summarizing the scientific understanding of the world at the end of the 19th century. Just as Newton had tamed the physical world in the 17th century, and Smith had codified economics in the 18th, Charles Darwin in the 19th century added life to the list. Where Smith followed in Newton's footsteps, Darwin followed in Smith's. So by the end of the 19th century, the groundwork was laid for a comprehensive rational understanding of just about everything.
Oddly, it seems, the 20th century produced no such book of similar impact and fame.20 No volume arrived, for instance, to articulate the long-sought Code of Nature. But one book that appeared in midcentury may someday be remembered as the first significant step toward such a comprehensive handbook of human social behavior: Theory of Games and Economic Behavior, by John von Neumann and Oskar Morgenstern.
* * *
2
Von Neumann's Games
Game theory's origins
Games combining chance and skill give the best representation of human life, particularly of military affairs and of the practice of medicine which necessarily depend partly on skill and partly on chance…. It would be desirable to have a complete study made of games, treated mathematically.
—Gottfried Wilhelm von Leibniz (quoted by Oskar Morgenstern, Dictionary of the History of Ideas)
It's no mystery why economics is called the dismal science.
With most sciences, experts make pretty accurate predictions. Mix two known chemicals, and a chemist can tell you ahead of time what you'll get. Ask an astronomer when the next solar eclipse will be, and you'll get the date, time, and best viewing locations, even if the eclipse won't occur for decades.
But mix people with money, and you generally get madness. And no economist really has any idea when you'll see the next total eclipse of the stock market. Yet many economists continue to believe that they will someday practice a sounder science. In fact, some would insist that they are already practicing a sounder science—by viewing the economy as basically just one gigantic game.
At first glance, building economic science on the mathematical theory of games seems about as sensible as forecasting real-estate trends by playing Monopoly. But in the past half century, and particularly the past two decades, game theory has established itself as the precise mathematical tool that economists had long lacked.
Game theory brings precision to the once-fuzzy economic notion of how consumers compare their preferences (a measure labeled by the deceptively simple term utility). Even more important, game theory shows how to determine the strategies necessary to achieve the maximum possible utility—that is, to acquire the highest payoff—the presumed goal of every rational participant in the dogfights of economic life.
Yet while people have played games for millennia, and have engaged in economic exchange for probably just as long, nobody had ever made the connection explicit—mathematically—until the 20th century. This merger of games with economics—the mathematical mapping of the real world of choices and money onto the contrived realm of poker and chess—has revolutionized the use of math to quantify human behavior. And most of the credit for game theory's invention goes to one of the 20th century's most brilliant thinkers, the magical mathman John von Neumann.
LACK OF FOCUS
If any one person of the previous century personified the word polymath, it was von Neumann. I'm really sorry he died so young.
Had von Neumann lived to a reasonably old age—say, 80 or so—I might have had the chance to hear him talk, or maybe even interview him. And that would have given me a chance to observe his remarkable genius for myself. Sadly, he died at the age of 53. But he lived long enough to leave a legendary legacy in several disciplines. His contributions to physics, mathematics, computer science, and economics rank him as one of the all-time intellectual giants of each field. Imagine what he could have accomplished if he'd learned to focus himself!
Of course, he accomplished plenty anyway. Von Neumann produced the standard mathematical formulation of quantum mechanics, for instance. He didn't exactly invent the modern digital computer, but he improved it and pioneered its use for scientific research. And, apparently just for kicks, he revolutionized economics.
Born in 1903 in Hungary, von Neumann was given the name János but went by the nickname Jancsi. He was the son of a banker (who had paid for the right to use the honorific title von). As a child, Jancsi dazzled adults with his mental powers, telling jokes in Greek and memorizing the numbers in phone books. Later he enrolled in the University of Budapest as a math major, but didn't bother to attend the classes—at the same time, he was majoring in chemistry at the University of Berlin. He traveled back to Budapest for exams, aced them, and continued his chemical education, first at Berlin and then at the ETH in Zurich.
I've recounted some of von Neumann's adult intellectual escapades before (in my book The Bit and the Pendulum), such as the time when he was called in as a consultant to determine whether the Rand Corporation needed a new computer to solve a difficult problem. Rand didn't need a new computer, von Neumann declared, after solving the problem in his head. In her biography of John Nash, Sylvia Nasar relates another telling von Neumann anecdote, about a famous trick-question math problem. Two cyclists start out 20 miles apart, heading for each other at 10 miles an hour. Meanwhile a fly flies back and forth between the bicycles at 15 miles an hour. How far has the fly flown by the time the bicycles meet? You can solve it by adding up the fly's many shorter and shorter paths between bikes (this would be known in mathematical terms as summing the infinite series). If you detect the trick, though, you can solve the problem in an instant—it will take the bikes an hour to meet, so the fly obviously will have flown 15 miles.
When jokesters posed this question to von Neumann, sure enough, he answered within a second or two. Oh, you knew the trick, they moaned. "What trick?" said von Neumann. "All I did was sum the infinite series."
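For readers who want to see the brute-force route von Neumann claimed to have taken, the sum can be written out as a geometric series. (This is just a sketch of one way to set it up, assuming the fly starts at one bicycle and turns around instantly.) On each leg the fly, at 15 miles an hour, approaches an oncoming bicycle moving at 10 miles an hour, so the gap closes at 25 miles an hour. A leg that begins with the bicycles d_n miles apart therefore lasts d_n/25 hours, during which the two bicycles together close four-fifths of the gap, leaving d_{n+1} = d_n/5. Starting from d_0 = 20, the fly's total mileage is

\[
\sum_{n=0}^{\infty} 15 \cdot \frac{d_n}{25}
= \frac{3}{5} \sum_{n=0}^{\infty} 20\left(\frac{1}{5}\right)^{n}
= \frac{3}{5} \cdot \frac{20}{1 - \frac{1}{5}}
= 15 \text{ miles},
\]

the same answer the trick gives in one step.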
Before von Neumann first came to America in 1930, he had established himself in Europe as an exceptionally brilliant mathematician, contributing major insights into such topics as logic and set theory, and he lectured at the University of Berlin. But he was not exactly a bookworm. He enjoyed Berlin's cabaret-style nightlife, and more important for science, he enjoyed poker. He turned his talent for both math and cards into a new paradigm for economics—and in so doing devised mathematical tools that someday may reveal deep similarities underlying his many diverse scientific interests. More than that, he showed how to apply rigorous methods to social questions, not unlike Asimov's Hari Seldon.
"Von Neumann was a brilliant mathematician whose contributions to other sciences stem from his belief that impartial rules could be found behind human interaction," writes one commentator. "Accordingly, his work proved crucial in converting mathematics into a key tool to social theory."1
UTILITY AND STRATEGY
By most accounts, the invention of modern game theory came in a technical paper published by von Neumann in 1928. But the roots of game theory reach much deeper. After all, games are as old as humankind, and from time to time intelligent thinkers had considered how such games could most effectively be played. As a branch of mathematics, though, game theory did not appear in its modern form until the 20th century, with the merger of two rather simple ideas. The first is utility—a measure of what you want; the second is strategy—how to get what you want.
Utility is basically a measure of value, or preference. It's an idea with a long and complex history, enmeshed in the philosophical doctrine known as utilitarianism. One of the more famous expositors of the idea was Jeremy Bentham, the British social philosopher and legal scholar. Utility, Bentham wrote in 1780, is "that property in any object, whereby it tends to produce benefit, advantage, pleasure, good, or happiness … or … to prevent the happening of mischief, pain, evil, or unhappiness."2 So to Bentham, utility was roughly identical to happiness or pleasure—in "maximizing their utility," individual people would seek to increase pleasure and diminish pain. For society as a whole, maximum utility meant "the greatest happiness of the greatest number."3 Bentham's utilitarianism incorporated some of the philosophical views of David Hume, friend to Adam Smith. And one of Bentham's influential followers was the British economist David Ricardo, who incorporated the idea of utility into his economic philosophy.
In economics, utility's usefulness depends on expressing it quantitatively. Happiness itself isn't easily quantifiable, but (as Bentham noted) the means to happiness can also be regarded as a measure of utility. Wealth, for example, provides a means of enhancing happiness, and wealth is easier to measure. So in economics, the usual approach is to measure self-interest in terms of money. It's a convenient medium of exchange for comparing the value of different things. But in most walks of life (except perhaps publishing), money isn't everything. So you need a general definition that makes it possible to express utility in a useful mathematical form.
One mathematical approach to quantifying utility came along well before Bentham, in a famous 1738 result from Daniel Bernoulli, the Swiss mathematician (one of many famous Bernoullis of that era). In solving a mathematical paradox about gambling posed by his cousin Nicholas, Daniel realized that utility does not simply equate to quantity. The utility of a certain amount of money, for instance, depends on how much money you already have. A million-dollar lottery prize has less utility for Bill Gates than it would for, say, me. Daniel Bernoulli proposed a method for calculating how the added utility of each extra increment of money diminishes as the total amount of money grows.4
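In modern notation, Bernoulli's proposal is usually rendered as a logarithmic utility function (this is the standard textbook way of writing his idea, not his own symbols):

\[
u(w) = \ln w, \qquad \frac{du}{dw} = \frac{1}{w},
\]

so the extra utility from one more dollar shrinks as wealth w grows, and a gamble gets valued by its expected utility, \(\sum_i p_i\, u(w_i)\), rather than by its expected cash payoff. That is why the same million-dollar prize registers as a much smaller gain on Bill Gates's utility scale than on almost anyone else's.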
Obviously the idea of utility—what you want to maximize—can sometimes get pretty complicated. But in many ordinary situations, utility is no mystery. If you're playing basketball, you want to score the most points. In chess, you want to checkmate your opponent's king. In poker, you want to win the pot. Often your problem is not defining utility, but choosing a good strategy to maximize it. Game theory is all about figuring out which strategy is best.5
The first substantial mathematical attempt to solve that part of the problem seems to have been made by an Englishman named James Waldegrave in 1713. Waldegrave was analyzing a two-person card game called "le Her," and he described a way to find the best strategy, using what today is known as the "minimax" (or sometimes "minmax") approach. Nobody paid much attention to Waldegrave, though, so his work didn't affect later developments of game theory. Other mathematicians also occasionally dabbled in what is now recognized as game theory math, but there was no one coherent approach or clear chain of intellectual influence.

Only in the 20th century did really serious work begin on devising the mathematical principles behind games of strategy. First was Ernst Zermelo, a German mathematician, whose 1913 paper examining the game of chess is sometimes cited as the beginning of real game theory mathematics. He chose chess merely as an illustration of the more general idea of a two-person game of strategy where the players choose all the moves with no contribution from chance. And that is an important distinction, by the way. Poker involves strategy, but also includes the luck of the draw. If you get a bum hand, you're likely to lose no matter how clever your strategy. In chess, on the other hand, all the moves are chosen by the players—there's no shuffling cards, tossing dice, flipping coins, or spinning the wheel of fortune. Zermelo limited himself to games of pure strategy, games without the complications of random factors.
Zermelo's paper on chess apparently confused some of its readers, as many secondary reports of his results are vague and contradictory.6 But it seems he tried to show that if the White player managed to create an advantageous arrangement of pieces—a "winning configuration"—it would then be possible to end the game within fewer moves than the number of possible chessboard arrangements. (Having an "advantageous arrangement" means achieving a situation from which White would be sure to win—assuming no dumb moves—no matter what Black does.)
Using principles of set theory (one of von Neumann's mathematical specialties, by the way), Zermelo proved that proposition. His original proof required some later tweaking by other mathematicians and by Zermelo himself. But the main lesson was not so much about chess strategy as it was a demonstration that math could be used to analyze important features of any such game of strategy.
As it turns out, chess was a good choice because it is a perfect example of a particularly important type of game of strategy, known as a two-person zero-sum game. It's called "zero-sum" because whatever one player wins, the other loses. The interests of the two competitors are diametrically opposed. (Chess is also a game where the players have "perfect information." That means the game situation and all the decisions of all the players are known at all times—like playing poker with all the cards always dealt face up.)
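A concrete (and entirely made-up) payoff table may help. In a two-person zero-sum game, everything can be recorded as the payoffs to one player, say the Row player, since the Column player's payoff is always just the negative of Row's:

\[
A = \begin{pmatrix} 3 & -1 \\ -2 & 4 \end{pmatrix}
\]

If Row chooses her first option and Column chooses his second, Row receives -1 (she loses 1) and Column gains 1; every outcome sums to zero, which is all that "zero-sum" means.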
Zermelo did not address the question of exactly what the best strategy is to play in chess, or even whether there actually is a surefire best strategy. The first move in that direction came from the brilliant French mathematician Émile Borel. In the early 1920s, Borel showed that there is a demonstrable best strategy in two-person zero-sum games—in some special cases. He doubted that it would be possible to prove the existence of a certain best strategy for such games in general.
But that's exactly what von Neumann did. In two-person zero-sum games, he determined, there is always a way to find the best strategy possible, the strategy that will maximize your winnings (or minimize your losses) to whatever extent the rules of the game and your opponent's choices allow. That's the modern minimax7 theorem, which von Neumann first presented in December 1926 to the Göttingen Mathematical Society and then developed fully in his 1928 paper called "Zur Theorie der Gesellschaftsspiele" (Theory of Parlor Games), laying the foundation for von Neumann's economics revolution.8
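In one standard modern way of writing it (not the notation of the 1928 paper itself), the minimax theorem says that for any payoff matrix A of a two-person zero-sum game, such as the made-up example above, with x and y running over the two players' mixed strategies (probability weightings of their options),

\[
\max_{x}\,\min_{y}\; x^{\mathsf{T}} A\, y \;=\; \min_{y}\,\max_{x}\; x^{\mathsf{T}} A\, y \;=\; v .
\]

The common value v is the game's "value": the average payoff the Row player can guarantee no matter what the Column player does, and the most the Column player can be forced to concede.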
GAMES INVADE ECONOMICS
In his 1928 paper, von Neumann did not attempt to do economics9—it was strictly math, proving a theorem about strategic games. Only years later did he merge game theory with economics, with the assistance of an economist named Oskar Morgenstern.
Morgenstern, born in Germany in 1902, taught economics at the University of Vienna from 1929 to 1938. In a book published in 1928, the same year as von Neumann's minimax paper, Morgenstern discussed problems of economic forecasting. A particular point he addressed was the "influence of predictions on predicted events." This, Morgenstern knew, was a problem peculiar to the social sciences, including economics. When a chemist predicts how molecules will react in a test tube, the molecules are oblivious. They do what they do the same way whether a chemist correctly predicts it or not. But in the social sciences, people display much more independence than molecules do. In particular, if people know what you're predicting they will do, they might do something else just to annoy you. More realistically, some people might learn of a prediction and try to turn that foreknowledge to their advantage, upsetting the conditions that led to the prediction and so throwing random factors into the outcome. (By the way, in the Foundation Trilogy, that's why Seldon's Plan had to be so secret. It wouldn't work if anybody knew what it was.)