Darwin Among the Machines


by George B. Dyson


  Baran’s packet-switched data network did eventually materialize—not out of whole cloth, as envisioned in 1960, but by the gradual association of many different levels of digital communication systems that ultimately converged, more or less closely, on Baran’s original design. “The process of technological development is like building a cathedral,” said Baran. “Over the course of several hundred years: new people come along and each lays down a block on top of the old foundations, each saying, ‘I built a cathedral.’ Next month another block is placed atop the previous one. Then comes along an historian who asks, ‘Well, who built the cathedral?’ But the reality is that each contribution has to follow onto previous work. Everything is tied to everything else.”63

  9

  THEORY OF GAMES AND ECONOMIC BEHAVIOR

  The game that nature seems to be playing is difficult to formulate. When different species compete, one knows how to define a loss: when one species dies out altogether, it loses, obviously. The defining win, however, is much more difficult because many coexist and will presumably for an infinite time; and yet the humans in some sense consider themselves far ahead of the chicken, which will also be allowed to go on to infinity.

  —STANISLAW ULAM1

  “Unifications of fields which were formerly divided and far apart,” counseled John von Neumann in 1944, “are rare and happen only after each field has been thoroughly explored.”2 So went his introduction (with Oskar Morgenstern) to Theory of Games and Economic Behavior, a mathematical vision whose brilliance was eclipsed only by the developments that his work in atomic weapons and digital computers was about to bring to light.

  An interest in economics ran deeply through von Neumann’s life. At the time of the Institute for Advanced Study’s electronic computer project, von Neumann maintained an incongruous appearance, wearing a three-piece suit among casually dressed logicians and electrical engineers. This costume was a memento of his background as the son of an investment banker and the omen of a future in which the world of money and the world of logic, thanks to computers, would meet on equal terms. In his Theory of Games and Economic Behavior, von Neumann laid the foundations for a unified view of information theory, economics, evolution, and intelligence, whose implications continue to emerge.

  Among von Neumann’s predecessors was André-Marie Ampère, who published Considérations sur la théorie mathématique du jeu (On the mathematical theory of games) at the age of twenty-seven in 1802. Ampère began his study by crediting Georges Louis Buffon (“an author in whom even errors bear the imprint of genius”) as the forefather of mathematical game theory, citing his (1777) Essai d’Arithmétique Morale. Buffon (1707–1788) was a celebrated naturalist whose evolutionary theories preceded both Charles and Erasmus Darwin, advancing ideas that were risky at the time. “Buffon managed, albeit in a somewhat scattered fashion,” wrote Loren Eiseley, “at least to mention every significant ingredient which was to be incorporated into Darwin’s great synthesis of 1859.”3 The elder Buffon and the young Ampère shared in the tragedy that swept postrevolutionary France: Buffon’s son and Ampère’s father both died under the guillotine, equally innocent of any crime.

  Ampère analyzed the effects of probability rather than strategy, ignoring more deliberate collusion among the players of a game. Having suffered the first of a series of misfortunes that would follow him through life, Ampère saw games of chance as “certain ruin” to those who played indefinitely or indiscriminately against multiple opponents, “who must then be considered as a single opponent whose fortune is infinite.”4 He observed that a zero-sum game (where one player’s loss equals the other players’ gain) will always favor the wealthier player, who has the advantage of being able to stay longer in the game.
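Ampère’s observation about the wealthier player is the classical “gambler’s ruin” result, and it can be checked with a short simulation (a modern sketch, not drawn from his Considérations; the function name and parameters here are our own):

```python
import random

def ruin_probability(stake, opponent, trials, seed=0):
    """Fair coin-flip game, one unit wagered per flip: estimate the chance
    that the player starting with `stake` units goes broke before winning
    the opponent's entire bankroll.  Classically this chance is
    opponent / (stake + opponent), so the richer player usually wins."""
    rng = random.Random(seed)
    total = stake + opponent
    ruined = 0
    for _ in range(trials):
        a = stake
        while 0 < a < total:          # play until one side is cleaned out
            a += 1 if rng.random() < 0.5 else -1
        ruined += (a == 0)
    return ruined / trials
```

With a stake of 5 against an opponent holding 50, the estimated ruin probability comes out near 50/55, even though every individual flip is perfectly fair; as the opponent’s bankroll grows toward Ampère’s “fortune [that] is infinite,” ruin becomes certain.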

  Von Neumann’s initial contribution to the theory of games, extending the work of Émile Borel, was published in 1928. Where Ampère saw chance as holding the upper hand, von Neumann sought to make the best of fate by determining the optimum strategy for any game. The results of his collaboration with Princeton economist Oskar Morgenstern were completed in the midst of wartime and published in 1944. “The main achievement of the [von Neumann-Morgenstern] book lies, more than in its concrete results, in its having introduced into economics the tools of modern logic and in using them with an astounding power of generalization,” wrote Jacob Marschak in the Journal of Political Economy in 1946.5 Von Neumann’s central insight was his proof of the “minimax” theorem on the existence of good strategies, demonstrating for a wide class of games that a determinable strategy exists that minimizes the expected loss to a player when the opponent tries to maximize the loss by playing as well as possible. This conclusion has profound but mathematically elusive consequences; many complexities of nature, not to mention of economics or politics, can be treated formally as games. A substantial section of the 625-page book is devoted to showing how seemingly intractable situations can be rendered solvable through the assumption of coalitions among the players, and how non-zero-sum games can be reduced to zero-sum games by including a fictitious, impartial player (sometimes called Nature) in the game.
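The minimax theorem guarantees that an optimal mixed strategy exists; one simple way to approximate it numerically is Brown’s “fictitious play,” proposed at RAND in 1949 and proved convergent for zero-sum games by Julia Robinson in 1951. The sketch below is that later method, not von Neumann’s own construction:

```python
def fictitious_play(payoff, rounds=4000):
    """Brown's fictitious play for a two-player zero-sum game.
    payoff[i][j] is the row player's gain when row plays i and column
    plays j.  Each round, both players best-respond to the opponent's
    empirical history; for zero-sum games the play frequencies converge
    to minimax (optimal mixed) strategies."""
    m, n = len(payoff), len(payoff[0])
    row_counts = [0] * m          # how often each row strategy was played
    col_counts = [0] * n
    row_gain = [0.0] * m          # cumulative payoff of each row strategy
    col_loss = [0.0] * n          # cumulative loss of each column strategy
    for _ in range(rounds):
        i = max(range(m), key=lambda k: row_gain[k])   # row best response
        j = min(range(n), key=lambda k: col_loss[k])   # column best response
        row_counts[i] += 1
        col_counts[j] += 1
        for k in range(m):
            row_gain[k] += payoff[k][j]
        for k in range(n):
            col_loss[k] += payoff[i][k]
    return ([c / rounds for c in row_counts],
            [c / rounds for c in col_counts])
```

On matching pennies, `fictitious_play([[1, -1], [-1, 1]])` drifts toward the half-and-half mixture that the minimax theorem identifies as optimal for both players.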

  Game theory was applied to fields ranging from nuclear deterrence to evolutionary biology. “The initial reaction of the economists to this work was one of great reserve, but the military scientists were quick to sense its possibilities in their field,” wrote J. D. Williams in The Compleat Strategyst, a RAND Corporation best-seller that made game theory accessible through examples drawn from everyday life.6 The economists gradually followed. When John Nash was awarded a Nobel Prize for the Nash equilibrium in 1994, he became the seventh Nobel laureate in economics whose work was influenced directly by von Neumann’s ideas. Nash and von Neumann had collaborated at RAND. In 1954, Nash authored a short report on the future of digital computers, in which the von Neumann influence was especially pronounced. “The human brain is a highly parallel setup. It has to be,” concluded Nash, predicting that optimal performance of digital computers would be achieved by coalitions of processors operating under decentralized parallel control.7

  In 1945 the Review of Economic Studies published von Neumann’s “Model of General Economic Equilibrium,” a nine-page paper read to a Princeton mathematics seminar in 1932 and first published in German in 1937. With characteristic dexterity, von Neumann managed to elucidate the behavior of an economy where “goods are produced not only from ‘natural factors of production,’ but . . . from each other,” thereby shedding light on processes whose interdependence otherwise appears impenetrably complex. Because equilibrium was shown to depend on growth, this became known as von Neumann’s expanding economic model. The conclusions were universally debated by economists; mathematicians were universally impressed. Von Neumann derived his conclusions from the topology of convex sets, noting that “the connection with topology may be very surprising at first, but the author thinks that it is natural in problems of this kind.”8

  Von Neumann was laying the groundwork for a unified theory of information dynamics, applicable to free-market economies, self-reproducing organisms, neural networks, and, ultimately, the relations between mind and brain. The confluence of the theory of games with the theory of information and communication invites the construction of such a bridge. In his notes for a series of lectures that were preempted by his death, von Neumann drew a number of parallels—and emphasized a greater number of differences—between the computer and the brain. He said little about mind. He leaves us with an impression, but no exact understanding, of how the evolution of languages (the result of increasing economy in the use of symbols) refines a flow of information into successively more meaningful forms—a hierarchy leading to levels of interpretation manifested as visual perception, natural language, mathematics, and semantic phenomena beyond. Von Neumann was deeply interested in mind. But he wasn’t ready to dismantle a concept that could not be reconstructed with the tools available at the time.

  Von Neumann’s view of the operation of the human nervous system bears more resemblance to the statistically determined behavior of an economic system than to the precisely logical behavior of a digital computer, whether of the 1950s or of today. “The message-system used in the nervous system . . . is of an essentially statistical character,” he wrote in his Silliman lecture notes, published posthumously in 1958. “In other words, what matters are not the precise positions of definite markers, digits, but the statistical characteristics of their occurrence. . . . Thus the nervous system appears to be using a radically different system of notation from the ones we are familiar with in ordinary arithmetics and mathematics: instead of the precise systems of markers where the position—and presence or absence—of every marker counts decisively in determining the meaning of the message, we have here a system of notations in which the meaning is conveyed by the statistical properties of the message. . . . Clearly, other traits of the (statistical) message could also be used: indeed, the frequency referred to is a property of a single train of pulses whereas every one of the relevant nerves consists of a large number of fibers, each of which transmits numerous trains of pulses. It is, therefore, perfectly plausible that certain (statistical) relationships between such trains of pulses should also transmit information. . . . Whatever language the central nervous system is using, it is characterized by less logical and arithmetical depth than what we are normally used to [and] must structurally be essentially different from those languages to which our common experience refers.”9

  Despite the advances of neurobiology and cognitive science over the past forty years, this fundamental picture of the brain as a mechanism for evolving meaning from statistics has not changed. Higher levels of language produce a coherent residue as this underlying flow of statistical information is processed and refined. Information flow in the brain is pulse-frequency coded, rather than digitally coded as in a computer. The resulting tolerance for error is essential for reliable computation by a web of electrically noisy and chemically sensitive neurons bathed in a saline fluid (or, perhaps, a web of microprocessors bathed in the distractions of the real world). Whether a particular signal is accounted for as excitation or inhibition depends on the individual nature of the synapses that mediate its journey through the net. A two-valued logic, to assume the simplest of possible models, is inherent in the details of the neural architecture—a more robust mechanism than a two-valued code.
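The contrast between the two notations can be made concrete with a toy sketch (ours, not von Neumann’s): encode the same quantity once as a pulse frequency and once as positional binary, then corrupt both.

```python
import random

def rate_encode(value, n, rng):
    """Frequency coding: each of n time slots fires a pulse
    with probability `value` (a number in [0, 1])."""
    return [1 if rng.random() < value else 0 for _ in range(n)]

def rate_decode(pulses):
    """The message is the pulse frequency, not any particular pulse."""
    return sum(pulses) / len(pulses)

def binary_encode(value, n):
    """Positional coding: the first n bits of value's binary expansion."""
    bits = []
    for _ in range(n):
        value *= 2
        bits.append(int(value))
        value -= int(value)
    return bits

def binary_decode(bits):
    """Bit k carries weight 2**-(k+1); position counts decisively."""
    return sum(b * 2.0 ** -(k + 1) for k, b in enumerate(bits))

def corrupt(bits, flip_prob, rng):
    """Flip each bit independently with probability flip_prob."""
    return [b ^ 1 if rng.random() < flip_prob else b for b in bits]
```

Flipping 1 percent of a 2,000-pulse rate-coded train shifts the decoded value by a fraction of a percent, while flipping the single most significant bit of a 16-bit positional code changes it by one half: in the statistical notation every marker is expendable, in the positional notation none is.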

  Von Neumann’s name remains synonymous with serial processing, now implemented by microprocessors adhering to a logical architecture unchanged from that developed at the Institute for Advanced Study in 1946. He was, however, deeply interested in information-processing architectures of a different kind. In 1943, Warren McCulloch and Walter Pitts demonstrated that any computation performed by a network of (idealized) neurons is formally equivalent to some Turing-machine computation that can be performed one step at a time. Von Neumann recognized that in actual practice (of either electronics or biology) combinatorial complexity makes it prohibitively expensive, if not impossible, to keep this correspondence two-way. “Obviously, there is on this level no more profit in the McCulloch-Pitts result,” he noted in 1948, discussing the behavior of complicated neural nets. “There is an equivalence between logical principles and their embodiment in a neural network, and while in the simpler cases the principles might furnish a simplified expression of the network, it is quite possible that in cases of extreme complexity the reverse is true.”10

  Von Neumann believed that a complex network formed its own simplest behavioral description; to attempt to describe its behavior using formal logic might be an intractable problem, no matter how much computational horsepower was available for the job. Many years—and many, many millions of artificial-intelligence research dollars later—Stan Ulam asked Gian-Carlo Rota, “What makes you so sure that mathematical logic corresponds to the way we think?”11 Ulam’s question echoed what von Neumann had concluded thirty years earlier, that “a new, essentially logical, theory is called for in order to understand high-complication automata and, in particular, the central nervous system. It may be, however, that in this process logic will have to undergo a pseudomorphosis to neurology to a much greater extent than the reverse.”12 Computers, by the 1980s, had evolved perfect memories, but the memory of the computer industry was short. “If your friends in AI persist in ignoring their past, they will be condemned to repeat it, at a high cost that will be borne by the taxpayers,” warned Ulam, who turned out to be right.13

  For a neural network to perform useful computation, pattern recognition, associative memory, or other functions, a system of value must be established, assigning the raw material of meaning on an equitable basis to the individual units of information—whether conveyed by marbles, pulses of electricity, hydraulic fluid, charged ions, or whatever else is communicated among the components of the net. This process corresponds to defining a utility function in game theory or mathematical economics, a problem to which von Neumann and Morgenstern devoted a large portion of their book. Only by such a uniform valuation of internal signals can those configurations that represent solutions to external problems be recognized when some characteristic maximum, minimum, or otherwise identifiable value is evolved. These fundamental tokens coalesce into successively more complex structures conveying more and more information at every stage of the game. In the seventeenth century, Thomas Hobbes referred to these mental particles as “parcels,” believing their physical existence to be as demonstrable as that of atoms, using the same logic by which we lead ourselves to believe in the physical existence of “bits” of information today.

  Higher-level representations, symbols, abstractions, and perceptions are constructed in a neural network not from solutions arrived at by algorithmic (step-by-step) processing, as in a digital computer, but from the relations between dynamic local maxima and minima generated by a real-time, incomprehensibly complex version of one of von Neumann’s games. It is what is known as an n-person game, involving, in our case, a subset of the more than 100 billion neurons, interlaced by trillions of synapses, that populate the brain. Von Neumann and Morgenstern demonstrated how to arrive at reasonable solutions among otherwise hopeless combinatorics by means of a finite but unbounded series of coalitions that progressively simplify the search. A successful, if fleeting, coalition, in our mental universe, may surface to be perceived—and perhaps communicated, via recourse to whatever symbolic channels are open at the time—as an idea. It is a dynamic, relational process, and the notion of a discrete idea or mental object possessed of absolute meaning is fundamentally contradictory, just as the notion of a bit having independent existence is contradictory. Each bit represents the difference between two alternatives, not any one thing at one time.

  In a neural network, the flow of information behaves like the flow of currency in an economy. Signals do not convey meaning through encoded symbols; they generate meaning depending on where they come from, where they go, and how frequently they arrive. A dollar is a dollar, whether coming in or going out, and you can choose to spend that same dollar on either gasoline or milk. The output of one neuron can be either debited or credited to another neuron’s account, depending on the type of synapse at which it connects. The faint pulses of electric current that flow through a nervous system and the pulses of currency that flow through an economy share a common etymology and a common destiny that continues to unfold. The metaphor has been used both ways. “The currency of our systems is not symbols, but excitation and inhibition,” noted D. E. Rumelhart and J. E. McClelland in their introduction to Parallel Distributed Processing: Explorations in the Microstructure of Cognition, a collection of papers that focused a revival of neural network research ten years ago.14 “Each of those little ‘wires’ in the optic nerve sends messages akin to a bank statement where it tells you how much interest was paid this month,” wrote neurophysiologist William Calvin in The Cerebral Symphony, a recent tour inside the human mind and brain. “You have to imagine, instead of an eye, a giant bank that is busily mailing out a million statements every second. What maximizes the payout on a single wire? That depends on the bank’s rules, and how you play the game.”15

  Raw data generated by the first layer of retinal photoreceptors is refined, through ingenious statistical transformations, into a condensed flow of information conveyed by the optic nerve. This flow of information is then refined, over longer intervals of time, into a representation perceived as vision by the brain. There is no coherent encoding of the image, as generated by a television camera, just a stream of statistics, dealt out, like cards, to the brain. Vision is a game in which the brain bids a continuous series of models and the model that is most successful in matching the next hand wins. Finally, vision is refined into knowledge, and if all goes well, knowledge is condensed into wisdom, over a lifetime, by the mind. These elements of economy associated with the workings of our intelligence are mirrored by elements of intelligence associated with the workings of an economy—a convergence growing more visible as economic processes take electronic form.

  This convergence has its origins in the seventeenth century, just as the foundations of electronic logic date back to Hobbes’s observation that, given the existence of addition and subtraction, otherwise mindless accounting leads to everything else. “Monies are the sinews of War, and Peace,” observed Hobbes in 1642.16 In his Leviathan of 1651 he elaborated: “Mony passeth from Man to Man, within the Commonwealth; and goes round about, Nourishing (as it passeth) every part thereof. . . . Conduits, and Wayes by which it is conveyed to the Publique use, are of two sorts; One, that Conveyeth it to the Publique Coffers; The other, that Issueth the same out againe for publique payments. . . . And in this also, the Artificiall Man maintains his resemblance with the Naturall; whose Veins receiving the Bloud from the severall Parts of the Body, carry it to the Heart; where being made Vitall, the Heart by the Arteries sends it out again.”17

 
