The Eudaemonic Pie
The Projectors mounted a series of tests known as the Human vs. Machine experiments, which were conducted with an eye-toe coordination device that also functioned as a biofeedback machine for improving human reflexes. While clocking the ball around the track, humans tapping microswitches were pitted against optrons. The humans lost, of course. But their performance looked remediable. That the more athletic among them did better than the merely cerebral suggested that with proper training, humans using microswitches would be adequate to the task.
Without knowing what a roulette computer would look like, or the final content of its program, the Projectors decided in theory that the system should be operated by a two-person team. A data taker, standing near the wheel, would spend a few minutes adjusting the necessary parameters, and then clock the ball and rotor. An apparently unrelated bettor, standing far down the table but actually linked to the roulette computer by radio or some other connection, would receive the predictions and use them to play a high-stakes game.
While brainstorming output devices and various ways to link data taker and bettor, the Projectors wrote to hearing aid manufacturers and collected brochures on ultrasonic technology. They visited a hospital supply company for a demonstration of electroshock equipment. They considered polarized eyeglasses, laser detectors, light-emitting diodes built into wrist watches, and radio waves of every conceivable frequency. Finding the perfect link—reliable yet undetectable by the casinos—would require some ingenuity.
Working around the clock on guillotine and high-speed photos, optrons and histograms, the Project generated a mass of data on everything from wind resistance to human perception. As chief theoretician, Doyne was supposed to figure out what it all meant. His assignment was to isolate the equations of motion that govern the individual parts of roulette, and then integrate these into one synthetic equation, or algorithm, capable of prediction. It was at this point that Doyne, for the first time in his life, resorted to using a computer.
“He had made it a policy throughout his undergraduate career,” said Norman, “never to use a computer. It was a matter of honor. Physicists disdain calculator-minded people, the class of which is epitomized by the engineer. This disdain is rooted in the fact that true understanding of the physics of a problem does not depend on the particular numbers involved. Doyne hadn’t even used a calculator in doing his labs. He always preferred a pencil and paper.”
Others in the group who were conversant, if not fluent, in its several languages, nudged Doyne toward the computer. Working at the university on a PDP 11/45 built by the Digital Equipment Corporation, he took a week to teach himself how to program in BASIC. Doyne and other physicists with whom he worked at Santa Cruz would later emerge as masters of the new technology, but even at this stage he was impressed by what computers could do. Simple, yet vital things: like simulate human error.
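The "simple, yet vital" job of simulating human error can be sketched in a few lines: feed exact revolution times through a model of a person tapping a microswitch, with Gaussian reaction-time jitter added to each click. The timing figures and jitter magnitude below are illustrative assumptions, not the Project's actual measurements.

```python
import random

def clock_revolutions(true_period, n_clicks, jitter_sd=0.02):
    """Simulate a human clocking a rotating ball with a microswitch.

    true_period : exact time (seconds) per revolution
    n_clicks    : number of successive clicks recorded
    jitter_sd   : std. dev. of human reaction-time error in seconds
                  (an assumed figure, for illustration only)
    Returns the list of noisy click times.
    """
    return [i * true_period + random.gauss(0.0, jitter_sd)
            for i in range(n_clicks)]

def estimate_period(clicks):
    """Recover the period from noisy clicks: mean of successive gaps."""
    gaps = [b - a for a, b in zip(clicks, clicks[1:])]
    return sum(gaps) / len(gaps)

random.seed(1)
clicks = clock_revolutions(true_period=0.5, n_clicks=8)
est = estimate_period(clicks)
```

Running many such simulated trials shows how timing error propagates into the final prediction, which is exactly the kind of question the feasibility studies had to answer before any hardware was built.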
Eudaemonic research proceeded with the casual mania peculiar to this part of the world. Nude sunbathing on the back deck was combined with phone calls to Advanced Kinetics in Costa Mesa, American Laser Systems in Goleta, Automation Industries in Danbury, Connecticut, Arenberg Ultrasonics in Jamaica Plain, Massachusetts, and Hewlett-Packard in Sunnyvale, California, where Norman Packard’s cousin, David, presided as chairman of the board. The trick was to make these phone calls at noon, in the hope that out-to-lunch executives would return them at their own expense. Eudaemonic Enterprises, for all they knew, might be a fast-growing computer company branching out of the Silicon Valley. Sniffing the possibility of high-volume sales, these executives little suspected that they were talking on the other end of the line to a naked physicist crazed over roulette.
By the end of the summer, Professor Nauenberg’s house had been transformed from top to bottom into a working physics laboratory. Resident researchers could be found day and night huddled around their roulette wheel laden with a guillotine, stroboscopic flashes, cameras, optrons, microswitches, an electronic clock, and roulette balls of every size and composition. Another problem in applied physics—how to keep two particles from colliding—resulted in the house being divided into separate zones: one for Rembrandt, the professor’s German shepherd, and the other for Pate, Jack Biles’s thirteen-year-old beagle, who scored a decisive victory in the summer’s running dog fight.
The dogs savaged each other and their respective domains. The lawn turned brown and died. The house fell into a state of chaos. “Pie, spaghetti, ice cream, anything,” said Lawton, “would be consumed for breakfast. It was like living with a gaping void into which all food disappeared. There was that basic physicist’s oblivion. They could eat a bowl of caviar or potato chips and an hour later not know the difference.”
Living and working together, the Projectors got to know each other better than they might have wished. As the feasibility studies expanded to measure differences in bounce between nylon and Teflon roulette balls, Jack Biles grew impatient. He was in a rush to head for the casinos with a calculator in a paper bag and do his feasibility studies with a pile of chips in front of him. “Jack was content with making a brilliant point or two,” said Lawton, “but not much help in putting it together. He preferred to talk about ideas rather than implement them.”
“There were two schools of thought that summer,” said Norman. “The ‘Be Careful School,’ under whose auspices the tests were being performed, wanted to know whether prediction in roulette is theoretically possible before we went off and built a microcomputer. The other school said, ‘We don’t have time for that. You can tell from looking at the regularity of the ball that the game is predictable. Let’s build a computer and measure our advantage from how well it does.’
“The problem with Jack’s approach is that you wouldn’t know in principle whether prediction is feasible. This means you could never distinguish between a computer that didn’t work and a game that isn’t predictable in the first place, and that sort of thing made us very uncomfortable.”
Described by one observer as “jazzed all the time about new ideas, your classic boy inventor having fun with his off-the-wall enthusiasms,” Jack became increasingly disenchanted. “He was feeling jilted by it,” said Norman. “The Project had been taken out of his hands. He left for Oregon at the end of the summer, and the longer the Project dragged on in Santa Cruz, the farther removed he got from it.”
If Biles was always in a rush, his antithesis in Santa Cruz was John Boyd, commonly known as Juano. Tall, weedy, with smudgy black glasses and lank hair falling into his face, “Juano was your basic seen-but-not-heard character,” according to Lawton. “Once you got him started on a project, he would work limitlessly on it. He was the opposite of Jack. Once in motion, he would continue in motion, but when he came to a stop, he had to be recharged.”
As for the two dynamos on the Project, Doyne and Norman, “We really buzzed,” said Doyne. “We were working like hell and enjoying it thoroughly. Norman especially amazed me. I couldn’t believe how anyone could work so many hours a day and be so congenial about it. That summer he even managed to begin his romance with Lorna. For me at least, Norman set the pace.”
Where Norman as a worker was rock-steady and indefatigable, Doyne was manic. His mind yo-yoed from Newtonian mechanics down to the minutiae of ordering transistors from jobbers in Sunnyvale. “His organizing ability came to dominate,” said Lawton. “He could organize five people at once when they were barely able to organize themselves. He had the ability to provide a kernel of initiative, while still leaving room for other people to act.”
By the end of the summer, the feasibility studies into bounce, scatter, friction, wind resistance, tilt, and other parameters unique to individual roulette wheels showed conclusively that the game was predictable. Given a powerful algorithm capable of integrating these forces, Eudaemonic Enterprises confirmed that one could gain a whopping 44 percent advantage over the casinos. Now all that remained was to formulate the algorithm and build the predictive device itself.
Jack Biles suggested they strip down and reprogram an electronic calculator, although he agreed later that this would have been a kludge, a baling-wire approach to a problem that called for something more elegant. The Project opted instead for a relatively new and esoteric technology. They would devise a program for a microprocessor and build it into a computer small enough to operate in the casinos without detection. As far as they knew, their microprocessor would be the first to pioneer a trail from the Silicon Valley to the roulette tables of Glitter Gulch.
“Given the existent technology,” said Norman, “the computer we built is the ultimate predictive machine. You could ask any state-of-the-art electronics engineer to design it, and this is what he’d come up with. Ours is the rare example of a task solved by exactly the optimum technology.”
“By the end of the summer,” said Doyne, “we had a vague idea of the components we needed to get the computer going. We counted up the chips, and it looked as if by designing our own system, without any superfluous chips for a keyboard or tape recorder or LED display, we could build a computer as small as a cigarette pack, which turned out to be true.”
Except for the roulette wheel, everything required for the Project had been homemade. So too would their first computer. From a mail-order catalogue Eudaemonic Enterprises ordered a microprocessor chip and a development kit that promised everything needed for building a bare-bones computer. The beauty of it lay in its flexibility. This chip could be programmed to do anything. The horror of it lay in its ignorance. Arriving with no program at all, even the multiplication of one times one would be unknown to it. Before engaging it in the higher math of roulette, someone would have to teach this computer how to do arithmetic.
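What "teaching the computer arithmetic" means in practice: a bare microprocessor offers only primitives like addition, bit-shifts, and tests, so even multiplication must be built up as a routine. A sketch of the classic shift-and-add method such a routine would use (rendered in Python for legibility; the function name is mine, not the Project's):

```python
def multiply(a, b):
    """Multiply two non-negative integers using only addition,
    bit-shifts, and bit-tests -- the primitive operations an
    early microprocessor actually provided."""
    product = 0
    while b:
        if b & 1:       # low bit of multiplier set: add shifted multiplicand
            product += a
        a <<= 1         # shift multiplicand left (doubles it)
        b >>= 1         # shift multiplier right (drops the low bit)
    return product
```

Written in assembly for the actual chip, a loop like this is how "one times one" first becomes known to a processor that arrives knowing nothing at all.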
3
Driving Around the Mode Map
Berlin is a nice town and there were many opportunities for a student to spend his time in an agreeable manner, for instance with the nice girls. But instead of that we had to perform big and awful calculations.
Konrad Zuse
It is a curious fact that the inception of the computer, the game of roulette, and the basic laws of probability are all attributed to the seventeenth-century French mathematician and philosopher Blaise Pascal. Pascal was also the gambler who proposed in his famous “wager” on the existence of God that “One must bet on it.” It was a less existential Pascal who in 1642, at the age of nineteen, invented the mechanical adding machine. His father, provincial administrator in Rouen, had drafted him into tallying the year’s income tax. For Pascal junior the business was sheer drudgery, but out of this compost of necessity and boredom sprang his first great invention.
Pascal realized that numerical digits could be arranged on wheels in such a way that each wheel, in making a complete revolution, would turn its neighbor one-tenth of a revolution. While viewing the mechanism through a little window, one dialed the answer to problems in summation or subtraction. The genius in Pascal’s invention—which remained the basic concept employed by mechanical calculators from his day to the present—lay in the transposition of arithmetic functions onto the physical locations of a machine. To get from there to the modern computer, one simply adds electricity and converts the cogs of a Pascaline into electronic charges stored in the crystalline structure of silicon.
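Pascal's carry scheme is the same one a car odometer uses: each wheel holds one decimal digit, and a complete revolution advances its neighbor by a tenth of a turn. A minimal model of the mechanism (the class name and interface are illustrative, not historical):

```python
class Pascaline:
    """Toy model of Pascal's adding machine: a row of base-10 wheels,
    least significant wheel first, each full turn carrying into the
    neighboring wheel."""

    def __init__(self, n_wheels=6):
        self.wheels = [0] * n_wheels

    def add(self, n):
        """Dial in a number, propagating carries wheel by wheel."""
        i = 0
        while n and i < len(self.wheels):
            total = self.wheels[i] + n % 10
            self.wheels[i] = total % 10
            n = n // 10 + total // 10   # the carry turns the next wheel
            i += 1

    def reading(self):
        """The number shown through the little windows."""
        return sum(d * 10**i for i, d in enumerate(self.wheels))
```

Adding 1 to a machine reading 999 ripples a carry across three wheels, which is exactly the mechanical cascade Pascal had to engineer in brass.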
Thirty years after its invention, Gottfried Wilhelm von Leibnitz made the first major improvement to the Pascaline by adding what came to be known as the Leibnitz wheel. This enabled the machine to do multiplication and division, as well as addition and subtraction. The next significant redesign was attempted in the nineteenth century by Charles Babbage, inspired inventor of the speedometer, the cowcatcher, and the first reliable life expectancy tables. Babbage spent the last thirty-seven years of his life, until his death in 1871, forging the cogs and rods of a great Analytical Engine. The machine possessed, in its essential design, all the features of a modern computer. These consisted of a logic center, which Babbage called the “mill,” a memory, known as the “store,” a control unit for carrying out instructions, and a system of punched cards similar to those used in Joseph-Marie Jacquard’s looms for inputting data. But it was not for the age of steam power and mechanical gears to realize a machine as complex as this, and Babbage died with most of his project unfulfilled.
At this stage in its history the computer joins its destiny to the fortunes of war. Born to Athena—patron of spinning, weaving, cities, and bureaucracies—the computer grows up the stepchild of Mars. Babbage’s dream materialized only to defend national interests on either side of the Maginot Line. In 1936 the young engineer Konrad Zuse filled his parents’ Berlin apartment with a computer built out of scrap parts and a German Erector Set. Successors to Zuse’s Z1 computer, constructed more reliably with electromagnetic telephone relays, were used by the Nazi war machine for aircraft and missile design, where their prowess in number crunching brought them to the attention of Adolf Hitler. Advised to mount a crash program for building more of Zuse’s computers, the Führer made a tactical—and, for us, fortunate—error in thinking he could win the war without them.
In the meantime, the British early in the war had gathered an ace team of mathematicians and chess players at a country house in Hertfordshire known as Bletchley Park. Their assignment was to crack the German codes generated by the so-called Enigma Machine, one of which had been captured and sent to England by the Polish secret service. Only a computer could unscramble codes as complex as the Enigma’s, and the Bletchley Park crew succeeded admirably in building a number of crude but effective decoding machines. These included the Colossus and its ten successors, which were the first computers to use vacuum tubes rather than switches or relays for shuttling the on-off charges by means of which modern computers think. Unlike Pascal, Leibnitz, and Babbage, both Zuse and the Bletchley Parkers—with Alan Turing the most notable among them—had substituted base two for base ten as the principal counting unit in their computers. This allowed for a dramatic leap in the speed with which information could be processed. Flashed from tube to tube in series of 1’s and 0’s, the stuff of missile design and code breaking could now be pulsed through electronic circuitry at the rate of two hundred logical decisions per second.
Along with Zuse and the British, the Americans also understood the wider application of computers to war. Howard Aiken, on leave from the navy, finished building the Mark I at Harvard in 1943. This was an electromechanical machine in which “the gentle clicking of the relays” sounded to one observer “like a roomful of ladies knitting.” Intended for the computation of ballistics tables, Aiken’s computer was quickly outmoded by a far more efficient machine built with vacuum tubes at the University of Pennsylvania. The ENIAC, or Electronic Numerical Integrator and Calculator, weighing thirty tons and holding eighteen thousand tubes, spent its early life crunching out gunnery tables for the Aberdeen Proving Ground in Maryland.
In describing the advent of the computer as a “revolution,” we tend to forget what it initially revolutionized. As Joseph Weizenbaum, professor of computer science at MIT, put it: “The computer in its modern form was born from the womb of the military. As with so much other modern technology of the same parentage, almost every technological advance in the computer field, including those motivated by the demands of the military, has had its residual payoff—fallout—in the civilian sector. Still, computers were first constructed in order to enable efficient calculations of how most precisely and effectively to drop artillery shells in order to kill people. It is probably a fair guess, although no one could possibly know, that a very considerable fraction of computers devoted to a single purpose today are still those dedicated to cheaper, more nearly certain ways to kill ever larger numbers of human beings.”
Computers were held in bureaucratic captivity from the 1940s to the 1970s, but somewhere along the line—one can date their ultimate escape from the invention of the microprocessor—they broke free of martial law and opened themselves up to the zaniness and democratic efflorescence whose truly revolutionary applications we are only now beginning to see. The microprocessor, which is nothing more than a chip of silicon etched with the geometries of memory, gave the slip to the authorities and their central processing units. This new technology effected a fundamental shift away from mainframe, centralized, stationary computers protected by hierarchies of protocol toward bite-sized, transportable, independent, and democratic computers capable of functioning entirely on their own. With the advent of the microprocessor in 1970, anyone, at least in theory, could walk around with the power of an ENIAC snuggled into a shoe. Once liberated for the work of eros and free play, the computer could develop a talent for games, poetry, music, and—as befits its Pascalian origin—the playing of roulette.
The prerequisite for building an ENIAC in a shoe was the computer’s miniaturization. This was another technological shift spun off from the military, particularly its space wing. As they went in hot pursuit of Sputnik and other galactic menaces, NASA and the air force needed computers light enough for liftoff. Prime contractors for the military obliged by reducing the size of their product with three remarkable advances in as many decades, and it was the third of these advances that allowed the computer to make its final, definitive break for freedom.
In the 1950s the transistor replaced the vacuum tube. Shuttled among junctions located in the structure of silicon crystals, current pulsed through a transfer resistor, or transistor, could amplify sound or switch signals in a hundredth the space of the old tube technology. The second breakthrough came with large-scale integration (LSI), a new technique that allowed for circuits made up of thousands of transistors to be etched onto wafers of silicon the size of a fingernail. The final, liberating stroke came when Ted Hoff, an engineer at Intel Corporation in Santa Clara, California, figured out how to fit all the math and logic circuitry of a computer onto a single chip of silicon. “A true revolution,” is how Robert Noyce, co-inventor of the integrated circuit and a founder of Intel, described this culminating event. “A qualitative change in technology, the integrated microelectronic circuit has given rise to a qualitative change in human capabilities.”