The Innovators

by Walter Isaacson


  * * *

  The Industrial Revolution was based on two grand concepts that were profound in their simplicity. Innovators came up with ways to simplify endeavors by breaking them into easy, small tasks that could be accomplished on assembly lines. Then, beginning in the textile industry, inventors found ways to mechanize steps so that they could be performed by machines, many of them powered by steam engines. Babbage, building on ideas from Pascal and Leibniz, tried to apply these two processes to the production of computations, creating a mechanical precursor to the modern computer. His most significant conceptual leap was that such machines did not have to be set to do only one process, but instead could be programmed and reprogrammed through the use of punch cards. Ada saw the beauty and significance of that enchanting notion, and she also described an even more exciting idea that derived from it: such machines could process not only numbers but anything that could be notated in symbols.

  Over the years, Ada Lovelace has been celebrated as a feminist icon and a computer pioneer. For example, the U.S. Defense Department named its high-level object-oriented programming language Ada. However, she has also been ridiculed as delusional, flighty, and only a minor contributor to the “Notes” that bear her initials. As she herself wrote in those “Notes,” referring to the Analytical Engine but in words that also describe her fluctuating reputation, “In considering any new subject, there is frequently a tendency, first, to overrate what we find to be already interesting or remarkable; and, secondly, by a sort of natural reaction, to undervalue the true state of the case.”

  The reality is that Ada’s contribution was both profound and inspirational. More than Babbage or any other person of her era, she was able to glimpse a future in which machines would become partners of the human imagination, together weaving tapestries as beautiful as those from Jacquard’s loom. Her appreciation for poetical science led her to celebrate a proposed calculating machine that was dismissed by the scientific establishment of her day, and she perceived how the processing power of such a device could be used on any form of information. Thus did Ada, Countess of Lovelace, help sow the seeds for a digital age that would blossom a hundred years later.

  * * *

  I. It was in a review of this book that one of Babbage’s friends, William Whewell, coined the term scientist to suggest the connection among these disciplines.

  II. Specifically, he wanted to use the method of divided differences to closely approximate logarithmic and trigonometric functions.

  III. Named after the seventeenth-century Swiss mathematician Jacob Bernoulli, who studied the sums of powers of consecutive integers, they play an intriguing role in number theory, mathematical analysis, and differential topology.

  IV. Ada’s example involved tabulating polynomials using difference techniques as a subfunction, which required a nested loop structure with a varying range for the inner loop.
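
  The difference techniques mentioned in notes II and IV can be made concrete with a short sketch. The code below is only an illustration in modern Python, not Ada's actual table of Analytical Engine operations (her Bernoulli-number program also required an inner loop whose range changed on each pass, which this simpler example does not), and the names are invented for the example.

```python
# Tabulating a polynomial by the method of finite differences, using only
# additions once the initial differences are known -- the principle behind
# the difference-based subfunction described in Note IV.
# (Illustrative sketch; names are invented, not taken from the source.)

def difference_column(poly, start, degree):
    """Initial value and finite differences of `poly` at consecutive integers."""
    values = [poly(start + i) for i in range(degree + 1)]
    diffs = []
    while values:
        diffs.append(values[0])
        values = [b - a for a, b in zip(values, values[1:])]
    return diffs  # [f(x0), delta f(x0), delta^2 f(x0), ...]

def tabulate(poly, start, count, degree):
    """Generate `count` successive values of `poly` with a nested loop:
    the outer loop emits one value per pass, the inner loop propagates
    the differences by repeated addition."""
    diffs = difference_column(poly, start, degree)
    table = []
    for _ in range(count):           # outer loop: one tabulated value per pass
        table.append(diffs[0])
        for j in range(degree):      # inner loop: add each difference upward
            diffs[j] += diffs[j + 1]
    return table

if __name__ == "__main__":
    f = lambda x: x**2 + x + 41      # a sample quadratic
    print(tabulate(f, start=0, count=8, degree=2))
    # -> [41, 43, 47, 53, 61, 71, 83, 97], which matches f(0) through f(7)
```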

  Vannevar Bush (1890–1974), with his Differential Analyzer at MIT.

  Alan Turing (1912–1954), at the Sherborne School in 1928.

  Claude Shannon (1916–2001) in 1951.

  CHAPTER TWO

  * * *

  THE COMPUTER

  Sometimes innovation is a matter of timing. A big idea comes along at just the moment when the technology exists to implement it. For example, the idea of sending a man to the moon was proposed right when the progress of microchips made it possible to put computer guidance systems into the nose cone of a rocket. There are other cases, however, when the timing is out of kilter. Charles Babbage published his paper about a sophisticated computer in 1837, but it took a hundred years to achieve the scores of technological advances needed to build one.

  Some of those advances seem almost trivial, but progress comes not only in great leaps but also from hundreds of small steps. Take for example punch cards, like those Babbage saw on Jacquard’s looms and proposed incorporating into his Analytical Engine. Perfecting the use of punch cards for computers came about because Herman Hollerith, an employee of the U.S. Census Bureau, was appalled that it took close to eight years to manually tabulate the 1880 census. He resolved to automate the 1890 count.

  Drawing on the way that railway conductors punched holes in various places on a ticket in order to indicate the traits of each passenger (gender, approximate height, age, hair color), Hollerith devised punch cards with twelve rows and twenty-four columns that recorded the salient facts about each person in the census. The cards were then slipped between a grid of mercury cups and a set of spring-loaded pins, which created an electric circuit wherever there was a hole. The machine could tabulate not only the raw totals but also combinations of traits, such as the number of married males or foreign-born females. Using Hollerith’s tabulators, the 1890 census was completed in one year rather than eight. It was the first major use of electrical circuits to process information, and the company that Hollerith founded became in 1924, after a series of mergers and acquisitions, the International Business Machines Corporation, or IBM.
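
  In modern terms, each Hollerith card can be thought of as a small grid of bits, and the tabulator's counting of combined traits as a filter over those grids. The sketch below is only a loose analogy; the trait names and hole positions are invented for illustration and do not reflect the actual 1890 card layout.

```python
# A loose modern analogy for Hollerith's tabulator. The trait-to-position
# assignments below are invented for illustration, not the real 1890 layout.

from dataclasses import dataclass

ROWS, COLS = 12, 24  # the card offered a 12-by-24 grid of possible hole positions

@dataclass
class Card:
    holes: set  # set of (row, col) positions that are punched

    def has(self, pos):
        # The reader's spring-loaded pins closed a circuit through a mercury cup
        # wherever a hole let a pin pass; here that is just set membership.
        return pos in self.holes

# Hypothetical positions, for illustration only.
MALE, FEMALE, MARRIED, FOREIGN_BORN = (0, 0), (0, 1), (1, 3), (2, 7)

def tabulate(cards, *traits):
    """Count the cards punched with every one of the given traits,
    e.g. married males or foreign-born females."""
    return sum(all(card.has(t) for t in traits) for card in cards)

if __name__ == "__main__":
    cards = [Card({MALE, MARRIED}), Card({FEMALE, FOREIGN_BORN}), Card({MALE})]
    print(tabulate(cards, MALE, MARRIED))         # -> 1
    print(tabulate(cards, FEMALE, FOREIGN_BORN))  # -> 1
```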

  One way to look at innovation is as the accumulation of hundreds of small advances, such as counters and punch-card readers. At places like IBM, which specialize in daily improvements made by teams of engineers, this is the preferred way to understand how innovation really happens. Some of the most important technologies of our era, such as the fracking techniques developed over the past six decades for extracting natural gas, came about because of countless small innovations as well as a few breakthrough leaps.

  In the case of computers, there were many such incremental advances made by faceless engineers at places like IBM. But that was not enough. Although the machines that IBM produced in the early twentieth century could compile data, they were not what we would call computers. They weren’t even particularly adroit calculators. They were lame. In addition to those hundreds of minor advances, the birth of the computer age required some larger imaginative leaps from creative visionaries.

  DIGITAL BEATS ANALOG

  The machines devised by Hollerith and Babbage were digital, meaning they calculated using digits: discrete and distinct integers such as 0, 1, 2, 3. In their machines, the integers were added and subtracted using cogs and wheels that clicked one digit at a time, like counters. Another approach to computing was to build devices that could mimic or model a physical phenomenon and then make measurements on the analogous model to calculate the relevant results. These were known as analog computers because they worked by analogy. Analog computers do not rely on discrete integers to make their calculations; instead, they use continuous functions. In analog computers, a variable quantity such as electrical voltage, the position of a rope on a pulley, hydraulic pressure, or a measurement of distance is employed as an analog for the corresponding quantities of the problem to be solved. A slide rule is analog; an abacus is digital. Clocks with sweeping hands are analog, and those with displayed numerals are digital.

  Around the time that Hollerith was building his digital tabulator, Lord Kelvin and his brother James Thomson, two of Britain’s most distinguished scientists, were creating an analog machine. It was designed to handle the tedious task of solving differential equations, which would help in the creation of tide charts and of tables showing the firing angles that would generate different trajectories of artillery shells. Beginning in the 1870s, the brothers devised a system that was based on a planimeter, an instrument that can measure the area of a two-dimensional shape, such as the space under a curved line on a piece of paper. The user would trace the outline of the curve with the device, which would calculate the area by using a small sphere that was slowly pushed across the surface of a large rotating disk. By calculating the area under the curve, it could thus solve equations by integration—in other words, it could perform a basic task of calculus. Kelvin and his brother were able to use this method to create a “harmonic synthesizer” that could churn out an annual tide chart in four hours. But they were never able to conquer the mechanical difficulties of linking together many of these devices in order to solve equations with a lot of variables.
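
  In modern terms, the wheel-and-disk mechanism was accumulating the area under a curve, which is to say it was integrating. The few lines below perform the same mathematical operation numerically; they are only an illustrative sketch of integration, not a model of the Thomsons' hardware.

```python
# Accumulating the area under a curve, as the wheel-and-disk integrator did
# mechanically. (Illustrative sketch only; not a model of the Thomsons' device.)

import math

def integrate(f, a, b, steps=10_000):
    """Approximate the area under f between a and b by summing thin slices,
    much as the rolling sphere accumulated area while tracking the curve."""
    dx = (b - a) / steps
    area = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * dx   # midpoint of each thin slice
        area += f(x) * dx
    return area

if __name__ == "__main__":
    # The area under sin(x) from 0 to pi is exactly 2.
    print(integrate(math.sin, 0.0, math.pi))   # ~ 2.0000
```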

  That challenge of linking together multiple integrators was not mastered until 1931, when an MIT engineering professor, Vannevar (rhymes with beaver) Bush—remember his name, for he is a key character in this book—was able to build the world’s first analog electrical-mechanical computer. He dubbed his machine a Differential Analyzer. It consisted of six wheel-and-disk integrators, not all that different from Lord Kelvin’s, that were connected by an array of gears, pulleys, and shafts rotated by electric motors. It helped that Bush was at MIT; there were a lot of people around who could assemble and calibrate complex contraptions. The final machine, which was the size of a small bedroom, could solve equations with as many as eighteen independent variables. Over the next decade, versions of Bush’s Differential Analyzer were replicated at the U.S. Army’s Aberdeen Proving Ground in Maryland, the Moore School of Electrical Engineering at the University of Pennsylvania, and Manchester and Cambridge universities in England. They proved particularly useful in churning out artillery firing tables—and in training and inspiring the next generation of computer pioneers.

  * * *

  Bush’s machine, however, was not destined to be a major advance in computing history because it was an analog device. In fact, it turned out to be the last gasp for analog computing, at least for many decades.

  New approaches, technologies, and theories began to emerge in 1937, exactly a hundred years after Babbage first published his paper on the Analytical Engine. It would become an annus mirabilis of the computer age, and the result would be the triumph of four properties, somewhat interrelated, that would define modern computing:

  DIGITAL. A fundamental trait of the computer revolution was that it was based on digital, not analog, computers. This occurred for many reasons, as we shall soon see, including simultaneous advances in logic theory, circuits, and electronic on-off switches that made a digital rather than an analog approach more fruitful. It would not be until the 2010s that computer scientists, seeking to mimic the human brain, would seriously begin working on ways to revive analog computing.

  BINARY. Not only would modern computers be digital, but the digital system they would adopt would be binary, or base-2, meaning that it employs just 0s and 1s rather than all ten digits of our everyday decimal system. Like many mathematical concepts, binary theory was pioneered by Leibniz in the late seventeenth century. During the 1940s, it became increasingly clear that the binary system worked better than other digital forms, including the decimal system, for performing logical operations using circuits composed of on-off switches, as the brief sketch following these four properties illustrates.

  ELECTRONIC. In the mid-1930s, the British engineer Tommy Flowers pioneered the use of vacuum tubes as on-off switches in electronic circuits. Until then, circuits had relied on mechanical and electromechanical switches, such as the clacking electromagnetic relays that were used by phone companies. Vacuum tubes had mainly been employed to amplify signals rather than as on-off switches. By using electronic components such as vacuum tubes, and later transistors and microchips, computers could operate thousands of times faster than machines that had moving electromechanical switches.

  GENERAL PURPOSE. Finally, the machines would eventually have the ability to be programmed and reprogrammed—and even reprogram themselves—for a variety of purposes. They would be able to solve not just one form of mathematical calculation, such as differential equations, but could handle a multiplicity of tasks and symbol manipulations, involving words and music and pictures as well as numbers, thus fulfilling the potential that Lady Lovelace had celebrated when describing Babbage’s Analytical Engine.
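
  To make the binary point concrete: with only 0s and 1s, even arithmetic reduces to the logical operations that an on-off switch can embody. The sketch below is a modern illustration of that reduction, not a model of any particular 1930s machine.

```python
# Why binary suits on-off switches: addition of bits reduces to AND, OR, and XOR,
# each of which a switch (or relay, or vacuum tube) can implement.
# (Modern illustration only; not a model of any specific early machine.)

def full_adder(a, b, carry_in):
    """Add two bits plus a carry using only logical operations."""
    s = a ^ b ^ carry_in                        # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))  # carry bit
    return s, carry_out

def add_binary(x, y):
    """Ripple-carry addition of two equal-length bit lists, least significant bit first."""
    result, carry = [], 0
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result

if __name__ == "__main__":
    # 6 is [0, 1, 1] and 3 is [1, 1, 0] with the least significant bit first;
    # their sum 9 is [1, 0, 0, 1].
    print(add_binary([0, 1, 1], [1, 1, 0]))   # -> [1, 0, 0, 1]
```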

  Innovation occurs when ripe seeds fall on fertile ground. Instead of having a single cause, the great advances of 1937 came from a combination of capabilities, ideas, and needs that coincided in multiple places. As often happens in the annals of invention, especially information technology invention, the time was right and the atmosphere was charged. The development of vacuum tubes for the radio industry paved the way for the creation of electronic digital circuits. That was accompanied by theoretical advances in logic that made circuits more useful. And the march was quickened by the drums of war. As nations began arming for the looming conflict, it became clear that computational power was as important as firepower. Advances fed on one another, occurring almost simultaneously and spontaneously, at Harvard and MIT and Princeton and Bell Labs and an apartment in Berlin and even, most improbably but interestingly, in a basement in Ames, Iowa.

  Underpinning all of these advances were some beautiful—Ada might call them poetic—leaps of mathematics. One of these leaps led to the formal concept of a “universal computer,” a general-purpose machine that could be programmed to perform any logical task and simulate the behavior of any other logical machine. It was conjured up as a thought experiment by a brilliant English mathematician with a life story that was both inspiring and tragic.
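
  A minimal sketch can convey the flavor of that thought experiment: a single simulator which, handed a table of rules, behaves like whatever particular machine the table describes. The code below is only an illustration of the idea in modern Python; Turing's 1936 formulation is far more general, and every name here is invented for the example.

```python
# One simulator, many machines: given a rule table, `run` imitates the machine
# that the table describes. (Illustrative only; not Turing's 1936 construction.)

def run(rules, tape, state="start", pos=0, max_steps=10_000):
    """Simulate a Turing-style machine.
    `rules` maps (state, symbol) -> (symbol_to_write, move, next_state);
    unwritten tape cells read as the blank symbol '_'."""
    cells = dict(enumerate(tape))
    steps = 0
    while state != "halt" and steps < max_steps:
        symbol = cells.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
        steps += 1
    return "".join(cells[i] for i in sorted(cells))

if __name__ == "__main__":
    # One rule table among many: invert a string of bits, then halt at the blank.
    invert = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }
    print(run(invert, "10110"))   # -> "01001_"
```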

  ALAN TURING

  Alan Turing had the cold upbringing of a child born on the fraying fringe of the British gentry.1 His family had been graced since 1638 with a baronetcy, which had meandered down the lineage to one of his nephews. But for the younger sons on the family tree, which Turing and his father and grandfather were, there was no land and little wealth. Most went into fields such as the clergy, like Alan’s grandfather, and the colonial civil service, like his father, who served as a minor administrator in remote regions of India. Alan was conceived in Chhatrapur, India, and born on June 23, 1912, in London, while his parents were on home leave. When he was only one, his parents went back to India for a few years, and handed him and his older brother off to a retired army colonel and his wife to be raised in a seaside town on the south coast of England. “I am no child psychologist,” his brother, John, later noted, “but I am assured that it is a bad thing for an infant in arms to be uprooted and put into a strange environment.”2

  When his mother returned, Alan lived with her for a few years and then, at age thirteen, was sent to boarding school. He rode there on his bicycle, taking two days to cover more than sixty miles, alone. There was a lonely intensity to him, reflected in his love of long-distance running and biking. He also had a trait, so common among innovators, that was charmingly described by his biographer Andrew Hodges: “Alan was slow to learn that indistinct line that separated initiative from disobedience.”3

  In a poignant memoir, his mother described the son whom she doted upon:

  Alan was broad, strongly built and tall, with a square, determined jaw and unruly brown hair. His deep-set, clear blue eyes were his most remarkable feature. The short, slightly retroussé nose and humorous lines of his mouth gave him a youthful—sometimes a childlike—appearance. So much so that in his late thirties he was still at times mistaken for an undergraduate. In dress and habits he tended to be slovenly. His hair was usually too long, with an overhanging lock which he would toss back with a jerk of his head. . . . He could be abstracted and dreamy, absorbed in his own thoughts which on occasion made him seem unsociable. . . . There were times when his shyness led him into extreme gaucherie. . . . Indeed he surmised that the seclusion of a mediaeval monastery would have suited him very well.4

  At the boarding school, Sherborne, he realized that he was homosexual. He became infatuated with a fair-haired, slender schoolmate, Christopher Morcom, with whom he studied math and discussed philosophy. But in the winter before he was to graduate, Morcom suddenly died of tuberculosis. Turing would later write Morcom’s mother, “I simply worshipped the ground he trod on—a thing which I did not make much attempt to disguise, I am sorry to say.”5 In a letter to his own mother, Turing seemed to take refuge in his faith: “I feel that I shall meet Morcom again somewhere and that there will be work for us to do together there as I believed there was for us to do here. Now that I am left to do it alone, I must not let him down. If I succeed I shall be more fit to join his company than I am now.” But the tragedy ended up eroding Turing’s religious faith. It also turned him even more inward, and he never again found it easy to forge intimate relationships. His housemaster reported to his parents at Easter 1927, “Undeniably he’s not a ‘normal’ boy; not the worse for that, but probably less happy.”6

  In his final year at Sherborne, Turing won a scholarship to attend King’s College, Cambridge, where he went in 1931 to read mathematics. One of three books he bought with some prize money was The Mathematical Foundations of Quantum Mechanics, by John von Neumann, a fascinating Hungarian-born mathematician who, as a pioneer of computer design, would have a continuing influence on his life. Turing was particularly interested in the math at the core of quantum physics, which describes how events at the subatomic level are governed by statistical probabilities rather than laws that determine things with certainty. He believed (at least while he was young) that this uncertainty and indeterminacy at the subatomic level permitted humans to exercise free will—a trait that, if true, would seem to distinguish them from machines. In other words, because events at the subatomic level are not predetermined, that opens the way for our thoughts and actions not to be predetermined. As he explained in a letter to Morcom’s mother:

  It used to be supposed in science that if everything was known about the Universe at any particular moment then we can predict what it will be through all the future. This idea was really due to the great success of astronomical prediction. More modern science however has come to the conclusion that when we are dealing with atoms and electrons we are quite unable to know the exact state of them; our instruments being made of atoms and electrons themselves. The conception then of being able to know the exact state of the universe then really must break down on the small scale. This means then that the theory which held that as eclipses etc. are predestined so were all our actions breaks down too. We have a will which is able to determine the action of the atoms probably in a small portion of the brain, or possibly all over it.7

 
