The Innovators

by Walter Isaacson


  They formed what became the Eckert-Mauchly Computer Corporation, based in Philadelphia, and were pioneers in turning computing from an academic to a commercial endeavor. (In 1950 their company, along with the patents they would be granted, became part of Remington Rand, which morphed into Sperry Rand and then Unisys.) Among the machines they built was UNIVAC, which was purchased by the Census Bureau and other clients, including General Electric.

  With its flashing lights and Hollywood aura, UNIVAC became famous when CBS featured it on election night in 1952. Walter Cronkite, the young anchor of the network’s coverage, was dubious that the huge machine would be much use compared to the expertise of the network’s correspondents, but he agreed that it might provide an amusing spectacle for viewers. Mauchly and Eckert enlisted a Penn statistician, and they worked out a program that compared the early results from some sample precincts to the outcomes in previous elections. By 8:30 p.m. on the East Coast, well before most of the nation’s polls had closed, UNIVAC predicted, with 100-to-1 certainty, an easy win for Dwight Eisenhower over Adlai Stevenson. CBS initially withheld UNIVAC’s verdict; Cronkite told his audience that the computer had not yet reached a conclusion. Later that night, though, after the vote counting confirmed that Eisenhower had won handily, Cronkite put the correspondent Charles Collingwood on the air to admit that UNIVAC had made the prediction at the beginning of the evening but CBS had not aired it. UNIVAC became a celebrity and a fixture on future election nights.77

  Eckert and Mauchly did not forget the importance of the women programmers who had worked with them at Penn, even though they had not been invited to the dedication dinner for ENIAC. They hired Betty Snyder, who, under her married name, Betty Holberton, went on to become a pioneer programmer who helped develop the COBOL and Fortran languages, and Jean Jennings, who married an engineer and became Jean Jennings Bartik. Mauchly also wanted to recruit Kay McNulty, but after his wife died in a drowning accident he proposed marriage to her instead. They had five children, and she continued to help on software design for UNIVAC.

  Mauchly also hired the dean of them all, Grace Hopper. “He let people try things,” Hopper replied when asked why she let him talk her into joining the Eckert-Mauchly Computer Corporation. “He encouraged innovation.”78 By 1952 she had created the world’s first workable compiler, known as the A-0 system, which translated symbolic mathematical code into machine language and thus made it easier for ordinary folks to write programs.
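The compiler's essential move is easier to see in modern dress. The sketch below is a hypothetical illustration in Python, not A-0 itself (whose notation has long since vanished); it performs the act Hopper pioneered, translating a symbolic mathematical expression into a linear list of machine-like instructions.

```python
# Illustrative only: a toy translator from symbolic arithmetic to
# stack-machine instructions, the essential job a compiler performs.
import ast

def compile_expr(source):
    """Translate an expression like 'a + b * c' into stack-machine code."""
    ops = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL", ast.Div: "DIV"}
    program = []

    def emit(node):
        if isinstance(node, ast.BinOp):
            emit(node.left)          # code for the left operand first
            emit(node.right)         # then the right operand
            program.append(ops[type(node.op)])
        elif isinstance(node, ast.Name):
            program.append(f"LOAD {node.id}")
        elif isinstance(node, ast.Constant):
            program.append(f"PUSH {node.value}")
        else:
            raise ValueError("unsupported construct")

    emit(ast.parse(source, mode="eval").body)
    return program

print(compile_expr("a + b * c"))
# → ['LOAD a', 'LOAD b', 'LOAD c', 'MUL', 'ADD']
```

A human writes the formula; the program emits the instruction sequence. That division of labor is what made programming accessible to "ordinary folks."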

  Like a salty crew member, Hopper valued an all-hands-on-deck style of collaboration, and she helped develop the open-source method of innovation by sending out her initial versions of the compiler to her friends and acquaintances in the programming world and asking them to make improvements. She used the same open development process when she served as the technical lead in coordinating the creation of COBOL, the first cross-platform standardized business language for computers.79 Her instinct that programming should be machine-independent was a reflection of her preference for collegiality; even machines, she felt, should work well together. It also showed her early understanding of a defining fact of the computer age: that hardware would become commoditized and that programming would be where the true value resided. Until Bill Gates came along, it was an insight that eluded most of the men.IV

  * * *

  Von Neumann was disdainful of the Eckert-Mauchly mercenary approach. “Eckert and Mauchly are a commercial group with a commercial patent policy,” he complained to a friend. “We cannot work with them directly or indirectly in the same open manner in which we would work with an academic group.”80 But for all of his righteousness, von Neumann was not above making money off his ideas. In 1945 he negotiated a personal consulting contract with IBM, giving the company rights to any inventions he made. It was a perfectly valid arrangement. Nevertheless, it outraged Eckert and Mauchly. “He sold all our ideas through the back door to IBM,” Eckert complained. “He spoke with a forked tongue. He said one thing and did something else. He was not to be trusted.”81

  After Mauchly and Eckert left, Penn rapidly lost its role as a center of innovation. Von Neumann also left, to return to the Institute for Advanced Study in Princeton. He took with him Herman and Adele Goldstine, along with key engineers such as Arthur Burks. “Perhaps institutions as well as people can become fatigued,” Herman Goldstine later reflected on the demise of Penn as the epicenter of computer development.82 Computers were considered a tool, not a subject for scholarly study. Few of the faculty realized that computer science would grow into an academic discipline even more important than electrical engineering.

  Despite the exodus, Penn was able to play one more critical role in the development of computers. In July 1946 most of the experts in the field—including von Neumann, Goldstine, Eckert, Mauchly, and others who had been feuding—returned for a series of talks and seminars, called the Moore School Lectures, that would disseminate their knowledge about computing. The eight-week series attracted Howard Aiken, George Stibitz, Douglas Hartree of Manchester University, and Maurice Wilkes of Cambridge. A primary focus was the importance of using stored-program architecture if computers were to fulfill Turing’s vision of being universal machines. As a result, the design ideas developed collaboratively by Mauchly, Eckert, von Neumann, and others at Penn became the foundation for most future computers.

  * * *

  The distinction of being the first stored-program computers went to two machines that were completed, almost simultaneously, in the summer of 1948. One of them was an update of the original ENIAC. Von Neumann and Goldstine, along with the engineers Nick Metropolis and Richard Clippinger, worked out a way to use three of ENIAC’s function tables to store a rudimentary set of instructions.83 Those function tables had been used to store data about the drag on an artillery shell, but that memory space could be used for other purposes since the machine was no longer being used to calculate trajectory tables. Once again, the actual programming work was done largely by the women: Adele Goldstine, Klára von Neumann, and Jean Jennings Bartik. “I worked again with Adele when we developed, along with others, the original version of the code required to turn ENIAC into a stored-program computer using the function tables to store the coded instructions,” Bartik recalled.84

  This reconfigured ENIAC, which became operational in April 1948, had a read-only memory, which meant that it was hard to modify programs while they were running. In addition, its mercury delay line memory was sluggish and required precision engineering. Both of these drawbacks were avoided in a small machine at Manchester University in England that was built from scratch to function as a stored-program computer. Dubbed “the Manchester Baby,” it became operational in June 1948.

  Manchester’s computing lab was run by Max Newman, Turing’s mentor, and the primary work on the new computer was done by Frederic Calland Williams and Thomas Kilburn. Williams invented a storage mechanism using cathode-ray tubes, which made the machine faster and simpler than ones using mercury delay lines. It worked so well that it led to the more powerful Manchester Mark I, which became operational in April 1949, as well as the EDSAC, completed by Maurice Wilkes and a team at Cambridge that May.85

  As these machines were being developed, Turing was also trying to develop a stored-program computer. After leaving Bletchley Park, he joined the National Physical Laboratory, a prestigious institute in London, where he designed a computer named the Automatic Computing Engine in homage to Babbage’s two engines. But progress on ACE was fitful. By 1948 Turing was fed up with the pace and frustrated that his colleagues had no interest in pushing the bounds of machine learning and artificial intelligence, so he left to join Max Newman at Manchester.86

  Likewise, von Neumann embarked on developing a stored-program computer as soon as he settled at the Institute for Advanced Study in Princeton in 1946, an endeavor chronicled in George Dyson’s Turing’s Cathedral. The Institute’s director, Frank Aydelotte, and its most influential faculty trustee, Oswald Veblen, were staunch supporters of what became known as the IAS Machine, fending off criticism from other faculty that building a computing machine would demean the mission of what was supposed to be a haven for theoretical thinking. “He clearly stunned, or even horrified, some of his mathematical colleagues of the most erudite abstraction, by openly professing his great interest in other mathematical tools than the blackboard and chalk or pencil and paper,” von Neumann’s wife, Klára, recalled. “His proposal to build an electronic computing machine under the sacred dome of the Institute was not received with applause to say the least.”87

  Von Neumann’s team members were stashed in an area that would have been used by the logician Kurt Gödel’s secretary, except he didn’t want one. Throughout 1946 they published detailed papers about their design, which they sent to the Library of Congress and the U.S. Patent Office, not with applications for patents but with affidavits saying they wanted the work to be in the public domain.

  Their machine became fully operational in 1952, but it was slowly abandoned after von Neumann left for Washington to join the Atomic Energy Commission. “The demise of our computer group was a disaster not only for Princeton but for science as a whole,” said the physicist Freeman Dyson, a member of the Institute (and George Dyson’s father). “It meant that there did not exist at that critical period in the 1950s an academic center where computer people of all kinds could get together at the highest intellectual level.”88 Instead, beginning in the 1950s, innovation in computing shifted to the corporate realm, led by companies such as Ferranti, IBM, Remington Rand, and Honeywell.

  That shift takes us back to the issue of patent protections. If von Neumann and his team had continued to pioneer innovations and put them in the public domain, would such an open-source model of development have led to faster improvements in computers? Or did marketplace competition and the financial rewards for creating intellectual property do more to spur innovation? In the cases of the Internet, the Web, and some forms of software, the open model would turn out to work better. But when it came to hardware, such as computers and microchips, a proprietary system provided incentives for a spurt of innovation in the 1950s. The reason the proprietary approach worked well, especially for computers, was that large industrial organizations, which needed to raise working capital, were best at handling the research, development, manufacturing, and marketing for such machines. In addition, until the mid-1990s, patent protection was easier to obtain for hardware than it was for software.V However, there was a downside to the patent protection given to hardware innovation: the proprietary model produced companies that were so entrenched and defensive that they would miss out on the personal computer revolution in the early 1970s.

  CAN MACHINES THINK?

  As he thought about the development of stored-program computers, Alan Turing turned his attention to the assertion that Ada Lovelace had made a century earlier, in her final “Note” on Babbage’s Analytical Engine: that machines could not really think. If a machine could modify its own program based on the information it processed, Turing asked, wouldn’t that be a form of learning? Might that lead to artificial intelligence?

  The issues surrounding artificial intelligence go back to the ancients. So do the related questions involving human consciousness. As with most questions of this sort, Descartes was instrumental in framing them in modern terms. In his 1637 Discourse on the Method, which contains his famous assertion “I think, therefore I am,” Descartes wrote:

  If there were machines that bore a resemblance to our bodies and imitated our actions as closely as possible for all practical purposes, we should still have two very certain means of recognizing that they were not real humans. The first is that . . . it is not conceivable that such a machine should produce arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do. Secondly, even though some machines might do some things as well as we do them, or perhaps even better, they would inevitably fail in others, which would reveal that they are acting not from understanding.

  Turing had long been interested in the way computers might replicate the workings of a human brain, and this curiosity was furthered by his work on machines that deciphered coded language. In early 1943, as Colossus was being designed at Bletchley Park, Turing sailed across the Atlantic on a mission to Bell Laboratories in lower Manhattan, where he consulted with the group working on electronic speech encipherment, the technology that could electronically scramble and unscramble telephone conversations.

  There he met the colorful genius Claude Shannon, the former MIT graduate student who wrote the seminal master’s thesis in 1937 that showed how Boolean algebra, which rendered logical propositions into equations, could be performed by electronic circuits. Shannon and Turing began meeting for tea and long conversations in the afternoons. Both were interested in brain science, and they realized that their 1937 papers had something fundamental in common: they showed how a machine, operating with simple binary instructions, could tackle not only math problems but all of logic. And since logic was the basis for how human brains reasoned, then a machine could, in theory, replicate human intelligence.
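Shannon's thesis result can be compressed into a few lines. In the sketch below (an illustration in modern code, not Shannon's own relay circuits), every logical connective is composed from a single primitive gate, and a piece of arithmetic then falls out of pure logic:

```python
# Shannon's insight in miniature: any logical proposition can be built
# from one electronic primitive, here NAND, just as relay or vacuum-tube
# circuits compose it in hardware.
def NAND(a, b): return 1 - (a & b)

def NOT(a):     return NAND(a, a)
def AND(a, b):  return NOT(NAND(a, b))
def OR(a, b):   return NAND(NOT(a), NOT(b))
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))

# A one-bit half adder: arithmetic reduced to logic, logic to circuits.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)   # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```

Chain enough of these adders together and the circuit does sums; that is the bridge from Boolean logic to computing machinery that the two men recognized in each other's 1937 work.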

  “Shannon wants to feed not just data to [a machine], but cultural things!” Turing told Bell Lab colleagues at lunch one day. “He wants to play music to it!” At another lunch in the Bell Labs dining room, Turing held forth in his high-pitched voice, audible to all the executives in the room: “No, I’m not interested in developing a powerful brain. All I’m after is just a mediocre brain, something like the President of the American Telephone and Telegraph Company.”89

  When Turing returned to Bletchley Park in April 1943, he became friends with a colleague named Donald Michie, and they spent many evenings playing chess in a nearby pub. As they discussed the possibility of creating a chess-playing computer, Turing approached the problem not by thinking of ways to use brute processing power to calculate every possible move; instead he focused on the possibility that a machine might learn how to play chess by repeated practice. In other words, it might be able to try new gambits and refine its strategy with every new win or loss. This approach, if successful, would represent a fundamental leap that would have dazzled Ada Lovelace: machines would be able to do more than merely follow the specific instructions given them by humans; they could learn from experience and refine their own instructions.

  “It has been said that computing machines can only carry out the purposes that they are instructed to do,” he explained in a talk to the London Mathematical Society in February 1947. “But is it necessary that they should always be used in such a manner?” He then discussed the implications of the new stored-program computers that could modify their own instruction tables. “It would be like a pupil who had learnt much from his master, but had added much more by his own work. When this happens I feel that one is obliged to regard the machine as showing intelligence.”90
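The capability Turing was pointing at can be made concrete with a toy machine, a hypothetical sketch rather than any historical design: because instructions and data share one memory, a running program can rewrite its own instruction table.

```python
# A toy stored-program machine (illustrative, not a historical design).
# Instructions and data live in the same memory list, so the program
# can overwrite an instruction before it is reached.
def run(memory):
    pc = 0  # program counter
    while True:
        op, a, b = memory[pc]
        if op == "HALT":
            return memory
        elif op == "SET":       # write constant b into memory cell a
            memory[a] = b
        elif op == "PATCH":     # rewrite the instruction at cell a to b
            memory[a] = b       # self-modification: code is just data
        pc += 1

# Slot 2 starts as HALT, but instruction 1 patches it first, so the
# machine ends up executing an instruction no programmer wrote there.
program = [
    ("SET", 4, 0),                   # cell 4 <- 0
    ("PATCH", 2, ("SET", 4, 99)),    # rewrite the next instruction
    ("HALT", 0, 0),                  # replaced before it runs
    ("HALT", 0, 0),
    None,                            # cell 4: data
]
print(run(program)[4])
# → 99
```

A machine that can edit its own instruction table in this way is, in embryo, a machine that can revise its own behavior, which is precisely the opening toward learning that Turing described to his audience.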

  When he finished his speech, his audience sat for a moment in silence, stunned by Turing’s claims. Likewise, his colleagues at the National Physical Laboratory were flummoxed by his obsession with making thinking machines. The laboratory’s director, Sir Charles Darwin (grandson of the evolutionary biologist), wrote to his superiors in 1947 that Turing “wants to extend his work on the machine still further towards the biological side” and to address the question “Could a machine be made that could learn by experience?”91

  Turing’s unsettling notion that machines might someday be able to think like humans provoked furious objections at the time—as it has ever since. There were the expected religious objections and also those that were emotional, both in content and in tone. “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain,” declared a famous brain surgeon, Sir Geoffrey Jefferson, in the prestigious Lister Oration in 1949.92 Turing’s response to a reporter from the London Times seemed somewhat flippant, but also subtle: “The comparison is perhaps a little bit unfair because a sonnet written by a machine will be better appreciated by another machine.”93

  The ground was thus laid for Turing’s second seminal work, “Computing Machinery and Intelligence,” published in the journal Mind in October 1950.94 In it he devised what became known as the Turing Test. He began with a clear declaration: “I propose to consider the question, ‘Can machines think?’ ” With a schoolboy’s sense of fun, he then invented a game—one that is still being played and debated—to give empirical meaning to that question. He proposed a purely operational definition of artificial intelligence: If the output of a machine is indistinguishable from that of a human brain, then we have no meaningful reason to insist that the machine is not “thinking.”

  Turing’s test, which he called “the imitation game,” is simple: An interrogator sends written questions to a human and a machine in another room and tries to determine from their answers which one is the human. A sample interrogation, he wrote, might be the following:

  Q: Please write me a sonnet on the subject of the Forth Bridge.

  A: Count me out on this one. I never could write poetry.

  Q: Add 34957 to 70764.

  A: (Pause about 30 seconds and then give as answer) 105621.

  Q: Do you play chess?

  A: Yes.

  Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?

  A: (After a pause of 15 seconds) R–R8 mate.

  In this sample dialogue, Turing did a few things. Careful scrutiny shows that the respondent, after thirty seconds, made a slight mistake in addition (the correct answer is 105,721). Is that evidence that the respondent was a human? Perhaps. But then again, maybe it was a machine cagily pretending to be human. Turing also flicked away Jefferson’s objection that a machine cannot write a sonnet; perhaps the answer above was given by a human who admitted to that inability. Later in the paper, Turing imagined the following interrogation to show the difficulty of using sonnet writing as a criterion of being human:

 
