The Human Story

by James C. Davis

Above all they did not foresee the unbelievables that are the subjects of this final chapter.

  ONE OF THESE incredibles is of course the computer, whose concept is much older than you may have thought. In the 1830s and ’40s an Englishman designed (but didn’t make) an “Analytical Engine” that he declared would make any calculation that a person wished. In most respects Charles Babbage’s engine would have had the same internal “logic” as a modern-day computer, and experts of today believe it would have worked. But Babbage didn’t have the cabbage. The government supported his research a while, then stopped. Babbage blamed his problems on the English mind: “If you speak to him [an Englishman] of a machine for peeling a potato, he will pronounce it impossible; if you peel a potato with it before his eyes, he will declare it useless, because it will not slice a pineapple.”

  As usual, need impelled us humans to invent. In the latter 1800s businesses and governments began to grow so big they needed help in dealing with their data. Processing the 1880 U.S. census took so long — seven years — that the results were out of date when published. In 1890, therefore, the government used a punched-card tabulator to sort and count, and did the job under budget by the deadline. The tabulator ran on electricity, and in the early 1900s electricity also led to speedy calculators for scientists and engineers, and business machines for billing and accounting. Babbage’s vision of an “Analytical Engine” neared fulfillment.

  On the eve of World War II inventors in America, Germany, and England were already working on computers. Then the needs of warfare hurried them along. Who was first to make a real computer? That’s hard to say, since everything depends on how you use the word computer. This writer, who taught at the University of Pennsylvania for many years, gives Penn the credit.

  In the early years of World War II the U.S. Army needed a device to calculate the “firing tables” that gunners used when they aimed their cannons. At Penn two engineers set out to make this calculating tool. One of them, John Mauchly, was an assistant professor who dreamed about machines that ran on nothing but electrons. Presper Eckert, a “research associate” and only twenty-two, was known (as much as anything) for having made a system to play chimes in graveyards to drown the noise of crematoriums. These two collected some researchers and began their work.

  In 1946 (when the war was over) Eckert and Mauchly finished making their computer. They named it ENIAC, for Electronic Numerical Integrator and Computer. ENIAC was eight feet tall, eighty feet long, and weighed as much as eight ordinary cars. By 1940s standards it was very fast. It carried out 5,000 operations in a second, and computed an artillery shell’s trajectory faster than the shell could fly. (But it could be troublesome. It’s said that a moth once flew into an ENIAC and short-circuited it, giving rise to the computer expression “bug.”)

  As soon as brilliant people invented the computer, other brilliant people made it better. One of these was John von Neumann of the Institute for Advanced Study at Princeton. Von Neumann was a universal genius. Certain unsolved problems of the ENIAC entranced him, and he wrote a paper laying out what he believed should be the “architecture” or logic of computers. This included a control to tell the computer what to do and when. Von Neumann’s paper was extremely influential, and some have (wrongly) called him “father of the computer.”

  The astounding new machine began to win attention. In the presidential election of 1952 a computer predicted for national television that Eisenhower would win 438 electoral votes. He won 442.

  Just the same, the first computers (ENIAC and those that quickly followed) were too big and, yes, too dumb. A writer for Popular Mechanics magazine speculated hopefully in 1949 that the computer might shrink one day to the size of a car. The writer was too pessimistic. By the 1960s, computers were already smaller, smarter, and easier to use. IBM was making sections of computers that could fit in elevators. By 1969 NASA had a computer small enough to squeeze aboard the little craft that landed on the moon — yet smart enough to do its job. However, even in the early 1970s, most computers still were costly, big, and hard to use. Computer makers, such as DEC and IBM, had recognized these problems, and some had started making smaller, more convenient “minicomputers.” But even these were difficult to use, and they cost more than ordinary people could afford.

  The answer to these problems was the personal computer, or PC, which would transform many lives. The facts about its origin are much debated; we shall simplify. In 1975, a tiny firm named MITS began to sell kits (MITS kits?) for make-it-yourself “microcomputers.” The fact that these were hard to assemble, often wouldn’t work, and did nothing useful didn’t matter. What mattered was the price, which was under $400. The kits sold very well among the horde of young computer hobbyists and helped swell that horde into an army, many of them on America’s West Coast. These clever, casual people chatted at the Homebrew Computer Club, bought components at the Byte Shop and ComputerLand, and subscribed to Byte and Dr. Dobb’s Journal of Computer Calisthenics and Orthodontia. People like them thought up acronyms like GUI (graphical user interface), WIMP (windows, icons, mouse, and pull-down menus), and POTS (plain old telephone service).

  In 1975 Paul Allen, a computer whiz who had left Washington State University, saw a story on the new computer kits in Popular Electronics. The news that these were on the market galvanized both Allen and a friend of his at Harvard, a pre-law student named Bill Gates. The two of them liked making money nearly as much as they liked designing software. They decided to write a programming language for the low-priced kits, their own version of an existing language known as BASIC. They finished their program in six weeks of hard work and named it GW[for Gee Whiz]-BASIC. They also formed a software firm, one of hundreds — maybe thousands — at that time, and named it Micro-Soft.

  Their programming language was successful, and they sold or licensed it to hobbyists and computer firms, one of which was Apple. Apple was a tiny firm based in the California garage of Steve Jobs’s parents. Jobs was then a twenty-one-year-old computer hobbyist with striking confidence. (As a boy of thirteen, he had telephoned William Hewlett of Hewlett-Packard, the big electronics firm, and asked him for components. Hewlett didn’t merely give this prodigy the parts he asked for; he offered him a part-time job.) Jobs’s friend and partner, Stephen Wozniak, then twenty-six, was a self-taught engineer.

  In 1975 Wozniak built his own computer — basically a naked circuit board. He and Jobs christened it the Apple and began to hand-build Apples in the Jobs garage. In 1976 the Byte Shop sold two hundred Apples for them.

  Jobs was far from satisfied. Others too were making small computers, mainly for the hobbyists, but Jobs had vision. He foresaw a broader market for the small computers, provided they were made for ordinary people. A computer, Jobs believed, should be useful, fun, and friendly, and small enough to find a place amid the clutter on your desk. Using Jobs’s suggestions, Wozniak began to build a friendlier computer, the Apple II. And this is where Microsoft (the former Micro-Soft) came in, because, for a while, Wozniak and Jobs found in GW-BASIC just what they needed for their easy-to-use personal computer. By this time, 1977, Apple had outgrown the Jobs garage and employed about a dozen people.

  The giant IBM Corporation, by contrast, had battalions of men in dark blue suits, and they were not asleep. The firm held back for several years to see if personal computers would replace electric typewriters, of which it sold a million every year. By early 1980 IBM could see the future, and it quickly made a prototype PC and then began production. However, rather than develop its own PC software, IBM bought from Microsoft a license to let buyers of its PCs use Microsoft’s operating system. As we’ll see, this action — or inaction — had significant results.

  The IBM PC immediately enjoyed a huge success, and the company quickly quadrupled production. Alas for IBM, however, other firms could buy most of the same parts that IBM used, so they began to manufacture less expensive “clones” of IBM’s PC. IBM did not go under, but it did retrench. The clones reduced the price, and this encouraged wider use of personal computers. Apple, meanwhile, stuck with what it had already; it wouldn’t clone the IBM PC. Jobs competed with the clones by making even better software and developing an ultra user-friendly computer, the Macintosh.

  By 1982 the personal computer was available to anyone who could afford its fairly modest, always falling, price. In 1983 enthusiasts could find roughly thirty magazines for users of personal computers. The triumph of the small machines was official in January 1983 when Time selected the PC as its “Machine of the Year.”

  In the meantime, though, attention shifted from computers to their software. This happened as the uses for computers rose, but could not have happened if computers hadn’t grown so smart. By 1985 the “chips” inside computers that really do the work held up to 275,000 transistors that could carry out 6,000,000 instructions in a second. One little chip was at least 1,200 times as fast as ENIAC, that pea-brained dinosaur, and the processing power of chips was doubling every eighteen to twenty-four months. The growing power of their brains allowed PCs to use a lot more software.

  From 1981 to 1984 the PC software market rose from $140 million to $1.6 billion, a more than tenfold surge. The clearest illustration of the crucial role of software is the tale of Microsoft. In 1980 Microsoft employees numbered only thirty-two, but luck was with Bill Gates. When IBM decided (as we saw) not to make the software for its personal computer, it first approached a firm called Digital Research, Inc. When this did not work out, mighty IBM went to little Microsoft.

  As we saw above, Gates and Allen agreed to grant IBM a license that let users of IBM PCs run Microsoft’s operating system. (The deal was complicated, but it turned out to be very good for Microsoft and bad for IBM.) Gates, who didn’t have a system ready, bought the software from another firm for $30,000, improved it, and provided it to IBM. Eventually this system would be “bundled” into nearly every IBM PC and every clone. Microsoft would earn from $10 to $50 on each bundle, and Gates would quickly make (what else?) a bundle. At one point in the 1990s Gates’s fortune equaled the combined worth of the 106 million poorest Americans. By this time Microsoft was making other software that was widely used. It had 14,000 workers, and the value of its stock was nearing IBM’s.

  Americans of course were not the only gainers from computers. Computers stimulated economic growth on every continent. As we saw in chapter 22, the world now had a rising global economic system, and rising (though more and more unequal) incomes. Personal computers often were the fairy queens who flourished wands and changed the poorest Third World hamlets into glowing symbols of the dawning age. For example, outside Bombay, India, were villages where hunger and infanticide were common. Amid this destitution in the year 2000 was a new development known as SEEPZ — the Santacruz Electronics Export Processing Zone. Here, in SEEPZ’s air-conditioned rooms, well-fed Indian computer programmers turned out software for multinationals on other continents.

  The biggest change in the computer was not the way it shrank in body nor the way it grew in mind, but the ways we humans used it. In the 1940s its inventors had intended it for crunching numbers, which is why they named it a “computer.” But fairly soon computer makers and business users began to change computers into data storers and manipulators.

  Those who made and those who used computers now began to find a million uses for them. By the 1970s computers “processed” words, which for them was just a cinch. In the Gulf War in 1991 computers were using satellites in the sky above them to guide allied soldiers through the trackless deserts. Up to now we have not been able to create computers that can think like humans, but they have begun to master chess. In 1996 the world champion chess player Garry Kasparov defeated a powerful computer named Deep Blue, but in 1997 an improved Deep Blue beat Kasparov in the deciding game of a six-game series. In 2003 Kasparov played a six-game series with the world’s best chess computer, Deep Junior, which can analyze 3 million moves per second. The match ended in a draw.

  By the year 2000 ordinary humans carried digital assistants in their pockets that contained a million times the memory of the computer on Eagle, the moon lander. Chips now held a million transistors in an area slightly bigger than a postage stamp, and by the time you read this they will be out of date.

  Now that we had made electronic brains, the next step was to link them. This was something like what happened long ago to early humans: they first developed bigger brains, and then they linked their brains to other human brains by using speech. In the same way, software engineers in the 1960s and 1970s found ways to link networks of computer users to other networks. The Internet (as it was later named) is thus a net of nets. In the 1990s it enormously expanded and in theory became a way for one to be in touch with all, instantly, around the world.

  The linking or connecting of computers was as vital to the human story as the making of them was. And yet the linkers are as thoroughly forgotten as von Neumann, Gates, and Jobs are known and lionized. For the record, one father of the Internet was J. C. R. Licklider, an MIT professor. “Lick,” as he was known, foresaw the benefits of linking nets and envisioned what he called “mechanically extended man.”

  As the Internet expanded, the problem of discovering the riches in it also grew. In 1989 a British physicist, Tim Berners-Lee, devised the World Wide Web, a means of sharing global information via the Internet. Berners-Lee has written, “The vision I have for the Web is about anything being potentially connected with anything.” With the Web one navigates among discussion groups, useful information, press releases, pornography, libraries, “club rooms,” and miles and miles of junk. Computer users may post data (images, video, words, or sound), or use the data others anywhere have posted. Everywhere on earth people use the Web to buy and sell, amuse, inform and influence, and penetrate the worldwide store of knowledge.

  At the latest count the Web probably contained at least five billion pages.

  OUR VENTURE INTO space began in dreams. When Russians, Germans, Americans — even a Rumanian — pioneered in making rockets in the 1920s, they were not concerned about their service to their country or the money they might make. No, they dreamed of human flight in space. They had read the fantasies of earlier generations, such as Jules Verne’s From the Earth to the Moon and H. G. Wells’s The War of the Worlds (about invading Martians) and The First Men in the Moon. The rockets they were building were to be man’s means to see the dark side of the moon and learn if there were men on Mars.

  As late as the 1920s and 1930s many found the very thought of rockets quite absurd. In 1919 Robert Goddard, an American inventor who now is sometimes called the “father of rocketry,” published his classic work, A Method of Reaching Extreme Altitudes. The New York Times responded with a jocular editorial. “That Professor Goddard…does not know the relation of action to reaction and of the need to have something better than a vacuum against which to react — to say that would be absurd. Of course he only seems to lack the knowledge ladled out in high schools.” People nicknamed him “Moony” Goddard.

  Even after World War II, the dream of human flight in space seemed much like science fiction. Science Digest guessed in 1948 that “Landing and moving around the moon offers so many serious problems for human beings that it may take science another 200 years to lick them.”

  But German engineers had run a rocket — that is, missile — program both before and during World War II. In the autumn of 1944 they had fired the world’s first medium-range ballistic missiles across the English Channel at anything in England they might chance to hit. At the end of the war, when the allied armies entered Germany, both Russia and America were planning missile programs of their own. The Russians reached the German rocket center first, but the finest of the German rocketeers had fled westward and surrendered to the Americans. (Stalin menacingly demanded, “How and why was this allowed to happen?”)

  Among the Germans whom the Americans carried off was Wernher von Braun. As a little boy in Germany, von Braun had already been space-obsessed, and one time he had fastened rockets to his wagon, causing an explosion. When he was eighteen, an article he read on travel to the moon provided his vocation. Later he was technical director at a weapons center, and he and his team of engineers developed the missiles that the Nazis made and fired across the Channel. (A satirical American, Tom Lehrer, later wrote, somewhat unfairly, “Once the rockets are up / Who cares where they come down? / That’s not my department / Says Wernher von Braun.”) Soon the former German weapon maker was working for America.

  But it was Russia that was first to enter space, and this happened in the context of the Cold War. In the decade after World War II Russian scientists and engineers had beaten the Americans in the race to build an intercontinental missile. What they made was basically a mighty rocket, impelled by twenty separate engines, carrying a two-ton atom bomb. But now the Russians used the rocket not for warfare but to enter space.

  In the middle of the night on October 4, 1957, a bugle sounded, flames erupted, and, with a roar like rolling thunder, Russia’s rocket lifted off. It bore aloft the earth’s first artificial satellite, a shiny sphere the size of a basketball. Its name was Sputnik, meaning “companion” or “fellow traveler” (through space). The watchers shouted, “Off. She’s off. Our baby’s off!” Someone danced; others kissed and waved their arms. The rocket and the Sputnik disappeared from view and then parted from each other. For several months Sputnik circled the earth every hour and a half, transmitting beeps.

 
