Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies -- and What It Means to Be Human


by Joel Garreau


  As Richard Satava, a DSO program manager whose portfolio includes cyborg moths and robot surgeons, puts it, we are entering the “biointelligence age.” If we master this revolution, he believes, “we will be the first species to control our own evolution.”

  AMONG THOSE AT DARPA who are working on changing what it means to be human, the word you most commonly hear is fun.

  Fun comes up all the time. Program managers view what they’re doing as the greatest fun of their lives. Whatever day of the weekend or time of the night you e-mail them, it is common to get a quick response. They seem always to be on. Their tours of duty are usually only three or four years, and they clearly view it as the most intense experience of their lives. They know they will never see its like again. They describe what they’re doing in terms that make it sound like the greatest adventure since Tom and Huck.

  What you don’t get is much of a sense of introspection. The program managers at DARPA can clearly see the individual steps that it would take to achieve telepathy, for example. But they don’t talk much about what the impact of telepathy would be. Or what a world full of telepaths might be like.

  If you point out that technology has a history of biting back, delivering unintended consequences, and ask whether DARPA worries about that, Goldblatt replies, “Yes, of course. It’s your job. We even have a bioethicist on staff. But you can’t let the fear of the future inhibit exploring the future.”

  Are there no limits on what we should try based on potential for evil?

  “I don’t think you should stop yourself because you can dream up scenarios where things didn’t go the way you wanted them to go. We probably wouldn’t be flying people into space if we really understood the risks, and now that we understand the risks more clearly, I guess there’s a question of whether we will put more people in space.”

  If you ask Joe Bielitzki, the self-proclaimed pacifist who’s creating the metabolically dominant soldier, about the implications of creating supermen, he sounds tortured. He replies, “There’s potential for contradictions in all of science, but the intent is not to create a superman. The intent is to send the war fighter out there best equipped to come back alive. And those are big differences. I mean, the results may look similar, but the intent is not to create a superhuman. There’s no reason to have a superhuman. But get somebody who can carry a little more, go on a little longer, drag their butt off the battlefield even if they’re injured—keeping people alive is really what it’s about. And this is coming from probably the ultimate pacifist. War is not a good thing to be in. But if people are going to fight you might as well give them every chance to come home to the people who love ’em.”

  If you ask Kurt Henry, who’s trying to regrow arms that are blown off, about the meaning of what he’s doing, he replies with a grin, “That’s above my pay grade. That’s not my department.”

  GINA GOLDBLATT is not at all phobic about technology. She’s accustomed to relying on it. “Technology is assisting a disabled person to reach her full potential,” she says. “That means that I started using computers in third grade. A lot of people don’t think of their computer as their pen and paper, where I did. So therefore it allowed me to remain in mainstream classes.”

  So what about brain implants like Belle’s? I ask her. Are you looking forward to getting one of those? Cyberkinetics, that Massachusetts company funded by DARPA, has received the Food and Drug Administration’s permission to test just such a device on humans.

  “Like, people are asking me that, too,” she said. “My friends will ask me, ‘do you ever look at the future as being able to find a cure for cerebral palsy?’ But I don’t know. I know my cerebral palsy is—whether or not I want to admit it—part of me. It always has been and it always will be.”

  Gina Goldblatt sees her cerebral palsy as part of her human nature.

  WHEN MICHAEL GOLDBLATT and I first met, we ended up at a nearby restaurant called Tara Thai. There he started to open up about the importance of the work DARPA was doing, creating bolder, better, stronger, faster, smarter human beings.

  He mentioned the impact DARPA’s work would have on us all. For example, he said, he had a daughter with cerebral palsy. She had spent her whole life in a wheelchair. While her accomplishments were many and remarkable, he was actually spending many millions of taxpayer dollars to save his daughter, and mentioned the work with Belle, the North Carolina monkey. Thus I heard the story for the first time.

  So, I said, in order to save your daughter, you’re willing to fundamentally alter human nature?

  There was a four-beat pause.

  “Fundamentally altering human nature,” Michael Goldblatt finally said, “would be an unintended consequence.”

  CHAPTER THREE

  The Curve

  Ch-ch-ch-changes, turn and face the strange.

  —“Changes,” by David Bowie

  ONCE UPON A TIME, a peasant rescued from death the daughter of a very rich king. The king, overcome with gratitude, offered to grant the peasant any wish, whether it be gold, jewels or even a tenth of all his lands.

  The peasant, however—who was not as simple as he seemed—asked only for a chessboard and some corn. “Tomorrow,” he told the king, “I would like you to put a single kernel of corn on the first square. The next day, I would like you to put two kernels of corn on the second square. Then each additional day, I would like you to double the number of kernels you place on each of the succeeding squares.”

  The king, who in his youth had spent more time learning to joust than to cipher, was baffled by such a humble-appearing request. But a promise is a promise, so he agreed.

  You probably know how this story ends. On the 3rd day, the peasant got 4 kernels. On the 4th day, 8 kernels. On the 5th day, 16. Even unto the 10th day, the peasant had barely received enough corn to make a decent porridge. But by then The Curve of accelerating returns was taking off. By the 20th day, the king owed the peasant 524,288 kernels of corn that day alone. All the king’s horses and all the king’s men were consumed with bringing the peasant his corn. By the 30th day the peasant was owed half a billion kernels of corn just that day, and the king’s entire navy was devoted to importing corn from far and near to add to the peasant’s vast store. The king finally realized that there were 64 squares on a chessboard, and 34 more doublings to go. On the 40th day alone, he would have to deliver 549 billion kernels of corn. He became appropriately distraught and summoned the peasant. “What can I do to end this?” he asked.

  “Tell you what,” the savvy peasant said. “I’ll take your crown, and your scepter, and your throne. In fact, I’ll take the entire kingdom. And by the way, what did you say was the name of your daughter?”

  In such a fashion are vast upheavals in society and its values caused by agreeing to ride such a curve.
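
  The arithmetic behind the king’s predicament is easy to check. Here is a minimal sketch in Python (my own illustration; nothing like it appears in the tale) that reproduces the kernel counts cited above:

    # Kernels owed on day n of the chessboard bargain:
    # 1 kernel on day 1, doubling every day thereafter, i.e. 2 ** (n - 1).
    for day in (10, 20, 30, 40, 64):
        kernels = 2 ** (day - 1)
        print(f"Day {day}: {kernels:,} kernels")

    # Day 10: 512 kernels (barely a porridge)
    # Day 20: 524,288 kernels
    # Day 30: 536,870,912 kernels, roughly half a billion
    # Day 40: 549,755,813,888 kernels, about 549 billion
    # Day 64: 9,223,372,036,854,775,808 kernels on the final square alone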

  It is just such a period in which we now find ourselves.

  Down the peninsula from San Francisco lies Santa Clara County. As late as the sixties it was still most famous for its apricots and prunes. When its orchards erupted into bloom in the spring, they attracted tourists like the leaves in New England’s autumn. Since the 1920s, this area just south of Palo Alto and Stanford University had been known as the Valley of Heart’s Delight.

  By the mid-1960s, however, the Santa Clara Valley was changing. In 1938, two young Stanford grads, Bill Hewlett and Dave Packard, started the area’s first technology company in their now-famous garage in Palo Alto. Their first big customer was Walt Disney, who bought eight of their “audio oscillators” for use in his new animated film, Fantasia. In 1959, Robert Noyce of Fairchild Semiconductor figured out how to etch thousands of transistors on one piece of silicon and mass-produce them, thereby sharing credit with Jack Kilby for inventing the computer chip as we know it. Not for a dozen more years, nonetheless, would the area acquire the name that would make it legendary. Only in 1971 was the Valley of Heart’s Delight first referred to in print as Silicon Valley.

  Well before then, in 1965, the 36-year-old director of Fairchild’s Research and Development Laboratories, Gordon E. Moore, made an interesting discovery.

  Moore defied many of the stereotypes we have today of nerds. He was an outdoorsman who loved to camp and fish. He had an athletic build from his days as a 5-foot-11 football player and gymnast. He was a California native, having grown up just over the Santa Cruz Mountains on the Pacific coast side of the San Andreas Fault, in Pescadero. He was thoughtful and cautious, hardly an egomaniacal blowhard. Nor was he born to geekdom. When he went to Caltech, he was the first of his family to go to college. He never dreamed of coming to work in jeans and a T-shirt. He always wore a tie and a dress shirt with the top button buttoned, although his white shirts were sometimes known to have short sleeves and looked suspiciously like they might feature polyester. The only reason he didn’t view a slide rule as a routine fashion accessory (remember, this was before the pocket calculator) was that he was a physical chemist, not a constantly enumerating engineer.

  “You know the movie Apollo 13? You know those guys in NASA mission control?” says Howard I. High, a longtime associate. “He looked just like that. He would have fit right in. He would have made a good spy. He just looked so normal.”

  Moore helped move the earth on July 18, 1968, when he and Noyce left Fairchild to found a company called Intel. (Moore’s assistant director of R&D at Fairchild, Andy Grove, was their first employee.) They would usher in a new age. The desks of the whole world would wind up featuring strange new appliances, these dun-colored boxes with “Intel inside.” Moore and his early compatriots would become billionaires many times over.

  But it was back in 1965 that Moore made the observation that may truly secure his place in history, for it may have the most consequence for the future of the human race. What he noted, in an article for the 35th anniversary issue of Electronics magazine, was that the complexity of “minimum cost semiconductor components” had been doubling once a year, every year, since the first prototype microchip had been produced six years before. Then came the breathtaking prediction. He claimed this doubling would continue every year for the next 10 years. Carver Mead, a professor at Caltech, would come to christen this claim “Moore’s Law.”

  Moore’s Law has sprouted many variations over time. As the core faith of the entire global computer industry, however, it has come to be stated this way: The power of information technology will double every 18 months, for as far as the eye can see.

  Sure enough, in 2002, the 27th doubling occurred right on schedule with a billion-transistor chip. A doubling is an amazing thing. It means the next step is as tall as all the previous steps put together. Twenty-seven consecutive doublings of anything man-made, an increase of well over 100 million times—especially in so short a period—is unprecedented in human history. To put this in the context of our peasant and his corn, the 27th doubling just precedes the moment when even the king begins to realize the magnitude of his situation.
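
  That multiple is simple to verify. A one-line check in Python (my own illustration, not the book’s) makes the magnitude concrete:

    # 27 consecutive doublings of a single unit of capability.
    print(f"{2 ** 27:,}")   # 134,217,728, i.e. well over 100 million
    # On the peasant's chessboard this is the pile owed on day 28,
    # just before the king grasps the scale of his promise.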

  Doublings of this extent have never before happened in the real world. This is exponential change. It’s a curve that goes straight up. The closest most people had come to the idea of such doublings was when, back in grammar school, they first tried to wrap their minds around the “miracle of compound interest.” In that version of exponential growth, if you put a dollar in your savings account and you, the tax man and catastrophe don’t mess with it, in several lifetimes it will wonderfully turn into a million dollars. It really will. The Curve of accelerating returns is the principle that underlies saving early for your retirement. Time produces astonishing and transformative results when you can count on a doubling and redoubling curve.
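
  A quick sketch shows why that schoolroom claim holds, at least on paper. The 5 percent annual rate below is an assumption chosen purely for illustration; the text names no rate:

    # How long for $1 to compound into $1,000,000?
    # Assumes a hypothetical, steady 5% annual rate (illustrative only).
    rate, balance, years = 0.05, 1.0, 0
    while balance < 1_000_000:
        balance *= 1 + rate
        years += 1
    print(years)   # 284 years, i.e. several lifetimes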

  But such continuity usually doesn’t happen. Take the time before the American Civil War. The number of miles of railroad track doubled nearly seven times in the 10 years between 1830 and 1840, from 23 miles to 2,808. That was impressive. It was a curve of exponential change that would be as steep and world-altering as that of the chip in the 1960s. Nonetheless, in the beginning, most people still viewed railroads as a curiosity for the elite. They still traveled by water, on horseback or on foot. Most people didn’t use computers in the 1960s, either.

  Here’s the difference. For railroads, the pace of growth was not sustainable. You needed more and more land and steel and coal to expand the system. Those are finite resources. There’s only so much of any of those. So The Curve began to level off.

  It took 40 more years, until 1880, to get the next five doublings, to almost 100,000 miles. By then, The Curve was really losing steam. It took another 36 years, to 1916, for U.S. railroads to make their final doubling plus a bit, reaching their peak mileage of 254,037 miles.
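
  The doubling counts behind those mileage figures can be checked the same way, in a short sketch that is again purely illustrative:

    from math import log2

    # Doubling counts implied by the track-mileage milestones cited above.
    print(round(log2(2_808 / 23), 1))        # 6.9 doublings, 1830 to 1840
    print(f"{2_808 * 2 ** 5:,}")             # 89,856: five more doublings, "almost 100,000 miles"
    print(round(log2(254_037 / 89_856), 1))  # roughly 1.5 further doublings to the 1916 peak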

  Make no mistake. The railroads changed whatever they touched. America was transformed. A struggling, backward, rural civilization mostly hugging the East Coast was converted into a continent-spanning, world-challenging, urban behemoth. New York went from a collection of villages to a world capital. Chicago went from a frontier outpost to a brawny goliath. The trip to San Francisco went from four months to six days, and that Spanish-mission gold-rush town became a sophisticated anchor of the Pacific. Not for nothing do historians still celebrate the driving of the Golden Spike in Promontory, Utah, on May 10, 1869, uniting the continent. The West became a huge vacuum, sucking record numbers of immigrants across the Atlantic. The frontier was settled. Distance was marked in minutes. Suddenly, every farm boy needed a pocket watch. For many of them, catching the train meant riding the crest of a new era that was mobile and national. A voyage to a new life cost 25 cents.

  Of course, these railroad doublings, like most transformative curves, soon ran out of critical fuel—including money and demand for the services. At this point, things leveled off, and society tried to adjust to the astounding changes it had seen during the rise of The Curve. Historically, adapting to this sort of upheaval has been like shooting the rapids. We start in the calm waters to which we are accustomed, bump and scream and flail through the unprecedented, then emerge around the bend into a very different patch of calm water, where we catch our breath and assess what we’ve done.

  This process is represented by an S-curve. At the flat bottom of the S, you have a period of stability such as the early 1800s. You leave that for the rapid change represented by the steep middle of the S. That’s when The Curve rises exponentially, as in the mid-1800s. Then things level out at the top of the S. The last transcontinental railroad completed in the United States was the Milwaukee Road in 1909. After that, the market for transcontinental rail was saturated. In part, that was because of the rise of a new transformative technology: The one millionth Model T rolled off the assembly line in 1915.

  Moore’s Law would have been revolutionary enough if chip power had leveled out in the 1980s at the top of an S-curve of 14½ doublings—comparable to that of the railroads over 85 years. Our world today, marked by ubiquitous personal computers, would have ensured that.

  But The Curve did not stop. In 1975, Gordon Moore revised his Law to predict doublings “only” every two years. But he turned out to be too conservative. The computer industry regularly beats its clockwork-like 18-month schedule for price-performance doubling.

  Another way of expressing Moore’s Law is far more recognizable to many people. The price of any given piece of silicon can be expected to drop by half every 18 months. Who hasn’t eyed a whiz-bang $2,000 computer as a Christmas present, only to see an equivalent machine drop in price to $1,300 by the next holiday? Before 10 Christmases pass, the gift becomes a ghost. It has been cast aside. Not because it doesn’t work; it chugs along just fine. But we have changed. It now seems so clunky. The power that could have only been bought with $2,000 10 years before can be expected to be available for $31.25, according to Moore’s Law. By then the power is so unremarkable that you can get it for free with a subscription to Newsweek. Of course it no longer sits on a desktop. It has disappeared into watches, cell phones, jewelry and even refrigerator magnets with more power than was available to the entire North American Air Defense Command when Moore first prophesied in 1965. In some cases that power seems to dissolve into pocket lint—so unremarkable you don’t even register that it’s there. It essentially disappears. Take smart cards. You may have some and not even know it. They frequently look like credit cards. But they have chips in them, so they have significant powers. They allow you to enter especially secure buildings or store your medical records or pay for your subway fare. Passports come equipped with them. Full-blown versions are tiny computers without a keyboard or a screen. By 2002 those smart cards matched the processing power of a 1980 Apple II computer. By the middle of the decade they matched the power of a 386-class PC, circa 1990. Before 2010 they will have Pentium-class power. All for under $5 apiece. Think about that—a $4 Pentium. Retail items such as disposable razors increasingly come with radio identification chips, smaller than a grain of rice, that deter shoplifting. Those chips have the power of the state-of-the-art commercial computers of the 1970s. They cost pennies. They are designed to be thrown away.
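
  The $31.25 figure quoted above follows directly from the halving rule. A back-of-the-envelope sketch, assuming one halving for each completed 18-month period (that framing is mine, not the book’s):

    # Price of a fixed amount of computing power under Moore's Law,
    # assuming the price halves once per completed 18-month period.
    price, months = 2_000.0, 10 * 12
    halvings = months // 18        # 6 completed halvings in 10 years
    print(price / 2 ** halvings)   # 31.25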

  This astonishing power has become almost free because, unlike the railroads, its expansion does not have the material limits of, say, Grand Central Station. The cost of shipping a ton of grain was halved perhaps three times during the railroads’ heyday. The cost of computing had halved almost 30 times by the early 21st century. There are only four limits to computer evolution: quantum physics, human ingenuity, the market and our will. Actually, it’s not at all clear that there are any practical limits represented by quantum physics, human ingenuity and the market, at least not in our lifetimes. Whether our will can shape limits is the core issue of the rest of this book.

  You can see the effects all around you. In the same April 1965 issue of Electronics in which Gordon Moore laid out Moore’s Law, Daniel E. Noble of Motorola also made some stunning predictions. In less than 50 years, he boldly prophesied, not only would computers become common in the home, but “the housewife will sit at home and shop by dialing the selected store for information about the merchandise wanted.”

 
