Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100


by Michio Kaku


  Back then (it seems like a lifetime ago), personal computers were new, just beginning to penetrate people’s lives, as they slowly warmed up to the idea of buying large, bulky desktop computers in order to do spreadsheet analysis and a little bit of word processing. The Internet was still largely the isolated province of scientists like me, cranking out equations to fellow scientists in an arcane language. There were raging debates about whether this box sitting on your desk would dehumanize civilization with its cold, unforgiving stare. Even political analyst William F. Buckley had to defend the word processor against intellectuals who railed against it as an instrument of the philistines and refused ever to touch a computer.

  It was in this era of controversy that Weiser coined the expression “ubiquitous computing.” Seeing far past the personal computer, he predicted that the chips would one day become so cheap and plentiful that they would be scattered throughout the environment—in our clothing, our furniture, the walls, even our bodies. And they would all be connected to the Internet, sharing data, making our lives more pleasant, monitoring all our wishes. Everywhere we moved, chips would be there to silently carry out our desires. The environment would be alive.

  For its time, Weiser’s dream was outlandish, even preposterous. Most personal computers were still expensive and not even connected to the Internet. The idea that billions of tiny chips would one day be as cheap as running water was considered lunacy.

  And then I asked him why he felt so sure about this revolution. He calmly replied that computer power was growing exponentially, with no end in sight. Do the math, he implied. It was only a matter of time. (Sadly, Weiser did not live long enough to see his revolution come true, dying of cancer in 1999.)

  The driving source behind Weiser’s prophetic dreams is something called Moore’s law, a rule of thumb that has driven the computer industry for fifty or more years, setting the pace for modern civilization like clockwork. Moore’s law simply says that computer power doubles about every eighteen months. First stated in 1965 by Gordon Moore, one of the founders of the Intel Corporation, this simple law has helped to revolutionize the world economy, generate fabulous new wealth, and irreversibly alter our way of life. When you plot the plunging price of computer chips and their rapid advancements in speed, processing power, and memory, you find a remarkably straight line going back fifty years. (This is plotted on a logarithmic scale. In fact, if you extend the graph so that it includes vacuum tube technology and even mechanical hand-crank adding machines, the line can be extended more than 100 years into the past.)
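
  To make the doubling rule concrete, here is a back-of-the-envelope sketch (the symbols P(t), P_0, and T are introduced purely for illustration; they are not Moore’s own notation):

      P(t) = P_0 \cdot 2^{t/T}, \qquad T \approx 18 \text{ months}

  where P_0 is the computing power at some starting point. Fifty years is about 33 doubling periods, so power grows by a factor of roughly 2^{33} \approx 10^{10}, which is why plotting the logarithm of P against time yields the straight line described above.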

  Exponential growth is often hard to grasp, since our minds think linearly. It is so gradual that you sometimes cannot experience the change at all. But over decades, it can completely alter everything around us.

  According to Moore’s law, every Christmas your new computer games are almost twice as powerful (in terms of the number of transistors) as those from the previous year. Furthermore, as the years pass, this incremental gain becomes monumental. For example, when you receive a birthday card in the mail, it often has a chip that sings “Happy Birthday” to you. Remarkably, that chip has more computer power than all the Allied forces of 1945. Hitler, Churchill, or Roosevelt might have killed to get that chip. But what do we do with it? After the birthday, we throw the card and chip away. Today, your cell phone has more computer power than all of NASA back in 1969, when it placed two astronauts on the moon. Video games, which consume enormous amounts of computer power to simulate 3-D situations, use more computer power than mainframe computers of the previous decade. The Sony PlayStation of today, which costs $300, has the power of a military supercomputer of 1997, which cost millions of dollars.

  We can see the difference between linear and exponential growth of computer power when we analyze how people viewed the future of the computer back in 1949, when Popular Mechanics predicted that computers would grow linearly into the future, perhaps only doubling or tripling with time. It wrote: “Where a calculator like the ENIAC today is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1,000 vacuum tubes and weigh only 1½ tons.”

  (Mother Nature appreciates the power of the exponential. A single virus can hijack a human cell and force it to create several hundred copies of itself. Growing by a factor of 100 in each generation, one virus can generate 10 billion viruses in just five generations. No wonder a single virus can infect the human body, with trillions of healthy cells, and give you a cold in just a week or so.)
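
  The arithmetic behind this example is simple, assuming, as stated above, about 100 new copies per generation:

      100^5 = (10^2)^5 = 10^{10}

  or roughly 10 billion viruses after five generations.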

  Not only has the amount of computer power increased, but the way that this power is delivered has also radically changed, with enormous implications for the economy. We can see this progression, decade by decade:

  • 1950s. Vacuum tube computers were gigantic contraptions filling entire rooms with jungles of wires, coils, and steel. Only the military was rich enough to fund these monstrosities.

  • 1960s. Transistors replaced vacuum tubes, and mainframe computers gradually entered the commercial marketplace.

  • 1970s. Integrated circuit boards, containing hundreds of transistors, created the minicomputer, which was the size of a large desk.

  • 1980s. Chips, containing tens of millions of transistors, made possible personal computers that could fit inside a briefcase.

  • 1990s. The Internet connected hundreds of millions of computers into a single, global computer network.

  • 2000s. Ubiquitous computing freed the chip from the computer, so chips were dispersed into the environment.

  So the old paradigm (a single chip inside a desktop computer or laptop connected to the Internet) is being replaced by a new paradigm (thousands of chips scattered inside every artifact, such as furniture, appliances, pictures, walls, cars, and clothes, all talking to one another and connected to the Internet).

  When these chips are inserted into an appliance, it is miraculously transformed. When chips were inserted into typewriters, they became word processors. When inserted into telephones, they became cell phones. When inserted into cameras, they became digital cameras. Pinball machines became video games. Phonographs became iPods. Airplanes became deadly Predator drones. Each time, an industry was revolutionized and reborn. Eventually, almost everything around us will become intelligent. Chips will be so cheap they will even cost less than the plastic wrapper and will replace the bar code. Companies that do not make their products intelligent may find themselves driven out of business by their competitors that do.

  Of course, we will still be surrounded by computer monitors, but they will resemble wallpaper, picture frames, or family photographs, rather than computers. Imagine all the pictures and photographs that decorate our homes today; now imagine each one being animated, moving, and connected to the Internet. When we walk outside, we will see pictures move, since moving pictures will cost as little as static ones.

  The destiny of computers—like other mass technologies such as electricity, paper, and running water—is to become invisible, that is, to disappear into the fabric of our lives, to be everywhere and nowhere, silently and seamlessly carrying out our wishes.

  Today, when we enter a room, we automatically look for the light switch, since we assume that the walls are electrified. In the future, the first thing we will do on entering a room is to look for the Internet portal, because we will assume the room is intelligent. As novelist Max Frisch once said, “Technology [is] the knack of so arranging the world that we don’t have to experience it.”

  Moore’s law also allows us to predict the evolution of the computer into the near future. In the coming decade, chips will be combined with supersensitive sensors, so that they can detect diseases, accidents, and emergencies and alert us before they get out of control. They will, to a degree, recognize the human voice and face and converse in a formal language. They will be able to create entire virtual worlds that we can only dream of today. Around 2020, the price of a chip may also drop to about a penny, which is the cost of scrap paper. Then we will have millions of chips distributed everywhere in our environment, silently carrying out our orders.

  Ultimately, the word computer itself will disappear from the English language.

  In order to discuss the future progress of science and technology, I have divided each chapter into three periods: the near future (today to 2030), the midcentury (2030 to 2070), and finally the far future (2070 to 2100). These time periods are only rough approximations, but they show the time frame for the various trends profiled in this book.

  The rapid rise of computer power by the year 2100 will give us power like that of the gods of mythology we once worshipped, enabling us to control the world around us by sheer thought. Like the gods of mythology, who could move objects and reshape life with a simple wave of the hand or nod of the head, we too will be able to control the world around us with the power of our minds. We will be in constant mental contact with chips scattered in our environment that will then silently carry out our commands.

  I remember once watching an episode from Star Trek in which the crew of the starship Enterprise came across a planet inhabited by the Greek gods. Standing in front of them was the towering god Apollo, a giant figure who could dazzle and overwhelm the crew with godlike feats. Twenty-third-century science was powerless to spar with a god who ruled the heavens thousands of years ago in ancient Greece. But once the crew recovered from the shock of encountering the Greek gods, they soon realized that there must be a source of this power, that Apollo must simply be in mental contact with a central computer and power plant, which then executed his wishes. Once the crew located and destroyed the power supply, Apollo was reduced to an ordinary mortal.

  This was just a Hollywood tale. However, by extending the radical discoveries now being made in the laboratory, scientists can envision the day when we, too, may use telepathic control over computers to give us the power of this Apollo.

  INTERNET GLASSES AND CONTACT LENSES

  Today, we can communicate with the Internet via our computers and cell phones. But in the future, the Internet will be everywhere—in wall screens and furniture, on billboards, and even in our glasses and contact lenses. When we blink, we will go online.

  There are several ways we can put the Internet on a lens. The image can be flashed from our glasses directly through the lens of our eyes and onto our retinas. The image could also be projected onto the lens, which would act as a screen. Or a tiny display might be attached to the frame of the glasses, like a small jeweler’s lens. As we peer into the glasses, we see the Internet, as if looking at a movie screen. We can then manipulate it with a handheld device that controls the computer via a wireless connection. We could also simply move our fingers in the air to control the image, since the computer recognizes the position of our fingers as we wave them.

  For example, since 1991, scientists at the University of Washington have worked to perfect the virtual retinal display (VRD), in which red, green, and blue laser light is shone directly onto the retina. With a 120-degree field of view and a resolution of 1,600 × 1,200 pixels, the VRD can produce a brilliant, lifelike image comparable to that seen in a motion picture theater. The image can be generated using a helmet, goggles, or glasses.

  Back in the 1990s, I had a chance to try out these Internet glasses. It was an early version created by the scientists at the Media Lab at MIT. It looked like an ordinary pair of glasses, except there was a cylindrical lens about ½ inch long, attached to the right-hand corner of the lens. I could look through the glasses without any problem. But if I tapped the glasses, then the tiny lens dropped in front of my eye. Peering into the lens, I could clearly make out an entire computer screen, seemingly only a bit smaller than a standard PC screen. I was surprised how clear it was, almost as if the screen were staring me in the face. Then I held a device, about the size of a cell phone, with buttons on it. By pressing the buttons, I could control the cursor on the screen and even type instructions.

  In 2010, for a Science Channel special I hosted, I journeyed down to Fort Benning, Georgia, to check out the U.S. Army’s latest “Internet for the battlefield,” called the Land Warrior. I put on a special helmet with a miniature screen attached to its side. When I flipped the screen over my eyes, suddenly I could see a startling image: the entire battlefield with X’s marking the location of friendly and enemy troops. Remarkably, the “fog of war” was lifted, with GPS sensors accurately locating the position of all troops, tanks, and buildings. By clicking a button, I could rapidly change the image, putting the Internet at my disposal on the battlefield, with information concerning the weather, disposition of friendly and enemy forces, and strategy and tactics.

  A much more advanced version would have the Internet flashed directly through our contact lenses by inserting a chip and LCD display into the plastic. Babak A. Parviz and his group at the University of Washington in Seattle are laying the groundwork for the Internet contact lens, designing prototypes that may eventually change the way we access the Internet.

  He foresees that one immediate application of this technology might be to help diabetics regulate their glucose levels. The lens will display an immediate readout of the conditions within their body. But this is just the beginning. Eventually, Parviz envisions the day when we will be able to download any movie, song, Web site, or piece of information off the Internet into our contact lens. We will have a complete home entertainment system in our lens as we lie back and enjoy feature-length movies. We will also be able to connect directly to our office computer through the lens and manipulate the files that flash before us. From the comfort of the beach, we will be able to teleconference to the office by blinking.

  With pattern-recognition software added, these Internet glasses will also recognize objects and even some people’s faces. Already, some software programs can recognize preprogrammed faces with better than 90 percent accuracy. Not just the name, but the biography of the person you are talking to may flash before you as you speak. At a meeting, this will end the embarrassment of bumping into someone you know whose name you can’t remember. This may also serve an important function at a cocktail party, where there are many strangers, some of whom are very important, but you don’t know who they are. In the future, you will be able to identify strangers and know their backgrounds even as you speak to them. (This is somewhat like the world as seen through robotic eyes in The Terminator.)

  This may alter the educational system. In the future, students taking a final exam will be able to silently scan the Internet via their contact lens for the answers to the questions, which would pose an obvious problem for teachers who often rely on rote memorization. This means that educators will have to stress thinking and reasoning ability instead.

  Your glasses may also have a tiny video camera in the frame, so it can film your surroundings and then broadcast the images directly onto the Internet. People around the world may be able to share in your experiences as they happen. Whatever you are watching, thousands of others will be able to see it as well. Parents will know what their children are doing. Lovers may share experiences when separated. People at concerts will be able to communicate their excitement to fans around the world. Inspectors will visit faraway factories and then beam the live images directly to the contact lens of the boss. (Or one spouse may do the shopping, while the other makes comments about what to buy.)

  Already, Parviz has been able to miniaturize a computer chip so that it can be placed inside the polymer film of a contact lens. He has successfully placed an LED (light-emitting diode) into a contact lens, and is now working on one with an 8 × 8 array of LEDs. His contact lens can be controlled by a wireless connection. He claims, “Those components will eventually include hundreds of LEDs, which will form images in front of the eye, such as words, charts, and photographs. Much of the hardware is semitransparent so that wearers can navigate their surroundings without crashing into them or becoming disoriented.” His ultimate goal, which is still years away, is to create a contact lens with 3,600 pixels, each one no more than 10 micrometers thick.

  One advantage of Internet contact lenses is that they use very little power, only a few millionths of a watt, so they are extremely energy efficient and won’t drain the battery. Another advantage is that the eye and optic nerve are, in some sense, a direct extension of the human brain, so we are gaining direct access to the human brain without having to implant electrodes. The eye and the optic nerve transmit information at a rate exceeding that of a high-speed Internet connection. So an Internet contact lens offers perhaps the most efficient and rapid access to the brain.

  Shining an image onto the eye via the contact lens is a bit more complex than for the Internet glasses. An LED can produce a dot, or pixel, of light, but you have to add a microlens so that it focuses directly onto the retina. The final image would appear to float about two feet away from you. A more advanced design that Parviz is considering is to use microlasers to send a supersharp image directly onto the retina. With the same technology used in the chip industry to carve out tiny transistors, one can also etch tiny lasers of the same size, making the smallest lasers in the world. Lasers that are about 100 atoms across are in principle possible using this technology. Like transistors, you could conceivably pack millions of lasers onto a chip the size of your fingernail.

  DRIVERLESS CAR

  In the near future, you will also be able to safely surf the Web via your contact lens while driving a car. Commuting to work won’t be such an agonizing chore, because cars will drive themselves. Already, driverless cars, using GPS to locate their position within a few feet, can drive for hundreds of miles. The Pentagon’s Defense Advanced Research Projects Agency (DARPA) sponsored a contest, called the DARPA Grand Challenge, in which laboratories were invited to submit driverless cars for a race across the Mojave Desert to claim a $1 million prize. DARPA was continuing its long-standing tradition of financing risky but visionary technologies.

 
