Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100


by Michio Kaku


  If you are driving a car in a foreign land, all the gauges will appear on your contact lens in English, so you will never have to glance down to see them. You will see the road signs, with rapid translations, along with explanations of any object nearby, such as tourist attractions.

  A hiker, camper, or outdoorsman will know not just his position in a foreign land but also the names of all the plants and animals, and will be able to see a map of the area and receive weather reports. He will also see trails and camping sites that may be hidden by brush and trees.

  Apartment hunters will be able to see what is available as they walk down the street or drive by in a car. Their lens will display the price, the amenities, etc., of any apartment or house that’s for sale.

  And gazing at the night sky, you will see the stars and all the constellations clearly delineated, as if you were watching a planetarium show, except that the stars you see are real. You will also see where galaxies, distant black holes, and other interesting astronomical sights are located and be able to download interesting lectures.

  In addition to letting you see through objects and visit foreign lands, augmented vision will be essential whenever you need very specialized information at a moment’s notice.

  For example, if you are an actor, musician, or performer who has to memorize large amounts of material, in the future you will see all the lines or music in your lens. You won’t need teleprompters, cue cards, sheet music, or notes to remind you. You will not need to memorize anything anymore.

  Other examples include:

  • If you are a student and missed a lecture, you will be able to download lectures given by virtual professors on any subject and watch them. Via telepresence, an image of a real professor could appear in front of you and answer any questions you may have. You will also be able to see demonstrations of experiments, videos, etc., via your lens.

  • If you are a soldier in the field, your goggles or headset may give you all the latest information, maps, enemy locations, direction of enemy fire, instructions from superiors, etc. In a firefight with the enemy, when bullets are whizzing by from all directions, you will be able to see through obstacles and hills and locate the enemy, since drones flying overhead can identify their positions.

  • If you are a surgeon performing a delicate emergency operation, you will be able to see inside the patient (via portable MRI machines) and through the body (via sensors moving inside it), and access all medical records and videos of previous operations.

  • If you are playing a video game, you can immerse yourself in cyberspace in your contact lens. Although you are in an empty room, you can see all your friends in perfect 3-D, experiencing some alien landscape as you prepare to do battle with imaginary aliens. It will be as if you are on the battlefield of an alien planet, with ray blasts going off all around you and your buddies.

  • If you need to look up any athlete’s statistics or sports trivia, the information will spring instantly into your contact lens.

  This means you will no longer need a cell phone, clock or watch, or MP3 player. All the icons on your various handheld devices will be projected onto your contact lenses, so that you can access them anytime you want. Phone calls, music, Web sites, etc., could all be accessed this way. Many of the appliances and gadgets you have at home could be replaced by augmented reality.

  Another scientist pushing the boundary of augmented reality is Pattie Maes of the MIT Media Laboratory. Instead of using special contact lenses, glasses, or goggles, she envisions projecting a computer screen onto common objects in our environment. Her project, called SixthSense, involves wearing a tiny camera and projector around your neck, like a medallion, that can project the image of a computer screen on anything in front of you, such as the wall or a table. Pushing the imaginary buttons automatically activates the computer, just as if you were typing on a real keyboard. Since the image of a computer screen can be projected on anything flat and solid in front of you, you can convert hundreds of objects into computer screens.

  Also, you wear special plastic thimbles on your thumb and fingers. As you move your fingers, the computer executes instructions on the computer screen on the wall. By moving your fingers, for example, you can draw images onto the computer screen. You can use your fingers instead of a mouse to control the cursor. And if you put your hands together to make a square, you can activate a digital camera and take pictures.
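
  To make the gesture mechanism concrete, here is a minimal sketch of the marker-tracking idea behind such an interface, assuming colored fingertip thimbles and an ordinary webcam. It illustrates the general technique only; the color range and camera index are hypothetical, and this is not the actual SixthSense code.

    # Sketch: track a colored fingertip marker and use its position as a cursor.
    # Assumes OpenCV, a webcam at index 0, and a blue thimble (hypothetical range).
    import cv2
    import numpy as np

    LOWER = np.array([100, 150, 50])    # assumed HSV bounds for the marker color
    UPPER = np.array([130, 255, 255])

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)          # isolate the marker
        m = cv2.moments(mask)
        if m["m00"] > 0:                               # marker is visible
            x = int(m["m10"] / m["m00"])               # centroid = cursor position
            y = int(m["m01"] / m["m00"])
            cv2.circle(frame, (x, y), 8, (0, 255, 0), 2)
        cv2.imshow("marker tracker", frame)
        if cv2.waitKey(1) == 27:                       # press Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()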

  This also means that when you go shopping, your computer will scan various products, identify what they are, and then give you a complete readout of their contents, calorie content, and reviews by other consumers. Since chips will cost less than bar codes, every commercial product will have its own intelligent label you can access and scan.

  Another application of augmented reality might be X-ray vision, very similar to that found in Superman comics, made possible by a process called “backscattered X-rays.” If your glasses or contact lenses are sensitive to X-rays, it may be possible to peer through walls. As you look around, you will be able to see through objects, just as in the comic books. Every kid who first reads Superman comics dreams of being “faster than a speeding bullet, more powerful than a locomotive.” Thousands of kids don capes, jump off crates, leap into the air, and pretend to have X-ray vision, but X-ray vision is also a real possibility.

  One problem with ordinary X-rays is that you have to place X-ray film behind the object, expose the object to X-rays, and then develop the film. Backscattered X-rays solve these problems. First, X-rays emanating from a source bathe the room. They then bounce off the walls and pass through the object you want to examine from behind. Your goggles are sensitive to the X-rays that have passed through the object. Images seen via backscattered X-rays can be just as good as those found in the comics. (By increasing the sensitivity of the goggles, one can reduce the intensity of the X-rays and minimize any health risks.)

  UNIVERSAL TRANSLATORS

  In Star Trek, the Star Wars saga, and virtually all other science fiction films, remarkably, all the aliens speak perfect English. This is because of something called the “universal translator,” which allows earthlings to communicate instantly with any alien civilization, removing the inconvenience of tediously relying on sign language and primitive gestures.

  Although once considered unrealistically futuristic, versions of the universal translator already exist. This means that in the future, if you are a tourist in a foreign country talking to the locals, you will see subtitles in your contact lens, as if you were watching a foreign-language movie. You can also have your computer create an audio translation that is fed into your ears. If both people have universal translators, it may be possible for them to carry on a conversation, each speaking in their own language while hearing the translation in their ears. The translation won’t be perfect, since there are always problems with idioms, slang, and colorful expressions, but it will be good enough for you to understand the gist of what the other person is saying.

  There are several ways in which scientists are making this a reality. The first is to create a machine that can convert the spoken word into writing. In the mid-1990s, the first commercially available speech recognition machines hit the market. They could recognize up to 40,000 words with 95 percent accuracy. Since a typical, everyday conversation uses only 500 to 1,000 words, these machines are more than adequate. Once the transcription of the human voice is accomplished, each word is translated into another language via a computer dictionary. Then comes the hard part: putting the words into context, handling slang, colloquial expressions, etc., all of which requires a sophisticated understanding of the nuances of the language. The field is called CAT (computer-assisted translation).
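
  To see why that last step is the hard part, consider a toy sketch of the naive word-for-word stage of such a pipeline, written here in Python with a hypothetical four-word dictionary. Everything this lookup ignores, word order, idioms, and context, is precisely what computer-assisted translation must add.

    # Toy sketch: naive word-for-word translation via a dictionary lookup.
    # The dictionary below is hypothetical; real systems hold ~40,000 words.
    DICTIONARY = {"where": "dónde", "is": "está", "the": "la", "station": "estación"}

    def word_for_word(sentence: str) -> str:
        # Look up each word; pass unknown words through unchanged.
        return " ".join(DICTIONARY.get(w, w) for w in sentence.lower().split())

    print(word_for_word("Where is the station"))   # -> "dónde está la estación"
    # Passable here, but an idiom such as "it's raining cats and dogs" would be
    # rendered literally, which is why context modeling is the hard part.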

  Another way is being pioneered at Carnegie Mellon University in Pittsburgh. Scientists there already have prototypes that can translate Chinese into English, and English into Spanish or German. They attach electrodes to the neck and face of the speaker; these pick up the contraction of the muscles and decipher the words being spoken. Their work does not require any audio equipment, since the words can be mouthed silently. Then a computer translates these words and a voice synthesizer speaks them out loud. In simple conversations involving 100 to 200 words, they have attained 80 percent accuracy.

  “The idea is that you can mouth words in English and they will come out in Chinese or another language,” says Tanja Schultz, one of the researchers. In the future, it might be possible for a computer to lip-read the person you are talking to, making the electrodes unnecessary. So, in principle, two people could carry on a lively conversation even though they speak two different languages.

  In the future, language barriers, which once tragically prevented cultures from understanding one another, may gradually fall with this universal translator and Internet contact lens or glasses.

  Although augmented reality opens up an entirely new world, there are limitations. The problem will not be hardware; nor will bandwidth be a limiting factor, since fiber-optic cables can carry a nearly unlimited amount of information.

  The real bottleneck is software. Creating software can be done only the old-fashioned way: a human, sitting quietly in a chair with a pencil, paper, and laptop, is going to have to write the code, line by line, that makes these imaginary worlds come to life. One can mass-produce hardware and increase its power by piling on more and more chips, but you cannot mass-produce the brain. This means that a truly augmented world may take decades to arrive, perhaps not until midcentury.

  HOLOGRAMS AND 3-D

  Another technological advance we might see by midcentury is true 3-D TV and movies. Back in the 1950s, 3-D movies required that you put on clunky glasses with one blue lens and one red lens. This took advantage of the fact that the left eye and the right eye are slightly separated, each seeing the scene from a slightly different position; the movie screen displayed two images, one blue and one red. Since the glasses acted as filters that delivered a distinct image to each eye, the brain merged the two images and created the illusion of three dimensions. Depth perception, therefore, was a trick. (The farther apart your eyes are, the greater the depth perception. That is why some animals have eyes set far apart on their heads: to give them maximum depth perception.)
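
  The geometry behind this trick can be written down in one line. In the standard stereo-vision relation, two eyes (or cameras) separated by a baseline B, with effective focal length f, see a point offset between the two views by a disparity d, and that point lies at depth

    \[ Z = \frac{f\,B}{d} \]

  A larger baseline B produces a larger disparity for an object at the same depth, which is exactly why wider-set eyes give finer depth perception.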

  One improvement is to have 3-D glasses made of polarized glass, so that the left eye and the right eye are shown two different polarized images. In this way, one can see 3-D images in full color, not just in blue and red. Since light is a wave, it can vibrate up and down, or left and right. A polarized lens is a piece of glass that allows only one direction of vibration to pass through. Therefore, if your glasses have two polarized lenses with different directions of polarization, you can create a 3-D effect. A more sophisticated version of 3-D may be to have two different images flashed into each of our contact lenses.

  3-D TVs that require wearing special glasses have already hit the market. But soon, 3-D TVs will no longer require them, instead using lenticular lenses. The TV screen is specially made so that it projects two separate images at slightly different angles, one for each eye. Hence your eyes see separate images, giving the illusion of 3-D. However, your head must be positioned correctly; there are “sweet spots” where your eyes must lie as you gaze at the screen. (This takes advantage of a well-known optical illusion. In novelty stores, we see pictures that magically transform as we walk past them. This is done by taking two pictures, shredding each one into many thin strips, and then interspersing the strips, creating a composite image. Then a lenticular glass sheet with many vertical grooves is placed on top of the composite, each groove sitting precisely on top of two strips. The groove is specially shaped so that, as you gaze upon it from one angle, you can see one strip, but the other strip appears from another angle. Hence, by walking past the glass sheet, we see each picture suddenly transform from one into the other, and back again. 3-D TVs will replace these still pictures with moving images to attain the same effect without the use of glasses.)
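
  Here is a minimal sketch of the interleaving step just described, assuming the two views are available as NumPy image arrays. A real lenticular display would also have to match the strip width to the pitch of the grooves, which this sketch ignores.

    # Sketch: interleave two views column by column, as under a lenticular sheet.
    import numpy as np

    def interleave_views(left: np.ndarray, right: np.ndarray) -> np.ndarray:
        # left, right: (height, width, 3) images of the same scene taken from
        # slightly different angles; the output alternates their columns.
        assert left.shape == right.shape
        h, w, c = left.shape
        out = np.empty((h, 2 * w, c), dtype=left.dtype)
        out[:, 0::2] = left     # even columns: visible from one viewing angle
        out[:, 1::2] = right    # odd columns: visible from the other angle
        return out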

  But the most advanced version of 3-D will be holograms. Without using any glasses, you would see the precise wave front of a 3-D image, as if it were sitting directly in front of you. Holograms have been around for decades (they appear in novelty shops, on credit cards, and at exhibitions), and they are regularly featured in science fiction movies. In Star Wars, the plot was set in motion by a 3-D holographic distress message sent by Princess Leia to members of the Rebel Alliance.

  The problem is that holograms are very hard to create.

  Holograms are made by splitting a single laser beam in two. One beam illuminates the object you want to photograph; the light bounces off the object and falls onto a special screen. The second beam falls directly onto the screen. The mixing of the two beams creates a complex interference pattern containing the “frozen” 3-D image of the original object, and this pattern is captured on a special film on the screen. Then, by shining another laser beam through the screen, the image of the original object comes back to life in full 3-D.
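
  In standard notation, if O is the light wave arriving from the object and R is the reference beam, the film records the intensity of their sum:

    \[ I = |O + R|^2 = |O|^2 + |R|^2 + O R^{*} + O^{*} R \]

  The two cross terms are what separate a hologram from an ordinary photograph: they record the phase of the object wave, and with it the full 3-D wave front, which shining the reference beam back through the film reconstructs.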

  There are two problems with holographic TV. First, the image has to be flashed onto a screen. Sitting in front of the screen, you see the exact 3-D image of the original object. But you cannot reach out and touch the object. The 3-D image you see in front of you is an illusion.

  This means that if you are watching a 3-D football game on your holographic TV, no matter how you move, the image in front of you changes as if it were real. It might appear that you are sitting right at the 50-yard line, watching the game just inches from the football players. However, if you were to reach out to grab the ball, you would bump into the screen.

  The real technical problem that has prevented the development of holographic TV is information storage. A true 3-D image contains a vast amount of information, many times what is stored in a single 2-D image. Computers routinely process 2-D images, since the image is broken down into tiny dots, called pixels, each controlled by a tiny transistor. But to make a 3-D image move, you need to flash thirty images per second. A quick calculation shows that the information needed to generate moving 3-D holographic images far exceeds the capability of today’s Internet.
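
  A back-of-the-envelope version of that calculation, under the common assumption that a holographic screen needs pixels roughly the size of a wavelength of visible light, might run as follows; the numbers are illustrative, but the conclusion is robust.

    # Back-of-envelope: data rate for a small holographic display versus 2-D HDTV.
    # Assumption: holographic pixels must be ~1 wavelength of light across.
    wavelength_m = 0.5e-6        # ~0.5 micron, visible light
    screen_side_m = 0.1          # a modest 10 cm x 10 cm holographic screen
    fps = 30                     # thirty images per second, as in the text
    bytes_per_pixel = 1          # 8 bits per pixel, a generous simplification

    holo_pixels = (screen_side_m / wavelength_m) ** 2   # ~4e10 pixels per frame
    holo_rate = holo_pixels * bytes_per_pixel * fps     # bytes per second

    hd_rate = 1920 * 1080 * 3 * fps                     # HDTV: 3 bytes/pixel RGB

    print(f"hologram: {holo_rate / 1e12:.1f} TB/s, HDTV: {hd_rate / 1e6:.0f} MB/s")
    # -> roughly 1.2 TB/s versus under 200 MB/s: thousands of times more data.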

  By midcentury, this problem may be resolved as the bandwidth of the Internet expands exponentially.

  What might true 3-D TV look like?

  One possibility is a screen shaped like a cylinder or dome that you sit inside. When the holographic image is flashed onto the screen, we see the 3-D images surrounding us, as if they were really there.

  MIND OVER MATTER

  By the end of this century, we will control computers directly with our minds. Like Greek gods, we will think of certain commands and our wishes will be obeyed. The foundation for this technology has already been laid. But it may take decades of hard work to perfect it. This revolution is in two parts: First, the mind must be able to control objects around it. Second, a computer has to decipher a person’s wishes in order to carry them out.

  The first significant breakthrough was made in 1998, when scientists at Emory University and the University of Tübingen, in Germany, put a tiny glass electrode directly into the brain of a fifty-six-year-old man who was paralyzed after a stroke. The electrode was connected to a computer that analyzed the signals from his brain. The stroke victim was able to see an image of the cursor on the computer screen, and, by biofeedback, he learned to control the cursor’s position by thinking alone. For the first time, a direct connection was made between the human brain and a computer.

  The most sophisticated version of this technology has been developed at Brown University by neuroscientist John Donoghue, who has created a device called BrainGate to help people who have suffered debilitating brain injuries communicate. He created a media sensation and even made the cover of Nature magazine in 2006.

  Donoghue told me that his dream is to have BrainGate revolutionize the way we treat brain injuries by harnessing the full power of the information revolution. It has already had a tremendous impact on the lives of his patients, and he has high hopes of furthering this technology. He has a personal interest in this research because, as a child, he was confined to a wheelchair due to a degenerative disease and hence knows the feeling of helplessness.

  His patients include stroke victims who are completely paralyzed and unable to communicate with their loved ones, but whose brains are active. He has placed a chip, just 4 millimeters wide, on top of a stroke victim’s brain, in the area that controls motor movements. This chip is then connected to a computer that analyzes and processes the brain signals and eventually sends the message to a laptop.

  At first the patient has no control over the location of the cursor but can see where it is moving. By trial and error, the patient learns to control the cursor and, after several hours, can position it anywhere on the screen. With practice, the stroke victim is able to read and write e-mails and play video games. In principle, a paralyzed person should be able to perform any function that can be controlled by a computer.
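
  At its core, the decoding step in such a system is often a linear map from neural firing rates to cursor velocity, fit during exactly this kind of practice session. Here is a toy sketch with synthetic data standing in for real recordings; actual BrainGate decoders are considerably more sophisticated.

    # Toy sketch: fit a linear decoder from firing rates to cursor velocity.
    # Synthetic data stands in for real neural recordings (illustration only).
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_neurons = 500, 96      # 96 channels, as on a typical implant array

    true_w = rng.normal(size=(n_neurons, 2))    # hidden neurons-to-velocity map
    rates = rng.poisson(5.0, size=(n_samples, n_neurons)).astype(float)
    velocity = rates @ true_w + rng.normal(scale=5.0, size=(n_samples, 2))

    # Least-squares fit: which weighting of channels best predicts movement?
    w_hat, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

    # Decode a fresh instant of neural activity into a cursor-velocity command.
    new_rates = rng.poisson(5.0, size=(1, n_neurons)).astype(float)
    vx, vy = (new_rates @ w_hat)[0]
    print(f"decoded cursor velocity: ({vx:.2f}, {vy:.2f})")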

  Initially, Donoghue started with four patients: two who had spinal cord injuries, one who had suffered a stroke, and one who had ALS (amyotrophic lateral sclerosis). One of them, a quadriplegic paralyzed from the neck down, took only a day to master moving the cursor with his mind. Today, he can control a TV, move a computer cursor, play a video game, and read e-mail. Patients can also regain mobility by controlling a motorized wheelchair.

 
