
Inventing Iron Man


by E. Paul Zehr


  Who Makes the Plan?

  Two other parts of the brain play important roles in the control of movement. These areas are specifically involved in the planning and coordination of movement and go by the clever names of premotor and supplementary motor areas. To find them, locate the middle of your skull as you did for the motor cortex, then move your fingers forward by about the thickness of two or three fingers; you are now over regions right beside the primary motor cortex.

  To tackle the question of how we could interface with the brain in order to control machines, we turn to what information we can get from brain activity and how we can get it. This brings us to the concept of recording activity from the brain, so our next stop is to understand a little bit about electroencephalography, also known by its initials, EEG. We spoke earlier about Galvani, Volta, and electricity in the nervous system. Here we are talking about electrical activity in the brain. The activity of all those neurons generates electrical field potentials that can be measured by putting electrodes on the scalp. These noninvasive measurements were first reported in humans by the German scientist Hans Berger in 1929, but the concept of electrical activity in the brain was originally described by the Englishman Richard Caton in 1875. The key point is that brain activity, as captured in the EEG signal, changes depending on what you are doing. Both the size of the activity and the number of “spikes” you can see change, reflecting the overall activity of neurons in different parts of the brain. Although it gets a bit complicated, this EEG activity can be filtered and analyzed and then used as a control signal for computers and robotic devices. This is called a brain-computer interface and brings us back to the telepresence unit that Tony Stark created for the Iron Man armor. So this part of the Iron Man mythology is already a reality.
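
  To make this concrete, here is a minimal sketch, in Python, of the kind of filtering and analysis involved: bandpass one channel of EEG, measure its power, and turn that power into a crude on/off command. The sampling rate, frequency band, and threshold below are illustrative assumptions, not the parameters of any real brain-computer interface.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # assumed sampling rate, in samples per second

def band_power(eeg, low, high, fs=FS):
    """Bandpass-filter one EEG channel and return its mean power."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return np.mean(filtfilt(b, a, eeg) ** 2)

def decode_command(eeg, threshold=0.3):
    """Map 8-12 Hz (alpha band) power onto a crude binary command."""
    return "move" if band_power(eeg, 8.0, 12.0) > threshold else "rest"

# One second of synthetic "EEG": noise plus a 10 Hz rhythm
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
fake_eeg = rng.normal(scale=0.5, size=FS) + np.sin(2 * np.pi * 10 * t)
print(decode_command(fake_eeg))  # prints "move"
```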

  You can appreciate the input and output of the brain by actually stimulating the neurons to make them active. Electrical stimulation over the scalp or on the brain surface can be used. Or, a common research technique (and one that is now used clinically too) is to apply transcranial magnetic stimulation. Conveniently, a changing magnetic field induces an electric field, so we can use magnetic stimulation to activate neurons electrically. This involves placing a powerful magnetic coil over the part of the brain containing the neurons that control the muscles you’re interested in.

  Figure 3.6 shows me sitting in a chair with a magnetic coil placed over my scalp on the left side of my head. Since the pathways for motor output cross over to the other side at the brain stem and spinal cord, the cells in the left cortex control the muscles on the right side and vice versa. If I were to make a slowly increasing contraction with my forearm flexor muscles (the ones that pull my wrist in), I would slowly increase muscle activation and force production at the wrist. We can mimic this by steadily increasing the stimulation intensity. Three examples of different intensities of stimulation are shown at the bottom of the figure. You can see how the response of the muscles (called “motor evoked potentials,” or MEPs, and measured with electrodes over those muscles) increases as stimulator output goes up. This shows the clear relation between input and output. We could also do essentially the opposite: instead of stimulating the motor cortex and recording EMG from the muscles, we could record EEG activity from the somatosensory cortex while stimulating the skin on a body part, producing a “somatosensory evoked potential,” or SEP.
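
  In the laboratory, this input-output relation is often summarized by fitting a sigmoid “recruitment curve” to MEP size as a function of stimulator output. The sketch below does exactly that; the data values and starting guesses are made-up numbers for illustration, not measurements from figure 3.6.

```python
import numpy as np
from scipy.optimize import curve_fit

def recruitment_curve(intensity, mep_max, i50, slope):
    """Boltzmann sigmoid: MEP amplitude versus stimulator output."""
    return mep_max / (1.0 + np.exp((i50 - intensity) / slope))

# Hypothetical data: stimulator output (% of maximum) and MEP size (mV)
intensity = np.array([30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0])
mep = np.array([0.05, 0.10, 0.40, 1.20, 2.10, 2.60, 2.80])

params, _ = curve_fit(recruitment_curve, intensity, mep, p0=[3.0, 55.0, 5.0])
mep_max, i50, slope = params
print(f"plateau {mep_max:.2f} mV, half-maximum at {i50:.1f}% output")
```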

  In clinical neuroscience, tapping into brain commands for movement has been used to try to help people with certain neurological diseases that affect movement. The terrible disease amyotrophic lateral sclerosis (ALS, also known as Lou Gehrig’s disease) is one example. In ALS the lower motoneurons in the spinal cord progressively die. During this process, the person becomes steadily weaker, is eventually paralyzed, and survives only until the motoneurons controlling the muscles for breathing die. It is one of the most terrifying neurological diseases in my view. The remarkable thing about ALS, though, is that it largely spares the brain itself: the cortical cells we were discussing earlier keep generating activity that an EEG can measure. In an attempt to moderate the effects of the disease, scientists and clinicians can train people early in the course of ALS to use the EEG signal to control a computer cursor.

  Making a simple device such as this available to many has been the passion of physician and neuroscientist Jon Wolpaw. Since the early 1990s, Jon and his team at the Wadsworth Center of the New York State Department of Health in Albany have been developing a brain-machine interface system based on EEG brain-wave activity recorded from the scalp. Recently, this group has developed a brain-machine interface that can be taken into the homes of users. The Wadsworth device lets people paralyzed by end-stage ALS, or by other neurological disorders in which motor control is lost, communicate; brain activity is measured with a simple stretch cap containing electrodes like those used in clinical EEG.

  Figure 3.6. A transcranial magnetic brain stimulator activating the neurons in my brain that control my right wrist. Each stimulation caused an involuntary twitch of my wrist muscles. This twitch got larger (like trying to contract more forcefully) when the stimulator was turned up higher. Courtesy Richard Carson.

  The interface system measures brain activity while the person watches a computer display showing items from a standard PC keyboard. The interface then determines which keyboard item the person wants to use. This system can be used to write e-mails and operate any Windows-based PC software that can be controlled through a keyboard interface. Currently, the system still requires ongoing intervention and monitoring by experienced support staff, who must come to the user’s home and also remotely monitor activity. Much of the Wadsworth group’s effort now focuses on minimizing this need for costly technical support.
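
  In spellers of this general kind, rows and columns of the on-screen keyboard flash in turn, and the interface scores the brain’s response to each flash; the intended key sits at the intersection of the best-scoring row and column. Here is a rough sketch of that selection step. The six-by-six grid, 250-samples-per-second recording, and scoring window are typical assumptions of mine, not the Wadsworth group’s published pipeline.

```python
import numpy as np

def pick_key(epochs, labels, n_rows=6, n_cols=6):
    """
    epochs: (n_flashes, n_samples) EEG segments, one per flash.
    labels: which group flashed each time (rows 0-5, columns 6-11).
    Returns the (row, column) whose flashes evoked the largest response.
    """
    scores = np.zeros(n_rows + n_cols)
    for group in range(n_rows + n_cols):
        # crude score: mean amplitude roughly 250-450 ms after the
        # flash (samples 62-112 at 250 samples per second)
        scores[group] = epochs[labels == group][:, 62:112].mean()
    row = int(np.argmax(scores[:n_rows]))
    col = int(np.argmax(scores[n_rows:]))
    return row, col
```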

  Some people with ALS who are approaching complete paralysis are already using the Wadsworth brain-machine interface. They have been able to control a cursor on a screen to select letters and spell words. The idea is that this will be helpful when the disease progresses to the point that they can no longer speak, so participants are trained to use the devices while they still have some use of their limbs, leaving them well placed to rely on the interface in the late stages of the disease.

  At the other end of the spectrum, several toy and gaming companies have built similar devices into video game controllers. Mattel’s Mindflex and headsets from NeuroSky and Emotiv all use scalp electrodes to detect EEG activity and let players move objects or cursors in games.

  Brain-Machine Interfaces Put Thoughts into Action

  The basic concept of a brain-machine interface is to replace biological signaling connections with technological ones. When damage occurs in the nervous system, such as after a stroke or spinal cord injury, the normal signaling connections from brain to spinal cord are interrupted, which makes it difficult to activate muscles. As an example of the effect of trauma, let’s consider someone who experienced a spinal cord injury in the neck.

  The late Christopher Reeve (1952–2004) experienced a horrific spinal cord injury when he was thrown off a horse he was riding. He shattered two vertebrae (the bones of your spinal column) just below his head at the top of the neck. These were cervical vertebrae 1 and 2 (going from top to bottom you have seven cervical, twelve thoracic, and five lumbar vertebrae). Spinal cord injuries are graded in severity based on the level of the injury (higher is worse because more “downstream” parts of the spinal cord are affected) and on how “complete” it is, that is, how badly damaged the spinal cord is. An injury at the C1–C2 level is often fatal, because it affects the parts of the brainstem that control breathing and cardiovascular function. Christopher Reeve is a tragic example, but he is a good one to think about in a book about a superhero, since he played Superman in four major motion pictures from 1978 to 1987. After his accident, he required a ventilator to breathe and had no functional ability to activate any arm or leg muscles.

  If a fully developed brain-machine interface had existed, it could have been used to detect the motor signals in Christopher’s brain that encoded his intention to pick up an object. Perhaps a glass of something. Or a coffee mug. That would be a typical textbook example. However, I want to use the example of a New York Rangers hockey jersey—I will explain why in a minute. Using a brain-machine interface and a robotic arm, the command to pick up the jersey could be relayed to a computer controlling the robotic arm, and the controller would bring the jersey close to Christopher. As a point of reference, we are nowhere near having anything this complex at present—although researchers have been able to get a monkey to feed itself an orange using this kind of system. An ideal interface would—with no obvious delay or difficulty—take the thought about the action and transform it into actual action.

  Now, let me briefly come back to the reason for the New York Rangers jersey. I had the good fortune to meet Christopher Reeve in 2001 at an international spinal cord injury research conference held in Montreal, Quebec. He told us how much he liked the city of Montreal and how, in 1986, during the filming of Switching Channels, he wore a New York Rangers jersey to an NHL playoff hockey game between his Rangers and the Montreal Canadiens. Unfortunately for him, but fortunately for legions of Canadiens fans (such as me), the Rangers ran into a very hot, future Hall of Fame goalie named Patrick Roy and lost. Christopher explained his passion for hockey and his enjoyment of watching playoff hockey between New York and Montreal (two of the “original six” founding members of the NHL). He was presented with a Montreal Canadiens jersey by the conference organizers, which was what spurred the story I just related. So there.

  An interesting example of a brain-machine interface using implantable electrodes is the CyberKinetics BrainGate. BrainGate’s mission is stated as “advancing technological interfaces in order to help neurologically impaired people continue to communicate with others.” The objective appears to extend to activities such as controlling objects in the environment, including a telephone, television, or room lights. The basic BrainGate system includes an electrode sensor that is implanted into the motor cortex and connected to a computer interface that analyzes the recorded neuronal activity. The system simultaneously records electrical activity from many individual neurons using a silicon array about the size of a baby aspirin. That array contains one hundred electrode contacts, each a bit thinner than a strand of your hair. Figure 3.7 shows an anatomical model of the head with the electrode array inserted into the brain through a port in the skull. The other end of the cable runs to an interface that connects to a computer. The principle of operation is that the neuronal signals from the brain are interpreted and translated into cursor movements. This means the person can control a computer with thought, in a way similar to using wrist movement to shift a mouse to move a cursor. This is close to the concept of the NTU-150 telepresence armor that Tony created way back in 1993. That’s where (when?) we go next.
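
  A common way to do this translation in the research literature, and a reasonable stand-in for whatever BrainGate uses internally, is a linear decoder fit during a calibration session: find weights that map binned firing rates onto intended cursor velocities, then apply those weights in real time. A sketch, with invented array shapes:

```python
import numpy as np

def fit_decoder(firing_rates, intended_velocity):
    """
    firing_rates: (n_timesteps, n_neurons) binned spike counts.
    intended_velocity: (n_timesteps, 2) x/y velocity during calibration.
    Returns least-squares weights mapping rates (plus a bias) to velocity.
    """
    X = np.hstack([firing_rates, np.ones((len(firing_rates), 1))])
    weights, *_ = np.linalg.lstsq(X, intended_velocity, rcond=None)
    return weights

def decode_step(weights, rates):
    """One time step: firing rates in, x/y cursor velocity out."""
    return np.append(rates, 1.0) @ weights
```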

  Figure 3.7. Electrode arrays implanted in the brain using the “BrainGate” system. Courtesy Paul Wicks.

  Brain-Machine Interface and the Iron Man Neuromimetic Telepresence Unit

  Let’s return for a minute to Iron Man’s telepresence armor. Recall that the telepresence unit responds to control from the user. The main outline of how this is supposed to work is shown in the “Stark Enterprises Technical Database” and the “communication network schematic” in the 2008 War Machine graphic novel series. The graphic novel depicts a flow chart linking the robotic remote-controlled armor and the headset that Tony wore in “This Year’s Model” in Iron Man #190. The caption for it reads: “The diagram below represents the basic operation of information transfer between the User Interface Headset and the NTU-150. Because the actual process incorporates many hundreds of individual system checks, security interlock codes and neurological failsafe routines, the chart has been simplified to display only the primary system events.” So, now you know why it is such a streamlined schematic! There are indeed some great similarities between how the Iron Man NTU-150 system and a basic brain-machine interface work. Essentially, both involve extracting information about movement from brain activity, which is then processed into a command to control a device. By far the most complex behavior demonstrated to date has been monkeys learning to feed themselves oranges using a robotic arm controlled by brain activity. Human studies have not reached this level of sophistication, even with brain electrode systems. So, we are a good way off from being able to remotely control a robotic suit of armor with brain-derived signals!

  Another interesting tweak in the Iron Man system that is not yet practicable in real life is the information flow shown in the communication schematic coming back from the device (“from NTU-150”). This kind of “closed loop” control, in which sensation feeds back into the system, would be an absolute requirement for telepresence armor and similar devices to work in practice, but it is still a long way from being implemented.

  However, some recent work in a related area may one day pave the way for this kind of system. Deep brain stimulation is a technique that involves implanting electrodes into the base of the brain, usually into parts of the basal ganglia that are important for controlling movement. These areas of the brain are the main ones affected by the progressive neurological disorder of Parkinson’s disease. To help with the difficulty in producing movement and the tremors that are very common in this condition, many treatments are used, including drugs that affect dopamine systems. Deep brain stimulation changes the activity in this part of the brain and can be remarkably helpful in improving movement. The procedure involves setting the stimulator externally and then observing the effect on the user. Any changes in the stimulation have to be set externally, in what is known as “open loop” control. A better and more adaptable system would use closed loop control, which is essentially the way the Iron Man NTU-150 is likely meant to work. Until very recently, no deep brain stimulator included this concept. The Medtronic Neuromodulation Technology Research division has developed a preliminary system that can extract information on brain activity and use it to change the settings of the stimulator. Rather coolly, Stanslaski and colleagues reported that this sensing system is rather BASIC. As in Brain Activity Sensing Interfacing Computer. While this is still a long way from the NTU-150 telepresence armor, it is a direct step along a path heading in that direction.
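
  The contrast between the two control modes is easy to show in code. Below is a toy closed-loop update rule: a sensed signal (here, power in the beta band, which is abnormally strong in Parkinson’s disease) feeds back to nudge the stimulation amplitude toward a target. Everything in it, from the gain to the amplitude limits, is an invented illustration, not Medtronic’s design.

```python
def closed_loop_step(beta_power, stim_ma, target=1.0, gain=0.1,
                     min_ma=0.0, max_ma=3.0):
    """
    One update of a hypothetical closed-loop stimulator: raise the
    stimulation amplitude (mA) when sensed beta-band power is above
    target, lower it when below. In open-loop control, stim_ma would
    stay wherever the clinician set it until the next visit.
    """
    stim_ma += gain * (beta_power - target)
    return max(min_ma, min(max_ma, stim_ma))

# Simulated readings: the controller settles the amplitude over time
amp = 1.5  # clinician-set starting amplitude, in mA
for beta in [1.8, 1.6, 1.3, 1.1, 1.0]:
    amp = closed_loop_step(beta, amp)
```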

  Related to these steps is the concept of an optical neuroprosthesis. Figure 3.8 summarizes different approaches to supplementing vision in visually impaired people. These images come from the work of Eduardo Fernandez and colleagues and represent a fascinating parallel to what others have done with cochlear implants to restore hearing. At the top of the figure, three different “approaches” are outlined. Normally, visual information flows from the retina via ganglion cells into the optic nerve and eventually to the visual cortex in the occipital lobe. Panel A shows the idea of using a neuroprosthetic eye to connect with neurons (ganglion cells) in the retina. Panel B takes the approach of directly activating the optic nerve (which carries the output from the ganglion cells), and panel C shows the concept of connecting directly to the visual cortex. This last idea is highlighted in the bottom panel, where the most “high-tech” approach (at least in appearance) is shown. In this case a camera in the lens of the glasses captures visual information and, after processing, feeds it directly into the visual cortex by way of the cortical implant shown at the back of the head. This would be like taking the visor information from Iron Man and feeding it directly into Tony Stark’s brain. Or, to borrow an example from Star Trek: The Next Generation, like using Geordi’s visor to send visual information to his brain. This is staggering stuff, and exploration in this field continues at a rapid pace.

  Figure 3.8. Visual neuroprosthetic interfaces. Panels A–C show different approaches and locations for “tapping” in to the flow of information in the visual system. The bottom image shows an approach that uses video camera inputs from glasses, which then activate the visual cortex of the brain. Courtesy Fernandez et al. (2005).
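
  At its core, the camera-to-cortex path in the bottom panel reduces each video frame to a coarse grid of stimulation levels, one per implanted electrode. The toy sketch below does that reduction; the ten-by-ten grid and the brightness-to-intensity mapping are inventions for illustration, not the resolution of any real implant.

```python
import numpy as np

def frame_to_electrodes(frame, grid=(10, 10)):
    """
    Reduce a grayscale camera frame (2-D array of 0-255 values) to
    per-electrode stimulation levels for a hypothetical cortical grid.
    Each electrode gets the mean brightness of its patch of the image,
    scaled to a 0-1 stimulation intensity.
    """
    bands = np.array_split(frame, grid[0], axis=0)
    levels = np.array([[patch.mean()
                        for patch in np.array_split(band, grid[1], axis=1)]
                       for band in bands])
    return levels / 255.0

frame = np.random.randint(0, 256, size=(480, 640))
print(frame_to_electrodes(frame).shape)  # (10, 10)
```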

  I have been dwelling on this system and example in Iron Man for so long because it gets to the heart of whether Tony could really become Iron Man: the robotic control of the suit. Instead of a remote-controlled suit of armor, Tony would have to use a brain-machine interface to control the suit. But how difficult would that suit be to control? And would your body like it? We are going to answer those questions in the next chapter.

  The First Decades of Iron Man

  “He Lives! He Walks! He Conquers!”

  Just developing and learning how to use the Iron Man suit would take up the first five years of a journey to invent an Iron Man. The technology to build an articulated armor system currently exists. Tony Stark—or anyone else who wants to follow in his footsteps—would need about two years to adapt such technology to create the full body armor that we see with Iron Man. An additional four years would likely be required to strengthen and lighten the suit and then incorporate it all into a fully mobile passive system. Such a system would move like a high-tech suit of armor reminiscent of those worn by medieval knights, but with much more freedom of movement.

 
