The Tale of the Dueling Neurosurgeons
Along those lines, split-brain people can help illuminate certain emotional struggles we face. Consider P.S., the teenager who confabulated about chickens and shovels. In another experiment scientists flashed “girlfriend” to his right hemisphere. In classic split-brain fashion, he claimed he saw nothing; but in classic teenage fashion, he giggled and blushed. His left hand then used some nearby Scrabble tiles to spell L-I-Z. When asked why he’d done that, he said he didn’t know. He certainly wouldn’t do anything as stupid as like a girl. Tests also revealed conflicting desires in his right and left brain. P.S. attended a fancy finishing school in Vermont, and when asked what he wanted to do for a living, his left brain bid him say “Draftsman,” a respectable career. Meanwhile his left hand spelled out “automobile race[r]” with tiles. His brain even betrayed a red/blue political divide: post-Watergate, his left brain expressed sympathy for President Nixon, while his right brain hinted it was glad to see Tricky Dick go. When facing a crisis or controversy, we often talk about feeling torn or being of two minds. Perhaps those aren’t just metaphors.*
This left-right asymmetry within the brain affects how we read emotions in other people as well. Imagine simple line drawings of two half-smiley, half-frowny faces, one with the smile on the left side of the face, one with the frown on the left. In a literal sense, these faces are equal parts sad and happy. But to most people the emotion on the left side (from the viewer’s point of view) dominates, and determines the overall emotional tenor. That’s because whatever’s in your left visual field taps into the emotion-dominant and face-dominant right brain. Along those lines, if you bisect a person’s photograph and view each half independently, people usually think he “looks like” the left half more than the right half.
Artists have long exploited this left-right asymmetry to make their portraits more dynamic. Generally, the left half of someone’s face (the side controlled by the emotive right brain) is more expressive, and surveys in European and American art museums have found that something like 56 percent of men and 68 percent of women in portraits face the left side of the canvas and thereby show more of the left side of the face. Crucifixion scenes of Jesus suffering on the cross showed an even stronger bias, with over 90 percent facing left. (By chance alone, you’d expect closer to 33 percent, since subjects could face left, right, or straight ahead.) And this bias held no matter whether the artists themselves were left- or right-handed. Whether this happens because the sitters prefer to display their more expressive left side or because the artists themselves find that side more interesting isn’t clear. But the bias seems universal: it shows up even in high school yearbook photos. A leftward pose also allows the artist to center the sitter’s left eye on the canvas. In this position most of her face appears on the canvas’s left side, where the face-hungry right hemisphere can study it.
There are exceptions to this leftward bias in portraiture, but even these are telling. The highly ambidextrous Leonardo often broke convention and drew right-facing profiles. But perhaps his most classic piece, the Mona Lisa, faces left. Another exception is that self-portraits often face right. Artists tend to paint self-portraits in a mirror, however, which makes the left half of the face appear on the right side of the canvas. So this “exception” might actually confirm the bias. Finally, one study found that prominent scientists, at least in their official portraits for the Royal Society in England, usually face right. Perhaps they simply preferred to seem cooler and less emotional—more the stereotypical rationalist.
In contrast to portraits, art in general doesn't show a leftward bias, at least not in all cultures. In Western paintings, the so-called glance curve—the line the eye naturally follows—does often travel left to right. In art from east Asia, the glance curve more often runs right to left, more in line with reading habits there. A similar bias exists in theater: in Western theaters, as soon as the curtain rises, audiences look left in anticipation; in Chinese theaters, audiences swivel right.
The reason we show a left-right preference for some things (portraits) but not others (landscapes) probably traces back to our evolutionary heritage as animals. Animals can safely ignore most left-right differences in the environment: a scene and its mirror image are more or less identical with regard to food, sex, and shelter. Even smart and discriminating animals—such as rats, who can distinguish squares from rectangles pretty easily—struggle to tell mirror images apart. And human beings, being more animal than not, can be similarly oblivious to left/right differences, even with our own bodies. Russian drill sergeants in the 1800s got so fed up with illiterate peasants not knowing left from right that they'd tie straw to one leg of recruits, hay to the other, then bark, "Straw, hay, straw, hay!" to get them to march in step. Even brainiacs like Sigmund Freud and Richard Feynman admitted to having trouble telling right and left apart. (As a mnemonic, Freud made a quick writing motion with his right hand; Feynman peeked at a mole on his left.) There's also a famous (right-facing) portrait of Goethe showing him with two left feet, and Picasso apparently shrugged at (mis)printed reversals of his works, even when his signature ran the wrong way.
So why then do humans notice any left-right differences? In part because of faces. We're social creatures, and because of our lateralized brains, a right half-grin doesn't quite come off the same as a left half-grin. But the real answer lies in reading and writing. Preliterate children often reverse asymmetric letters like S and N because their brains can't tell the difference. Illiterate artisans who made woodblocks for books in medieval times were bedeviled by the same problem, and their Ƨs and Иs added a clownish levity to dry Latin manuscripts. Only the continual practice we get when reading and writing allows us to remember that these letters slant the way they do. In fact, in all likelihood only the advent of written scripts a few millennia ago forced the human mind to pay much attention to left versus right. It's one more way that literacy changed our brains.
Of the three great “proving otherwise”s in Sperry’s career, the split-brain work was the most fruitful and the most fascinating. It made Sperry a scientific celebrity and brought colleagues from around the world to his lab. (Although not a schmoozer, Sperry did learn to host a decent party, with folk dancing and a drink called split-brain punch—presumably so named because a few glasses would cleave your mind in two.) The split-brain work entered the popular consciousness as well. Writer Philip K. Dick drew on split-brain research for plot ideas, and the entire educational theory of “left-brain people” versus “right-brain people” derives (however loosely) from Sperry and crew.
Sperry’s early proving otherwises probably deserved their own Nobel Prizes, but the split-brain work finally catapulted him to the award in 1981. He shared it with David Hubel and Torsten Wiesel, who’d proved how vision neurons work, thanks to a crooked slide. As lab rats, none of the three had much use for formal attire, and Hubel later recalled hearing a knock on his hotel room door just before the Nobel ceremony in Stockholm. Sperry’s son was standing there, his dad’s white bow tie limp in his hand: “Does anyone have any idea what to do with this?” Hubel’s youngest son, Paul, nodded. Paul had played trumpet in a youth symphony back home and knew all too well about tuxedos. He ended up looping and knotting the bow ties for the geniuses.
Winning a Nobel didn’t quench Sperry’s ambitions. By the time he won the prize, in fact, he’d all but abandoned his split-brain research to pursue that eternal MacGuffin of neuroscience, the mind-body problem. Like many before him, Sperry didn’t believe that you could reduce the mind to mere chirps of neurons. But neither did he believe in dualism, the notion that the mind can exist independently of the brain. Sperry argued instead that the conscious mind was an “emergent property” of neurons.
An example of an emergent property is wetness. Even if you knew every last quantum factoid about H₂O molecules, you'd never be able to predict that sticking your hand into a bucket of water feels wet. Massive numbers of particles must work together for that quality to emerge. The same goes for gravity, another property that surfaces almost magically on macro scales. Sperry argued that our minds emerge in an analogous way: that it takes large numbers of neurons, acting in coordinated ways, to stir a conscious mind to life.
Most scientists agree with Sperry up to this point. More controversially, Sperry argued that the mind, although immaterial, could influence the physical workings of the brain. In other words, pure thoughts somehow had the power to bend back and alter the molecular behavior of the very neurons that gave rise to them. Somehow, mind and brain reciprocally influence each other. It’s a bracing idea—and, if true, might explain the nature of consciousness and even provide an opening for free will. But that’s a doozy of a somehow, and Sperry never conjured up any plausible mechanism for it.
Sperry died in 1994 thinking his work on consciousness and the mind would be his legacy. Colleagues begged to differ, and some of them think back on Sperry’s final years (as with Wilder Penfield’s late work) with a mixture of disbelief and embarrassment. As one scientist commented, work on the fuzzier aspects of consciousness repels everyone but “fools and Nobel laureates.” Nevertheless, Sperry was right about one thing: explaining how human consciousness emerges from the brain has always been—and remains today—the defining problem of neuroscience.
CHAPTER TWELVE
The Man, the Myth, the Legend
The ultimate goal of neuroscience is to understand consciousness. It’s the most complicated, most sophisticated, most important process in the human brain—and one of the easiest to misunderstand.
September 13, 1848, proved a lovely fall day, bright and clear with a little New England bite. Around 4:30 p.m., when the mind might start wandering, a railroad foreman named Phineas Gage filled a drill hole with gunpowder and turned his head to check on his men. Victims in the annals of medicine almost always go by initials or pseudonyms. Not Gage: his is the most famous name in neuroscience. How ironic, then, that we know so little else about the man.
The Rutland and Burlington Railroad Company was clearing away some rock outcroppings near Cavendish, in central Vermont, that fall, and had hired a gang of Irishmen to blast their way through. While good workers, the men also loved brawling and boozing and shooting guns, and needed kindergarten-level supervision. That’s where the twenty-five-year-old Gage came in: the Irishmen respected his toughness, business sense, and people skills, and they loved working for him. Before September 13, in fact, the railroad considered Gage the best foreman in its ranks.
As foreman, Gage had to determine where to place the drill holes, a job that was half geology, half geometry. The holes reached a few feet deep into the black rock and had to run along natural joints and rifts to help blow the rock apart. After the hole was drilled, the foreman sprinkled in gunpowder, then tamped the powder down, gently, with an iron rod. This completed, he snaked a fuse into the hole. Finally an assistant poured in sand or clay, which got tamped down hard, to confine the bang to a tiny space. Most foremen used a crowbar for tamping, but Gage had commissioned his own rod from a blacksmith. Instead of a crowbar’s elongated S, Gage’s rod was straight and sleek, like a javelin. It weighed 13¼ pounds and stretched three feet seven inches long (Gage stood five-six). At its widest the rod had a diameter of 1¼ inches, although the last foot—the part Gage held near his head when tamping—tapered to a point.
Around 4:30 Gage’s crew apparently distracted him; they were loading some busted rock onto a cart, and it was near quitting time, so perhaps they were a-whooping and a-hollering. Gage had just finished pouring some powder into a hole, and turned his head. Accounts differ about what happened next. Some say Gage tried to tamp the gunpowder down with his head still turned, and scraped his iron against the side of the hole, creating a spark. Some say Gage’s assistant (perhaps also distracted) failed to pour the sand into the hole, and when Gage turned back he smashed the rod down hard, thinking he was packing inert material. Regardless, a spark shot out somewhere in the dark cavity, and the tamping iron reversed thrusters.
Gage was likely speaking at that instant, with his jaw open. The iron entered point first, striking Gage point-blank below the left cheekbone. The rod destroyed an upper molar, pierced the left eye socket, and passed behind the eye into his brainpan. At this point things get murky. The size and position of the brain within the skull, as well as the size and position of individual features within the brain itself, vary from person to person—brains vary as much as faces do. So no one knows exactly what got damaged inside Gage’s brain (a point worth remembering). But the iron did enter the underbelly of his left frontal lobe and plow through the top of his skull, exiting where babies have their soft spots. After parabola-ing upward—it reportedly whistled as it flew—the rod landed twenty-five yards distant and stuck upright in the dirt, mumblety-peg-style. Witnesses described it as streaked with red and greasy to the touch from fatty brain tissue.
The rod’s momentum threw Gage backward and he landed hard. Amazingly, though, he claimed he never lost consciousness, not even for an eyeblink. He merely twitched a few times on the ground, and was talking again within a few minutes. He walked to a nearby oxcart and climbed in, and someone grabbed the reins and giddyupped. Despite the injury Gage sat upright for the mile-long trip into Cavendish, then dismounted with minimal assistance at the hotel where he was lodging. He took a load off in a chair on the porch and even chatted with passersby, who could see a funnel of upturned bone jutting out of his scalp.
Two doctors eventually arrived. Gage greeted the first by angling his head and deadpanning, “Here’s business enough for you.” Doctor one’s “treatment” of Gage hardly merited the term: “the parts of the brain that looked good for something, I put back in,” he later recalled, and threw the “no good” parts out. Beyond that, he spent much of his hour with Gage questioning the veracity of the witnesses. You’re sure? The rod passed through his skull? On this point the doctor also queried Gage himself, who—despite all expectation—had remained utterly calm and lucid since the accident, betraying no discomfort, no pain, no stress or worry. Gage answered the doctor by pointing to his left cheek, which was smeared with rust and black powder. A two-inch flap there led straight into his brain.
Finally, Dr. John Harlow arrived around 6 p.m. Just twenty-nine years old, and a self-described “obscure country physician,” Harlow spent his days treating people who’d fallen from horses and gotten in carriage accidents, not neurological cases. He’d heard nothing of the new theories of localization simmering in Europe and had no inkling that, decades later, his new patient would become central to the field.
Like everyone else, Harlow didn’t believe Gage at first. Surely, the rod didn’t pass through your skull? But after receiving assurance that it had, Harlow watched Gage lumber upstairs to his hotel room and lie down on the bed—which pretty much ruined the linens, since his upper body was one big bloody mess. As for what happened next, readers with queasy stomachs should skip to the next paragraph. (I’m not kidding.) Harlow shaved Gage’s scalp and peeled off the dried blood and gelatinous brains. He then extracted skull fragments from the wound by sticking his fingers in from both ends, Chinese-finger-trap-style. Throughout all this, Gage was retching every twenty minutes, mostly because blood and greasy bits of brain kept slipping down the back of his throat and gagging him. The violence of the heaving also caused “half a teacupful” of brain to ooze out the exit wound on top. Incredibly, even after tasting his own brains, Gage never got ruffled. He remained conscious and rational throughout. The only false note was Gage’s boast that he’d be back blasting rocks within two days.
The bleeding stopped around 11 p.m. Gage's left eyeball was still protruding a good half inch, and his head and arms remained heavily bandaged (he had flash burns up to his elbows). Harlow nevertheless allowed visitors the next morning, and Gage recognized his mother and uncle, a good sign. He remained stable over the next few days thanks to Harlow's diligent care, which included fresh dressings and cold compresses. But just when Harlow grew hopeful that Gage would survive, his condition deteriorated. His face puffed up, his brain swelled, and the wound, no doubt due to something beneath Harlow's fingernails, developed a mushrooming fungal infection. Worse, as his brain continued swelling, Gage started raving, demanding that someone find his pants so he could go outside. He soon lapsed into a coma, and at one point a local cabinetmaker measured him for a coffin.
Gage would indeed have died—of intracranial pressure, like Henri II three centuries before—if Harlow hadn’t performed emergency surgery and punctured the tissue inside his nose to drain the wound of pus and blood. Things were touch and go for a few weeks afterward, and Gage did lose sight in his left eye. (The lid remained sewn shut the rest of his life.) But he eventually stabilized and returned home to Lebanon, New Hampshire, in late November. In his case notes Harlow downplayed his role here and even quoted Ambroise Paré: “I dressed him, God healed him.” In reality it was Harlow’s dedicated care and his bravery in performing an emergency operation—something Paré had refused to do with Henri—that saved Phineas Gage.
Or did it? Harlow kept Gage alive, but Gage’s friends and family swore that the man who came home to Lebanon was not the same man who’d left Lebanon months before. True, most things were the same. He suffered some memory lapses (probably inevitable), but otherwise his basic mental faculties remained intact. It was his personality that had changed, and not for the better. Although resolute in his plans before the accident, this Gage was capricious, almost ADD, and no sooner made a plan than dropped it for another scheme. Although deferential to people’s wishes before, this Gage chafed at any restraint on his desires. Although a canny businessman before, this Gage lacked money sense: Harlow once tested Gage by offering him $1,000 for some random pebbles that Gage had picked out of a riverbed; Gage refused. And although a courteous and reverent man before, this Gage was foul-mouthed. (To be fair, you’d probably swear, too, if an iron rod had rocketed through your skull.) Harlow summed up Gage’s personality changes by saying, “The equilibrium or balance… between his intellectual faculties and his animal propensities seems to have been destroyed.” More pithily, friends said that Gage “was no longer Gage.”