The Field
Once he was convinced that they’d learned the routine, Lashley systematically set about trying to surgically blot out that memory. For all his criticism of the failings of other researchers, Lashley’s own surgical technique was a mess – a makeshift and hurried operation. His was a laboratory protocol that would have incensed any modern-day animal-rights champion. Lashley didn’t employ aseptic technique, largely because it wasn’t considered necessary for rats. He was a crude and sloppy surgeon, by any medical standard, possibly deliberately so, sewing up wounds with a simple stitch – a perfect recipe for brain infection in larger mammals – but no cruder than most brain researchers of the day. After all, none of Ivan Pavlov’s dogs survived his brain surgery, all succumbing to brain abscesses or epilepsy.2 Lashley sought to deactivate certain portions of his rats’ brains to find which part held the precious key to specific memories. To accomplish this delicate task he chose as his surgical instrument his wife’s curling iron – a curling iron! – and simply burned off the part he wished to remove.3
His initial attempts to find the seat of specific memories failed; the rats, though sometimes even physically impaired, remembered exactly what they’d been taught. Lashley fried more and more sections of brain; the rats still seemed to make it through the jumping stand. Lashley became even more liberal with the curling iron, working through one part of the brain to the next, but still it didn’t seem to have any effect on the rat’s ability to remember. Even when he’d injured the vast majority of the brains of individual rats – and a curling iron caused much more damage to the brain than any clean surgical cut – their motor skills might be impaired, and they might stagger disjointedly along, but the rats always remembered the routine.
Although they represented a failure of sorts, the results appealed to the iconoclast in Lashley. The rats had confirmed what he had long suspected. In his 1929 monograph Brain Mechanisms and Intelligence, a small work that had first gained him notoriety with its radical notions, Lashley had already elucidated his view that cortical function appeared to be equally potent everywhere.4 As he would later point out, the necessary conclusion from all his experimental work ‘is that learning just is not possible at all’.5 When it came to cognition, for all intents and purposes, the brain was a mush.6
For Karl Pribram, a young neurosurgeon who’d relocated to Florida just to do research with the great man, Lashley’s failures were something of a revelation. Pribram had bought Lashley’s monograph for ten cents second-hand, and when he first arrived in Florida, he hadn’t been shy about challenging it with the same fervor Lashley had reserved for many of his peers. Lashley had been stimulated by his bright upstart apprentice, whom he would eventually regard as the closest he ever had to a son.
All of Pribram’s own views about memory and the brain’s higher cognitive processes were being turned on their heads. If there was no one single spot where specific memories were stored – and Lashley had burnt, variously, every part of a rat’s brain – then our memories and possibly other higher cognitive processes – indeed, everything that we term ‘perception’ – must somehow be distributed throughout the brain.
In 1948, Pribram, who was 29 at the time, accepted a position at Yale University, which had the best neuroscience laboratory in the world. His intention was to study the functions of the frontal cortex of monkeys, in an attempt to understand the effects of the frontal lobotomies being performed on thousands of patients at the time. Teaching and carrying out research appealed to him far more than the lucrative life of a neurosurgeon; at one point some years later he would turn down a $100,000 salary at New York’s Mt Sinai for the relatively impoverished salary of a professor. Like Edgar Mitchell, Pribram always thought of himself as an explorer rather than a doctor or healer; as an eight-year-old he’d read over and over – at least a dozen times – accounts of Admiral Byrd’s expeditions to the North Pole. America itself represented a new frontier to conquer for the boy, who’d arrived at that age from Vienna. Pribram was the son of a famous biologist who’d relocated his family to the US in 1927 because he’d felt that Europe, war-torn and impoverished after the First World War, was no place to raise a child. As an adult, possibly because he’d been so slight of build and not really the stuff of hearty physical exploration (in later life he’d resemble an elfin version of Albert Einstein, with the same majestic drapery of white shoulder-length hair), Karl chose the human brain as his exploratory terrain.
After leaving Lashley and Florida, Pribram would spend the next 20 years pondering the mysteries surrounding the organization of the brain, perception and consciousness. He would set up his own experiments on monkeys and cats, painstakingly carrying out systems studies to work out what part of the brain does what. His laboratory was among the first to identify the location of cognitive processes, emotion and motivation, and he was extraordinarily successful. His experiments clearly showed that all these functions had a specific address in the brain – a finding that Lashley was hard-pressed to believe.
What puzzled him most was a fundamental paradox: cognitive processing had very precise locations in the brain, but within these locations, the processing itself seemed to be determined by, as Lashley had put it, ‘masses of excitations … without regard to particular nerve cells’.7 It was true that parts of the brain performed specific functions, but the actual processing of the information seemed to be carried out by something more basic than particular neurons – certainly something that was not particular to any group of cells. For instance, storage appeared to be distributed throughout a specific location and sometimes beyond. But through what mechanism was this possible?
Like Lashley’s, much of Pribram’s early work on higher perception appeared to contradict the received wisdom of the day. The accepted view of vision – for the most part still accepted today – is that the eye ‘sees’ by having a photographic image of the scene or object reproduced onto the cortical surface of the brain, the part which receives and interprets vision like an internal movie projector. If this were true, the electrical activity in the visual cortex should mirror precisely what is being viewed – and this is true to some extent at a very gross level. But in a number of experiments, Lashley had discovered that you could sever virtually all of a cat’s optic nerve without apparently interfering with its ability to see what it was doing. To his astonishment, the cat apparently continued to see every detail, since it was still able to carry out complicated visual tasks. If there were something like an internal movie screen, it was as though the experimenters had just demolished all but a few inches of the projector, and yet all of the movie was as clear as it had been before.8
In other experiments, Pribram and his associates had trained a monkey to press one bar if he was shown a card with a circle on it and another bar if shown a card with stripes. Electrodes planted in the monkey’s visual cortex would register the brain waves produced when the monkey saw the circle or the stripes. Pribram was testing simply to see whether the brain waves differed according to the shape on the card. What he discovered instead was that the monkey’s brain not only registered a difference related to the design on the card, but also recorded whether he’d pressed the right bar – and even his intention to press the bar before he did. This result convinced Pribram that control was being formulated in higher areas of the brain and sent down to the more primary receiving stations. Something far more complicated must have been happening than was widely believed at the time – the idea that we see and respond to outside stimuli through a simple tunnel of information, flowing in from our sense organs to the brain and out from the brain to our muscles and glands.9
Pribram spent a number of years conducting studies measuring the brain activities of monkeys as they performed certain tasks, to see if he could isolate any further the precise location where patterns and colors were being perceived. His studies kept coming up with yet more evidence that brain response was distributed in patches all across the cortex. In another study, this time of newborn cats which had been given contact lenses with either vertical or horizontal stripes, Pribram’s associates found that the behavior of the horizontally oriented cats wasn’t markedly different from that of the vertically oriented ones, even though their brain cells were now oriented either horizontally or vertically. This meant that perception couldn’t be occurring through line detection.10 His experiments and those of others like Lashley were at odds with many of the prevailing neural theories of perception. Pribram was convinced that no images were being projected internally and that there must be some other mechanism allowing us to perceive the world as we do.11
Pribram had moved from Yale to the Center for Advanced Study in the Behavioral Sciences at Stanford University in 1958. He might never have formulated any alternative view if his friend Jack Hilgard, a noted psychologist at Stanford, hadn’t been updating a textbook in 1964 and needed an up-to-date account of perception. The problem was that the old notions about electrical ‘image’ formation in the brain – the supposed correspondence between images in the world and the brain’s electrical firing – had been disproved by Pribram, and his own monkey studies made him extremely dubious about the latest, most popular theory of perception – that we know the world through line detectors. Just focusing on a face would require a huge new computation by the brain any time you moved a few inches away from it. Hilgard kept pressing him. Pribram hadn’t a clue as to what kind of theory he could give his friend, and he kept racking his brain to offer up some positive angle. Then one of his colleagues chanced across an article in Scientific American by Sir John Eccles, the noted Australian physiologist, who postulated that imagination might have something to do with microwaves in the brain. Just a week later another article appeared, written by Emmett Leith, an engineer at the University of Michigan, about split laser beams and optical holography, a new technology.12
It had been right there all along, right in front of his nose. This was just the metaphor he’d been looking for. The concept of wave fronts and holography seemed to hold the answer to questions he’d been posing for 20 years. Lashley himself had formulated a theory of wave interference patterns in the brain but abandoned it because he couldn’t envision how they could be generated in the cortex.13 Eccles’ ideas appeared to solve that problem. Pribram now thought that the brain must somehow ‘read’ information by transforming ordinary images into wave interference patterns, and then transform them again into virtual images, just as a laser hologram does. The other mystery solved by the holographic metaphor was memory. Rather than being precisely located anywhere, memory would be distributed everywhere, so that each part contained the whole.
During a UNESCO meeting in Paris, Pribram met up with Dennis Gabor, who’d won the Nobel prize for his discovery of holography, made in the 1940s during his quest to produce a microscope powerful enough to see an atom. Gabor, the first engineer to win the Nobel prize in physics, had been working over the mathematics of light rays and wavelengths. In the process he’d discovered that if you split a light beam, photograph objects with it and store this information as wave interference patterns, you get a better image of the whole than you do with the flat two dimensions of recording point-to-point intensity, the method used in ordinary photography. For his mathematical calculations, Gabor had used a set of calculus equations called Fourier transforms, named after the French mathematician Joseph Fourier, who’d developed the technique early in the nineteenth century. Fourier first began work on his system of analysis, which has gone on to be an essential tool of modern-day mathematics and computing, when working out, at Napoleon’s request, the optimum interval between shots of a cannon so that the barrel wouldn’t overheat. Fourier’s method was eventually found to be able to break down patterns of any complexity and describe them precisely in a mathematical language capturing the relationships between waves. Any optical image could be converted into the mathematical equivalent of interference patterns, the information that results when waves superimpose on each other. In this technique, you also transfer something that exists in time and space into ‘the spectral domain’ – a kind of timeless, spaceless shorthand for the relationship between waves, measured as energy. The other neat trick of the equations is that you can also use them in reverse, to take these components representing the interactions of waves – their frequency, amplitude and phase – and use them to reconstruct any image.14
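For the technically curious, Fourier’s trick can be watched in action with a few lines of code. The Python sketch below (an illustration only – the signal and its frequencies are invented for the example) breaks a pattern down into its wave components, each with a frequency, amplitude and phase, and then runs the equations in reverse to rebuild the original pattern exactly.

```python
import cmath
import math

def dft(xs):
    """Decompose a sequence into its wave components (the spectral domain)."""
    n = len(xs)
    return [sum(xs[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(cs):
    """Run the transform in reverse: rebuild the sequence from its waves."""
    n = len(cs)
    return [sum(cs[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

# A toy "pattern": two superimposed sine waves of different frequencies.
n = 64
signal = [math.sin(2 * math.pi * 3 * t / n) + 0.5 * math.sin(2 * math.pi * 7 * t / n)
          for t in range(n)]

spectrum = dft(signal)    # into the timeless, spaceless spectral domain...
restored = idft(spectrum) # ...and back into the time domain, losslessly

assert all(abs(a - b) < 1e-9 for a, b in zip(signal, restored))
```

Nothing is lost in the round trip: the frequencies, amplitudes and phases together are simply another way of writing down the same pattern.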
The evening they were together, Pribram and Gabor drank a particularly memorable bottle of Beaujolais and covered three napkins with complicated Fourier equations, to work out how the brain might be capable of managing this intricate task of responding to certain wave-interference patterns and then converting this information into images.15 There were numerous fine points to be worked out in the laboratory; the theory wasn’t complete. But they were convinced of one thing: perception occurred as a result of a complex reading and transforming of information at a different level of reality.
To understand how this is possible, it’s useful to understand the special properties of waves, which are best illustrated in a laser optical hologram, the metaphor that so captured Pribram’s imagination. In a classic laser hologram, a laser beam is split. One portion is reflected off an object – a china teacup, say – the other is reflected by several mirrors. They are then reunited and captured on a piece of photographic film. The result on the plate – which represents the interference pattern of these waves – resembles nothing more than a set of squiggles or concentric circles.
However, when you shine a light beam from the same kind of laser through the film, what you see is a fully realized, incredibly detailed, three-dimensional virtual image of the china teacup floating in space (an example is the image of Princess Leia projected by R2-D2 in the first movie of the Star Wars series). The mechanism by which this works has to do with the properties of waves that enable them to encode information, and also the special quality of a laser beam, which casts a pure light of only a single wavelength, acting as a perfect source for creating interference patterns. When your split beams both arrive on the photographic plate, one half provides the pattern of the light source, the other carries the configuration of the teacup, and the two interfere. By shining the same type of light source on the film, you pick up the image that has been imprinted. The other strange property of holography is that each tiny portion of the encoded information contains the whole of the image, so that if you chopped up your photographic plate into tiny pieces and shone a laser beam on any one of them, you would get a full image of the teacup.
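This ‘whole in every part’ property has a simple mathematical cousin that can be sketched in code. In the spectral domain, every wave coefficient is computed from the entire original pattern, so even a small surviving fragment of the spectral record still rebuilds the whole pattern – only blurred, much as a shard of the holographic plate still yields the entire teacup, only dimmer. In the hypothetical Python sketch below (not from the book; the bump position and the number of retained wave components are arbitrary), a pattern is encoded into 64 wave components, all but 9 are thrown away, and the whole pattern still comes back, softened but in the right place.

```python
import cmath
import math

n = 64
# The "pattern": a single smooth bump centered at position 20.
signal = [math.exp(-((t - 20) / 4.0) ** 2) for t in range(n)]

# Encode into the spectral domain, but keep only 9 of the 64 wave components.
kept = range(-4, 5)
coeffs = {k: sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
          for k in kept}

# Rebuild using just that surviving fragment of spectral information.
blurry = [sum(coeffs[k] * cmath.exp(2j * math.pi * k * t / n) for k in kept).real / n
          for t in range(n)]

# The whole bump is still there, in the right place, just softened.
peak = max(range(n), key=lambda t: blurry[t])
assert abs(peak - 20) <= 2
```

Because each retained coefficient was computed from every point of the original, no single piece of the bump is lost: the cost of discarding components is resolution, not coverage.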
Although the metaphor of the hologram was important to Pribram, the real significance of his discovery was not holography per se, which conjures up a mental image of the three-dimensional ghostly projection, or a universe which is only our projection of it. It was the unique ability of quantum waves to store vast quantities of information in a totality and in three dimensions, and the ability of our brains to read this information and from it to create the world. Here was finally a mechanical device that seemed to replicate the way that the brain actually worked: how images were formed, how they were stored and how they could be recalled or associated with something else. Most important, it gave a clue to the biggest mystery of all for Pribram: how you could have localized tasks in the brain but process or store them throughout the larger whole. In a sense, holography is just convenient shorthand for wave interference – the language of The Field.
The final important aspect of Pribram’s brain theory, which would come a little later, had to do with another discovery of Gabor’s. He’d applied the same mathematics Heisenberg had used in quantum physics to the problem of communications – working out the maximum amount that a telephone message could be compressed over the Atlantic cable. Pribram and some of his colleagues went on to develop his hypothesis with a mathematical model demonstrating that this same mathematics also describes the processes of the human brain. He had come up with something so radical that it was almost unthinkable – a hot, living thing like the brain functioned according to the weird world of quantum theory.
When we observe the world, Pribram theorized, we do so on a much deeper level than the sticks-and-stones world ‘out there’. Our brain primarily talks to itself and to the rest of the body not with words or images, or even bits or chemical impulses, but in the language of wave interference: the language of phase, amplitude and frequency – the ‘spectral domain’. We perceive an object by ‘resonating’ with it, getting ‘in synch’ with it. To know the world is literally to be on its wavelength.
Think of your brain as a piano. When we observe something in the world, certain portions of the brain resonate at certain specific frequencies. At any point of attention, our brain presses only certain notes, which trigger strings of a certain length and frequency.16 This information is then picked up by the ordinary electrochemical circuits of the brain, just as the vibrations of the strings eventually resonate through the entire piano.
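The piano metaphor can be made concrete with a small amount of code. In the toy Python sketch below (an illustration of resonance in general, not a model of the brain; the frequencies are invented), a bank of reference waves plays the role of the piano strings: each is correlated with an incoming signal, and only the reference that is literally ‘on the same wavelength’ responds strongly.

```python
import math

n = 128
# The incoming "sound": a wave that completes 9 cycles over the window.
incoming = [math.sin(2 * math.pi * 9 * t / n) for t in range(n)]

def response(freq):
    """How strongly a reference 'string' of this frequency resonates
    with the incoming signal (phase-insensitive correlation)."""
    c = sum(incoming[t] * math.cos(2 * math.pi * freq * t / n) for t in range(n))
    s = sum(incoming[t] * math.sin(2 * math.pi * freq * t / n) for t in range(n))
    return math.hypot(c, s)

# Try every "string" from 1 to 19 cycles; only one rings out.
best = max(range(1, 20), key=response)
assert best == 9
```

The mismatched reference waves average out to nearly nothing over the window, while the matching one reinforces at every point – which is all that ‘getting in synch’ with a frequency means mathematically.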
What had occurred to Pribram is that when we look at something, we don’t ‘see’ the image of it in the back of our heads or on the back of our retinas, but in three dimensions and out in the world. It must be that we are creating and projecting a virtual image of the object out in space, in the same place as the actual object, so that the object and our perception of the object coincide. This would mean that the art of seeing is one of transforming. In a sense, in the act of observation, we are transforming the timeless, spaceless world of interference patterns into the concrete and discrete world of space and time – the world of the very apple you see in front of you. We create space and time on the surface of our retinas. As with a hologram, the lens of the eye picks up certain interference patterns and then converts them into three-dimensional images. It requires this type of virtual projection for you to reach out and touch an apple where it really is, not in some place inside your head. If we are projecting images all the time out in space, our image of the world is actually a virtual creation.