Connectome

by Sebastian Seung


  You might not be convinced by this speculation, but it illustrates a general moral of the Rakic–Gould story: We should be cautious about blanket denials of regeneration, rewiring, or other types of connectome change. A denial has to be accompanied by qualifications if it’s to be taken seriously. Furthermore, the denial may well cease to be valid under some other conditions.

  As neuroscientists have learned more about regeneration, simply counting the number of new neurons has become too crude. We’d like to know why certain neurons survive while others are eliminated. In the Darwinian theory, the survivors are the ones that manage to integrate into the network of old neurons by making the right connections. But we have little idea what “right” means, and there is little prospect of finding out unless we can see connections. That’s why connectomics will be important for figuring out whether and how regeneration serves learning.

  I’ve talked about four types of connectome change—reweighting, reconnection, rewiring, and regeneration. The four R’s play a large role in improving “normal” brains and healing diseased or injured ones. Realizing the full potential of the four R’s is arguably the most important goal of neuroscience. Denials of one or more of them were the basis of past claims of connectome determinism. We now know that such claims are too simplistic to be true, unless they come with qualifications.

  Furthermore, the potential of the four R’s is not fixed. Earlier I mentioned that the brain can increase axonal growth after injury. In addition, damage to the neocortex is known to attract newly born neurons, which migrate into the zone of injury and become another exception to the “no new neurons” rule. These effects of injury are mediated by molecules that are currently being researched. In principle we should be able to promote the four R’s through artificial means, by manipulating such molecules. That’s the way genes exert their influence on connectomes, and future drugs will do the same. But the four R’s are also guided by experiences, so finer control will be achieved by supplementing molecular manipulations with training regimens.

  This agenda for a neuroscience of change sounds exciting, but will it really put us on the right track? It rests on certain important assumptions that are plausible but still largely unverified. Most crucially, is it true that changing minds is ultimately about changing connectomes? That’s the obvious implication of theories that reduce perception, thought, and other mental phenomena to patterns of spiking generated by patterns of neural connections. Testing these theories would tell us whether connectionism really makes sense. It’s a fact that the four R’s of connectome change exist in the brain, but right now we can only speculate about how they’re involved in learning. In the Darwinian view, synapses, branches, and neurons are created to endow the brain with new potential to learn. Some of this potential is actualized by Hebbian strengthening, which enables certain synapses, branches, and neurons to survive. The rest are eliminated to clear away unused potential. Without careful scrutiny of these theories, it’s unlikely that we’ll be able to harness the power of the four R’s effectively.

  To critically examine the ideas of connectionism, we must subject them to empirical investigation. Neuroscientists have danced around this challenge for over a century without having truly taken it on. The problem is that the doctrine’s central quantity—the connectome—has been unobservable. It has been difficult or impossible to study the connections between neurons, because the methods of neuroanatomy have only been up to the coarser task of mapping the connections between brain regions.

  We’re getting there—but we have to speed up the process radically. It took over a dozen years to find the connectome of the worm C. elegans, and finding connectomes in brains more like our own is of course much more difficult. In the next part of this book I’ll explore the advanced technologies being invented for finding connectomes and consider how they’ll be deployed in the new science of connectomics.

  Part IV: Connectomics

  8. Seeing Is Believing

  Smelling whets the appetite, and listening saves relationships, but seeing is believing. More than any other sense, we trust our eyes to tell us what is real. Is this just a biological accident, the result of the particular way in which our sense organs and brains happened to evolve? If our dogs could share their thoughts by more than a bark or a wag of the tail, would they tell us that smelling is believing? As a bat dines on an insect, captured in the darkness of night by following the echoes of ultrasonic chirps, does it pause to think that hearing is believing?

  Or perhaps our preference for vision is more fundamental than biology, based instead on the laws of physics. The straight lines of light rays, bent in an orderly fashion by a lens, preserve spatial relationships between the parts of an object. And images contain so much information that—until the development of computers—they could not easily be manipulated to create forgeries.

  Whatever the reason, seeing has always been central to our beliefs. In the lives of many Christian saints, visions of God—apocalyptic or serene—often triggered the conversion of pagans into believers. Unlike religion, science is supposed to employ a method based on the formulation and empirical testing of hypotheses. But science, too, can be propelled by visual revelations, the sudden and simple sight of something amazing. Sometimes science is just seeing.

  In this chapter I’ll explore the instruments that neuroscientists have created to uncover a hidden reality. This might seem like a distraction from the real subject at hand—the brain—but I hope to convince you otherwise. Military historians dwell on the cunning gambits of daring generals, and the uneasy dance of soldiers and statesmen. Yet in the grand scheme of things, such tales may matter less than the backstory of technological innovation. Through the invention of the gun, the fighter plane, and the atomic bomb, weapon makers have repeatedly transformed the face of war more than any general ever did.

  Historians of science likewise glorify great thinkers and their conceptual breakthroughs. Less heralded are the makers of scientific instruments, but their influence may be more profound. Many of the most important scientific discoveries followed directly on the heels of inventions. In the seventeenth century Galileo Galilei pioneered telescope design, increasing magnifying power from 3× to 30×. When he pointed his telescope at the planet Jupiter, he discovered moons orbiting around it, which overturned the conventional wisdom that all heavenly bodies circled the Earth.

  In 1912 the physicist Lawrence Bragg showed how to use x-rays to determine the arrangement of atoms in a crystal, and three years later, at the tender age of twenty-five, he won the Nobel Prize for his work. Later on, x-ray crystallography enabled Rosalind Franklin, James Watson, and Francis Crick to discover the double-helix structure of DNA.

  Have you heard the joke about two economists walking down the street? “Hey, there’s a twenty-dollar bill lying on the sidewalk!” one of them says. “Don’t be silly,” says the other. “If there were, someone would have picked it up.” The joke makes fun of the efficient market hypothesis (EMH), the controversial claim that there exists no fair and certain method of investment that can outperform the average return for a financial market. (Bear with me—you’ll see the relevance soon.)

  Of course, there are uncertain ways of beating the market. You can glance at a news story about a company, buy stock, and gloat when it goes up. But this is no more certain than a good night in Vegas. And there are unfair ways of beating the market. If you work for a pharmaceutical company, you might be the first to know that a drug is succeeding in clinical trials. But if you buy stock in your company based on such nonpublic information, you could be prosecuted for insider trading.

  Neither of these methods fulfills the “fair” and “certain” criteria of the EMH, which makes the strong claim that no such method exists. Professional investors hate this claim, preferring to think they succeed by being smart. The EMH says that either they’re lucky or they’re unscrupulous.

  The empirical evidence for and against the EMH is complex, but the theoretical justification is simple: If new information indicates that a stock will appreciate in value, then the first investors to know that information will bid the price up. And thus, says the EMH, there are no good investment opportunities available, just as there are never (well, almost never) twenty-dollar bills lying on the sidewalk.

  What does this have to do with neuroscience? Here’s another joke: “Hey, I just thought of a great experiment!” one scientist says. “Don’t be silly,” says the other. “If it were a great experiment, someone would already have done it.” There’s an element of truth to this exchange. The world of science is full of smart, hard-working people. Great experiments are like twenty-dollar bills on the sidewalk: With so many scientists on the prowl, there aren’t many left. To formalize this claim, I’d like to propose the efficient science hypothesis (ESH): There exists no fair and certain method of doing science that can outperform the average.

  How can a scientist make a truly great discovery? Alexander Fleming discovered and named penicillin after finding that one of his bacterial cultures had accidentally become contaminated by the fungus that produces the antibiotic. Breakthroughs like this are serendipitous. If you want a more reliable method, it might be better to search for an “unfair” advantage. Technologies for observation and measurement might do the trick.

  After hearing rumors of the invention of the telescope in Holland, Galileo quickly built one of his own. He experimented with different lenses, learning how to grind glass himself, and eventually managed to make the best telescopes in the world. These activities uniquely positioned him to make astronomical discoveries, because he could examine the heavens using a device others didn’t have. If you’re a scientist who purchases instruments, you could strive for better ones than your rivals by excelling at fundraising. But you’d gain a more decisive advantage by building an instrument that money can’t buy.

  Suppose you think of a great experiment. Has it already been done? Check the literature to find out. If no one has done it, you’d better think hard about why not. Maybe it’s not such a great idea after all. But maybe it hasn’t been done because the necessary technologies did not exist. If you happen to have access to the right machines, you might be able to do the experiment before anyone else.

  My ESH explains why some scientists spend the bulk of their time developing new technologies rather than relying on those that they can purchase: They are trying to build their unfair advantage. In his 1620 treatise the New Organon, Francis Bacon wrote:

  It would be an unsound fancy and self-contradictory to expect that things which have never yet been done can be done except by means which have never yet been tried.

  I would strengthen this dictum to:

  Worthwhile things that have never yet been done can only be done by means that have never yet existed.

  It’s at those moments when new means exist—when new technologies have been invented—that we see revolutions in science.

  To find connectomes, we will have to create machines that produce clear images of neurons and synapses over a large field of view. This will be an important new chapter in the history of neuroscience, which is perhaps best seen not as a series of great ideas but as a series of great inventions, each of which surmounted a once insuperable barrier to observing the brain. It now seems trivial to say that the brain is made of neurons, but the path to this idea was tortuous. And for a simple reason—for a long time, it was impossible to see neurons.

  Living sperm were first observed in 1677 by Antonie van Leeuwenhoek, a Dutch textile merchant turned scientist. Leeuwenhoek made the discovery with his homebrew microscope, but he didn’t fully recognize its significance. He did not prove that the sperm, rather than the surrounding fluid in semen, were the agents of reproduction. And he had no inkling of the process of fertilization by which an egg and a sperm unite. But Leeuwenhoek’s work was epoch-making nonetheless: it paved the way for these later discoveries by his successors.

  Three years earlier, Leeuwenhoek had examined a droplet of lake water with his microscope. He saw tiny objects moving around and decided that they were alive. He called them “animalcules” and wrote a letter about them to the Royal Society of London. By now we are completely used to the idea of microscopic organisms and have difficulty imagining how much they must have stunned his contemporaries. At the time, though, Leeuwenhoek’s claims seemed so fantastic that they provoked suspicions of fraud. To allay these fears, he sent the Royal Society testimonial letters from eight eyewitnesses, including three clergymen, a lawyer, and a physician. After several years his claims were finally vindicated, and the Society honored him with membership.

  Leeuwenhoek is sometimes called the father of microbiology. This field turned out to have great practical significance in the nineteenth century, when scientists like Louis Pasteur and Robert Koch showed that microbial infection can cause disease. Microbiology was also critical in the development of the cell theory. This cornerstone of modern biology, formulated in the nineteenth century, holds that all organisms are composed of cells. Microscopic organisms are those that are composed of just a single cell.

  Most of the members of the Royal Society were wealthy men with the time to devote themselves to intellectual pursuits. Leeuwenhoek was not born rich, but by age forty he had secured enough income to turn his attention to science. He did not study at a university, and did not know Latin or Greek. How did this self-educated man from humble origins achieve so much?

  Leeuwenhoek did not invent the microscope; the credit goes to eyeglass craftsmen who worked at the end of the sixteenth century. Like today’s microscopes, the first ones combined multiple lenses, but they could magnify only 20 to 50 times. Leeuwenhoek’s microscopes delivered up to 10 times better magnification with a single, very powerful lens. We can’t be certain how he learned to make such outstanding lenses, because he kept his methods secret. This was Leeuwenhoek’s “unfair” advantage: He made better microscopes than the ones used by his rivals.

  When Leeuwenhoek died, his methods were lost. Later, in the eighteenth century, technical improvements made the multilens (“compound”) microscope more powerful than Leeuwenhoek’s. Scientists were able to see the structures of plant and animal tissues more clearly, which resulted in the acceptance of the cell theory in the nineteenth century. Yet there was one place where the theory ran into trouble: the brain. Microscopists could see the cell bodies of neurons and the branches extending from them. But they lost track of the branches after a short distance. All they could see was a dense tangle, and no one knew what happened there.

  The problem was resolved by a breakthrough in the second half of the nineteenth century. An Italian physician named Camillo Golgi invented a special method of staining brain tissue. Golgi’s method stains only a few neurons; it leaves almost all of them unstained and hence invisible. The resulting image (Figure 26) may still look a little crowded, but we can make out the shapes of individual neurons. Golgi’s scientific rival, the Spanish neuroanatomist Santiago Ramón y Cajal, was presumably viewing something like this in his microscope when he drew the image shown in Figure 1.

  Figure 26. Golgi staining of neurons in the cortex of a monkey

  Golgi’s new method was an extraordinary advance. To appreciate why, let’s imagine the branches of neurons as entangled strands of yellow spaghetti. (I introduced this metaphor earlier, but it’s now even more apt, given Golgi’s national origin.) Cooks with extremely bad eyesight see only a yellow mass on the plate, because the individual strands are too blurry to be distinguished. Now suppose that a single dark strand is mixed in with the others (see Figure 27, left). Even with blurry vision, it’s possible to follow the path of the dark strand (right).

  Figure 27. Why Golgi staining works: photograph of pasta before (left) and after (right) blurring

  As an invention, a microscope might seem more glamorous than a stain. Its metal and glass parts are impressive, and can be designed using the laws of optics. A stain isn’t much to look at; it might even smell bad. Stains are often discovered by chance rather than design. Actually, we still don’t know why Golgi’s stain marks only a small fraction of neurons. All we know is that it works. In any case, Golgi’s stain and others have played an important role in the history of neuroscience. “The gain in the brain lies mainly in the stain,” neuroanatomists like to say. Golgi’s is simply the most famous.

  Science can languish for a long time if the proper technology does not exist. Without the right kind of data, it can be impossible to make progress, no matter how many smart people are working on the problem. The nineteenth-century struggle to see neurons lasted until the invention of Golgi’s staining method, which was soon used most avidly, and most illustriously, by Cajal. In 1906 Golgi and Cajal voyaged to Stockholm to receive the Nobel Prize “in recognition of their work on the structure of the nervous system.” As is customary, both scientists gave special lectures describing their research. But rather than celebrate their joint honor, the two men took the opportunity to attack each other.

  They had long been embroiled in a bitter dispute. Golgi’s staining method had finally revealed neurons to the world, but the limited resolution of the microscope still left ambiguities. When Cajal looked in his microscope, he saw points at which two stained neurons contacted each other but still remained separate. When Golgi looked in his microscope, he saw such points as locations where neurons had fused together into a continuous network, forming a kind of supercell.

  By 1906 Cajal had convinced many of his contemporaries that a gap existed, but it was still unclear how neurons could communicate with each other if they were not physically continuous. Three decades later, Otto Loewi and Sir Henry Dale were awarded the Nobel Prize “for their discoveries relating to chemical transmission of nerve impulses.” They had found conclusive evidence that neurons can send messages by secreting neurotransmitter molecules, and receive messages by sensing them. The idea of a chemical synapse explained how two neurons could communicate across a narrow gap.

 
