Connectome

by Sebastian Seung


  9. Following the Trail

  The ancient Greeks told the story of King Minos, who kept a beautiful white bull for himself instead of offering it as a sacrifice. The gods, angry at his greed, punished Minos by driving his wife mad with lust for the bull. She gave birth to the Minotaur, a monster with two legs and two horns. Minos imprisoned her deadly offspring in the Labyrinth, a mazelike structure ingeniously constructed by the great engineer Daedalus. Eventually the hero Theseus came from Athens and killed the Minotaur. To find his way back out of the Labyrinth, he followed a thread supplied by his lover Ariadne, the daughter of Minos.

  Connectomics reminds me of this myth. Like the Labyrinth, the brain must deal with the consequences of destructive emotions such as greed and lust, while also inspiring acts of ingenuity and love. Try to imagine yourself traveling through the axons and dendrites of the brain, like Theseus navigating the twisting passages of the Labyrinth. Perhaps you are a protein molecule sitting on a molecular motor car running on a molecular track. You are being transported on the long journey from your birthplace, the cell body, to your destination, the outer reaches of the axon. You patiently sit and watch as the walls of the axon go by.

  If this journey sounds intriguing, let me invite you to embark on a virtual version. You will travel through images of the brain, rather than the brain itself. You’ll trace the path of an axon or dendrite through a stack of images collected by the machines described in Chapter 8. It’s a task essential for finding connectomes. In order to map the brain’s connections, you have to see which neurons are connected by synapses, and you can’t do it without knowing where the “wires” go.

  To find an entire connectome, though, you’d have to explore every passage in the brain’s labyrinth. To map just one cubic millimeter, you’d have to travel through miles of neurites and wade through a petabyte of images. Such laborious and careful analysis would be essential; a mere glance at the images would tell you nothing. This style of science seems far removed from Galileo’s sighting of the moons of Jupiter or Leeuwenhoek’s glimpse of sperm.

  Today, our notion of “science as seeing” is being stretched to the limit by current technologies. No single person can possibly comprehend all the images now being collected by automated instruments. But if technology created the problem, maybe it can also solve it. Perhaps computers could trace the paths of all those axons and dendrites through the images. If our machines did most of the work for us, we’d be able to see connectomes.

  The problem of dealing with huge quantities of data is not unique to connectomics. The world’s largest scientific project is the Large Hadron Collider (LHC), a circular tube constructed one hundred meters underground, inside a twenty-seven-kilometer-long tunnel between Lake Geneva and the Jura Mountains. The LHC accelerates protons to great speeds and smashes them together to probe the forces between elementary particles. At one location on its circumference sits a gigantic apparatus called the Compact Muon Solenoid. It’s designed to detect one billion collisions per second, of which one hundred are selected by computers that automatically sift through the data. Only these interesting events are recorded, but the data still flows at a torrential rate, as each event yields over one megabyte. The data is shipped to a network of supercomputers around the world for analysis.

  To find entire connectomes of mammalian brains, we will need microscopes that produce images at data rates greater than those of the LHC. Can we analyze the data quickly enough to keep up? The scientists who compiled the C. elegans connectome encountered a similar challenge. To their surprise, it took more effort to analyze the images than to collect them.

  In the mid-1960s, the South African biologist Sydney Brenner saw the possibility of using serial electron microscopy to map all the connections in a small nervous system. The term connectome had not been invented yet, and Brenner called the task “reconstruction of a nervous system.” Brenner was working at the MRC Laboratory of Molecular Biology in Cambridge, England. At that time, he and others at the lab were establishing C. elegans as a standard animal for research on genetics. It later became the first animal to have its genome sequenced, and thousands of biologists study C. elegans today.

  Brenner thought that C. elegans might also help us understand the biological basis of behavior. It did the standard things like feeding, mating, and laying eggs. It also gave canned responses to certain stimuli. For example, if you touched its head, it would recoil and swim away. Now suppose you found a worm that was incapable of one of these standard behaviors. If its offspring inherited the same problem, you could assume that the cause was a genetic defect, and try to pinpoint it. That kind of research would elucidate the relationship between genes and behaviors, which would already be valuable. But one could raise the stakes even further by examining the nervous systems of such mutant worms. Perhaps one would be able to identify particular neurons or pathways disrupted by the faulty gene. The prospect of studying the worm at all these levels—genes, neurons, and behavior—sounded truly exciting. But the whole plan hinged on something that Brenner did not have: a map of the normal worm’s nervous system. Without that, it would be difficult to discern what was different about the nervous systems of mutants.

  Brenner was aware of the early twentieth-century attempt of Richard Goldschmidt, a German-American biologist, to map the nervous system of another species of worm, Ascaris lumbricoides. Goldschmidt’s light microscope did not have enough resolution to show the branches of neurons clearly or to reveal synapses. Brenner decided to try something similar with C. elegans, but using the superior technology of the electron microscope and the ultramicrotome.

  C. elegans is just one millimeter long, much smaller than Ascaris, which can grow up to a foot in the intestines of its human hosts. Converting the entire C. elegans worm, like a tiny sausage, into slices thin enough for electron microscopy could be accomplished with only a few thousand cuts. Nichol Thomson, a member of Brenner’s team, found it impossible to slice up an entire worm without error, owing to the technical difficulties of the not-yet-automated slicing process, but he could manage a large fraction of a worm. Brenner decided to combine images from segments of several worms. It was a reasonable strategy because the worm’s nervous system is so standardized.

  Thomson sliced up worms until he had covered every region of the worm’s body at least once. The slices were placed one by one in an electron microscope and imaged (see Figure 32). This laborious process eventually yielded a stack of images representing the entire nervous system of C. elegans. All of the worm’s synapses were there.

  Figure 32. A slice of C. elegans

  You might think Brenner and his team were done at that point. Isn’t a connectome just the entirety of all synapses? In fact, they had only just begun. Although the synapses were all visible, their organization was still hidden. In effect, the researchers had collected a jumbled-up bag of synapses. To find the connectome, they needed to sort out which synapses belonged to which neurons. They couldn’t tell from a single image, which showed only two-dimensional cross-sections of neurons. But if they could follow the successive cross-sections of a single neuron through a sequence of images, they could determine which synapses belonged to it. And if this could be done for all the neurons, then the connectome would be found. In other words, Brenner’s team would know which neurons were connected to which other neurons.

  Again, think of a worm as a tiny sausage. But imagine this time that the sausage is stuffed with spaghetti. These spaghetti strands are its neurons, and our task is to trace the path of each one. Since we don’t have x-ray vision, we ask the butcher to cut the sausage into many thin slices. Then we lay all the slices flat and trace each strand by matching its cut pieces from slice to slice.
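  The matching step of this analogy can be written down as a toy procedure. The sketch below links each cut piece to the nearest piece in the next slice; real reconstruction matches two-dimensional shapes rather than center points, and all the names and the distance criterion here are illustrative assumptions, not the method Brenner's team used.

```python
# Toy sketch of tracing "spaghetti strands" across slices: each cut piece
# is linked to the nearest piece in the next slice. Ties and branching are
# ignored; this is an illustration of the matching idea only.

def trace_strands(slices, max_jump=1.0):
    """slices: one dict per slice, mapping piece_id -> (x, y) center.
    Returns a list of strands, each a list of (slice_index, piece_id)."""
    strands = []
    open_ends = {}  # piece_id in the previous slice -> index of its strand

    for z, pieces in enumerate(slices):
        new_open = {}
        for pid, (x, y) in pieces.items():
            # Find the closest open strand end in the previous slice.
            best, best_d = None, max_jump
            for prev_pid, s_idx in open_ends.items():
                px, py = slices[z - 1][prev_pid]
                d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
                if d < best_d:
                    best, best_d = s_idx, d
            if best is None:          # no match nearby: start a new strand
                strands.append([])
                best = len(strands) - 1
            strands[best].append((z, pid))
            new_open[pid] = best
        open_ends = new_open
    return strands
```

  If two strands pass closer together than `max_jump`, this naive matcher can assign pieces to the wrong strand, which is exactly why the slices must be thinner than the strands themselves.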

  To have any hope of tracing without errors, the slices must be extremely thin, less than the diameter of a spaghetti strand. Similarly, the slices of C. elegans had to be thinner than the branches of neurons, which can be less than 100 nanometers in diameter. Nichol Thomson cut slices about 50 nanometers thick—just thin enough to allow most branches of neurons to be traced reliably.

  John White, who was trained as an electrical engineer, attempted to computerize the analysis of the images, but the technology was too primitive. White and a technician named Eileen Southgate had to resort to manual analysis. Cross-sections of the same neuron were marked with the same number or letter, as shown in the two images in Figure 33. To trace a single neuron in its entirety, the researchers repeatedly wrote the same symbol on the appropriate cross-section in successive images, like Theseus unrolling Ariadne’s thread in the Labyrinth. Once the paths of neurons were traced, they went back to each synapse and noted the letters or numbers of the neurons involved in it. And in this way the C. elegans connectome slowly emerged.

  Figure 33. Tracing the branches of neurons by matching their cross-sections in successive slices
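  The final bookkeeping step, going back to each synapse and noting the symbols of the neurons involved, amounts to a simple lookup. The sketch below assumes a hypothetical data layout (cross-section identifiers and neuron symbols as strings); it is meant only to show how labeled cross-sections plus observed synapses yield a wiring diagram.

```python
# Sketch of the bookkeeping step: once every cross-section carries its
# neuron's symbol (the "thread" markings), each synapse observed between
# two cross-sections becomes an entry in the wiring diagram.

def build_connectome(label_of, synapses):
    """label_of: cross-section id -> neuron symbol.
    synapses: list of (pre_section, post_section) pairs seen in the images.
    Returns a dict mapping (pre_neuron, post_neuron) -> synapse count."""
    connectome = {}
    for pre, post in synapses:
        pair = (label_of[pre], label_of[post])
        connectome[pair] = connectome.get(pair, 0) + 1
    return connectome
```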

  In 1986 Brenner’s team published the connectome as an entire issue of the Philosophical Transactions of the Royal Society of London, a journal of the same society that had welcomed Leeuwenhoek as a member centuries before. The paper was titled “The Structure of the Nervous System of the Nematode Caenorhabditis elegans,” but its running head was the pithier “The Mind of a Worm.” The body of the text is a 62-page appetizer. The main course is 277 pages of appendices, which describe the 302 neurons of the worm along with their synaptic connections.

  As Brenner had hoped, the C. elegans connectome turned out to be useful for understanding the neural basis of the worm’s behaviors. For example, it helped identify the neural pathways important for behaviors like swimming away from a touch to the head. But only a small fraction of Brenner’s original ambitions were realized. It wasn’t for lack of images; Nichol Thomson had gathered plenty of them, from many worms. He had actually imaged worms with many types of genetic defects, but it was too laborious to analyze the images to detect the hypothesized abnormalities in their connectomes. Brenner had started out wanting to investigate the hypothesis that the “minds” of worms differ because their connectomes differ, but he had been unable to do so because his team had found only a single connectome, that of a normal worm.

  Finding even one connectome was by itself a monumental feat. Analyzing the images consumed over a dozen years of effort in the 1970s and 1980s—much more labor than was required to cut and image the slices. David Hall, another C. elegans pioneer, has made these images available online in a fascinating repository of information about the worm. (The vast majority of them remain unanalyzed today.) The toil of Brenner’s team served as a cautionary note, effectively warning other scientists, “Don’t try this at home.”

  The situation began to improve in the 1990s, when computers became cheaper and more powerful. John Fiala and Kristen Harris created a software program that facilitated the manual reconstruction of the shapes of neurons. The computer displayed images on a screen and allowed a human operator to draw lines on top of them using a mouse. This basic functionality, familiar to anyone who has used computers to create drawings, was then extended to allow a person to trace a neuron through a stack of images, drawing a boundary around each cross-section. As the operator worked, each image in the stack would become covered with many boundary drawings. The computer kept track of all the cross-section boundaries that belonged to each neuron, and displayed the results of the operator’s labors by coloring within the lines. Each neuron was filled with a different color, so that the stack of images resembled a three-dimensional coloring book. The computer could also render parts of neurites in three dimensions, as in the image shown in Figure 34.

  Figure 34. Three-dimensional rendering of neurite fragments reconstructed by hand

  With this process, scientists could do their work much more efficiently than Brenner’s team had in the C. elegans project. Images were now stored neatly on the computer, so researchers no longer had to deal with thousands of photographic plates. And using a mouse was less cumbersome than manual marking with felt-tip pens. Nevertheless, analyzing the images still required human intelligence and was still extremely time-consuming. Using their software to reconstruct tiny pieces of the hippocampus and the neocortex, Kristen Harris and her colleagues discovered many interesting facts about axons and dendrites. The pieces were so small, however, that they contained only minuscule fragments of neurons. There was no way to use them to find connectomes.

  Based on the experience of these researchers, we can extrapolate that manual reconstruction of just one cubic millimeter of cortex could take a million person-years, much longer than it would take to collect the electron microscopic images. Because of these daunting numbers, it’s clear that the future of connectomics hinges on automating image analysis.
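  The scale of that extrapolation can be made concrete with a little arithmetic. Taking the million person-years at face value, and assuming roughly 2,000 working hours per person-year (an illustrative figure, not from the text), the implied manual throughput is a fraction of a cubic micrometer of tissue per hour:

```python
# What "a million person-years per cubic millimeter" implies about manual
# throughput. The 2,000 working hours per year is an assumed round number.

volume_um3 = 1e9              # 1 cubic millimeter, in cubic micrometers
person_years = 1e6            # extrapolated manual effort for that volume
hours = person_years * 2000

throughput = volume_um3 / hours   # cubic micrometers per person-hour
print(throughput)                 # 0.5
```

  Half a cubic micrometer per hour is a volume smaller than many synapses, which conveys how hopeless purely manual reconstruction would be at this scale.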

  Ideally we’d have a computer, rather than a person, draw the boundaries of each neuron. Surprisingly, though, today’s computers are not very good at detecting boundaries, even some that look completely obvious to us. In fact, computers are not so good at any visual task. Robots in science fiction movies routinely look around and recognize the objects in a scene, but researchers in artificial intelligence (AI) are still struggling to give computers even rudimentary visual powers.

  In the 1960s researchers hooked up cameras to computers and attempted to build the first artificial vision systems. They tried to program a computer to turn an image into a line drawing, something any cartoonist could do. They figured it would be easy to recognize the objects in the drawing based on the shape of their boundaries. It was then that they realized how bad computers are at seeing edges. Even if the images were restricted only to stacks of children’s blocks, it was challenging for the computers to detect the boundaries of the blocks.

  Why is this task so difficult for computers? Some subtleties of boundary detection are revealed by a well-known illusion called the Kanizsa triangle (Figure 35). Most people see a white triangle superimposed on a black-outlined triangle and three black circles. But it’s arguable that the white triangle is illusory. If you look at one of its corners while blocking the rest of the image with your hand, you’ll see a partially eaten pie (or a Pac-Man, if you remember that video game from the 1980s) rather than a black circle. If you look at one of the V’s while blocking the rest of the image with both hands, you won’t see any boundary where you used to see a side of the white triangle. That’s because most of the length of each side is the same color as the background, with no jump in brightness. Your mind fills in the missing parts of the sides—and perceives the superimposed triangle—only when provided with the context of the other shapes.

  Figure 35. The “illusory contours” of the Kanizsa triangle

  This illusion might seem too artificial to be important for normal vision. But even in images of real objects, context turns out to be essential for the accurate perception of boundaries. The first panel of Figure 36, a zoomed-in view of part of an electron microscope image of neurons, shows little evidence of a boundary. As subsequent panels reveal more of the surrounding pixels, a boundary at the center becomes evident. Detecting the boundary leads to the correct interpretation of the image (next-to-last panel); missing the boundary would lead to an erroneous merger of two neurites (last panel). This kind of mistake, called a merge error, is like a child’s use of the same crayon to color two adjacent regions in a coloring book. A split error (not shown) is like the use of two different crayons to color a single region.

  Figure 36. The importance of context for boundary detection
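  The coloring-book analogy suggests a simple way to count these two kinds of mistakes when a machine's coloring is compared against a human's. The sketch below is a toy: it represents each coloring as a mapping from pixels to labels, and the counting scheme is an illustrative assumption, not a standard metric from the field.

```python
# Toy illustration of merge and split errors, in the coloring-book sense:
# a merge uses one crayon for two true regions; a split uses two crayons
# for one true region. `true` and `predicted` map each pixel to a label.

def count_errors(true, predicted):
    crayons_per_region = {}   # true region -> set of crayons used on it
    regions_per_crayon = {}   # crayon -> set of true regions it covers
    for pixel, region in true.items():
        crayon = predicted[pixel]
        crayons_per_region.setdefault(region, set()).add(crayon)
        regions_per_crayon.setdefault(crayon, set()).add(region)
    splits = sum(len(c) - 1 for c in crayons_per_region.values())
    merges = sum(len(r) - 1 for r in regions_per_crayon.values())
    return merges, splits
```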

  Granted, this sort of ambiguity is relatively rare. The one shown in the figure presumably arose because the stain failed to penetrate one location in the tissue. In most of the rest of the image, however, it would be obvious whether or not there is a boundary even in a zoomed-in view. Computers are able to detect boundaries accurately at these easy locations but still stumble at a few difficult ones, because they are less adept than humans at using contextual information.

  Boundary detection is not the only visual task that computers need to perform better if we want to find connectomes. Another task involves recognition. Many digital cameras are now smart enough to locate and focus on the faces in a scene. But sometimes they erroneously focus on some object in the background, showing that they still don’t recognize faces as well as people do. In connectomics, we’d like computers to perform a similar task, and to do it flawlessly: look through a set of images and find all the synapses.

  Why have we failed (so far) to create computers that see as well as humans? In my view, it is because we see so well. The early AI researchers focused on duplicating capabilities that demand great effort from humans, such as playing chess or proving mathematical theorems. Surprisingly, these capabilities ended up being not so difficult for computers—in 1997 IBM’s Deep Blue supercomputer defeated the world chess champion Garry Kasparov. Compared with chess, vision seems childishly simple: We open our eyes and instantly see the world around us. Perhaps because of this effortlessness, early AI researchers didn’t anticipate that vision would be so difficult for machines.

  Sometimes the people who are the best at doing something are the worst teachers. They themselves can do the task unconsciously, without thinking, and if they’re asked to explain what they do, they have no idea. We are all virtuosos at vision. We’ve always been able to do it, and we can’t understand an entity that can’t. For these reasons we’re lousy at teaching vision. Luckily we never have to, except when our students are computers.

 
