
Behave: The Biology of Humans at Our Best and Worst


by Robert M. Sapolsky


  Appendix 1

  Neuroscience 101

  Consider two different scenarios. First:

  Think back to when you hit puberty. You’d been primed by a parent or teacher about what to expect. You woke up with a funny feeling, found your jammies alarmingly soiled. You excitedly woke up your parents, who got tearful; they took embarrassing pictures, a sheep was slaughtered in your honor, and you were carried through town in a sedan chair while neighbors chanted in an ancient language. This was a big deal.

  But be honest—would your life be so different if those endocrine changes had instead occurred twenty-four hours later?

  Second:

  Emerging from a store, you are unexpectedly chased by a lion. As part of the stress response, your brain increases your heart rate and blood pressure, dilates blood vessels in your leg muscles, which are now frantically working, and sharpens sensory processing to produce a tunnel vision of concentration.

  How would things have turned out if your brain took twenty-four hours to send those commands? You’d be dead meat.

  That’s what makes the brain special. Hit puberty tomorrow instead of today? So what. Make some antibodies tonight instead of now? Rarely fatal. Same for delaying depositing calcium in your bones. But much of what the nervous system is about is encapsulated in the framing of chapter 2—what happened one second before? Incredible speed.

  The nervous system is about contrasts, unambiguous extremes between having something and having nothing to say, maximizing signal-to-noise ratios. And this is demanding and expensive.*

  ONE NEURON AT A TIME

  The basic cell type of the nervous system, what we typically call a “brain cell,” is the neuron. The hundred billion or so in our brains communicate with one another, forming complex circuits. In addition, there are glial cells, which do a lot of gofering—providing structural support and insulation for neurons, storing energy for them, helping to mop up neuronal damage.

  Naturally, this neuron/glia division of labor is all wrong. There are about ten glial cells for every neuron, coming in various subtypes. They greatly influence how neurons speak to one another, and also form glial networks that communicate completely differently from neurons. So glia are important. Nonetheless, to make this primer more manageable, I’m going to be very neuron-centric.

  Part of what makes the nervous system so distinctive is how distinctive neurons are as cells. Cells are usually small, self-contained entities—consider little round red blood cells:

  Neurons, in contrast, are highly asymmetrical, elongated beasts, typically with processes sticking out all over the place:

  These processes can be elaborated to nutty extents. Consider this single neuron, drawn in the early twentieth century by one of the gods in the field, Santiago Ramón y Cajal:

  It’s like the branches of a manic tree, explaining the jargon that this is a highly “arborized” neuron.

  Many neurons are also outlandishly large. A zillion red blood cells fit on the proverbial period at the end of this sentence. In contrast, there are single neurons in the spinal cord that send out projection cables many feet long. There are spinal cord neurons in blue whales that are half the length of a basketball court.

  Now for the subparts of a neuron, the key to understanding its function.

  What neurons do is talk to one another, cause one another to get excited. At one end of a neuron are its metaphorical ears, specialized processes that receive information from another neuron. At the other end are the processes that are the mouth, that communicate with the next neuron in line.

  The ears, the inputs, are called dendrites. The output begins with a single long cable called an axon, which then ramifies into axonal endings—these axon terminals are the mouths (ignore the myelin sheath for the moment). Those axon terminals connect to the dendrites of the next neuron in line. Thus a neuron’s dendritic ears are informed that the neuron behind it is excited. The flow of information then sweeps from the dendrites to the cell body to the axon to the axon terminals, and is then passed to the next neuron.

  Let’s translate “flow of information” into quasi chemistry. What actually goes from the dendrites to the axon terminals? A wave of electrical excitation. Inside the neuron are various positively and negatively charged ions. Just outside the neuron’s membrane are other positively and negatively charged ions. When a neuron has gotten an exciting signal from the previous neuron at the end of one single dendritic fiber, channels in the membrane in that dendrite open, allowing various ions to flow in and others to flow out, and the net result is that the inside of the end of that dendrite becomes more positively charged. The charge spreads toward the axon terminal, where it is passed to the next neuron. That’s it for the chemistry.

  Two gigantically important details:

  The resting potential. So when a neuron has gotten a hugely excitatory message from the previous neuron in line, its insides can become positively charged relative to the extracellular space around it. Back to our earlier metaphor—the neuron now has something to say and it is screaming its head off. What might things look like then when the neuron has nothing to say, has not been stimulated? Maybe a state of equilibrium, where the inside and outside have equal, neutral charges?* No, never! That’s good enough for some cell in your spleen or your big toe. But back to that critical issue, that neurons are all about contrasts. When a neuron has nothing to say, that isn’t some passive state of things just trickling down to zero. Instead it’s an active process. An active, intentional, forceful, muscular, sweaty process. Instead of the “I have nothing to say” state being one of charge neutrality, the inside of the neuron is negatively charged relative to the outside.

  You couldn’t ask for a more dramatic contrast: I have nothing to say = inside of the neuron is negatively charged. I have something to say = inside is positive. No neuron ever confuses the two. The internally negative state is called the “resting potential.” The excited state is called the “action potential.” And why is generating this dramatic resting potential such an active process? Because neurons have to work like crazy, using various pumps in their membranes, to push some positively charged ions out and to keep some negatively charged ones in, all in order to generate that negative internal resting state. Along comes an excitatory signal; the pumps stop working, channels open, and ions rush this way and that to generate the excitatory positive internal charge. And when that wave of excitation has passed, the channels close and the pumps go back into action, regenerating that negative resting potential. Remarkably, neurons spend nearly half their energy on the pumps that generate the resting potential. It doesn’t come cheap to generate dramatic contrasts between having nothing to say and having some exciting news.

  Now that we understand resting potentials and action potentials, on to the other gigantically important detail:

  That’s not what action potentials really are. What I’ve just outlined is that a single dendritic fibril receives an excitatory signal from the previous neuron (i.e., the previous neuron has had an action potential); this generates an action potential in that dendrite, which propagates toward the cell body, over it, on to the axon, to the axon terminals, and signals the next neuron in line. Not true. Instead:

  So the neuron is sitting there with nothing to say, which is to say that it’s displaying a resting potential; all of its insides are negatively charged. Along comes an excitatory signal at that one dendritic fibril, emanating from the previous neuron in line. As a result, channels open and ions flow in and out of that one dendrite. But only a little bit. Not enough to make the entire inside of the neuron positively charged, simply a little less negatively charged just inside that dendrite (just to attach some numbers here that don’t matter in the slightest, the resting potential charge shifts from around –70 millivolts to around –60 millivolts). Then the channels close. That little hiccup of becoming less negative* spreads farther up the shaft of that dendrite. The pumps have started working, pumping ions back to where they were in the first place. So at the end of that dendritic fibril, the charge went from –70 to –60 mV. But a little bit up the shaft of that fibril, things then go from –70 to –65 mV. Farther up the shaft, –70 to –69 mV. In other words, that excitatory signal dissipates. You’ve taken a nice smooth calm lake, in its resting state, and tossed a little pebble in. It causes a bit of a ripple right there, which spreads outward, getting smaller in its magnitude, until it dissipates not far from where the pebble hit. And miles away, at the lake’s axonal end, that ripple of excitation has had no effect whatsoever.
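
  To make the pebble-in-a-lake picture concrete, here is a toy numerical sketch in Python. It is not a real cable-theory model; the resting potential and the size of the initial bump come from the illustrative numbers above, while the exponential fall-off and the decay constant are assumptions chosen only to show a ripple shrinking with distance.

```python
# Toy sketch (not a biophysical model): a subthreshold "ripple" at the tip of a
# dendrite decays as it spreads toward the cell body. The resting potential and
# the initial bump match the text's illustrative numbers; the decay length is an
# arbitrary assumption.

import math

RESTING_MV = -70.0      # resting potential, as in the text
BUMP_MV = 10.0          # the tip depolarizes from -70 to -60 mV
DECAY_LENGTH = 0.5      # assumed decay length, in the same units as `distance`

def local_potential(distance_from_tip: float) -> float:
    """Membrane potential at a given distance up the dendritic shaft."""
    return RESTING_MV + BUMP_MV * math.exp(-distance_from_tip / DECAY_LENGTH)

for d in (0.0, 0.5, 1.0, 2.0):
    print(f"distance {d}: {local_potential(d):.1f} mV")
# The bump shrinks back toward -70 mV with distance: the ripple dissipates long
# before it reaches the axon hillock.
```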

  In other words, if a single dendritic fibril is excited, that’s not enough to pass on the excitation down to the axonal end and on to the next neuron. How does a message ever get passed on? Back to that wonderful drawing of a neuron by Cajal shown earlier.

  That arborized array of bifurcating dendritic branches ends in lots of ends of fibrils (time to introduce the term commonly used: “ends in lots of dendritic spines”). And in order to get sufficient excitation to sweep from the dendritic end of the neuron to the axonal end, you have to have summation—the same spine must be stimulated repeatedly and/or, more commonly, a bunch of the spines must be stimulated at once. You can’t get a wave, rather than just a ripple, unless you throw in a lot of pebbles.

  At the base of the axon, where it emerges from the cell body, is a specialized part (called the axon “hillock”). If all those summated dendritic inputs produce enough of a ripple to move the resting potential around the hillock from –70 mV to around –40 mV, a threshold is passed. And once that happens, all hell breaks loose. A different class of channels opens in the membrane of the hillock, which allows a massive migration of ions, producing, finally, a positive charge (about 30 mV). In other words, an action potential. Which then opens up those same types of channels in the next smidgen of axonal membrane, regenerating the action potential there, and then the next, and the next, all the way down to the axon terminals.

  From an informational standpoint, a neuron has two different types of signaling systems. From the dendritic spines to the start of the axon hillock, it’s an analogue signal, with gradations of signals that dissipate over space and time. And from the axon hillock to the axon terminals, it’s a digital system with all-or-none signaling that regenerates down the length of the axon.
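
  A minimal sketch of that analogue-then-digital idea, using the illustrative millivolt values from the text (a –70 mV resting potential and a roughly –40 mV threshold at the hillock). The per-spine contribution is an assumed constant, picked so that about fifty simultaneously excited spines just reach threshold, matching the example that follows.

```python
# Analog part: graded, summed depolarization at the hillock.
# Digital part: an all-or-none action potential once threshold is crossed.
# The per-spine contribution is a made-up constant, not a measured value.

RESTING_MV = -70.0
THRESHOLD_MV = -40.0
MV_PER_ACTIVE_SPINE = 0.6   # assumed: ~50 active spines just reach threshold

def hillock_fires(active_spines: int) -> bool:
    """Return True if the summed dendritic input triggers an action potential."""
    hillock_mv = RESTING_MV + MV_PER_ACTIVE_SPINE * active_spines
    return hillock_mv >= THRESHOLD_MV

print(hillock_fires(10))   # False: a handful of pebbles only makes a ripple
print(hillock_fires(60))   # True: enough summation crosses threshold -> spike
```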

  Let’s throw in some imaginary numbers. Suppose an average neuron has about one hundred dendritic spines and about one hundred axon terminals. What are the implications of this in the context of the analogue/digital feature of neurons?

  Sometimes nothing interesting. Consider neuron A, which, as just introduced, has one hundred axon terminals. Each one of those connects to one of the dendritic spines of the next neuron in line, neuron B. Neuron A has an action potential, which propagates down to all of its one hundred axon terminals, which excites all one hundred dendritic spines in neuron B. The threshold at the axon hillock of neuron B requires fifty of the dendrites to get excited around the same time in order to generate an action potential; thus, with all one hundred of the dendrites firing, neuron B is guaranteed to get an action potential.

  Now, instead, neuron A projects half of its axon terminals to neuron B and half to neuron C. It has an action potential; does that guarantee one in neurons B and C? Yes. Each of those neurons’ axon hillocks has that threshold of needing a signal from fifty dendritic pebbles at once in order to have an action potential.

  Now, instead, neuron A evenly distributes its axon terminals among ten different target neurons, neurons B through K. Is its action potential going to produce action potentials in the target neurons? No way—continuing our example, the ten dendritic spines’ worth of pebbles in each target neuron is way below the threshold of fifty pebbles.

  So what will ever cause an action potential in, say, neuron K, which has only ten of its dendritic spines getting excitatory signals from neuron A? Well, what’s going on with its other ninety dendritic spines? They’re getting inputs from other neurons—nine of them, with ten inputs from each. When will neuron K have an action potential? When at least half of the neurons projecting to it have action potentials. In other words, any given neuron integrates the inputs from all the neurons projecting to it. And out of this comes a rule: the more neurons that neuron A projects to, by definition, the more neurons it can influence; however, the more neurons it projects to, the smaller its average influence will be at each of those target neurons. There’s a trade-off.
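
  The trade-off in that rule can be spelled out with the chapter’s toy numbers (one hundred axon terminals, a threshold of fifty simultaneously excited spines). This sketch simply divides neuron A’s terminals evenly among its targets and asks whether A alone can fire each one; the numbers are the imaginary ones used above, not real anatomy.

```python
# Divergence trade-off: the more targets neuron A projects to, the fewer
# terminals (and thus the less influence) it has at each target.

AXON_TERMINALS = 100
SPIKE_THRESHOLD = 50   # active spines needed at a target's axon hillock

def spines_excited_per_target(num_targets: int) -> int:
    """Spines excited on each target when A spreads its terminals evenly."""
    return AXON_TERMINALS // num_targets

for n in (1, 2, 10):
    excited = spines_excited_per_target(n)
    fires_alone = excited >= SPIKE_THRESHOLD
    print(f"{n} target(s): {excited} spines each -> fires from A alone? {fires_alone}")
# 1 target: 100 spines -> fires; 2 targets: 50 each -> just reaches threshold;
# 10 targets: 10 each -> needs summed input from other neurons as well.
```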

  This doesn’t matter in the spinal cord, where one neuron typically sends all its projections to the next one in line. But in the brain one neuron will disperse its projections to scads of other ones and receive inputs from scads of other ones, with each neuron’s axon hillock determining whether the threshold is reached and an action potential generated. The brain is wired in networks of divergent and convergent signaling.

  —

  Now to put in a flabbergasting real number—your average neuron has about ten thousand dendritic spines and about the same number of axon terminals. Factor in a hundred billion neurons, and you see why brains, rather than kidneys, write poetry.
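
  As quick back-of-envelope arithmetic, the text’s rounded figures (about ten thousand terminals per neuron, about a hundred billion neurons) multiply out to on the order of a million billion connections; this is only the product of the book’s own approximate numbers, not a precise count.

```python
# Back-of-envelope arithmetic using the text's rounded figures.
neurons = 100_000_000_000        # ~10^11 neurons
terminals_per_neuron = 10_000    # ~10^4 axon terminals each
print(f"~{neurons * terminals_per_neuron:.1e} synaptic connections")  # ~1.0e+15
```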

  Just for completeness, here are a couple of final facts. Neurons have some additional tricks, at the end of an action potential, to enhance the contrast between nothing to say/something to say even more, two means of ending the action potential really fast and dramatically: something called delayed rectification and another thing called the hyperpolarized refractory period. Another minor detail from that diagram above—a type of glial cell wraps around an axon, forming a layer of insulation called a myelin sheath; this “myelination” causes the action potential to shoot down the axon faster.

  And one final detail of great future importance: the threshold of the axon hillock can change over time, thus changing the neuron’s excitability. What things change thresholds? Hormones, nutritional state, experience, and other factors filling this book’s pages.
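
  To illustrate that last point, here is a tiny extension of the earlier threshold sketch: the same amount of summed dendritic input can fail or succeed in triggering an action potential depending on where the hillock’s threshold currently sits. The millivolt values and the per-spine contribution remain illustrative assumptions.

```python
# A movable threshold changes excitability: identical input, different outcome.

RESTING_MV = -70.0
MV_PER_ACTIVE_SPINE = 0.6   # same assumed constant as in the earlier sketch

def fires(active_spines: int, threshold_mv: float) -> bool:
    """Does this summed input cross the hillock's current threshold?"""
    return RESTING_MV + MV_PER_ACTIVE_SPINE * active_spines >= threshold_mv

print(fires(40, threshold_mv=-40.0))  # False with the usual threshold
print(fires(40, threshold_mv=-50.0))  # True once something (hormones, experience,
                                      # nutritional state) has lowered the threshold
```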

  We’ve now made it from one end of a neuron to the other. How exactly does a neuron with an action potential communicate its excitation to the next neuron in line?

  TWO NEURONS AT A TIME: SYNAPTIC COMMUNICATION

  So an action potential has been triggered at the hillock in neuron A and has swept down to all ten thousand axon terminals. How is this excitation passed on to the next neuron(s)?

  The Defeat of the Syncytium-ites

  For your average nineteenth-century neuroscientist, the answer was easy. Their explanation would be that a fetal brain is made up of huge numbers of separate neurons that slowly grow their dendritic and axonal processes. And eventually the axon terminals of one neuron reach and touch the dendritic spines of the next neuron(s), and they merge, forming a continuous membrane between the two cells. From all those separate fetal neurons, the mature brain forms this continuous, vastly complex net of one single superneuron, which was called a “syncytium.” Thus excitation readily flows from one neuron to the next because they aren’t really separate neurons.

  Late in the nineteenth century an alternative view emerged, namely that each neuron remains an independent unit, and that the axon terminals of one neuron don’t actually touch the dendritic spines of the next. Instead there’s a tiny gap between the two. This notion was called the “neuron doctrine.”

  The adherents of the syncytium school thought that the neuron doctrine was asinine. “Show me the gaps between axon terminals and dendritic spines,” they demanded of these heretics, “and tell me how excitation jumps from one neuron to the next.”

  And then in 1873 it all got solved by the Italian neuroscientist Camillo Golgi, who invented a technique for staining brain tissue in a novel fashion. And the aforementioned Cajal used this “Golgi stain” to stain all the processes, all the branches and branchlets and twigs of the dendrites and axon terminals of single neurons. Crucially, the stain didn’t spread from one neuron to the next. There wasn’t a continuous, merged net of a single superneuron. Individual neurons are discrete entities. The neuron doctrine-ers vanquished the syncytium-ites.*

  Hooray, case closed; there are indeed micro-microscopic gaps between axon terminals and dendritic spines; these gaps are called “synapses” (which weren’t directly visualized, putting the last nail in the syncytial coffin, until the invention of electron microscopy in the 1950s). But there’s still that problem of how excitation propagates from one neuron to the next, leaping across the synapse.

  The answer, whose pursuit dominated neuroscience in the middle half of the twentieth century, is that the electrical excitation doesn’t leap across the synapse. Instead it gets translated into a different type of signal.

  Neurotransmitters

  Sitting inside each axon terminal, tethered to the membrane, are little balloons called vesicles, filled with many copies of a chemical messenger. Along comes the action potential that was initiated miles away in that neuron’s axon hillock. It sweeps over the terminal and triggers the release of those chemical messengers into the synapse. Which they float across, reaching the dendritic spine on the other side, where they excite the neuron. These chemical messengers are called neurotransmitters.

  How do neurotransmitters, released from the “presynaptic” side of the synapse, cause excitation in the “postsynaptic” dendritic spine? Sitting on the membrane of the spine are receptors for the neurotransmitter. Time to introduce one of the great clichés of biology. The neurotransmitter molecule has a distinctive shape (with each copy of the molecule having the same). The receptor has a binding pocket of a distinctive shape that is perfectly complementary to the shape of the neurotransmitter. And thus the neurotransmitter—cliché time—fits into the receptor like a key into a lock. No other molecule fits snugly into that receptor; the neurotransmitter molecule won’t fit snugly into any other type of receptor. Neurotransmitter binds to receptor, which triggers those channels to open, and the currents of ionic excitation begin in the dendritic spine.

 
