Brain Bugs


by Dean Buonomano


  The existence of these two independent memory systems within our brains can be appreciated by introspection. For example, I have memorized my phone number and can easily pass it along to someone by saying the sequence of digits. The PIN of my bank account is also a sequence of digits, but because I do not generally give this number out and mostly use it by typing it on a number pad, I have been known to “forget” the actual number on the rare occasions I do need to write it down. Yet I still know it, as I am able to type it into the keypad—indeed, I can pretend to type it and figure out the number. The phone number is stored explicitly in declarative memory; the “forgotten” PIN is stored implicitly as a motor pattern in nondeclarative memory.

  You may have trouble answering the question, What key is to the left of the letter E on your computer keyboard? Assuming you know how to type, your brain knows very well which keys are beside each other, but it may not be inclined to tell you. But if you mimic the movements while you pretend to type wobble, you can probably figure it out. The layout of the keyboard is stored in nondeclarative memory, unless you have explicitly memorized the arrangement of the keys, in which case it is also stored in declarative memory. Both declarative and nondeclarative forms of memory are divided into further subtypes, but I will focus primarily on a type of declarative memory, termed semantic memory, used to store most of our knowledge of meaning and facts, including that zebras live in Africa, that Bacchus is the god of wine, or that if your host offers you Rocky Mountain oysters he is handing you bull testicles.

  How exactly is this type of information stored in your brain? Few questions are more profound. Anyone who has witnessed the slow and inexorable vaporization of the very soul of someone with Alzheimer’s disease appreciates that the essence of our character and memories are inextricably connected. For this reason the question of how memories are stored in the brain is one of the holy grails of neuroscience. Once again, I draw upon our knowledge of computers for comparison.

  Memory requires a storage mechanism, some sort of modification of a physical medium, such as punching holes in old-fashioned computer cards, burning a microscopic dot in a DVD, or charging or discharging transistors in a flash drive. And there must be a code: a convention that determines how the physical changes in the medium are translated into something meaningful, and later retrieved and used. A phone number jotted down on a Post-it represents a type of memory; the ink absorbed by the paper is the storage mechanism, and the pattern corresponding to the numbers is the code. To someone unfamiliar with Arabic numerals (the code), the stored memory will be as meaningless as a child’s scribbles. In the case of a DVD, information is stored as a long sequence of zeros and ones, corresponding to the presence or absence of a “hole” burned into the DVD’s reflective surface. The presence or absence of these holes, though, tells us nothing about the code: does the string encode family pictures, music, or the passwords of Swiss bank accounts? We need to know whether the files are in jpeg, mp3, or text format. Indeed, the logic behind encrypted files is that the sequence of zeros and ones is altered according to some rule, and if you do not know the algorithm to unshuffle it, the physical memory is worthless.
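
  To make the distinction between the storage mechanism and the code concrete, here is a minimal Python sketch (purely illustrative, not from the text): the same stored bytes are meaningful only under the right interpretive convention, and a simple scrambling rule makes them worthless to anyone who does not know it.

```python
# The "storage": a fixed sequence of bytes (the physical marks on the medium).
stored = bytes([72, 105, 33])

# Read with the right code (ASCII text), the bytes mean something...
print(stored.decode("ascii"))    # -> Hi!

# ...read with a different convention (raw numbers), they are just digits.
print(list(stored))              # -> [72, 105, 33]

# "Encryption": scramble the bits according to a rule (XOR with a key).
key = 0b01010101
scrambled = bytes(b ^ key for b in stored)
print(scrambled)                 # gibberish without the key

# Knowing the rule lets us unshuffle the bits and recover the original.
recovered = bytes(b ^ key for b in scrambled)
print(recovered.decode("ascii")) # -> Hi!
```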

  The importance of understanding both the storage mechanisms and the code is well illustrated in another famous information storage system: genes. When Watson and Crick elucidated the structure of DNA in 1953, they established how information, represented by sequences of four nucleotides (symbolized by the letters A, C, G and T), was stored at the molecular level. But they did not break the genetic code; understanding the structure of DNA did not reveal what all those letters meant. This question was answered in the sixties when the genetic code that translated sequences of nucleotides into proteins was cracked.

  To understand human memory we need to determine the changes that take place in the brain’s memory media when memories are stored, and work out the code used to write down information. Although we do not have a full understanding of either of these things, we do know enough to make a sketch.

  ASSOCIATIVE ARCHITECTURE

  The human brain stores factual knowledge about the world in a relational manner. That is, an item is stored in relation to other items, and its meaning is derived from the items to which it is associated.4 In a way, this relational structure is mirrored in the World Wide Web. As with many complex systems we can think of the World Wide Web as a network of many nodes (Web pages or Web sites), each of which interacts (links) in some way with a subset of others.5 Which nodes are linked to each other is far from random. A Web site about soccer will have links to other related Web sites, teams around the world, recent scores, and other sports, and it is pretty unlikely to have links to pages about origami or hydroponics. The pattern of links among Web sites carries a lot of information. For example, two random Web sites that link to many of the same sites are much more likely to be on the same topic than two sites that do not share any links. So Web sites could be organized according to how many links they share. This same principle is also evident in social networks. For instance, on Facebook, people (the nodes) from the same city or who attended the same school are more likely to be friends (the links) with each other than people from different geographic areas or different schools. In other words, without reading a single word of Mary’s Facebook page, you can learn a lot about her by looking at her list of friends. Whether it is the World Wide Web or Facebook, an enormous amount of information about any given node is contained in the list of links to and from that node.
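
  As a rough illustration of the idea that shared links signal related content, the short Python sketch below (the sites and links are invented for illustration) scores how similar two pages are by the overlap of their outgoing links.

```python
# Hypothetical outgoing links for a few pages (invented for illustration).
links = {
    "soccer_news":   {"fifa", "premier_league", "espn", "world_cup"},
    "soccer_scores": {"fifa", "premier_league", "espn", "betting_odds"},
    "origami_club":  {"paper_art", "crane_folds", "craft_store"},
}

def link_overlap(a: str, b: str) -> float:
    """Jaccard similarity: shared links divided by all distinct links."""
    shared = links[a] & links[b]
    total = links[a] | links[b]
    return len(shared) / len(total)

print(link_overlap("soccer_news", "soccer_scores"))  # 0.6 -- likely the same topic
print(link_overlap("soccer_news", "origami_club"))   # 0.0 -- unrelated
```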

  We can explore, to a modest degree, the structure of our own memory web by free-associating. When I free-associate with the word zebra, my brain returns animal, black and white, stripes, Africa, and lion food. Like clicking on the links of a Web page, by free-associating I am essentially reading out the links my brain has established between zebra and other concepts. Psychologists have attempted to map out what concepts are typically associated with each other; one such endeavor gave thousands of words to thousands of subjects and developed a huge free-association database.6 The result can be thought of as a complex web composed of over 10,000 nodes. Figure 1.1 displays a tiny subset of this semantic network. A number captures the association strength between each pair of words, going from 0 (no link) to 100 percent, and is represented by the thickness of the lines. When given the word brain, 4 percent of the people responded with mind, a weaker association strength than brain/head, which was an impressive 28 percent. In the diagram there is no direct link between brain and bug (nobody thought of bug when presented with brain). Nevertheless, two possible indirect pathways that would allow one to “travel” from brain to bug (as in an insect) are shown. While the network shown was obtained from thousands of people, each person has his or her own semantic network that reflects unique individual experiences. So although there are only indirect connections between brain and bug in the brains of virtually everyone on the planet, it is possible that these nodes may have become strongly linked in my brain because of the association I now have between them (among the words that pop into my mind when I free-associate starting from brain are complex, neuron, mind, and bug).

  Figure 1.1 Semantic network: The lines fanning out from a word (the cue) connect to the words (the targets) most commonly associated with it. The thickness of a line between a cue and a target is proportional to the number of people who thought of the target in response to the given cue. The diagram started with the cue brain, and shows two pathways to the target bug. (Diagram based on the University of South Florida Free Association Norms database [Nelson, et al., 1998].)
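
  A semantic network like the one in Figure 1.1 can be treated as a weighted graph. The Python sketch below stores a handful of cue-to-target links (the intermediate words and strengths are invented, not taken from the Norms database) and enumerates the indirect routes from brain to bug.

```python
# Toy semantic network: association strengths in percent (invented values,
# not the actual Free Association Norms data).
network = {
    "brain":  {"head": 28, "mind": 4, "smart": 5},
    "mind":   {"boggle": 3},
    "boggle": {"bug": 2},   # hypothetical indirect route 1
    "smart":  {"bug": 2},   # hypothetical indirect route 2
}

def paths(start, goal, trail=()):
    """Enumerate every simple path from start to goal through the network."""
    trail = trail + (start,)
    if start == goal:
        yield trail
        return
    for nxt in network.get(start, {}):
        if nxt not in trail:
            yield from paths(nxt, goal, trail)

for route in paths("brain", "bug"):
    print(" -> ".join(route))
# brain -> mind -> boggle -> bug
# brain -> smart -> bug
```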

  Nodes and links are convenient abstract concepts to describe the structure of human semantic memory. But the brain is made of neurons and synapses (Figure 1.2), so we need to be more explicit about what nodes and links correspond to in reality. Neurons are the computational units of the brain—the specialized cells that at any point in time can be thought of as being “on” or “off.” When a neuron is “on,” it is firing an action potential (which corresponds to a rapid increase in the voltage of a neuron that lasts a millisecond or so) and in the process of communicating with other neurons (or muscles). When a neuron is “off,” it may be listening to what other neurons are saying, but it is mute. Neurons talk to each other through their synapses—the contacts between them. Through synapses, a single neuron can encourage others to “speak up” and generate their own action potentials. Some neurons receive synapses from more than 10,000 other neurons, and in turn send signals to thousands of other neurons. If you want to build a computational device in which information is stored in a relational fashion, you want to build it with neurons.

  Figure 1.2 Neurons: Neurons receive input through their dendrites and send output through their axons. The point of contact between two neurons (inset) corresponds to a synapse. When the presynaptic neuron (left; the “sender”) fires an action potential it releases vesicles of neurotransmitters onto the postsynaptic neuron (right; the “receiver”). The dendrites often have protrusions (spines), where the synapses are formed, while the axons are smooth. In humans the cell body of a pyramidal neuron is roughly 0.02 millimeters, but the distance from the cell body to the tip of the dendrites can be over 1 millimeter.

  What is the “zebra” node in terms of neurons? Does one neuron in your brain represent the concept of zebra and another your grandmother? No. Although we do not understand exactly how the brain encodes the virtually infinite number of possible objects and concepts we can conceive of, it is clear that every concept, such as zebra, is encoded by the activity of a population of neurons. So the “zebra” node is probably best thought of as a fuzzy group of neurons: a cluster of interconnected neurons (not necessarily close to each other). And just as an individual can simultaneously be a member of various distinct social groups (cyclists, Texans, and cancer survivors), a given neuron may be a member of many different nodes. The UCLA neurosurgeon Itzhak Fried has provided a glimpse into the relationship between neurons and nodes. He and his colleagues recorded from single neurons in the cortex of humans while they viewed pictures of famous individuals. Some neurons were active whenever a picture of a specific celebrity was shown. For instance, one neuron fired in response to any picture of the actress Jennifer Aniston, whereas another neuron in the same area responded to any picture of Bill Clinton.7 In other words, without knowing which picture the patient was looking at, the experimenters could have a good idea of who the celebrity was by which neurons were active. We might venture to say that the first neuron was a member of the “Jennifer Aniston” node, and the other was a member of the “Bill Clinton” node. Importantly, however, even those neurons found to be part of the Jennifer Aniston or Bill Clinton node might also fire in response to a totally unrelated picture.
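
  One way to picture nodes as overlapping groups of neurons is the Python sketch below (the neuron numbers and group memberships are invented): each concept is a set of neurons, a neuron can belong to several sets, and observing which neurons are active lets you guess which concept is being represented.

```python
# Invented neuron IDs; each node (concept) is a set of neurons, and a neuron
# can belong to more than one node (like a person in several social groups).
nodes = {
    "Jennifer Aniston": {3, 17, 42, 88},
    "Bill Clinton":     {5, 17, 64, 91},   # neuron 17 is shared by both nodes
}

def best_guess(active_neurons):
    """Guess the concept whose neuron group overlaps most with the active neurons."""
    return max(nodes, key=lambda name: len(nodes[name] & active_neurons))

# Decoding "which celebrity" from which neurons fired, without seeing the picture:
print(best_guess({3, 42, 88}))   # Jennifer Aniston
print(best_guess({5, 64}))       # Bill Clinton
```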

  If a node corresponds to a group of neurons, you have probably deduced that synapses correspond to the links. If our “brain” and “mind” nodes are strongly associated with each other, we would expect strong synaptic connections between the neurons representing these nodes. Although the correspondence between nodes and neurons and between links and synapses provides a framework to understand the mapping between semantic networks at the psychological level and the biological building blocks of the brain, it is important to emphasize that this is a stupendously simplified scenario.8

  MAKING CONNECTIONS

  Information is contained in the structure of the World Wide Web and in social networks because at some point people linked their pages to relevant pages, or “friended” like-minded people. But who connected the “zebra” and “Africa” nodes? The answer to this question leads us to the heart of how memory is physically stored in the brain.

  Although it would be a mistake to imply that the riddle of memory storage has been solved, it is now safe to say that long-term memory relies on synaptic plasticity: the formation of new synapses or the strengthening (or weakening) of previously existing ones.9 Today it is widely accepted that synaptic plasticity is among the most important ways in which the brain stores information. This was not always the case. The quest to answer the question of how the brain stores information has been full of twists and turns. As late as the 1970s, some scientists believed that long-term memories were stored as sequences of the nucleotides that make up DNA and RNA. In other words, they believed that our memories were stored in the same media as the instructions to life itself. Once an animal learned something, this information would somehow be translated into strands of RNA (the class of molecules that among other functions translate what is written in the DNA into proteins). How memories would be retrieved once stored in RNA was not exactly addressed. Still, it was reasoned that if long-term memories were stored as RNA, then this RNA could be isolated from one animal and injected into another, and, voilà, the recipient would know what the donor animal had learned. Perplexingly, several papers published in the most respected scientific journals reported that memories had been successfully transferred from one rat to another by grinding up the brain of the “memory donor” and injecting it into the recipient.10 Suffice it to say, this hypothesis was an unfortunate detour in the quest to understand how the brain stores information.

  The current notion that it is through synaptic plasticity that the brain writes down information, not coincidentally, fits nicely into the associative architecture of semantic memory. Learning new associations (new links between nodes) could correspond to the strengthening of very weak synapses or the formation of new ones. To understand this process we have to delve further into the details of what synapses do and how they do it. Synapses are the interface between two neurons. Like a telephone handset that is composed of a speaker that sends out a signal and a microphone that records a signal, synapses are also composed of two parts: one from the neuron that is sending out a signal and one from the neuron that is receiving the signal. The flow of information at a given synapse is unidirectional; the “messenger” half of a synapse comes from the presynaptic neuron, while the “receiver” half belongs to the postsynaptic neuron. When the presynaptic neuron is “on” it releases a chemical called a neurotransmitter, which is detected by the postsynaptic half of a synapse by a class of proteins referred to as receptors that play the role of microphones (refer back to Figure 1.2). With this setup a presynaptic neuron can whisper to the postsynaptic something like “I’m on, why don’t you go on too” or “I’m on, I suggest you keep your mouth shut.” The first message would be mediated by an excitatory synapse; the second by an inhibitory synapse.

  To understand this process from the perspective of a single postsynaptic neuron, let’s imagine a contestant on a TV game show trying to decide whether to pick answer A or B. The audience is allowed to participate, and some members are yelling out “A,” others “B,” and some aren’t saying anything. The contestant, like a postsynaptic neuron, is essentially polling the audience (a bunch of presynaptic neurons) to decide what she should do. But the process is not entirely democratic. Some members of the audience may have a louder voice than others, or the contestant may know that a few members of the audience are highly reliable—these individuals would correspond to strong or influential synapses. The behavior of a given neuron is determined by the net sum of what thousands of presynaptic neurons are encouraging it to do through synapses—some excitatory, some inhibitory, some strong and others that generate a barely audible mumble but together can add up to a roar. Although the distinction between pre- and postsynaptic neurons is critical at a synapse, like humans in a conversation, any given neuron plays the role of both speaker (presynaptic) and listener (postsynaptic). The game-contestant analogy provides a picture of neuronal intercommunication, but it does not begin to capture the actual complexity of real neurons embedded in an intricate network. One of many additional complexities—perhaps the most crucial—is that the strength of each synapse is not fixed: synapses can become stronger or weaker with experience. In our analogy this would be represented by the contestant’s learning, over the course of many questions, to pay more attention to certain members of the audience and ignore others.
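
  The “polling the audience” picture corresponds to a standard textbook abstraction: the postsynaptic neuron sums its weighted inputs and fires only if the total crosses a threshold. Here is a minimal Python sketch of that idea; the weights, signs, and threshold are arbitrary illustrative choices, not measured values.

```python
# Each presynaptic "voice": (is it firing right now?, synaptic weight).
# Positive weights are excitatory, negative weights inhibitory;
# larger magnitudes correspond to stronger, more influential synapses.
inputs = [
    (True,  +2.0),   # a loud, reliable excitatory voice
    (True,  +0.3),   # a barely audible excitatory mumble
    (True,  -1.5),   # a strong inhibitory voice
    (False, +5.0),   # silent this round, so it contributes nothing
]

THRESHOLD = 0.5  # arbitrary firing threshold for the sketch

def postsynaptic_fires(inputs, threshold=THRESHOLD):
    """Fire if the summed influence of active presynaptic neurons crosses threshold."""
    total = sum(weight for active, weight in inputs if active)
    return total > threshold

print(postsynaptic_fires(inputs))  # True: 2.0 + 0.3 - 1.5 = 0.8 > 0.5
```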

  Although the term synapse had not yet been coined, Santiago Ramón y Cajal suggested in the late nineteenth century that memories may correspond to the strengthening of the connections between neurons.11 But it took close to a hundred years to convincingly demonstrate that synapses are indeed plastic. In the early 1970s, the neuroscientists Tim Bliss and Terje Lømo observed long-lasting increases in strength at synapses in the hippocampus (a region known to contribute to the formation of new memories) after their pre- and postsynaptic neurons were strongly activated.12 This phenomenon, called long-term potentiation, was an example of a “synaptic memory”—those synapses “remembered” they had been strongly activated. This finding, plus decades of continuing research, established that changes in synaptic strength are at some level the brain’s version of burning a hole in the reflective surface of a DVD.

  As is often the case in science, this important discovery led to an even more baffling question: if synapses are plastic, then how do two neurons “decide” if the synapse between them should become stronger or weaker? One of the most fundamental scientific findings of the twentieth century provided a partial answer to this question—one that offers powerful insights into the workings of the organ we use to ask and answer all questions. We now know that the synaptic strength between neurons X and Y increases when they are active at roughly the same time. This simple notion is termed Hebb’s rule, after the Canadian psychologist credited with first proposing it in 1949.13 The rule has come to be paraphrased as “neurons that fire together, wire together.” Imagine two neurons Pre1 and Pre2 that synapse onto a common postsynaptic neuron, Post. Hebb’s rule dictates that if neurons Pre1 and Post are active at the same time, whereas Pre2 and Post are not, then the Pre1→Post synapse will be strong, while the Pre2→Post synapse will be weak.
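
  Hebb’s rule can be written as a simple update: strengthen a synapse whenever its presynaptic and postsynaptic neurons are active together. The Python sketch below plays out the Pre1/Pre2/Post example with an invented learning rate; it is a cartoon of the principle, not a model of real synaptic plasticity.

```python
# Synaptic weights from Pre1 and Pre2 onto Post, both weak to start.
weights = {"Pre1": 0.1, "Pre2": 0.1}
LEARNING_RATE = 0.05  # arbitrary step size for the sketch

def hebbian_update(pre_active, post_active):
    """Neurons that fire together wire together: strengthen co-active synapses."""
    for pre, active in pre_active.items():
        if active and post_active:
            weights[pre] += LEARNING_RATE

# Pre1 fires whenever Post fires; Pre2 only fires when Post is silent.
for _ in range(20):
    hebbian_update({"Pre1": True,  "Pre2": False}, post_active=True)
    hebbian_update({"Pre1": False, "Pre2": True},  post_active=False)

print(weights)  # the Pre1->Post synapse has grown strong; Pre2->Post stays weak
```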

 
