It is thanks to this editing that we have a nice, consistent picture of the events of the world. The sounds and sights are manipulated to link the events that are actually related to each other, despite the fact that they arrive in our brain at different times. Whether they do or not, our brain does its best to provide the illusion that the sight of the cymbals colliding is in register with the sound they produce. The movements of the lips and the sound of the voice of an actress in a movie are temporally aligned to create a coherent percept of sight and sound—the brain only alerts us to egregious mismatches between the visual and sound tracks, such as what might occur when watching a badly dubbed movie.
There are, however, situations in which our inability to correctly detect the true order of events can have tragic consequences and affect the lives of millions of people. I am, of course, referring to decisions made by referees. Many sports require referees to make judgments regarding the order or simultaneity of two events. In basketball, the referee must decide whether the ball left the hands of a player shooting a basket before or after the final buzzer; in the former case the point will count, but not in the latter. But it is surely in World Cup matches where this brain bug has wreaked the most havoc. Many soccer games, and thus the fate of nations, have been determined by goals allowed or annulled by referees unable to accurately call the offside rule. To call this rule, a referee must determine whether the forward-most offensive player is ahead of the last defensive player at the time another offensive player passes the ball. In other words, a judgment of the relative position of two moving players at the time of the pass must be made. Note that most of the time both events take place at distant points on the field, and thus require the line referee to shift his gaze to make the call. Studies suggest that up to 25 percent of offside calls are incorrectly assessed. 
Two sources of error may be the fact that it takes about 100 milliseconds to shift our gaze, and that when two events occur simultaneously we often judge the one to which we were paying attention as having occurred first.25 Additionally, there is a fascinating illusion called the flash-lag effect, in which we tend to extrapolate the position of a moving object, placing it ahead of its actual position at the time of another event.26 If a dot is moving across your computer screen from left to right, and at the exact time it reaches the middle of the screen another dot is flashed just above it, you will perceive the moving dot as having been ahead of the flashed dot even though they were both in the middle. By the same logic, the fact that the forward-most attacker is often running at the time of the pass may result in the referee’s perceiving him as ahead of his actual position. Humans did not evolve to make highly precise decisions regarding the temporal order of events, so it would seem that referees are simply not wired to perform the tasks we require of them.27
In some cases the brain simply edits a frame out of our perception. Look at the face of a friend and ask him to move his eyes laterally back and forth; you have no trouble seeing his eyes move more or less smoothly. Now perform this task while looking at yourself in the mirror. You see the extremes to the left and right, but nothing in the middle. Where did the image of what happened in between go? It was edited out! This is termed saccade blindness. Visual awareness appears to be a continuous, uninterrupted stream. However, our eyes are generally jumping from one object to another. While these saccades are relatively short events, they do take time, around a tenth of a second (100 milliseconds). Visual input during this period disappears into the void, but the gap that is created as a result is seamlessly edited out of the visual stream of consciousness.
As you read this sentence, you are not consciously aware of each individual word. You do not laboriously string each word together to generate a running narrative of the sentence meaning. Rather, you unconsciously chunk words and phrases together, and consciously grasp the meaning of the sentence at critical junctures. This point is highlighted in the following two sentences:
The mouse that I found was broken.
The mouse that I found was dead.
In both cases, the appropriate meaning of “mouse” is determined by the last word of the sentence. Yet, in most cases, you do not find yourself arriving at the last word and changing your initial interpretation of “mouse.” As you read or hear the above sentences, your brain retroactively edits the meaning of “mouse” to match the meaning established by the last word in the sentence. The brain had to wait until the end before delivering the meaning of the sentence into consciousness. Clearly, our awareness of each word was not generated sequentially in real time. Rather, awareness is “paused” until unconscious processing arrives at a reasonable interpretation of the sentence. This type of observation has also been used to point out the extent to which consciousness itself is illusory: it is not a continuous online account of the events transpiring in the world, but an after-the-fact construct that requires cutting, pasting, and delaying chunks of time before creating a cozy narrative of external events.
HOW DO BRAINS TELL TIME?
We have now seen how important the brain’s ability to tell time is, and the degree to which our sense of time can be distorted. But we have not asked the most important question of all: how does a computational device built out of neurons and synapses tell time? We know something about what it means to discriminate colors: different wavelengths of light activate different populations of cells in our retinas (each containing one of three photosensitive proteins), which convey this information to neurons in cortical areas involved in color vision. But, in contrast to color, we do not have receptors, or a sensory organ, that perceives or measures time.28 Nevertheless, we all discriminate between short and long durations, and claim to perceive the passage of time, so we must be able to measure it.
We live in a world in which we use technology to track time across scales spanning over 16 orders of magnitude: from the nanosecond accuracy of atomic clocks used for global-positioning systems, to the tracking of our yearly trips around the sun. Between these extremes, we track the minutes and hours that govern our daily activities. It is worth noting that the same technology can be used to measure time across this vast range. Atomic clocks are used to time nanosecond delays in the arrival of signals from different satellites, set the time in our cell phones, and make small adjustments to ensure that “absolute” time matches calendar time (due to a tiny slowing of Earth’s rotation, solar time does not exactly match time as measured by atomic clocks). Even digital wristwatches are used to time hundredths of a second as well as months, an impressive range of roughly nine orders of magnitude. In nature, animals also keep track of time over an almost equally impressive range of time scales: from a few microseconds (millionths of a second) all the way up to yearly seasonal changes. Mammals and birds can easily determine if a sound is localized to their left or right; this is possible because the brain can detect the extra amount of time it takes sound to travel from one ear to the other (in humans it takes sound approximately 600 microseconds to travel all the way from the right to left ear). As we have seen, tracking time in the range of tens and hundreds of milliseconds is important for communication; this holds true for animals as well. On the scale of hours, the nervous system tracks time in order to control sleep/wake cycles and feeding schedules. Finally, on the scale of months, many animals track and anticipate seasonal changes that control reproductive and hibernation cycles.
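The 600-microsecond figure for ear-to-ear travel follows from simple arithmetic. A back-of-the-envelope sketch (the path length of about 20 centimeters around the head and the speed of sound of 343 meters per second are illustrative round numbers, not precise anatomical values):

```python
# Rough interaural time difference (ITD) calculation.
# Assumed values: sound path around the head ~0.20 m, speed of sound ~343 m/s.
def interaural_delay_us(path_m=0.20, speed_of_sound_m_s=343.0):
    """Return the ear-to-ear travel time of sound in microseconds."""
    return path_m / speed_of_sound_m_s * 1e6

print(round(interaural_delay_us()))  # prints 583, i.e., roughly 600 microseconds
```

Detecting a difference this small between the two ears is well beyond the millisecond-scale timing of individual action potentials, which is part of what makes sound localization so remarkable.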
Both modern technology and living creatures, then, are faced with the need to tell time across a wide range of scales. What is amazing is the degree to which technology and nature settled on completely different solutions. In stark contrast to man-made timing devices, the biological solutions to telling time are fundamentally different from one time scale to the next. The “clock” your brain uses to predict when the red light will change to green has nothing to do with the “clock” that controls your sleep/wake cycle or the one used to determine how long it takes sound to travel from your right to left ear. In other words, your circadian clock does not even have a second hand, and the clock you use to tap the beat of a song does not have an hour hand.29
Of the different timing devices in the brain, the inner workings of the circadian clock are probably the best understood. Humans, fruit flies, and even single-cell organisms track daily light/dark cycles.30 Why, you might ask, would a single-cell organism care about the time of day? One of the forces driving the evolution of circadian clocks in single-cell organisms was probably the harmful effects of ultraviolet radiation from the sun, which can cause mutations during the DNA replication necessary for cell division. Unicellular organisms, devoid of a protective organ such as the skin, are particularly vulnerable to light-induced replication errors. Thus, dividing at night provided a means to increase reproductive success, and anticipating the onset of darkness optimized replication by engaging the necessary cellular machinery before nightfall.
Decades of research have revealed that the circadian clock of single-cell organisms, plants, and animals alike relies on sophisticated biochemical feedback loops within cells: clock genes are transcribed and translated into proteins, and when the proteins involved in the circadian clock reach a critical concentration they inhibit the transcription of the very genes responsible for their synthesis to begin with. When the proteins degrade, transcription and protein synthesis can begin anew.31 Not coincidentally, this cycle takes approximately 24 hours. The details of this clock and the proteins involved vary from organism to organism, but the general strategy is essentially the same from single-cell organisms to plants and animals.
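The logic of such a loop can be captured in a toy simulation: a protein whose synthesis is repressed, after a delay, by its own accumulated concentration will rise and fall in a sustained rhythm. Every number below (the delay, the rates, the repression function) is invented for illustration and is not a measured biological value:

```python
# Toy delayed negative-feedback oscillator, loosely in the spirit of the
# circadian loop: high past protein levels repress new synthesis.
# All parameters are illustrative, not measured biology.

def simulate(hours=96.0, dt=0.1, delay=6.0, k_syn=1.0, k_deg=0.2):
    steps = int(hours / dt)
    lag = int(delay / dt)
    p = [0.0] * steps  # protein concentration over time
    for t in range(1, steps):
        past = p[t - lag] if t >= lag else 0.0
        synthesis = k_syn / (1.0 + past ** 4)  # repression by past concentration
        p[t] = p[t - 1] + dt * (synthesis - k_deg * p[t - 1])
    return p

trace = simulate()
# Count local maxima: repeated peaks indicate sustained oscillation.
peaks = [t for t in range(1, len(trace) - 1)
         if trace[t - 1] < trace[t] >= trace[t + 1]]
print(len(peaks) >= 2)  # True: the concentration rises and falls repeatedly
```

The key ingredient is the delay between synthesis and repression; without it the concentration would simply settle at a fixed level instead of cycling.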
What about on much shorter time scales? How do we anticipate the next ring of the telephone? How do people discriminate between the short (dot) and long (dash) audio tones used in Morse code? The neural mechanisms that allow animals and humans to tell time on the scale of milliseconds and seconds remain a mystery, but a number of hypotheses have been put forth. Over the past few decades, the dominant model of how the brain tells time bore a suspiciously close resemblance to man-made clocks. The general idea was that some neurons generate action potentials at a periodic rate, and that some other group of neurons counts these “ticks” of the neural pacemaker. Thus, if the pacemaker “ticked” every 100 milliseconds, then when 1 second elapsed the counter neurons would read “10.” As computational units go, some neurons are gifted pacemakers, which is fortunate, since things like breathing and heartbeats rely on the ability of neurons to keep a beat. Neurons, however, were not designed to count.

Timing seems to rely more on the brain’s internal dynamics than on its ability to tick and tock. While we often think of periodic events, such as the oscillations of a pendulum, when attempting to conjure up timing devices, many systems that change or evolve in time (that is, they have dynamics) can be used to tell time. Think of a pond into which someone tosses a pebble, creating a concentric pattern of ripples centered on the entry point. Assume you are handed two pictures of the pattern of ripples taken at different points in time. You will have no problem figuring out which picture was taken first based on the diameter of the ripples. Furthermore, with some experiments and calculations, you could figure out when both pictures were taken in relation to when the pebble was tossed in. So, even without a clock, the dynamics of the pond can be used to tell time.
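The pacemaker-counter scheme is simple enough to sketch in a few lines. The 100-millisecond tick period and the function names are illustrative, not a claim about actual neural parameters:

```python
# Sketch of the pacemaker-counter model of interval timing:
# a pacemaker "ticks" at a fixed period, a counter accumulates the ticks,
# and elapsed time is read out as the tick count.
TICK_PERIOD_MS = 100  # illustrative pacemaker period

def ticks_elapsed(duration_ms):
    """Number of pacemaker ticks counted during an interval."""
    return duration_ms // TICK_PERIOD_MS

print(ticks_elapsed(1000))  # a 1-second interval: the counter reads 10
```

Note what the model quietly assumes: a reliable periodic source and a mechanism that can count, and it is the counting step that neurons are poorly suited for.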
Networks of neurons are complex dynamic systems that can also tell time. One hypothesis is that each point in time may be encoded by tracking which population of neurons is active: a particular pattern of neuronal activity would initially be triggered at “time zero” and then evolve through a reproducible sequence of patterns. We can think of this as a population clock.32 Imagine looking at the windows of a skyscraper at night; for each window you can see whether the light in the room is on or off. Now let’s assume that for some reason—perhaps because the person in each room has a unique work schedule—the same pattern is repeated every day. In one window the light goes on immediately at sunset; in another, an hour after sunset; in another the light goes on at sunset, off after an hour, and then back on three hours later. If there were 100 windows, we could write down a string of binary digits representing the “state” of the building at each point in time: 1-0-1… at sunset, 1-1-0… one hour after sunset, and so forth—each digit representing whether the light in a given window was on (1) or off (0). Even though the building was not designed to be a clock, you can see that we could use it to tell time by the pattern of lights in the windows.
In this analogy, each window is a neuron that could be “on” (firing action potentials) or “off” (silent). The key for this system to work is that the pattern must be reproducible. Why would a network of neurons fire in a reproducible pattern again and again? Because that is precisely what networks of neurons do well! The behavior of a neuron is largely determined by what the neurons that connect to it were doing a moment before, and what those neurons were doing is in turn determined by what other neurons did two moments ago.33 In this fashion, given the same initial pattern of neural activity, the entire sequence of patterns is generated time and time again. A number of studies have recorded from individual neurons or groups of neurons while animals were performing a specific task, and the results show that, in principle, these neurons could be used to tell time over the course of seconds.34
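The skyscraper analogy amounts to a lookup table from activity patterns to times. A minimal sketch, with made-up three-window patterns standing in for the activity of three neurons:

```python
# Population clock sketch: each reproducible pattern of active (1) and
# silent (0) units labels a unique point in time.
pattern_at_time = {
    0: (1, 0, 1),  # state at sunset
    1: (1, 1, 0),  # one hour after sunset
    2: (0, 1, 1),  # two hours after sunset
}

# Invert the mapping: observing a pattern tells you the time, provided
# each pattern occurs at exactly one point in the sequence.
time_of_pattern = {pattern: t for t, pattern in pattern_at_time.items()}

print(time_of_pattern[(1, 1, 0)])  # prints 1: one hour after sunset
```

The reproducibility requirement in the text corresponds to the "exactly one point" condition in the comment: if the same pattern recurred at two different times, the readout would be ambiguous.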
A related notion is that a network of active neurons changes in time as a result of the interaction between incoming stimuli and the internal state of the network. Let’s return to the pond analogy. If we drop the same pebble into a placid pond over and over again, a similar dynamic pattern of ripples will be observed each time. But if a second pebble is dropped in at the same point shortly after the first one, a different pattern of ripples emerges. The pattern produced by the second pebble is the result of its interaction with the state (the amplitude, number, and spacing of the little waves) of the pond at the moment it was thrown in. By looking at pictures of the pattern of ripples when the second pebble was thrown in, we could determine the interval between when the pebbles were dropped. A critical aspect of this scenario is that time is encoded in a “nonlinear” fashion, and thus does not play by normal clock rules. There are no ticks that allow for a convenient linear measure of time, in which four ticks means that twice as much time has elapsed as two ticks. Rather, like the interacting ripples on the pond, the brain encodes time in complex patterns of neural activity. The fact remains, however, that we will have to await future advances before we understand how the brain tells time in the range of milliseconds and seconds.
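For contrast, the single-pebble pond described earlier is a linear clock: ripples spread at a roughly constant speed, so the radius of a ring directly encodes how long ago the pebble was dropped. The wave speed and radii below are made-up numbers for illustration:

```python
# Telling time from pond dynamics: a ripple's radius grows linearly,
# radius = wave_speed * time_since_drop, so radius encodes elapsed time.
WAVE_SPEED_M_S = 0.25  # illustrative ripple speed

def time_since_drop_s(radius_m):
    """Infer how long ago the pebble was dropped from a ripple's radius."""
    return radius_m / WAVE_SPEED_M_S

# Two snapshots of the same ripple: the larger radius is the later photo,
# and each radius dates its photo relative to the moment of the drop.
early, late = time_since_drop_s(0.5), time_since_drop_s(1.5)
print(early, late)  # prints 2.0 6.0: the second photo was taken 4 s later
```

The two-pebble case breaks exactly this convenience: once patterns interact, there is no single quantity that grows in proportion to elapsed time, which is what the text means by a nonlinear encoding.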
Neurons initially evolved to allow simple creatures to detect possible food sources and move toward them, and to detect potential hazards and move away from them. While these actions took place in time, they did not require organisms to tell time. So neurons in their primordial form were not designed to tell time. But as the evolutionary arms race progressed, the ability to react at the appropriate time—to predict when other creatures will be where, anticipate upcoming events, and eventually communicate using signals that change in time—provided an invaluable selective advantage. Little by little, different adaptations and strategies emerged that allowed networks of neurons to time events ranging from less than a millisecond to hours. However, as with all of evolution’s designs, the ability to tell time evolved in a haphazard manner; many features were simply absent or added on later as a hack. Consider the circadian clock. Over the 3 billion years that some sort of creature has inhabited the earth, it is unlikely that any of them ever traveled halfway across the planet in a matter of hours—until the twentieth century. There was never any evolutionary pressure to build a circadian clock in a manner that allowed it to be rapidly reset. The consequence of this is jet lag. As any intercontinental traveler knows, sleep patterns and general mental well-being are impaired for a few days after a trip from the United States to Japan; unlike the watches on our wrists, our internal circadian clock cannot be reset on command.
As a consequence of evolution’s inherently unsystematic design process, we have an amalgam of different biological timekeeping devices, each specialized for a given time scale. The diverse and distinct strategies that the brain uses to tell time allow humans and animals to get many jobs done, including the ability to understand speech and Morse code, determine whether the red light is taking suspiciously long to change to green, or anticipate that a boring lecture must be about to wrap up. The strategies the brain uses to tell time also lead to a number of brain bugs, including the subjective contraction and dilation of time, illusions that can invert the actual order of sensory stimuli, mental blind spots caused by built-in assumptions about the appropriate delay between cause and effect, and difficulties in appropriately weighing the trade-off between the short- and long-term consequences of our actions. This last bug is by far the one that has the most dramatic impact on our lives.