by Martin Erwig
Doc Brown: Obviously the time continuum has been disrupted, creating a new temporal event sequence resulting in this alternate reality.
Marty: English, Doc!
All this jumping back and forth in time sounds quite confusing, and Doc Brown has to explain some of the consequences of a planned trip with the time machine to Marty using a drawing on a blackboard, similar to the diagram shown on the right.
Some of the difficulties in making sense of these and other time travel stories are rooted in our experience of reality as a single stream of events with one past and one future. The possibility of time travel challenges this view and presents the potential for multiple alternative realities. Yet, time travel and alternative realities are not completely foreign to us. First, we all actually travel through time, albeit in a very limited fashion. As Carl Sagan put it, “We’re all time travelers—at a rate of exactly one second per second.”1 We also contemplate alternative realities when we are planning ahead and when we reminisce about events in the past, although we never really experience alternative realities.
Time travel is a fascinating topic, but what does it have to do with computation and, in particular, with recursion? As illustrated by the stories and daily activities described so far, a computation corresponds to a sequence of actions taken by a computer (human, machine, or other actor). Therefore, traveling into the past to perform actions corresponds to inserting actions into the stream of actions that lead to the present state of the world. The purpose is to bring the state of the world into a desired shape that allows specific actions to be taken at present.
For example, when Marty, Jennifer, and Doc return from 2015 to 1985, Marty expects to go with Jennifer on a long-planned camping trip. However, the violent state in which they find the world doesn’t allow that, and thus they travel into the past and change it by taking the sports almanac away from Biff so that the sequence of actions leading to the present will generate the state they expect. But time travel is not limited to a single step into the past. Right after Marty and Doc Brown have finished their mission of retrieving the sports almanac from Biff, the time machine is struck by lightning and transports Doc into 1885. Marty later finds out that Doc was killed a few days after he had arrived in 1885 by the outlaw Buford “Mad Dog” Tannen. He therefore follows Doc back to 1885, using the time machine that Doc had hidden in an old gold mine in 1885 for Marty to recover in 1955. In 1885 he helps Doc avoid being shot by Buford Tannen and ultimately manages to return to 1985.
Recursive algorithms do something very similar and can be understood as inserting instructions into a stream of instructions. Each step of an algorithm is either some basic instruction, or it can be an instruction to execute another algorithm, which effectively causes the instructions of that algorithm to be inserted into the current sequence of instructions. In the case of recursion, this means that the instructions of the current algorithm itself are inserted where the algorithm is called.
Since the execution of each algorithmic step produces some intermediate value or has some other effect, a recursive execution of an algorithm makes its value or effect available at the place where it is called. In other words, a recursive call of an algorithm can be viewed as an instruction to travel back in time to start the computation that makes the required result available now.
As a first example, let’s describe Marty’s actions by a recursive algorithm. Specifically, we can define an algorithm ToDo through several equations that tell Marty what to do at what time:2
ToDo(1985) = ToDo(1955); go camping
ToDo(1955) = retrieve sports almanac; ToDo(1885); return to 1985
ToDo(1885) = help Doc avoid Buford Tannen; return to 1985
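The three equations can also be rendered as a recursive function. The following Python sketch is a hypothetical translation (the function name and action strings are ours, not from the book): each case returns the list of actions performed, in order.

```python
def to_do(year):
    """A sketch of the ToDo algorithm: each equation becomes one case,
    and each case returns the list of actions performed, in order."""
    if year == 1985:
        return to_do(1955) + ["go camping"]          # ToDo(1955); go camping
    if year == 1955:
        return (["retrieve sports almanac"]          # retrieve almanac;
                + to_do(1885)                        # ToDo(1885);
                + ["return to 1985"])                # return to 1985
    if year == 1885:
        return ["help Doc avoid Buford Tannen", "return to 1985"]

print(to_do(1985))
```

Calling to_do(1985) carries out everything required by to_do(1955), which in turn carries out to_do(1885), before "go camping" ever happens.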
We could extend this algorithm to include the time travel to 2015, but the three shown cases suffice to illustrate a number of points about recursion. First, the equation for ToDo(1985) reveals that the actions to be taken in 1985 require actions to be taken in 1955, since in order to be able to go camping with Jennifer the world needs to be in a different state. This requirement is expressed using recursion in the algorithm ToDo. One of the steps in the algorithm ToDo is an execution of ToDo itself. Second, the recursive execution ToDo(1955) uses a different argument than the equation for ToDo(1985), of which it is a part. This means that the recursion does not lead to an exact replication (unlike the picture of the room containing a TV). This is significant for the termination behavior of computations.
Let’s consider how the computation unfolds when the algorithm is executed with the argument 1985. The first step of the computation ToDo(1985) is the execution of ToDo(1955), which means that before Marty can go camping, he has to travel back in time to 1955 to retrieve the sports almanac from Biff. But before he can return to 1985, he has to take the steps described by ToDo(1885), that is, he has to travel further back in time, to 1885, to save Doc Brown. After that he returns to 1985, when he finally can go on the long-planned camping trip with his girlfriend.
When we examine the third equation carefully, we observe something peculiar. Instead of returning to 1955, the time Marty came from when the computation ToDo(1885) was started, the algorithm returns directly to 1985. This is what actually happens in the third Back to the Future movie, and it makes a lot of sense: jumping back to 1955, only to then jump directly back to 1985, would not be very useful. (Most people prefer direct flights over layovers.)
However, this behavior is not how recursion typically works. When a recursive computation is completed, it automatically returns to the point where it left off, and the computation then continues right after it. In this example, that would have meant the jump to 1885 returning to 1955. The reason for this behavior is that a recursive computation generally doesn't know how the overall computation will continue, so the safe thing to do is to return to the point of departure in order not to miss anything important. In this particular case, since the next step is simply to return to 1985, the direct jump is fine; whether there are two consecutive jumps or just one is a matter of efficiency. In Back to the Future this really matters, because the flux capacitor that makes time travel possible in this story requires a lot of energy for each time jump. Doc Brown laments in 1955 about his car design from 1985:
How could I have been so careless? 1.21 gigawatts! Tom [Thomas Edison], how am I gonna generate that kind of power? It can’t be done, it can’t!
No Matter When
To use Doc Brown’s time machine one needs to provide the exact target date and time of the time travel. However, since the purpose of time travel is to change the causal chain of events, the exact date/time doesn’t really matter as long as one arrives before the event that is to be changed. Assuming that we have a table containing the dates/times for all the events we are interested in changing, we could express the ToDo algorithm differently, employing intended causal changes instead of explicit time jumps. In fact, the algorithmic style in ToDo of using direct jumps is quite old. It is how low-level languages for programming microprocessors operate, namely, by labeling pieces of code and then using jump instructions to move between pieces of code. All the control structures discussed in chapter 10 can be realized using such jumps. However, programs that employ jumps are difficult to understand and reason about, especially if the jumps are used in a haphazard way, in which case the resulting code is often called spaghetti code. Jumps have therefore been mostly retired as a way of expressing algorithms, even though they are still used in the low-level representation of code that runs on computer hardware. Instead, algorithms employ conditionals, loops, and recursion.
Since explicit jumps are a deprecated control structure, how can we express the ToDo algorithm without them? In this case, instead of using specific years for labeling sequences of actions, we can employ names for goals the algorithm is supposed to achieve. When an algorithm is called with a particular goal, we can find the equation for that goal among the equations. Moreover, once a recursive execution is finished, we automatically return to the
point where it was started. While the exact years are very important for the movie to establish different cultural contexts, they do not matter for establishing the correct sequence of steps, which only depends on a relative ordering that respects the causal dependencies of the actions. We can therefore replace the algorithm ToDo with a new algorithm Goal, defined as follows:
Goal(live now) = Goal(restore world); go camping
Goal(restore world) = retrieve sports almanac; Goal(save Doc)
Goal(save Doc) = help Doc avoid Buford Tannen
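The same kind of sketch works for Goal, with goal names instead of years (again a hypothetical Python rendering with names of our choosing):

```python
def goal(g):
    """A sketch of the Goal algorithm. Each recursive call returns
    automatically to its caller, so no explicit 'return to 1985'
    actions are needed."""
    if g == "live now":
        return goal("restore world") + ["go camping"]
    if g == "restore world":
        return ["retrieve sports almanac"] + goal("save Doc")
    if g == "save Doc":
        return ["help Doc avoid Buford Tannen"]

print(goal("live now"))
```

Note that the actions come out in the same order as for ToDo; the return to the present is implicit in how each call hands its result back to its caller.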
The computation for this algorithm unfolds in the same way as for the algorithm ToDo, except that the times are not made explicit and that the return to 1985 happens in two steps.
Just in Time
The computation that results from executing the recursive ToDo or Goal algorithm affects the state of the world, but that effect is not directly visible by tracing the steps of the algorithms as they unfold. To illustrate the connection between recursive execution and computation more clearly, let us look at another example and consider a simple algorithm for counting the elements in a list. As a concrete analogy, think of counting the cards in a deck.
This algorithm has to distinguish two cases. First, if the deck is empty, it contains 0 cards. Second, if the deck is not empty, the number of cards can be determined by adding 1 to the number of cards in the deck without the card on top. If we represent the deck of cards as a list data structure, the algorithm has to correspondingly distinguish between an empty and a nonempty list. In the latter case, it adds 1 to the result of applying the algorithm to the list’s tail (which is the list without its first element). Since any recursive call to a nonempty list will itself lead to the addition of 1, the algorithm will add as many 1s as there are elements in the list. The algorithm Count can be described by the following two equations that deal with the cases empty list and nonempty list:
Count( ) = 0
Count(x→rest) = Count(rest) + 1
The two cases are distinguished by displaying different forms of arguments for Count on the left sides of the equations. In the first equation a blank space indicates that this equation applies to empty lists that don’t contain any elements. In the second equation the pattern x→rest represents a nonempty list, where x stands for the first element and rest stands for the tail of the list. The definition in this case counts the elements in the list’s tail by Count(rest) and then adds 1 to the result. This way of selecting between different cases for an algorithm is called pattern matching. Pattern matching can often replace the use of conditionals and leads to a clear separation of the different cases to be considered by the algorithm. Pattern matching also provides direct access to parts of the data structure that the algorithm is working on, which sometimes can make definitions shorter. In this case x refers to the first element of the list, but it is not used in the definition on the right side of the equation. But since rest names the tail of the list, it can be employed as the argument for the recursive call to Count. Another nice benefit of pattern matching is that it clearly separates the recursive from the nonrecursive part in the definition.
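In a Python sketch of Count (hypothetical, with names of our choosing), sequence unpacking can play the role of the pattern x→rest:

```python
def count(lst):
    """Count the elements of a list, mirroring the two equations."""
    if not lst:             # first equation:  Count( ) = 0
        return 0
    x, *rest = lst          # pattern x→rest: head and tail of the list
    return count(rest) + 1  # second equation: Count(x→rest) = Count(rest) + 1

print(count(["B", "W", "H"]))
```

As in the equations, x is bound but never used; only rest matters for the recursive call.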
The recursive equation for Count can be understood as the following hypothetical statement: If we knew the number of elements in the list’s tail, the total number of elements could be obtained by simply adding 1 to that number. Imagine somebody had already counted the number of cards in the deck without the top card and had left a sticky note with the number on it. In that case, the total number of cards could be determined by simply adding 1 to the number on the sticky note.
However, since this information is not available, we have to compute it—recursively—by applying Count to the tail of the list, rest. And here is the connection to time travel. Consider the times when the different computation steps occur. If we want to add 1 now, the computation of Count(rest) must already be completed and therefore must have been started sometime in the past. Similarly, if we want to compute the total number of cards immediately, we could have asked somebody else to count the number of cards in the deck without the top card a few minutes ago to have the result available now. Thus we can regard a recursive call of an algorithm as a trip into the past to create computational steps that will be completed in the present so that the remaining operations can be carried out immediately.
To see an example computation of a recursive algorithm, let’s count the different things Marty takes with him to 1885, which are cowboy boots (B), a pair of walkie-talkies (W), and the hoverboard (H). When we apply Count to the list B→W→H, we have to use the second equation, since the list is not empty. To perform this application, we have to match the pattern x→rest against the list, which causes x to refer to B, and rest to refer to W→H. The defining equation for Count then instructs to add 1 to the recursive call of Count to rest:
Count(B→W→H) = Count(W→H) + 1
In order to carry out this addition, we need the result of Count(W→H), which we know to be 2, but the corresponding computation with Count must have started earlier if we want to be able to use the result now.
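The full unfolding can be made visible by instrumenting a Count sketch to print each equation as it is applied (a hypothetical helper of our own devising):

```python
def count_trace(lst):
    """Count the list while printing each expansion step."""
    if not lst:
        print("Count( ) = 0")
        return 0
    x, *rest = lst
    # Show the equation being applied to the current list.
    print(f"Count({'→'.join(lst)}) = Count({'→'.join(rest) or ' '}) + 1")
    return count_trace(rest) + 1

count_trace(["B", "W", "H"])
```

Running this prints the expansions Count(B→W→H) = Count(W→H) + 1, Count(W→H) = Count(H) + 1, Count(H) = Count( ) + 1, and Count( ) = 0, and returns the result 3.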
In order to get a more precise understanding of the timing, we assume that a basic computation step, such as adding two numbers, takes one time unit. We can then talk about the duration of a computation in terms of such time units. The start and completion of a computation relative to the present can be expressed as a distance of time units. If we designate the present as time 0, a basic step taken now will be completed at time +1. Similarly, a computation that takes two steps and completes now must have been started at time −2.
It is not obvious how long a recursive computation will take because it very much depends on how often a recursive step occurs, and this depends in general on the input. In the example Count, the number of recursions, and thus the runtime, depends on the length of the list. As with algorithms that use loops, the runtime for recursive algorithms can be described only as a function of the size of the input. This observation raises a question for the travel-into-the-past view of recursion: How far back in time does the recursive call of Count have to travel to complete its work just in time and deliver its result for the addition? It seems that we have to travel far enough back into the past to have enough time for all the required expansions and additions, but since we don’t know the length of the list, we don’t know how far back we should go.
Fortunately, we don’t have to know the size of the input to employ time travel effectively. The key observation is that it is sufficient to start a recursive computation only one time unit into the past, because no matter how long a recursive computation takes, any additional time that would be required to execute further recursive computations is gained by sending the corresponding recursive calls even further into the past. How this works is illustrated in figure 12.1.
As shown, the execution of Count(B→W→H) generates the computation of Count(W→H) + 1. Once the recursive computation is completed, Count takes exactly one step to add 1 to the result of the recursive call, and we have the final result 3 available at time +1. To have the result of a recursive call available at some specific time, it is sufficient to start the computation one time unit earlier. For example, to obtain the value of Count(W→H) in the present, that is, at time 0, we need to send the computation only one time unit into the past. As shown in the figure, this computation results in the expression 1 + 1 at time −1, which then evaluates in one step to 2 at time 0, exactly when it’s needed. And how is it possible for Count(W→H) to produce 1 + 1 instantaneously? This is achieved, again, by sending the involved recursive call Count(H) one step into the past, that is, to time −2, where it produces the expression 0 + 1, which also evaluates in one step and makes its result 1 available one time unit later, at time −1. The 0 in the expression is the result of the call Count( ) that was sent back to time −3. Altogether we can see that by repeatedly starting recursive computations in the past, we can finish the original computation only one time unit into the future. This is true regardless of how long the list is; a longer list would simply generate more time travel further back in time.
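This schedule can itself be computed. The following sketch (hypothetical names; the timing model is the one described above) returns, along with the count, the start time of each call relative to the present, launching every recursive call one time unit before its caller needs the result:

```python
def count_scheduled(lst, start=0):
    """Return (count, log), where log records each call's list argument
    together with the time at which that call starts. Every recursive
    call is sent one time unit further into the past than its caller."""
    log = [("→".join(lst) or " ", start)]
    if not lst:
        return 0, log
    n, sublog = count_scheduled(lst[1:], start - 1)
    return n + 1, log + sublog

result, log = count_scheduled(["B", "W", "H"])
print(result, log)
```

For the list B→W→H this reproduces the schedule from the text: the calls on W→H, H, and the empty list start at times −1, −2, and −3, the original call starts at time 0, and its final addition completes at time +1.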
Figure 12.1 Recursively counting the number of elements in a list by traveling into the past. The execution of the algorithm Count on a nonempty list triggers a computation that adds 1 to the result of the computation of Count for the list’s tail. Executing the computation of the tail one step in the past makes its result available in the present, and the result of the addition will be available one step in the future. Executing Count on a nonempty list in the past leads to a further execution of Count even further back in the past.