Overcomplicated

by Samuel Arbesman


  This is not a problem only for those in tense situations and in scenarios with lots of complexity, such as directing air traffic or reacting calmly in a battle. It has become a problem for all of us, individually and collectively, in coping with our human-made technological systems. We have lost the bubble.

  In the Entanglement, we lose the bubble in two related ways: we are unable to fathom the structure and dynamics of huge and complex systems themselves—the way the different pieces interact as a whole; and we are unable to make much headway into the vast quantity of knowledge and the specialized expertise it would take to fully understand how these systems operate. To see more clearly why we think about the world in ways that are ill-suited to complex technological systems, we can again look at language.

  When Our Brains Fall Short

  Recursion is a computer science term that means, essentially, self-reference: it describes a section of computer code that refers back to itself. It’s also spawned some programming humor. Search for the term recursion on a search engine and it might ask you, “Did you mean recursion?” This is funny to a distinct subset of humanity.
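
  To see what self-reference looks like in code, here is a minimal sketch in Python (a generic illustration, not any particular program mentioned in the text) of a function that refers back to itself:

    def countdown(n):
        """Count down from n to 1 by having the function call itself."""
        if n <= 0:            # base case: stop the self-reference
            return
        print(n)
        countdown(n - 1)      # recursive case: the function refers back to itself

    countdown(3)              # prints 3, then 2, then 1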

  Recursion is also built into the fabric of how we speak. Language has a recursive capability; in fact, it is infinitely recursive, at least in theory. You can say, “He said that the dog is brown,” as well as “She thought he said that the dog is brown,” the second sentence embedding the first within it. And if you’re more daring, you could even utter, “I remembered that she thought he said that the dog is brown.” Depending on the structure of the sentence, this embedding of sentences within sentences can be done at the beginning of a sentence, in the middle, or at the end, over and over.

  Its recursive nature makes language infinitely rich. Imagine a relatively small language with 1,000 verbs, 10,000 nouns, and a rule that the only sentences one can make are of the form noun verb noun. The linguistic capacity of this language is huge: you can make up to 10,000 × 1,000 × 10,000 sentences, which is 100 billion sentences. If you spoke a sentence every ten seconds, it would take more than thirty thousand years to exhaust all possible sentences. What’s more, the example above is a particularly impoverished language. To speak every possible sentence in a language even slightly more complicated—with a greater number of words or more complex sentence structures—would take amounts of time that might better be described as geological eons.
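
  The arithmetic behind these figures is easy to verify; a quick sketch, using the numbers above:

    nouns, verbs = 10_000, 1_000

    sentences = nouns * verbs * nouns        # noun-verb-noun combinations
    print(sentences)                         # 100,000,000,000 -- 100 billion

    seconds = sentences * 10                 # one sentence every ten seconds
    years = seconds / (60 * 60 * 24 * 365)
    print(round(years))                      # roughly 31,710 years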

  While these numbers are inconceivably large, they are still finite. To proceed from mind-boggling finitude to true infinity, we must use recursion. And once you introduce recursion—allowing an arbitrarily large number of clauses to be embedded within one another—a language becomes theoretically infinite in its richness.

  But in practice this isn’t quite true. It’s silly to say that language allows for an arbitrarily large number of embedded clauses: that may be technically feasible according to the rules of grammar, but our brains simply can’t parse that much recursion. As much as we would like our languages to be infinite and variegated, we can’t handle sentences with a recursion depth of much more than two or three.

  Here are some sentences from the linguist Steven Pinker that are not only hard to understand but don’t even look syntactically correct:

  The dog the stick the fire burned beat bit the cat.

  The rapidity that the motion that the wing that the hummingbird has has has is remarkable.

  Each of these has only a small amount of nesting. For example, the first sentence means that the dog—the one that was beaten by a burnt stick—bit the cat. It is constructed by modifying “the dog”—of “The dog bit the cat”—with a description of the stick. This sentence has only two levels of nesting. If you were to go up to ten levels, the sentence would be effectively impossible to make sense of. And if we can’t handle ten, we certainly can’t deal with numbers that scrape the ceiling of infinity.

  Humans can do some types of linguistic processing, such as translation, better than computers can. But for parsing sentences, computers have numerous advantages. While human cognitive processing is limited by our working memory, computers can use large memory stores to put each portion of the sentence in its place, and can then construct the tree of the sentence, rendering it meaningful. For this reason, computers can easily parse sentences that flummox the human mind. For example, the sentence “This is the cheese that the rat that the cat that the dog chased bit ate,” although strange and impenetrable to human ears (and eyes), can be parsed by a machine.

  There are even computer programs that have these syntactic structures built in, allowing for the creation of quite large sentences. Instead of processing these complicated sentences, they generate random text that appears realistic in the style of certain authors. Consider Kant Generator, which can make such sentences as “Since knowledge of the phenomena is a priori, the reader should be careful to observe that, so far as regards necessity and the things in themselves, the discipline of human reason, so far as I know, can be treated like our judgments.” We are far more complicated creatures than this tiny computer program, but we have a lot of trouble breaking such things down and determining whether or not they’re nonsensical.
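
  A toy version of the idea, far simpler than the actual Kant Generator and invented here purely for illustration, picks its way at random through a small recursive grammar:

    import random

    # A tiny recursive grammar: a SENTENCE can embed another SENTENCE inside itself.
    GRAMMAR = {
        "SENTENCE": [["NP", "VP"]],
        "NP":       [["the", "N"], ["the", "N", "that", "SENTENCE"]],
        "VP":       [["V", "NP"], ["V"]],
        "N":        [["dog"], ["cat"], ["reader"], ["discipline"]],
        "V":        [["bit"], ["observed"], ["chased"]],
    }

    def expand(symbol):
        """Recursively expand a grammar symbol into a list of words."""
        if symbol not in GRAMMAR:          # a plain word: return it as-is
            return [symbol]
        production = random.choice(GRAMMAR[symbol])
        words = []
        for part in production:
            words.extend(expand(part))
        return words

    print(" ".join(expand("SENTENCE")))    # e.g. "the dog that the cat bit chased the reader"

  Because a noun phrase can contain a whole sentence, the output can nest clauses within clauses, exactly the kind of embedding described above.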

  In a similar vein, there are structures known as garden path sentences, such as “The complex houses married and single soldiers and their families.” These are intriguing sentences that begin one way but end up having a different grammatical structure and meaning than we were initially led to expect. We fill in the expected meaning and are surprised and often momentarily confused when the sentence takes a different route.

  Of course, recursion and other grammatical tricks are far from the only such demonstrations of human cognitive limits. There’s a whole cottage industry of tasks that we’re not particularly good at that machines handle with ease. I’m not talking about perceptual challenges like optical illusions, but bona fide examples of our mental-processing limits. For example, have you ever wondered how long a string of numbers you can memorize at once? For most of us, it’s about seven. It’s hard to memorize much more than a telephone number, minus the area code.

  Or take counting objects, or even dots on a screen. We can definitely count lots and lots of objects and dots, but how many can you count in one glance, perceiving that number immediately? It turns out that this number is very small. When you’re looking at a group of dots, your brain ends up grouping them into multiple smaller groups, often of about three or four. Visually, most of us can immediately perceive only four or so items. This ability to perceive at once is called subitizing, and it’s a weird quirk of our brains that we can do this effectively only for a small number. Just as we have trouble reading long and winding sentences, we have trouble counting more than four objects at once.

  In other comparisons with machines, we are also pretty pathetic. It takes about eight seconds to transfer a piece of information into our long-term memory. In less than that amount of time, you can download War and Peace to your laptop. And unlike some simple computers, we’re pitiful at multitasking. Our neurons are more than a million times slower than a computer circuit, and, according to one estimate, our long-term memory can’t hold much more than one of my family’s old Macintoshes from the 1980s could.

  Are our brains capable of reaching beyond these rather meager limits? Research in human cognition is not encouraging on the matter. Much as computers can be tweaked to work faster than intended—this is known as overclocking—we can sometimes soup up our mental engines as well, using pharmaceuticals. But when we study these attempts to “overclock” our brains, we discover trade-offs. Just as overclocked computers can overheat, our brains can also suffer from being pushed beyond their limits. It seems that our brains have been delicately optimized by evolution, and attempts to tinker with them can create serious problems.

  You can see examples of such trade-offs if you look at those rare individuals who have unlimited memories—they remember essentially every fact they encounter and every occurrence they witness. But they are not superhumans. In fact, they end up being hampered by such issues as trouble recognizing faces. Because their memory is so detail-oriented, anytime there is a change in how a person appears, it is difficult to recognize that person as the same one. In “Funes the Memorious,” a short story by Jorge Luis Borges, the title character is burdened by a perfect and complete memory. Every change and detail generates a new memory. Just as it was for the fictional Funes, in real life an unbelievably good memory seems to cause problems with skills such as abstraction, leaving one burdened with huge amounts of unnecessary information.

  And just as there are outliers in cognitive processing—such as these individuals with prodigious memories and those who can calculate huge arithmetical operations in their heads—we also see extremes of insight in understanding something complex. For example, take the mathematician Srinivasa Ramanujan. A self-taught genius who worked during the early part of the twentieth century, Ramanujan was not your average mathematician who tried to solve problems through trial and error and occasional flashes of brilliance. Instead, equations seemed to leap fully formed from his brain, often mind-bogglingly complex and stunningly correct (though some were also wrong).

  The Ramanujan of technology might be Steve Wozniak. Wozniak programmed the first Apple computer and was responsible for every aspect of the Apple II. As the programmer and novelist Vikram Chandra notes, “Every piece and bit and byte of that computer was done by Woz, and not one bug has ever been found. . . . Woz did both hardware and software. Woz created a programming language in machine code. Woz is hardcore.” Wozniak was on a level of technological understanding that few can reach.

  We can even see the extremes of our brain’s capacity—as well as how its limits can be stretched—in the way London cabdrivers acquire and use what is known as The Knowledge. The Knowledge—a wonderfully eldritch term—is the full complement of all details of the metropolitan London area: the 25,000 roads and their interconnections, as well as parks, landmarks, statues, restaurants, hotels, and every other conceivable detail that a cabdriver must know in order to accurately and efficiently transport a passenger from any one location to another. Learning The Knowledge, and being certified as a cabdriver, can take several years of intense memorization and exploration of London. The result, though, is that the brains of these cabdrivers visibly change: the posterior hippocampus, a region important for spatial memory, increases in size.

  But even these outliers, impressive as they are, have limits. Ramanujan still got things wrong, and Wozniak would certainly also run up against cognitive limits, recursive or otherwise. And London cabdrivers would be hard-pressed to contain the entire Earth’s road network in their minds.

  Besides our limited memory storage and retrieval capacities and our ability to hold only so much in our conscious minds at once, we have difficulty grasping the implications of interconnections within a system. Specifically, we are confounded by nonlinear systems. When something changes in a linear way—a small change creating a small difference, a bigger change yielding a bigger difference—we are essentially tasked with extrapolating a straight line. Our brains have little difficulty doing this, because a linear system’s inputs are directly proportional to its outputs. But when a small cause caroms through a large interconnected system and results in a big effect—so that the system is changed in a disproportionate way—we are unprepared for this highly nonlinear result.

  A nonlinear system’s behavior is modulated by feedback and the magnification of inputs (or even the opposite: a big value giving you a tiny effect), making it much more difficult to relate the inputs to the outputs. We are no longer extrapolating a straight line; the variables interact in swooping and complicated curves, over which our brains stumble. These shortcomings cause us to have difficulty grasping complex systems, even those we have built ourselves.
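
  A standard textbook illustration of that disproportion (the logistic map, chosen here only as an example, not something discussed in the text) shows how a change of one part in a million in the input can produce a completely different outcome once feedback compounds it:

    def logistic(x, steps=50, r=4.0):
        """Iterate the nonlinear feedback rule x -> r * x * (1 - x)."""
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    print(logistic(0.200000))    # one starting value...
    print(logistic(0.200001))    # ...and one a millionth away land far apart

  A linear system, by contrast, would keep the two outputs a millionth apart.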

  Too Complex to Handle

  Philosophy is a large field, with specialties ranging from political philosophy and ethics to the philosophy of science and technology. Within the philosophy of technology, there is a growing interest in the philosophical implications of software: How should we think about our computational creations? The philosophers John Symons and Jack Horner at the University of Kansas have examined how our construction of software—one type of technological system—can yield incomprehensibility almost immediately.

  The simplest reason for this is branch points. If you have a piece of technology that does one thing if condition A is true but something else if condition B is true, this is considered a branch point, or an if-then statement, in the parlance of programmers. For example, a computer program might add ten to a number if that number is odd, but only five if that number is even.
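
  Written out as code, that example is a single branch point (a minimal sketch):

    def adjust(number):
        # One branch point: the program takes one of two possible paths.
        if number % 2 == 1:        # the number is odd
            return number + 10
        else:                      # the number is even
            return number + 5

    print(adjust(3))               # 13
    print(adjust(4))               # 9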

  As Symons and Horner note, once a computer program incorporates these branch points, the number of potential paths the program can take when run on a machine begins to multiply. Using some reasonably conservative calculations, they show that a program of only 1,000 lines (relatively short for even pretty simple programs, and much shorter than most programs used in “the wild”) already has 10³⁰—more than a trillion trillion—potential pathways that can be traversed, assuming that branch points occur every so often in the computer code. To check all possible paths—understanding the implications and soundness of each one—is not only infeasible, it is impossible. This system is not simply difficult to understand; it is effectively hopeless to fully understand, in all its details, within the age of the universe. In other words, the vast majority of computer programs will never be thoroughly comprehended by any human being. This includes programs on your laptop, computer code in your kitchen appliances, and software that determines how airplanes are directed around the globe.
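
  The arithmetic driving that explosion is simple. A rough sketch, assuming for illustration one two-way branch every ten lines (a stand-in spacing, not Symons and Horner’s exact figures):

    lines = 1_000
    branch_every = 10                   # illustrative assumption: a branch every ten lines
    branches = lines // branch_every    # 100 independent branch points

    paths = 2 ** branches               # each branch doubles the number of possible paths
    print(paths)                        # about 1.27e30 -- more than a trillion trillion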

  Of course, computer programs can still be understood, at least on some level, without manually traversing every potential path. That is one of the features of abstraction, as discussed in the first chapter. Abstraction, combined with various rigorous methods of testing, reduction of errors, and software “hygiene”—such as not using GOTO or certain types of variables—can reduce our lack of understanding. But we will never be truly sure that we know all implications and potential situations. Users, whether scientists working with a model of the world, technicians operating a large machine, or drivers of state-of-the-art automobiles, must be satisfied with incomplete understanding as part of living in the Entanglement.

  We also see the effect of large numbers of components and interconnections when we encounter analyses based on large datasets, where huge quantities of data points are fed into algorithms that provide us with predictive power, but sometimes at the expense of human meaning. Google recently turned powerful computational methods on itself, seeking to boost the energy efficiency of its data centers by feeding a slew of their properties into a computer model, including everything from the total number of condenser water pumps running to outdoor wind speed. To quote Google’s blog, “In a dynamic environment like a data center, it can be difficult for humans to see how all of the variables . . . interact with each other. One thing computers are good at is seeing the underlying story in the data, so [a data center engineer] took the information we gather in the course of our daily operations and ran it through a model to help make sense of complex interactions that his team—being mere mortals—may not otherwise have noticed.” (Emphasis mine.)

  It’s very difficult to follow the mathematical details of these kinds of massive technological models. But as Douglas Heaven, the chief technology editor at New Scientist magazine, has written, even if we are able to, it wouldn’t necessarily be meaningful to us. A choice or an answer produced by such a piece of software is not arrived at the way we would, and often cannot be understood in terms of a statable general rule or idea. Rather than a straightforward path of logic, the decision is based on an enormously complex set of calculations. We throw in huge amounts of information and data and let the massive piece of software churn something out. We get an answer, and it works, but we are missing insight into the process by which it came to be an answer. In area after area, from the law to the hardware we build, we are partnering with computers to help navigate incredibly complicated technologies. But in the process, we find ourselves largely mystified by how these systems we depend on operate.

  This process is only accelerating. One computational realm, evolutionary computation, allows software to “evolve” solutions to problems, while remaining agnostic as to what shape the eventual solution will take. Need an equation to fit some data? Take a page from biology. Create a population of potential solutions within a computer program and allow them to evolve, recombining, mutating, and reproducing, until the fittest solutions emerge triumphant. An evolutionary algorithm does this splendidly—even if you can’t understand the final answer it comes up with.
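
  A bare-bones sketch of the approach, on a made-up toy problem (mutation only, no recombination, to keep it short): evolve a number whose square lands close to a target.

    import random

    TARGET = 1234.0

    def fitness(x):
        """Smaller is better: how far x * x falls from the target."""
        return abs(x * x - TARGET)

    # A population of random candidate solutions.
    population = [random.uniform(-100, 100) for _ in range(50)]

    for generation in range(200):
        population.sort(key=fitness)                              # rank candidates by fitness
        survivors = population[:25]                               # the fittest half survive
        children = [x + random.gauss(0, 1.0) for x in survivors]  # mutated offspring
        population = survivors + children

    best = min(population, key=fitness)
    print(best, best * best)             # close to the square root of 1234, and to 1234

  The answer this toy version converges on is easy to check, but nothing in the procedure required the program to arrive at it the way a person would.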

  A number of years ago, research was conducted to design a type of computer circuit. A simple task was created that the circuit needed to solve, and then the researcher tried to evolve a solution in hardware, with candidate circuits mingling in a Darwinian stew. After many generations, the program found a successful circuit design. But this design had a curious feature: parts of it were disconnected from the main circuit yet were somehow still vital to its function. The evolved circuit had taken advantage of weird physical and electromagnetic phenomena, which no engineer would ever have thought of using, to make the circuit complete its task.

  In another instance, an equation was evolved to solve another problem, and the result was also recognized as impenetrable. Kevin Kelly, in Out of Control, describes it thus: “Not only is it ugly, it’s incomprehensible. Even for a mathematician or computer programmer, this evolved formula is a tar baby in the briar patch.” The evolved code was eventually understood, but its way of solving the problem appeared to be “decidedly inhuman.” This evolutionary technique yields novel technological systems, but ones that we have difficulty understanding, because we would never have come up with such a thing on our own; these systems are fundamentally different from what we are good at thinking about.

 
