Now, let’s say we pull up the buoys and toss them onto the dock. Their arrangement is precise and potentially hard to describe—it might require many paragraphs to write out the arrangement of these buoys, complete with diagrams, so that someone could re-create this specific configuration—but there’s nothing interesting going on here. There are no cascading effects, no feedback, no process happening within this sophisticated network. It’s just a bunch of stuff that can float, sitting in a pile on a dock.
The buoys in the water form a complex system. The buoys on the dock? Their arrangement is simply complicated. For a system to be complex, it’s not enough for it to contain lots of parts. The parts themselves need to be connected and interacting in a tumultuous dance. When this happens, you see certain characteristic behaviors, the hallmarks of a complex system: small changes cascade through the network, feedback arises among the parts, and the system even shows a sensitive dependence on its initial state. These properties, among others, take a system from complication to complexity.
Here’s another way to think about this distinction: living creatures are complex, while dead things are complicated. A dead organism is certainly intricate, but there is nothing happening inside it: the networks of biology—the circulatory system, metabolic networks, the mass of firing neurons, and more—are all quiet. However, a living thing is a riot of motion and interaction, enormously sophisticated, with small changes cascading throughout the organism’s body, generating a whole host of behaviors. Furthermore, even if a system is dynamic—such as a bunch of unconnected buoys floating in the water—if there is no interconnection, potential for feedback, or other such properties, we are still in the realm of complication, not complexity.
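To make the contrast concrete, here is a minimal sketch of the idea (a toy illustration of my own, not a model of any real system): a line of buoy “heights” jostled by random waves, run once with spring-like couplings between neighbors and once with no couplings at all.

```python
import random

def simulate(n_buoys=10, coupled=True, steps=50, kick=5.0):
    """Toy model: buoy 'heights' evolve under small random waves.
    If coupled, each buoy is also pulled toward its neighbors,
    so a disturbance to one buoy spreads through the line."""
    heights = [0.0] * n_buoys
    heights[0] = kick                      # disturb the first buoy
    for _ in range(steps):
        new = []
        for i, h in enumerate(heights):
            noise = random.gauss(0, 0.1)   # a small random wave
            pull = 0.0
            if coupled:
                left = heights[i - 1] if i > 0 else h
                right = heights[i + 1] if i < n_buoys - 1 else h
                pull = 0.3 * ((left - h) + (right - h))
            new.append(h + pull + noise)
        heights = new
    return heights

random.seed(1)
print("coupled:  ", [round(h, 2) for h in simulate(coupled=True)])
print("uncoupled:", [round(h, 2) for h in simulate(coupled=False)])
```

In the coupled run, the disturbance to the first buoy spreads down the entire line; in the uncoupled run, it stays where it started. The parts are identical in both cases. Only the interactions differ, and that difference is what separates complexity from mere complication.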
If you define technology as any sort of system that humans have built and engineered for a specific purpose, you notice that almost all of today’s most advanced technologies are complex systems: dynamic, functionally intricate, of vast size, and with an almost organic level of complexity. These complex systems are all around us, from the software in our cars to the computers in our appliances to the infrastructure of our cities. We have software projects that are massive, highly interconnected, and could fill encyclopedias—Microsoft Office alone has been estimated to run to tens of millions of lines of computer code. The road system in the United States has 300,000 intersections with traffic signals and is the substrate for the constantly churning turmoil of transportation that spans a continent. Autocorrect, which we often deride as hopelessly stupid for its failures, is actually incredibly advanced, relying on petabytes of data (a petabyte is a million gigabytes) and complex probability models. Our legal constructions have also grown more complex over time: the federal tax code ran to more than 74,000 pages as of 2014. This vast legal network is profoundly complex, with numerous interconnections and a cascade of interacting effects on taxpayers, and no person could completely understand how the whole of it functions.
The intricacy in the complex technological systems that suffuse our lives is often a good thing. Within their vast complexity we find resilience and sophistication. These systems often possess many features and fail-safes that help them deal with anything that comes their way. These systems also provide us a life that the royalty of the ancients couldn’t imagine. They allow us to automate drudgery, bring water and power to our homes, live in perfect climates year-round, and summon information instantly.
But what does it mean to understand these complex systems? Understanding a phenomenon or system is not a binary condition. It exists across a rich spectrum. For example, you can understand a system as a whole, in its broad strokes, but not the details of the parts within it; you can understand all of its parts but not how they function together as a whole; you can understand how the parts are connected, or perhaps only the effect of these connections. Further, each of these components of understanding involves specific activities: describing how something works, predicting its future actions to varying degrees, and replicating it through a model given enough time and resources.
To return to our example, you might only understand the behavior of two or three buoys connected together, or even the motion of a single buoy in great detail, rather than the vast network of all the combined buoys. You might be able to describe the motion of the buoys without predicting their behavior. In software, you might understand a few modules in a given program really well—the one that calculates the value of pi, or the one that can efficiently sort a list of numbers—but not necessarily how they all work together. Often, we can grasp only some of these components of understanding, rather than all of them.
In addition, understanding is not static; it can improve with training. Someone who has never played chess before will look at most chessboard configurations and be unable to distinguish between a hopeless muddle, an endgame, and a threat to their king. A novice or intermediate player, however, will begin to see patterns in the pieces, and the current thread of the game. A master will see whole patterns at once, and multiple future patterns that could evolve from them, surveying the game and identifying potential moves and weaknesses. With sufficient training, one’s view of a chessboard goes from an array of pieces to a configuration in which White has checkmate in three moves. Training and expertise can actually change how we see the world and how we understand it.
We see the same situation with the systems we’ve built. Pages of computer code can be either gobbledygook or a beautiful solution to a difficult problem, depending on what you know. But when we fail to have a complete understanding, we fall short in a specific way: we encounter unexpected outcomes.
Take the Traffic Alert and Collision Avoidance System (TCAS), which was developed to prevent airplanes from crashing into each other in the sky. TCAS alerts pilots to potential hazards, and tells them how to respond by using a series of rules. In fact, this set of rules—developed over decades—is so complex that perhaps only a handful of individuals alive even understand it anymore. When a TCAS is developed for a new airplane, a simulation is used to test its effectiveness. If the new system responds as expected after a number of test cases, it receives a seal of approval and goes into use.
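To give a flavor of what a rule-based alerting system looks like, here is a schematic toy, nowhere near the real TCAS logic; every name and threshold below is invented for illustration. It shows a single rule that fires when two aircraft are projected to come too close, plus the kind of hand-written test cases a simulation harness might run before granting approval.

```python
def projected_separation(alt1_ft, climb1_fpm, alt2_ft, climb2_fpm, seconds):
    """Vertical separation (in feet) projected `seconds` ahead,
    assuming both aircraft hold their current climb rates."""
    future1 = alt1_ft + climb1_fpm * seconds / 60.0
    future2 = alt2_ft + climb2_fpm * seconds / 60.0
    return abs(future1 - future2)

def advisory(alt1_ft, climb1_fpm, alt2_ft, climb2_fpm,
             lookahead_s=30, threshold_ft=600):
    """Toy rule: alert if the vertical separation projected
    `lookahead_s` seconds ahead falls below a threshold."""
    sep = projected_separation(alt1_ft, climb1_fpm, alt2_ft, climb2_fpm, lookahead_s)
    return "ALERT" if sep < threshold_ft else "CLEAR"

# A handful of hand-written test cases, the way a simulation
# harness might exercise the rules before approval.
assert advisory(30000, 0, 31000, -1500) == "ALERT"    # intruder above, descending toward us
assert advisory(30000, 0, 35000, 0) == "CLEAR"        # well separated, both level
assert advisory(30000, 1000, 31000, 1000) == "CLEAR"  # climbing in parallel
print("all toy cases pass")
```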
While the problem of avoiding collisions is itself a complex challenge, the system we’ve built to handle this problem has essentially become too complicated for us to understand, with even experts sometimes reacting with surprise to how it responds in certain situations.
When an outcome is unexpected, it means that we don’t have the level of understanding necessary to see how it occurred. If it’s a bug in a video game, this can be delightful or even entertaining. But when we encounter unexpected situations in the complex systems that allow our society to function—the infrastructure that provides our power and water, or the software that allows financial transactions to occur, or the program that prevents planes from colliding midair—it’s not entertaining at all. Lack of understanding becomes a matter of life and death.
While there are natural variations in our abilities to understand the world, with geniuses capable of incredible intuitive leaps that the rest of us struggle to grasp, we, as humans, still have cognitive limits. Increasingly, as we build technological systems that are ever more complicated and interconnected, we become less able to understand them, no matter how smart we are or how prodigious our memory, because these systems are constructed differently from the way we think. Humans are ill-equipped to handle millions of components, all interacting in huge numbers of ways, and to hold all the implications in our heads. We get overwhelmed, and we fail.
The Entanglement
In our era of modern machines, the non–technologically savvy among us occasionally resort to superstition and wishful thinking in an attempt to understand technology. For instance, there is invariably one person in a family who is blamed for a computer not working. Sometimes their touch mucks things up; sometimes even their mere presence is deemed to have caused technology not to function as it should. A child comes home from college, and the printer stops working. Or a parent visits and the mouse ceases to function.
Then there’s the opposite issue: when a problem inexplicably vanishes the minute a solution is at hand. You bring a malfunctioning machine to technical support, and as soon as they touch it, the problem is nowhere to be found. But when you bring it home, you discover you still have a broken device.
This is the experience of the layperson; in the absence of technical expertise, the inner workings of these machines can appear somewhat magical. If we, as users of these systems, don’t know all the inner complications, it doesn’t matter. When failures happen, we can half-seriously assume that someone is having a perversely demonic effect on our machines. And even if we don’t make this assumption, we are comfortable recognizing that at least the expert knows what’s going on, and can reduce the mysterious poltergeist to a case of misfiring motherboards.
Unfortunately, this attitude is no longer reserved for the common person; it occurs even among the developers of technology themselves. The engineer Lee Felsenstein has told the story of an engineering manager who had to be removed from the room whenever a piece of software was being demonstrated, because his presence caused things to malfunction. The designers simply had no idea why this manager’s presence made things go bad. As Felsenstein noted, this type of computationally unexplainable failure “falls into the area of metaphysics.” These engineers simply didn’t know what was going on, and felt compelled to wave their hands in the general direction of philosophical musings on the nature of being.
They are not alone. The computer scientist Gerard Holzmann has much the same feeling:
Large, complex code almost always contains ominous fragments of “dark code.” Nobody fully understands this code, and it has no discernible purpose; however, it’s somehow needed for the application to function as intended. You don’t want to touch it, so you tend to work around it.
The reverse of dark code also exists. An application can have functionality that’s hard to trace back to actual code: the application somehow can do things nobody programmed it to do.
Similarly, in the legal field, we are currently in a situation where, according to the lawyer and author Philip K. Howard, “Modern law is too dense to be knowable.” But we don’t even need to examine the frontiers of our technologies to find such examples. As the writer Quinn Norton has noted, even your average desktop machine is “so complex that no one person on Earth really knows what all of it is doing, or how.”
In recent decades, the inexplicable, not just the complicated, is turning up more and more in the world of our own creations, even for those who have built these systems. Langdon Winner notes in his book Autonomous Technology that H. G. Wells came to believe late in life “that the human mind is no longer capable of dealing with the environment it has created.” Wells concluded this in 1945, discussing primarily human organizations and societies. This problem has become even more acute in recent years, through the development of computational technologies to a level that even Wells might have had difficulty imagining.
The computer scientist Danny Hillis argues that we have moved from the Enlightenment to the Entanglement, at least when it comes to our technology: “Our technology has gotten so complex that we no longer can understand it or fully control it. We have entered the Age of Entanglement. . . . Each expert knows a piece of the puzzle, but the big picture is too big to comprehend.” Not even the experts who have actually built them fully understand these technologies any longer.
The Limits of Abstraction
When we build complex technologies, one of the most powerful techniques for constructing systems is what is known as abstraction. Abstraction is essentially the process of hiding unnecessary details of some part of a system while still retaining the ability to interact with it in a productive way. When I write a computer program, I don’t have to write it in machine code—the language written out in binary code that each specific computer uses for its instructions. Instead, I can use a programming language such as C, one that can be more easily read by people and yet still be translated into machine code. In many cases, I don’t even need to know what specific machine my program might run on: these details have all been taken care of by other programs that interact at deeper levels with the machine. In other words, these details have been abstracted away.
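A small, hypothetical example of what abstraction looks like in practice (the function names below are invented purely for illustration): the top-level routine knows nothing about how a record is encoded or how bytes reach the disk; those details live in the layers beneath it and could be swapped out without the caller ever noticing.

```python
import json

def write_bytes(path, data: bytes):
    """Lowest layer: how bytes actually reach the disk."""
    with open(path, "wb") as f:
        f.write(data)

def serialize(record: dict) -> bytes:
    """Middle layer: how a record becomes bytes (JSON here,
    though the layer above never needs to know that)."""
    return json.dumps(record).encode("utf-8")

def save_record(path, record: dict):
    """Top layer: the only function most users ever call.
    The encoding and the file I/O are abstracted away."""
    write_bytes(path, serialize(record))

save_record("trade.json", {"symbol": "XYZ", "shares": 100})
```

Whoever calls save_record gets to treat everything underneath as a black box. That is the bargain abstraction offers, and, as we will see, it is exactly the bargain that starts to fray when the layers interact in unplanned ways.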
This kind of abstraction occurs everywhere in technology, whether we are interacting with a user-friendly website whose innards we don’t care about, or plugging a toaster into an outlet anywhere in the country and receiving electrical current. I don’t need to know where the electricity came from or where it was generated, just as I don’t need to know the specifics of how a search engine generated its results. As long as the interface is a logical and accessible one, I can focus on the details of whatever I am building (or fixing) and not worry about the complexity that lies beneath. Abstraction allows someone to build one technology on top of another, using what someone else has created without having to dwell on its internal details. If you are a financial analyst using a statistical package to examine datasets or an app developer using pre-written code to generate fancy graphics, you are using abstraction.
Abstraction can bring us the benefits of specialization. Even if a system has millions of interacting components, those working to build or maintain it don’t necessarily need to know how it all works; abstraction allows them the luxury of needing to know only about the specific part that they are focused on. The rest of the details are, again, abstracted away.
Unfortunately, in the Entanglement, abstraction can—and increasingly will—break down. Portions of systems that were intended to be shielded from each other increasingly collide in unexpected ways.
One of the places where we can clearly see this is in the financial realm. Today’s markets involve not just humans but large numbers of computer programs trading on a wide variety of information at speeds far beyond what any person could manage manually. These programs interlock in complicated ways, making decisions that can cascade through vast trading networks. But how are the decisions made on how to trade? By pouring huge amounts of data into still other programs, ones that fit vast numbers of parameters in an effort to squeeze meaning from incredible complexity.
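As a cartoonishly small illustration of what “fitting parameters to data” means here (a toy sketch, not any real trading strategy), the snippet below fits a straight line to a recent window of prices by least squares and turns the fitted slope into a buy, sell, or hold decision. Real systems fit vastly more parameters to vastly more data, but the shape of the process is the same: data in, fitted parameters out, decisions made from the fit.

```python
def fit_slope(prices):
    """Least-squares slope of price against time index."""
    n = len(prices)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(prices) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, prices))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

def signal(prices, threshold=0.01):
    """Toy decision rule built on the fitted parameter."""
    slope = fit_slope(prices[-20:])   # fit to the last 20 ticks
    if slope > threshold:
        return "BUY"
    if slope < -threshold:
        return "SELL"
    return "HOLD"

recent = [100 + 0.05 * i for i in range(40)]  # steadily rising prices
print(signal(recent))                          # -> BUY
```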
The result can be extreme. Take the so-called Flash Crash of May 6, 2010, when the U.S. stock market experienced a massive but extremely rapid fluctuation, as large numbers of companies lost huge amounts of value, only to regain it minutes later. This crash seems to have involved a series of algorithms and their specific rules for trading all interacting in unexpected ways, briefly erasing roughly a trillion dollars in market value. Complex though they are, these systems do not exist in a vacuum. In addition to being part of a larger ecosystem of technology that determines when each specific equity or commodity should be traded, our financial systems are also regulated by a large set of laws and rules. And of course, this collection of regulations and laws is itself also a system—a sophisticated and complex one—with massive numbers of laws that are interdependent and reference one another in precise and sometimes inscrutable ways.
Furthermore, the infrastructure that allows these trades to be executed is built upon technologies that have grown over the decades, yielding a combination of the old and the new: traders at hoary physical stock exchanges coexist with fiber optic cables in this system. When we are building computer programs that trade at high speeds, understanding how to do this effectively doesn’t just require knowledge of computer science, complex financial instruments, and laws and regulations, but now also a deep understanding of physics, because the speed of light in different materials plays a role at these trading speeds. As a result, there is no single person on the planet who fully understands all the interconnected systems of the financial world, and few completely understand even a single one.
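To get a feel for why physics enters the picture, consider the rough numbers (the distance and refractive index below are approximations chosen for illustration): light in optical fiber travels at only about two-thirds of its vacuum speed, so over the roughly 1,200 kilometers between New York and Chicago the choice of medium alone changes the one-way signal time by close to two milliseconds, an enormous gap for algorithms competing on microseconds.

```python
C_VACUUM_KM_S = 299_792    # speed of light in vacuum, km/s
FIBER_INDEX = 1.47         # typical refractive index of optical fiber (approximate)
NYC_CHICAGO_KM = 1_200     # rough straight-line distance (approximate)

one_way_vacuum_ms = NYC_CHICAGO_KM / C_VACUUM_KM_S * 1000
one_way_fiber_ms = NYC_CHICAGO_KM / (C_VACUUM_KM_S / FIBER_INDEX) * 1000

print(f"vacuum / line-of-sight: {one_way_vacuum_ms:.1f} ms one way")
print(f"optical fiber:          {one_way_fiber_ms:.1f} ms one way")
# ~4.0 ms vs ~5.9 ms: the gap between the two dwarfs the
# microseconds on which high-speed trading strategies compete.
```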
For many situations, each person working with a system needs to understand only a subset of the system well, or even some of it at just a superficial level. Some programmers at a financial firm might know only how to work on the system that’s actually doing the trading, while having almost no need to understand the physical infrastructure of the computers in the company. Others might focus on specific pieces of software that allow messages to pass from outside their firm to the algorithms that operate internally, and have only a passing familiarity with most everything else. And a lawyer working for the company might need to know about the laws that regulate certain types of trading, but would have no need to know the details of the software, servers, or fiber optics. Abstraction serves us nicely.
Understanding something in a “good enough” way can be just fine—most of the time. But as we build systems that are more and more complicated, the different levels at which the systems and their subsystems operate increasingly interact in a multidisciplinary mess. In particular, as things become more interconnected, it becomes difficult to know whether a cursory or incomplete understanding is really sufficient. In the Entanglement, things collide across the many levels of abstraction, interacting in ways we can’t imagine. This web of interactions produces what’s known in complexity science as emergence, where the interactions at one level end up creating unanticipated phenomena at another. This is common in complex systems of all types, as when insects moving together create the emergent behavior of a swarm. It is also particularly clear in our financial systems, which involve factors ranging from the speed of transmission across individual wires up to planet-wide computational interactions. It’s too complicated to really know whether being able to abstract away the details will be sufficient.
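Emergence is easiest to see in a toy model (again, a purely illustrative sketch): in the snippet below, each “insect” follows one local rule, nudging its heading toward the average heading of its two neighbors. No agent knows anything about the group as a whole, yet the population ends up moving in a common direction, a group-level pattern that none of the individual rules mentions.

```python
import random

def step(headings, nudge=0.3):
    """Each agent turns part of the way toward the average heading
    of its two neighbors (indices wrap around in a ring)."""
    n = len(headings)
    new = []
    for i, h in enumerate(headings):
        neighbor_avg = (headings[(i - 1) % n] + headings[(i + 1) % n]) / 2
        new.append(h + nudge * (neighbor_avg - h))
    return new

random.seed(0)
headings = [random.uniform(0, 360) for _ in range(12)]  # random initial directions
for _ in range(300):
    headings = step(headings)

spread = max(headings) - min(headings)
print(f"spread of headings after 300 steps: {spread:.4f} degrees")
# The spread collapses toward zero: alignment emerges from purely
# local rules that never mention the group at all.
```

The alignment is not written into any single rule; it arises only from the interactions.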