When those tiny details deep within a system rise up like miniature demiurges and ruin some other portion of the technology we have constructed, we can no longer rely on understanding only part of the system. Hierarchies and abstraction, which have long helped us manage complexity, are now increasingly collapsing in the face of the messy interactions of the Entanglement.
So is there any hope in sight, any way of returning from this muddle? Or are we doomed to contemplate these proliferating systems with a profound and ineffable horror?
Most of us think it’s okay if we don’t fully understand these technological systems, if we don’t know the details of our urban infrastructure or how the hardware of the iPhone registers our finger’s touch, or even how the morass of laws and regulations allows international commerce to occur. The mechanics of complicated systems, we assume, are unimportant as long as we are able to use them. But it’s one thing to not understand how your new gadget works; it’s another thing entirely when no one truly understands that gadget. While many of us continue to convince ourselves that experts can save us from this massive complexity—that they have the understanding that we lack—that moment has passed.
Our old patterns of making sense of these systems—the Challenger-style modes of thinking—are now hopelessly inadequate. The Entanglement is not off on the remote horizon, something to be rarely encountered. It is here, all around us. Each of us is going to need new ways of thinking about these technologies, even the ones for which we have blithely outsourced understanding to experts.
Although the dawn of the Entanglement looks pretty grim, I’m hopeful: we can learn to handle these systems, at least to some degree.
But to really understand this era we have created for ourselves, we need to take a step back and identify the forces that are propelling us deeper into complexity—and preventing us from comprehending it.
Chapter 2
THE ORIGINS OF THE KLUGE
In order to use the Internet, we must endure—at least indirectly—what can only be described as a mess. What was to become the Internet began to take shape during the 1960s. Its design was ingenious: decentralized, and able to easily pass packets of information between different machines. This allowed smaller networks to be interconnected, with numerous protocols developed to make this happen efficiently.
Today, we no longer use the Internet the way it was originally intended. Take security, for example. A system developed by researchers to communicate with one another is not ideal for a high volume of smooth and secure commercial transactions. To compensate for the Internet’s flaws, we have built additional mechanisms on top of its basic infrastructure to allow these transactions to occur, including a whole slew of ways of encrypting and decrypting information that must remain private, as well as ways of transferring money in the digital realm. Happily, the system does work. But beneath the user interface of a website lurks a bizarre and complicated structure. Sometimes we even glimpse this mess directly, as users, when warnings about security certificates appear. Things work, but they are far from pretty.
Similarly, the language used to construct our websites—HTML—was never designed to handle slick interactive web-based applications like Google Docs. These applications have been constructed, but at a cost: we have had to weld a fantastically baroque edifice atop a simple system. To get a glimpse of this underlying complexity, one need only examine the source HTML of Google’s homepage. While this webpage looks clean and elegant when viewed by a browser, there’s a huge amount going on below the surface. Last time I checked, the code for Google.com is well over 100,000 characters long and would take more than fifty pages to print in its entirety.
Even email, which seems relatively simple, has evolved far beyond its decades-old roots, with features such as message threads grafted on top of its original structure. As Slate interactives editor Chris Kirk noted after embarking on an ill-advised attempt to build his own email client program: “Though innovation in email is happening, it’s characterized by features balancing cunningly and sometimes haphazardly atop an antiquated system—features that attempt to either restore email to its original metaphor or evolve it into something else entirely.”
Computer science and engineering have a term, kluge, for a cobbled-together, inelegant, and sometimes needlessly complicated solution to a problem. A kluge works, but it isn’t pretty. Something may have been elegantly designed in its first iteration, but changes over time have complicated its structure, turning it into a Rube Goldberg–style jury-rigged mess.
There are kluges in every realm of technology, far beyond the structure of the Internet, from transportation to medical devices, or even the wiring of your home entertainment system, which might work but requires several remote controls and a Gordian knot of wires preferably kept out of sight.
Consider the kluges of the American legal code, a technological system that has been built for a specific purpose but is far from elegant. Just as computer code is the written description of the operation of a piece of software, laws and regulations are technologies—essentially written embodiments of code as well.
The United States Constitution is a wonderfully elegant document. In only a handful of pages it lays out the foundation for a representative democracy. However, the Constitution is not the end of the story. The collection of federal laws that guide our country, known as the United States Code, is what has grown within this framework. This collection of laws has developed over the years to elaborate on the general principles of the Constitution as well as to handle specific situations. For example, while the Constitution grants Congress the power to establish a postal service in the span of a single phrase, the United States Code devotes more than 500 pages to the section detailing this part of the government. This body of law includes everything from the bureaucratic details of the Postal Service to the specifics of postage. As a whole, the United States Code is far more complicated than America’s founding document. It has been increasing in both size and interconnectivity, and is now more than 22 million words long, with more than 80,000 connections between one section and another.
We can see massive growth in complexity over time nearly everywhere we look. And in general, as a complex system becomes big enough, it ends up becoming a kluge of one sort or another, as we shall see. The airplane the Wright brothers built in 1903 was a masterpiece of simplicity, constructed from a small number of parts and weighing only 750 pounds including the pilot. By contrast, a Boeing 747-400 contains 147,000 pounds of aluminum, 6 million individual parts, and 171 miles of wiring. More generally, during the past 200 years the number of individual parts in our most complicated manufactured machines has increased massively.
What about software, which undergirds the technological systems in nearly every aspect of our lives? One common way to measure the complexity of software is through the number of lines of code it takes to write a program. According to some estimates, the source code for the Windows operating system became ten times longer over the course of about a decade. The image-editing application Photoshop has exploded in size over the course of twenty years, growing to nearly forty times the number of lines of code it had in 1990.
If we look at the telephone network, this kind of growth happened rapidly there as well, with huge levels of complexity appearing quickly. By the 1920s, the American telephone system already had about 3 million miles of toll circuits and about 17 million telephones. The telephone had only been invented several decades earlier, and already its technological ecosystem was enveloping the country.
Each of these systems is an engineered technology, crafted by generations of experts for a specific function. One might assume that if these systems are being designed rationally, they should be logical, elegant, and even simple; they’d be more predictable, and easier to fix. And yet, despite our best efforts, our technology becomes ever more complex and complicated. This is not happening by accident. There are several forces intrinsic to technological development that propel us ever deeper into complexity. This tendency is by no means a law of physics, like gravity, but the forces that make our systems more complicated over time are so strong, often overcoming our desire for something less complex, that they almost feel like inexorable physical laws. But why is this so?
In this chapter I explore several forces that make systems more complicated over time. On the surface these forces seem eminently reasonable. Each individual change brought about by these forces might allow a technological system to adapt to changing circumstances, continue to operate in a new environment, or provide additional usefulness. But ultimately these forces replace our once-elegant solutions with messy kluges. No matter how hard we try to avoid this outcome, we are going to end up with increasing complexity in the technologies that touch every aspect of our lives.
To begin with, the most obvious reasons we end up with increasingly complex systems over time are the twin forces of accretion and interaction: adding more parts to something over time, and adding more connections between those parts.
Accretion
In the years leading up to January 1, 2000, many engineers were preoccupied with fixing the Y2K bug. In a nutshell: when a piece of software that used only two digits to store the year—rather than four—rolled over into the year 2000, it would assume the year was 1900 instead, and problems could arise. No one was more focused on handling this problem than those at the Federal Aviation Administration. If the air traffic control system began to malfunction come the New Year, that would be a huge problem. So the FAA began to examine its computers, testing date changes to see what would happen when the system thought it was 1900.
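The two-digit trap is easy to reproduce. Here is a minimal sketch, in Python for illustration only (the function and its name are hypothetical; the real systems in question were typically written in languages like COBOL or assembler):

```python
# Minimal sketch of the Y2K failure mode: when only the last two
# digits of the year are stored, the rollover to 2000 reads as "00",
# which legacy arithmetic treats as 1900.
def years_elapsed(start_yy, end_yy):
    """Compute elapsed years the legacy way, on two-digit years."""
    return end_yy - start_yy

# A record stamped in (19)85, checked before and after the rollover:
print(years_elapsed(85, 99))  # 14: correct, both dates are pre-2000
print(years_elapsed(85, 0))   # -85 instead of the correct 15
```

The fix sounds trivial, but finding every place such arithmetic was buried, in millions of lines of old code, was precisely the problem.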
During the testing process, they discovered that one particular type of machine in their systems—the IBM 3083—was particularly tricky to fix. Among the issues, according to the head of the union representing the FAA technicians: “There’s only two folks at IBM who know the micro-code, and they’re both retired.” That’s because the IBM 3083 was a mainframe machine installed in the 1980s, with software dating from years earlier. In other words, as of the late 1990s, the systems that were responsible for properly routing our airplanes were using computers with code that almost nobody was familiar with any longer.
This is hardly a surprising story. Over and over, large systems are built on top of somewhat smaller and older systems. As long as the pieces work reasonably well, little thought is given to the layering of new upon old, the accumulation and addition of piece after piece. According to one source, as of 2007 the machines at the Internal Revenue Service responsible for processing tax returns used a computer repository system developed during the Kennedy administration of the early 1960s. A separate IRS system was built in the 1970s and last overhauled in 1985. Similarly, the final space shuttle mission was supported by five IBM machines whose computational power pales by comparison with today’s average smartphone. And yet we continue using these kinds of software and technology.
In The Mythical Man-Month, originally published in 1975, Frederick P. Brooks Jr. examined software design and the management of programming projects. In the book he quotes the maxim “Add little to little and there will be a big pile.” Each individual design decision, whether accounting for an exception or providing a new feature, may seem like a small and separate choice. Each one makes sense: it fixes a problem or creates some new and exciting functionality for users. But as these choices accumulate, they add up, and you eventually get a large pile. This happens no matter what type of large technological system we look at, from transportation to energy to agriculture.
A large pile of rocks, for example, is not necessarily a problem. It can be unwieldy and messy, but it doesn’t have to be hard to understand. The problem is when the large pile we’ve created behaves in unexpected ways, such as when we get an avalanche. Unfortunately, this is often what happens when we add successive pieces to a technology. It grows not only bigger but less predictable as it accretes.
I first remember seeing the word “accretion” in descriptions of how a planetary system forms, condensing from a spinning mass of dust and gas. This process of adding bits and pieces—sometimes very old ones—doesn’t just build planets; technological growth, too, proceeds by accretion.
One result of accretion is what is known as legacy code or legacy systems: outdated machines and pieces of technology that are still with us long after they were first developed, like those at the IRS. These antiquated systems are not exceptional or rare. They have been cobbled together and accreted over many years and are found everywhere we look, from the programs responsible for scientific simulations to parts of our urban infrastructure. For example, our cities can have water mains that are over one hundred years old coexisting alongside newer ones. In the case of computers, technological systems often rely on machinery that is no longer manufactured and code written in programming languages that have long since been retired. Many pieces of scientific software exist as legacy tools, often written in Fortran, a powerful but archaic programming language. Given the speed with which technology moves, reading Fortran is almost the computational equivalent of being well-versed in Middle English.
To quote the Whole Earth Catalog creator Stewart Brand in The Clock of the Long Now: “Typically, outdated legacy systems make themselves so essential over the years that no one can contemplate the prolonged trauma of replacing them, and they cannot be fixed completely because the problems are too complexly embedded and there is no one left who understands the whole system.”
When we are left with a slowly growing, glitch-ridden legacy system, we can only gingerly poke it into doing our bidding, because those who designed it are long gone. It is so completely embedded into other systems that removing it appears far worse than living with its quirks. These systems, when unwieldy enough, are sometimes even referred to as crawling horrors, in deference to the unspeakable monsters from the stories of H. P. Lovecraft.
We see a parallel to legacy code in the law. In our legal systems, laws are modified or amended over time, tweaked for changing circumstances, leaving laws from decades earlier that are still relevant. Regulations around traffic on the Internet are derived from a law passed in 1934. As laws accrete over time, a legal system becomes a kluge—it gets the job done, but it is far from elegant.
In fact, the tax code is so complex that the law itself has recognized as much. The number of pages of instructions for the 1040 tax form has exploded, from two in 1940 to more than 200 in 2013. The Supreme Court has ruled that if you make a good-faith error in your taxes simply because the rules and provisions are so complicated, you cannot be convicted of willful failure to file tax returns. Essentially, it is more efficient for the law to make these klugey patches on the overcomplicated tax code than to overhaul it from scratch to make it more user-friendly.
Or consider the overall growth in regulations enacted by various departments and agencies of the government, such as the Environmental Protection Agency. For example, if you look only at the number of pages in the Code of Federal Regulations—the collection of rules from these many agencies—this number has gone from fewer than 25,000 to more than 165,000 in the past fifty years.
We see a similar growth in bureaucracies and administration. In an article from the 1950s, The Economist proposed what is now known as Parkinson’s Law, a quantitative description of the multiplication of bureaucrats over time. While the article is somewhat tongue-in-cheek, it is buttressed with data, and it concludes that governmental bureaucracies are essentially compelled to grow by about 5 to 6 percent more people per year. Managing such an ever-growing bureaucratic structure only seems to get more complex.
In fact, those in the software world have enshrined this idea of accretion and accumulation as a general rule: software systems grow over time, unless there is an active attempt to simplify them.
So why don’t we periodically sweep away all the complexity and start from scratch? Some reasons are practical—there is simply not enough time to rewrite a piece of software, for example, instead of just releasing a patch on whatever came before it. I highly doubt that anyone at Microsoft would be fine with rebuilding Word from scratch if it would take many years. When there are constraints and trade-offs in time, effort, and money, a system often ends up being modified in a way that is “good enough,” which generally means that things keep on getting tweaked and modified over time, just as lawmakers have done with the U.S. tax code. You build on top of what came before: our cities have gas pipes that are over a century old, transit systems run on 1930s technologies, and abandoned subway stations lurk beneath large cities.
But sometimes it is simply too difficult, or even dangerous, to start from scratch; no one understands the essential contributions of all the older pieces that a system relies on, and so to design an untried new system from the beginning would be foolish and even debilitating. For example, imagine a sophisticated banking software system that was designed many decades ago and has slowly been adapted to advancing technologies, from new computers and operating systems to the ubiquitous presence of the Internet. While the underlying core of the system was never intended for our modern era, these pieces are now far too embedded to remove. And so we must accept the general principle that systems become more complicated over time, whatever the technological system we look at.
But when we look at legacy code in our technologies—whether in software or in a body of laws—the real complexity isn’t simply the result of growth in size over time. Accretion needs to incorporate another factor that complicates our technological systems: interaction.