In the realm of logistics, powerful algorithms have been developed that route delivery trucks in seemingly illogical ways, leaving drivers dissatisfied with the counterintuitive routes they are being given. And in chess, a realm where computers are more powerful than humans and are able to win via pathways that the human mind can’t always see, the machines’ characteristic game choices are known as “computer moves”—the moves that a human would rarely make, the ones that are ugly but still get results. As the economist Tyler Cowen noted in his book Average Is Over, these types of moves often seem wrong, but they are very effective. When IBM’s Deep Blue was playing Garry Kasparov, it made a move so strange that it “was both praised and panned by different commentators,” according to one of Deep Blue’s builders. In fact, this highly odd but potentially brilliant move was eventually found to be due to a bug. Computers have exposed the fact that chess, at least when played at the highest levels, is too complicated for us, with too many interacting parts for a human—even a grandmaster—to keep in view. We sometimes can’t even tell when a decision is flawed.
What about the law, from contracts to regulations? There, too, we see this problem intensifying. Mark Flood and Oliver Goodenough, quoted in the previous chapter, recognize that “[w]hen interpretation is necessary, legalese, even when not ambiguous, makes slow, tortuous reading, with the need to check and recheck the definitions, cross-references, exceptions, etc. in which the complexity is embedded. Lawyer brains, the computational mechanism of traditional contract interpretation, are expensive and subject to cognitive limitations.” We should not be surprised to learn that when forty-five tax professionals were given data on a hypothetical family’s income, they came up with forty-five distinct conclusions about how much that family should pay in taxes.
Even as the sheer number of components and connections in complex systems overwhelms our processing capacity and causes us to lose the bubble, there is another factor in our reduced comprehension: the limits to how much knowledge—not just memorized raw data, but specialized technical expertise—we can keep in our heads. As technologies draw on more and more different domains of knowledge, even experts lose the ability to know them all.
To address the limits of our knowledge, we have to look at humanity’s pursuit of specialization. As we do, it should become clear that this new era of incomprehensibility is not entirely novel. Rather it is a continuation—though an extreme endpoint—of processes that have been occurring for much of human history.
The End of the Renaissance Man
I have on my shelf three books that have the phrase “The Last Man Who Knew Everything” as their title or subtitle. One is an edited volume about Athanasius Kircher, a German Jesuit priest who lived during the seventeenth century. He is viewed today as both a brilliant eccentric and probably a bit of a charlatan, who wrote about everything from astronomy to Egyptian hieroglyphics to a musical organ made from live cats. The second book is about Thomas Young, who was born in 1773 and studied such topics as physics, medicine, and linguistics. And the third concerns Joseph Leidy, born in 1823, a Philadelphia-area paleontologist and naturalist.
Which one was the Last Man Who Knew Everything? I don’t know. In fact, there probably wasn’t any one human being who ever knew everything we have generated as a civilization. But over the past few centuries, there has been such an explosion of knowledge that it would be remarkable, and nearly impossible, for any one person to have even a passing awareness of all the new knowledge being generated. Our understanding of the universe has become complicated. But before the time of these Last Men passed, some people did try to understand everything around them. This understanding often involved the embrace of something known as a cabinet of curiosities.
Cabinets of curiosities, or wunderkammers, were crammed rooms proclaimed to contain the whole world of knowledge. Collections of sundry and bizarre objects, from stuffed and mounted animals to herbs and paintings, they were generally owned by wealthy and noble Europeans. Cabinets of curiosities were markers of social status, but also windows to understanding our universe and the wonders it was revealing to us. Collectors reveled in the man-made—musical instruments and weapons of war—as well as the natural—skeletons and minerals; their cabinets proclaimed the diversity of the world, unified only by the fact that all could be contained within these collections. As the writer Philip Ball notes in his book Curiosity, “The ideal collection was comprehensive—not in that it contained an example of every object or substance in the world (although efforts were sometimes made towards such exhaustiveness), but in that it created its own complete microcosm: a representation of the world in miniature.”
With a cabinet of curiosities, you could take in the entirety of the universe and all its complications at a glance. But not only did some of these cabinets appear to be little more than a miscellaneous hodgepodge; it was soon realized that they could never be big enough. Ball quotes a point made by the writer Patrick Mauries: that after the discovery of the Americas there was too much diversity to be contained within a single collection. The world was beginning to be recognized as too various and complex. Now, choices had to be made: What should make it into these rooms, and what could be ignored?
Cabinets of curiosities persisted for a long time, in their own way. A couple of decades ago I visited the Niagara Falls Museum. Now closed, it was owned at the time by a friend’s father and was one of the last of the wunderkammers. The Hall of Freaks of Nature had example after example of stuffed mutant creatures, from a five-legged cow to a two-headed sheep to another sheep that was nothing more than just two joined heads. Gazing at these mutants, and cases of mounted insects, and Egyptian mummies, I was given a sense of the breadth of the world—and its sheer weirdness.
But not everything in the world was on display in these wunderkammers. That would have been impossible. Choices had been made. In these choices, we can detect hints of a growth in specialization. As knowledge grew beyond the bounds of any one continent, or culture, or mind, to have a confident grasp of the systems around us we would have to specialize—to understand a small field very well, say, advanced weaponry, or some subfield of science. But of course this didn’t happen all at once.
For a period of time several centuries ago, there were numerous individuals who attempted to truly make sense of what was around them rather than simply collect everything—and who were still well-versed in discipline after discipline. One example was the philosopher, scientist, and mathematician Gottfried Leibniz, who lived during the seventeenth and eighteenth centuries. According to the scholar Daniel Boorstin, “Before he was twenty-six, Leibniz had devised a program of legal reform for the Holy Roman Empire, had designed a calculating machine, and had developed a plan to divert Louis XIV from his attacks on the Rhineland by inducing him to build a Suez Canal.” In the words of Frederick the Great, Leibniz was “a whole academy in himself.” Similarly, Isaac Newton stitched together a whole host of phenomena—from how objects fall to the orbit of Mars—through his theory of gravitation.
Around the same time, Gresham College, one of the oldest colleges in England and devoted to providing public lectures on various topics, had a small faculty in areas such as astronomy, geometry, and music. But in reality, Gresham titles were relatively meaningless; some professors chose their titles based on the quality of the rooms they could get rather than their area of expertise. Specialization was far from anyone’s mind during this time. As the mathematician Isaac Barrow noted, “He can hardly be a good scholar, who is not a general one.”
But knowledge has grown far beyond any single person’s capacity to master it. To build models of the world and new technological systems at the frontiers of what we know, we have had to learn “more and more about less and less”—to specialize in specific domains. Benjamin Jones of Northwestern University has developed a theory about the “burden of knowledge”: the idea that to make advances at the frontier of knowledge, you must know a substantial amount of what has come before you. As our collective knowledge has grown over time, this burden has grown, with ever more required to be learned in order to make novel contributions. In one article coauthored by Jones, the authors note that John Harvard, whose bequest, including his private library, gave him naming rights to Harvard University in 1639, donated only 320 books to the university. There are now more than 36 million books and other print materials in the United States Library of Congress. The burden of knowledge weighs ever heavier upon us. As we attempt to build and understand complicated systems, we are required to know more, but also to have increasingly specialized expertise.
The biologist E. O. Wilson described the change thus:
In 1797, when Jefferson took the president’s chair at the American Philosophical Society, all American scientists of professional caliber and their colleagues in the humanities could be seated comfortably in the lecture room of Philosophical Hall. Most could discourse reasonably well on the entire world of learning, which was still small enough to be seen whole. Their successors today, including 450,000 holders of the doctorate in science and engineering alone, would overcrowd Philadelphia. Professional scholars in general have little choice but to dice up research expertise and research agendas among themselves. To be a successful scholar means spending a career on membrane biophysics, the Romantic poets, early American history, or some other such constricted area of formal study.
Not only is knowledge itself expanding and bifurcating, but the numbers and the specialization of scholars have also greatly increased.
This puts us in a difficult position. Specialization is required in order to understand more and more about the intricate systems around us, such as the human body, now divided up among numerous specialties in medicine. But at the same time, the systems we are building—the technologies that run our world—are not only intricate and complicated, but also stitch together field after field. We have systems in the world of finance that require an understanding of physics; there are economists involved in the development of computer systems. The design of driverless cars is a good example, requiring collaboration among those with expertise in software, lasers, automotive engineering, digital mapping, and more.
In other words, even as specialization aids us in making advances, we are ever more dependent on systems that draw from many different areas, and require an understanding of each of these. Yet a single person can no longer possess all the necessary knowledge. To any one person, these systems as wholes are truly incomprehensible.
One solution is the growth in multidisciplinary teamwork: build a team of individuals with deep expertise in different areas, and you can make advances at the boundaries and build astonishingly powerful complex systems. In software, while some pieces of technology are built by a single person or a small team, more often it takes large numbers of people, who enter or leave a project or team, contributing to its development over long stretches of time. If you attempt to visualize these patterns of teamwork—and impressive infographics have been created, trying to show how key pieces of software were developed—you will find yourself staring at what look like convoluted bundles of strings, meeting and branching as individuals join a software collaboration, work on different files together, and leave again. It is unsurprising, then, that the products of these processes are not only very complex, but often so complicated that few fully understand them: the person who might know all about a specific feature may be long gone.
Specialization is a successful process that yields impressive technologies, but it, too, leads us into the Entanglement, where we are dependent on knowledge of complex technological systems that we as individuals do not have, and that, in fact, no one may have. There are ways of trying to overcome this predicament: for example, perhaps it’s time to bring back generalists and polymaths, inviting them to flourish anew in this modern era—a possibility we will reexamine later in this book. But for now, it is enough to recognize the fundamental clash between the amount of knowledge any individual has the capacity to process and the amount we need to know about the interlocking systems our lives depend on.
Unfortunately, we often fail to recognize this mismatch until it is too late. We build massively complex technologies, secure in the belief that they are constructed on a logical foundation, until they confront us with unexpected behavior: the bugs and glitches that send major systems such as global finance into a tailspin. These unexpected behaviors—the kind that even the creators of these systems have trouble anticipating—can be viewed as technological werewolves. In the imagination of the computer scientist Frederick Brooks, software projects have a tendency to morph into unmanageable monsters: “Of all the monsters that fill the nightmares of our folklore, none terrify more than werewolves, because they transform unexpectedly from the familiar into horrors.”
The werewolves of our time are the unexpected behaviors that lurch forth from the systems we build, the sinister embodiments of all the forces that have made our systems ever more complicated and less understandable. We turn to these next.
Chapter 4
OUR BUG-RIDDEN WORLD
Back in the 1980s, the video game Galaga was popular. A classic shooter in which your trusty spaceship had to eliminate all the bad guys, it was one of those archaic video games with simple graphics and goofy sounds—but it also had an intriguing glitch. Early on in the gameplay, if you eliminated nearly all enemies and then avoided those that remained for around fifteen minutes, the baddies would never shoot at you again. A curious situation, but one that could be nicely exploited for some satisfying high scores.
Why did this happen? It seems that the part of the code that kept track of the enemies’ shots misbehaved under certain conditions and stopped refreshing, so no new shots were ever fired. Some speculate that this was an intentional feature of the game, to allow its developer to enter an arcade and rack up high scores. While that would make a great story, I’m not sure how likely it is, given the cleaner hidden features and cheats found in other games. It was probably just a bug.
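To see how a small bookkeeping failure like this can silence an entire game’s enemy fire, here is a deliberately simplified, hypothetical sketch in Python. It is not the actual Galaga code, which ran as assembly on arcade hardware; the slot table, the cap of eight shots, and the “glitch condition” are all invented for illustration.

```python
# Hypothetical, simplified sketch of a "stuck slot table" bug in the spirit of
# the Galaga glitch described above. This is NOT the actual arcade code; the
# names, the cap of eight shots, and the glitch condition are all invented.

MAX_ENEMY_SHOTS = 8                       # assumed cap on simultaneous bullets
shot_slots = [False] * MAX_ENEMY_SHOTS    # which bullet slots are "in use"

def fire_enemy_shot():
    """Fire a new bullet only if a free slot exists."""
    for i, in_use in enumerate(shot_slots):
        if not in_use:
            shot_slots[i] = True
            return i                      # slot claimed; a bullet appears on screen
    return None                           # every slot looks occupied: no new shots

def retire_shot(slot, glitch_condition=False):
    """Free a slot when its bullet leaves the screen.

    If a rare condition causes this cleanup step to be skipped, the slot
    stays marked as "in use" forever.
    """
    if not glitch_condition:
        shot_slots[slot] = False

# Once the glitch condition has left all eight slots stuck, fire_enemy_shot()
# returns None every time, and the enemies never shoot again.
```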
This bug and others like it are signs of the fact that we do not fully understand the systems we build. It required significant effort to understand what was going on inside Galaga, particularly when something went wrong. Even though the graphics and gameplay were simple, the true level of complexity might only become clear to players when the game failed. Bugs are not just annoyances to be fixed. Bugs are how we realize that we are in the Entanglement.
In 1950, Alan Turing noted that machines can and will yield surprises for us in their behavior. And it seems that these surprises will only increase in frequency. As we build systems that become more and more complicated, there is a greater divergence between how we intend our systems to act and how they actually do act. This unexpected behavior is a symptom of exactly what we have been looking at so far: increased complexity and decreased understanding, based on the shortcomings of our brains discussed in the last chapter. This means all of us, not just the tech-challenged end user: even experts are caught by surprise when, say, a rocket self-destructs soon after launch, or carefully crafted pieces of legislation turn out to clash. Bugs and glitches are the unexpected and unwanted by-products of the complexity of our technology. They are not only inevitable whenever a system is complex enough; they are the first hints that we are inching ever closer to complete dependence on systems that we don’t understand well at all. These technological werewolves are the heralds of the Entanglement.
The philosophers John Symons and Jack Horner, mentioned in the last chapter, have studied software development, focusing on aspects of its construction that are often left unexamined, or at least examined far less than they should be. Unfortunately, these are the very aspects—such as how numbers get stored and rounded when performing calculations—that can lead to a profusion of problems but are often glossed over or ignored by software builders. For example, in a widely used simulator of gravitation, the kind of computer program used in astronomy research, many errors were found involving one way that numbers are handled. Specifically, in this program there are about 10,000 instances of this kind of mistake in a piece of software only 30,000 lines long. Fixing these errors can actually yield different simulation results.
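For readers who want a concrete sense of what “how numbers get stored and rounded” means in practice, here is a minimal Python sketch of the general kind of numerical issue at stake. It does not reproduce the specific errors found in the gravitation simulator; it only shows how binary floating-point storage and rounding can quietly change results.

```python
# A minimal illustration of floating-point storage and rounding surprises.
# These are generic examples, not the specific mistakes found in the
# gravitation simulator discussed above.

# Most decimal fractions cannot be stored exactly in binary floating point,
# so both sides below are rounded approximations.
print(0.1 + 0.2 == 0.3)        # False

# The order of operations matters: a tiny value can vanish entirely.
small, big = 1e-16, 1.0
print((big + small) - big)     # 0.0      -- the small value is lost
print(small + (big - big))     # 1e-16    -- here it survives

# Rounding errors accumulate: adding 0.1 ten thousand times does not
# give exactly 1000.
total = 0.0
for _ in range(10_000):
    total += 0.1
print(total)                   # roughly 1000.0000000001, not 1000.0
```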
Over and over, we see these problems arise in our technologies as a consequence of increasing complexity. When a piece of technology becomes complicated, the werewolves appear when we least expect them, with unpredictable effects. These bugs range from fun and strange ones, like the glitch in Galaga, to ones as potentially devastating as the Heartbleed vulnerability, a mistake written into encryption software that may have compromised the security of as many as two-thirds of the websites online, including Facebook and Google, for more than two years.
Then there’s Microsoft Windows. In 1996, a computer “bug detective” published The Windows 95 Bug Collection—an entire book devoted just to detailing bugs related to Windows 95 and potential workarounds. Some solutions this book suggests for various bugs are easier to manage than others. For instance, one edition of a particular Windows program displays an error message when starting up. The solution? Just ignore the message. But others require a more drastic intervention. For example, if you have a specific controller card, it may not work properly with some computers running Windows 95. What does Microsoft suggest? “[U]se a slower computer or a different Bernoulli controller card.” In other words, this problem is not being dealt with and may not even be understood at all.