Overcomplicated


by Samuel Arbesman


  We can see a similar principle at work in the search for new medicines. In pharmaceutical research, one way that new drugs are created is through an active process of sifting and checking countless variations of chemicals in order to find one that provides the desired effect. The actual chemical mechanism might be only imperfectly understood, but the testing process—the discovery of the rare and effective—provides a way of understanding the human body, even if only dimly and indirectly at times. This poking and prodding of the system in order to learn more and to sift out molecules that can be effective pharmaceuticals is another tinkering approach to discovery.

  Applying biological thinking to technology involves recognizing that tinkering is a way of both building a system and learning about it. As Stewart Brand noted about legacy systems, “Teasing a new function out of a legacy system is not done by command but by conducting a series of cautious experiments that with luck might converge toward the desired outcome.” Such is the approach of a “field biologist for technology.” We saw this put into practice in the previous chapter with Netflix’s Chaos Monkey, which is essentially a mutagenic piece of software designed to introduce errors in order to learn about the system and improve its reliability. Glitches, even ones introduced by us, must be chronicled and examined in hopes of gaining a better sense of how the greater system operates.
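
  To make the tinkering idea concrete, here is a minimal sketch, in Python, of the general strategy of deliberately injecting faults into a running system and recording what breaks. It is not Netflix's Chaos Monkey (which operates on real production infrastructure); the `Service` class, the component names, and the failure probability are all invented for illustration.

```python
import random

# A hypothetical toy "service" with a few dependent components.
# This is NOT Chaos Monkey itself; it only sketches the idea of
# deliberately injecting faults to see how a system responds.
class Service:
    def __init__(self, components):
        self.up = {name: True for name in components}

    def handle_request(self):
        # A request succeeds only if every component is still up.
        return all(self.up.values())

def inject_failures(service, probability=0.05):
    """Randomly knock out components, mimicking a mutagenic probe."""
    for name in service.up:
        if random.random() < probability:
            service.up[name] = False

if __name__ == "__main__":
    random.seed(0)
    trials, failures = 1000, 0
    for _ in range(trials):
        svc = Service(["auth", "catalog", "billing", "recommendations"])
        inject_failures(svc)
        if not svc.handle_request():
            failures += 1
    # Chronicle the glitches: how often do small random faults
    # add up to a failed request?
    print(f"{failures} of {trials} requests failed under random faults")
```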

  This biological approach can also aid us in understanding disasters and catastrophes—much the way biologists think about cancer. When cells begin to grow into a tumor, it is rarely just a single thing that has gone wrong. Rather, cancer can result from an accumulation of many factors and biological responses that interact in complex ways, leading to a large-scale failure: a potentially deadly disease. Recognizing that such an accumulation of problems and responses can cause similar failure cascades in our technologies means thinking more like a biologist. For example, in a nuclear power plant, small problems can add up and cause serious issues—multiple independent causes seem to be implicated in what eventually led to a partial meltdown at the Three Mile Island plant. We can also identify interactions that cause massive problems in the world of finance.

  Happily, we are beginning to get help in these endeavors from our technologies themselves. There are now computational tools that can help find unexpected outcomes in a system, part of the domain known as “novelty detection.” Machines might be able to act in partnership with these field biologists and naturalists, helping us to better understand—even if only partly—our own technologies.
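
  As a rough illustration of what such a tool might do, here is a minimal novelty-detection sketch: it flags observations that fall far outside a system's historical behavior using a simple z-score rule. The threshold and the sample "response time" data are invented for illustration; real novelty-detection methods are considerably more sophisticated.

```python
import statistics

def novel_points(history, new_values, threshold=3.0):
    """Flag values that sit far outside the historical distribution.

    A crude z-score rule: anything more than `threshold` standard
    deviations from the historical mean is reported as a novelty.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [x for x in new_values if abs(x - mean) > threshold * stdev]

if __name__ == "__main__":
    # Hypothetical response times (ms) from a well-behaved system...
    history = [98, 102, 101, 99, 100, 103, 97, 100, 101, 99]
    # ...and a new batch containing one unexpected outcome.
    incoming = [100, 102, 250, 98]
    print(novel_points(history, incoming))  # -> [250]
```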

  When Physics and Biology Meet

  The biological aspects of technology—its klugeyness, its growth and change due to evolutionary tinkering, its many miscellaneous details—are extensive. But does this mean that we should abandon our search for underlying regularities in all this complexity? Absolutely not. Physics thinking still has a role in how we approach technology.

  When attempting to understand a complex system, we must determine the proper resolution, or level of detail, at which to look at it. How fine-grained a level of detail are we focusing on? Do we focus on the individual enzyme molecules in a cell of a large organism, or do we focus on the organs and blood vessels? Do we focus on the binary signals winging their way through circuitry, or do we examine the overall shape and function of a computer program? At a larger scale, do we look at the general properties of a computer network, and ignore the individual machines and decisions that make up this structure?

  These are not always easy questions to answer. Sometimes we must tend toward physics thinking, abstracting away the details to understand the system as a whole. And sometimes the details are important, as with our hapax legomena and edge cases: then we must rely on more biological thinking.

  But all too often, the different levels of resolution collide. Sickle-cell anemia, a quite serious systemic disease, is caused by a tiny change in a single base pair in our DNA. A large fraction of the United States electrical grid can be brought down by a cascade set off by trees touching power lines in Ohio, as happened in the summer of 2003. When systems become more and more interconnected, not only do resolution levels intersect, but domains thought to be separated are increasingly brought together. More and more we need to combine both the physics and the biological ways of thinking, looking at the order while not ignoring the rough edges. A biological mind-set partnered with a physics mind-set allows us to feel more comfortable with the kluges around us. In Neal Stephenson’s novel Cryptonomicon, one of the characters elaborates on the structure of the pantheon of Greek gods, making exactly this point:

  And yet there is something about the motley asymmetry of this pantheon that makes it more credible. Like the Periodic Table of the Elements or the family tree of the elementary particles, or just about any anatomical structure that you might pull up out of a cadaver, it has enough of a pattern to give our minds something to work on and yet an irregularity that indicates some kind of organic provenance—you have a sun god and a moon goddess, for example, which is all clean and symmetrical, and yet over here is Hera, who has no role whatsoever except to be a literal bitch goddess, and then there is Dionysus who isn’t even fully a god—he’s half human—but gets to be in the Pantheon anyway and sit on Olympus with the Gods, as if you went to the Supreme Court and found Bozo the Clown planted among the justices.

  The more we examine the systems around us with open eyes, the more we see this balance between biology and physics. We find it in our ecosystems and in the chaos of technology that we rely on every day. We find it in the Greek pantheon, and in many other stories we tell ourselves.

  Storytelling, in fact, allows us to indulge our desires for either biological or physics thinking. Some stories are finely crafted machines with no extraneous parts; everything fits together. We see this in “Chekhov’s Gun,” dramatist Anton Chekhov’s principle that any element introduced in a story must be crucial to advancing the plot. A loaded rifle introduced early in the first act of a play must go off by the third.

  On the other hand, there are some stories in which color is added, creating a richness of experience without necessarily moving the plot along. Homer’s catalog of the invading Greek ships in the Iliad, and Kramer’s growing list of never-seen eccentric friends on Seinfeld—Bob Sacamano, Lomez, Corky Ramirez—are not essential plot points, but they are important. They are the biology alongside the physics: both are needed to create the rich world we inhabit when we engage with a story.

  In the field of special effects, there is a delightfully evocative term: “greeblies.” When I hear it, I think of gremlins and gibberish mutterings. Greeblies are the little bits and pieces that get added to a scene, or to a single object, to make it look more believable. You can’t have a futuristic starship that is all angles and smooth sides; you need to add ports and vents and sundry other impenetrable doodads and whatsits, pipes and bumps, indentations and grooves. Think of the ships in Battlestar Galactica or Star Wars. They are more visually intriguing thanks to their complications of unknown purpose.

  This process of greebling is closely related to a well-known quote from the mathematician Benoit Mandelbrot, who coined the term “fractal”: “Why is geometry often described as ‘cold’ and ‘dry’? One reason lies in its inability to describe the shape of a cloud, a mountain, a coastline, or a tree. Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line.”

  So, too, our technological systems, once embedded in the real world, are far from the cleanly pristine logical constructions of the drawing board; they are full of the miscellaneous details of biology that have accreted over time, much like the evolutionary hodgepodge found within living systems. Our stories are built with details and complications, and so are our complex systems. In fact, we seem to recognize something as more “realistic” when it is complicated, full of tiny crenellations and details that often elude our understanding. Ultimately, we need this messiness, we need the greeblies, even if at another resolution we might abstract it all away.

  Biological thinking needs to exist alongside physics-based thinking. Recall “Funes the Memorious,” the short story by Jorge Luis Borges where the title character is burdened by a perfect memory. While many of us would view this as a gift (as does Funes), it’s not quite that. The reader learns that for Funes, nearly every detail and perspective generates a new memory and in turn a new category, a new kind. When gazing at a dog, Funes doesn’t just see a dog; he sees a specific dog from one angle, and then, as soon as it or he moves, a completely new dog memory is created. Funes does not unify the particulars into general concepts because his memory is too detailed. He can no longer form any abstractions because his tolerance for complications is too great.

  In fact, the end goal of biologists is to create models and identify regularities, even if on a smaller scale. So, when confronted with a complex piece of technology, we must begin by acting like field biologists, experimenting around its edges to see how it behaves, with the end goal of some degree of generalization. This is actually how a lot of people approach open-ended video games like Minecraft. You first collect huge amounts of information about your virtual world—what you can do, what you can’t, what kills you, how you successfully survive—and then begin to make little mental models, small-scale generalizations within a much larger whole.

  Or, when you are working with an advanced piece of software such as a gargantuan word-processing tool, and the endnotes in your document go haywire, do not panic. Instead, look at what went wrong: Did several endnotes all have the same numbering? Do they still connect to the correct places in the text? And so on. By being willing to tinker—a cue we take from living things—you get a better sense of the details and nuances of a very complex system. Acting as field biologists for technology allows us to look for and closely study the various pieces of our constructed world, while still recognizing that they are only a tiny part of a much bigger and highly connected whole.

  We now turn to one field that might help us accomplish this delicate balance in how we grasp technological systems.

  The Science of Complexity

  One natural path for managing and understanding complex systems is through complexity science: the quantitative study of these vast and complicated interconnected systems, ranging from living things and ecologies to the World Wide Web and even collaborations between film actors (think “Six Degrees of Kevin Bacon,” where you try to connect every actor back to Bacon through film co-appearances). This is a rich and exciting field—one that I am a part of—and it uses a variety of powerful ideas and mathematical frameworks in a quest to find patterns and meaning in these complex systems. These approaches range from understanding network structures to developing computer models with huge numbers of interacting entities, known as agent-based models.

  One of the main ways of grasping these systems, however, is by abstracting away a certain amount of messiness in order to find regularities that are amenable to clear mathematical shapes and understanding. For example, if you look at a massive network of interactions—such as who follows whom on Twitter—you can ignore the details of the interactions and note that the number of connections between individuals follows a specific category of probability curve known as a heavy-tailed distribution. This curve makes no claims about who is connected to whom or who has a lot of connections and who has few, but it shows that there is an underlying statistical regularity.
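
  As a sketch of how such a heavy tail can arise, the toy model below grows a network by preferential attachment, in which newcomers tend to link to already well-connected nodes, and then tabulates the connection counts. This is one standard mechanism, not a claim about how any real follower network grew; the parameters are purely illustrative.

```python
import random
from collections import Counter

def preferential_attachment(n_nodes, links_per_node=2):
    """Grow a toy network where new nodes prefer well-connected ones.

    Returns the degree (number of connections) of every node.
    """
    # Start from a tiny seed: two nodes connected to each other.
    degrees = {0: 1, 1: 1}
    # Each node appears in `stubs` once per connection it has, so a
    # uniform choice from it is a choice proportional to degree.
    stubs = [0, 1]
    for new in range(2, n_nodes):
        targets = set()
        while len(targets) < links_per_node:
            targets.add(random.choice(stubs))
        degrees[new] = 0
        for t in targets:
            degrees[t] += 1
            degrees[new] += 1
            stubs.extend([t, new])
    return list(degrees.values())

if __name__ == "__main__":
    random.seed(1)
    degs = preferential_attachment(10_000)
    counts = Counter(degs)
    # Most nodes have only a few connections, while a few have far
    # more: the signature of a heavy-tailed distribution.
    print("max degree:", max(degs))
    print("nodes with degree <= 4:",
          sum(c for d, c in counts.items() if d <= 4))
```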

  Other models in complexity science explore what is known as percolation: how something diffuses based on the structure of some sort of porous space, such as how petroleum moves through a rock. These models are very powerful at showing how small alterations in the density of the material can yield huge shifts in the ability of the fluid to move through the system, though they again make no claims about the actual specific locations of the pores, for instance.
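
  A minimal percolation sketch makes the point concrete: fill a grid's sites at random with a given density and ask whether an open path spans it from top to bottom. Around a critical density (roughly 0.59 for this kind of two-dimensional grid), the spanning probability jumps sharply. The grid size, trial count, and densities below are illustrative choices.

```python
import random
from collections import deque

def spans(grid):
    """Breadth-first search: does an open path connect top row to bottom?"""
    n = len(grid)
    queue = deque((0, c) for c in range(n) if grid[0][c])
    seen = set(queue)
    while queue:
        r, c = queue.popleft()
        if r == n - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < n and 0 <= nc < n and grid[nr][nc]
                    and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

def spanning_probability(density, size=40, trials=200):
    """Fraction of random grids at this density that percolate."""
    hits = 0
    for _ in range(trials):
        grid = [[random.random() < density for _ in range(size)]
                for _ in range(size)]
        if spans(grid):
            hits += 1
    return hits / trials

if __name__ == "__main__":
    random.seed(2)
    for p in (0.50, 0.55, 0.59, 0.63, 0.68):
        # Small changes in density produce a sharp jump in the ability
        # of "fluid" to cross the grid, as described above.
        print(f"density {p:.2f}: spanning probability "
              f"{spanning_probability(p):.2f}")
```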

  Each model in complexity science is very good at providing a different angle of insight into what we are trying to understand. In fact, in many cases we can find simple mathematical models that seem to help unify whole swaths of our world. For example, there is a type of model known as a diffusion-limited aggregation. While on the surface this model looks like nothing more than an abstract computational plaything, it can be viewed as a unifying pattern for a wide variety of real-world phenomena.

  The diffusion-limited aggregation, or DLA, begins by placing a tiny dot in the center of a giant grid. You then randomly walk other dots around the grid until one of them hits the original dot, and then it stops. That dot is now part of the aggregation. The dots that are still meandering randomly continue to do their walk until they hit the slowly growing DLA, and stop as well. Slowly, you keep on adding moving dots to this growing aggregation.

  As you keep adding new randomly jittering dots to this thing, you end up getting a weird shape. It’s not a blob, it’s not a misshapen circle, or even a regular crystal. It’s something far more organic. It looks like a spindly growth, and in fact, depending on the specific type of model, DLAs can look like coral, bolts of lightning, even cities. A simple, almost trivial computer model can create rich pictures.
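
  Here is a minimal sketch of that procedure. The grid size and particle count are arbitrary, and walkers are launched at random empty cells as a simplification; the spindly, organic-looking shape still emerges.

```python
import random

def dla(size=41, particles=200):
    """Grow a diffusion-limited aggregation on a size x size grid.

    A seed dot sits at the center; wandering dots stick as soon as
    they touch the growing cluster.
    """
    center = size // 2
    cluster = {(center, center)}
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    while len(cluster) < particles:
        # Launch a walker at a random cell not already in the cluster.
        x, y = random.randrange(size), random.randrange(size)
        if (x, y) in cluster:
            continue
        while True:
            if any((x + dx, y + dy) in cluster for dx, dy in moves):
                cluster.add((x, y))   # the walker sticks and stops
                break
            dx, dy = random.choice(moves)
            x, y = x + dx, y + dy
            if not (0 <= x < size and 0 <= y < size):
                break                 # walkers that wander off are abandoned
    return cluster

if __name__ == "__main__":
    random.seed(3)
    grown = dla()
    # Crude text rendering: a spindly, coral-like shape emerges.
    for row in range(41):
        print("".join("#" if (row, col) in grown else " "
                      for col in range(41)))
```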

  This model and ones like it are almost paradoxical, because they seem to imply that massively complicated systems can be reproduced by really simple models. Of course, the details are not the same. These little toy models have a sort of “Potemkin complexity”—they look complex but contain an underlying simplicity, as opposed to a real giant city with the huge number of decisions that have gone into how it currently looks, or even a coral with all the details that went into how it grew. But sometimes these models are enough; for some purposes—such as trying to understand an entity’s basic organization—the overall shape is all we care about. This abstract and simplified approach to complex systems can help us understand the shape of a system’s behavior. It can teach us about feedback, interconnectivity, and the profound dependence of such systems on initial conditions. It can even place boundaries on our expectations for how a system might respond to changes. Being able to think effectively about complex systems using the concepts of complexity science is a skill necessary for anyone, expert or not, who wishes to engage with our ever-more complicated world. It is increasingly important to educate people as to how complex systems work, making these properties more intuitive.
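
  One classic, minimal illustration of that dependence on initial conditions is the logistic map, a one-line feedback rule. The sketch below runs it from two starting values that differ by one part in a million and watches them diverge; the parameter and starting values are illustrative choices, not anything drawn from the text above.

```python
def logistic_trajectory(x0, r=4.0, steps=40):
    """Iterate the logistic map x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

if __name__ == "__main__":
    a = logistic_trajectory(0.200000)
    b = logistic_trajectory(0.200001)  # differs by one part in a million
    for step in (0, 10, 20, 30, 40):
        print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
    # Within a few dozen steps the two trajectories bear no resemblance:
    # profound dependence on initial conditions from a one-line model.
```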

  Those aspects of complexity science that generalize have their limits, however. When it comes to understanding the particularities, simplifying models are not sufficient. As the science writer Philip Ball has noted, “The patterns of a river network and of a retinal nerve are both the same and utterly different. It is not enough to call them both fractal, or even to calculate a fractal dimension. To explain a river network fully, we must take into account the complicated realities of sediment transport, of changing meteorological conditions, of the specific vagaries of the underlying bedrock geology—things that have nothing to do with nerve cells.”

  Happily, the tools of complexity science are not only used to create simplifying abstractions. They can also provide a window into the details of a system by actively sifting through its complexity in a rigorous fashion. For example, a team of researchers analyzed the United States Code using approaches derived from software engineering and complexity science, in order to determine various features of this body of law. Some of the analyses did provide results just about the overall features of these laws, such as the varying levels of complexity of different sections, or that there is a certain profile of complication in the Code. But their analyses also highlighted portions of the law that are particularly interesting or exceedingly byzantine. For example, the section detailing the “Powers and duties of the corporation” popped out as especially complex within the portion of the Code that deals with banking, at least in terms of the number of conditionals (if-then statements, just like in software). In the section dealing with taxes, the subsection on the “Qualified pension, profit-sharing, and stock bonus plan” also had a high complexity. These kinds of results can help us zoom in and focus on the parts that are most complex, and perhaps even take stabs at simplifying them.
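
  The snippet below is a deliberately crude, hypothetical stand-in for that kind of analysis: it simply counts conditional markers ("if," "unless," "except," and so on) in sections of text and ranks the sections by that count. It is not the researchers' actual method or data, and the sample "sections" are invented.

```python
import re

# A crude proxy for conditional complexity: tally words and phrases
# that introduce if-then style branching in a passage of text.
CONDITIONAL_PATTERN = re.compile(
    r"\b(if|unless|except|provided that|in the case of)\b", re.IGNORECASE
)

def conditional_count(section_text):
    """Count conditional markers in one section of text."""
    return len(CONDITIONAL_PATTERN.findall(section_text))

if __name__ == "__main__":
    # Invented snippets standing in for sections of a legal code.
    sections = {
        "Sec. 1": "The corporation shall report annually.",
        "Sec. 2": "If the plan is qualified, and unless the employer elects "
                  "otherwise, contributions are deductible, except in the "
                  "case of a disqualified person, provided that notice is "
                  "given.",
    }
    ranked = sorted(sections.items(),
                    key=lambda kv: conditional_count(kv[1]), reverse=True)
    for name, text in ranked:
        print(name, conditional_count(text))
```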

  In the same vein as field biology, certain complexity science approaches can be used to learn about the behavior of a subset of a complex system rather than the system’s behavior as a whole. Approaches can even be used to find the outliers: the parts of the complex system that don’t fit the generalized principles. Ultimately, all these approaches can help us learn which parts of a system it might be worthwhile to look at more closely, in hopes of a better understanding of how they all interact. We must balance the complexity science that abstracts away information with the type of analysis that finds the particulars that don’t fit neatly into the model.

  We again are left with the tension between thinking informed by physics and by biology. On the one hand, we yearn for a simple elegance in our technological world, and wish the convoluted away. But on the other, acknowledging something as complicated—particularly something that has grown and evolved over time—is a sign of nuance, a sign of maturity. As we mature as individuals, we recognize complications in our relationships, nuances in our interactions with others. As we have matured as a society, we must recognize the complications and irregularities inherent in our constructions. Complexity science, while far from a panacea, can help us strike this balance: highlight details to focus on, but also place boundaries on our knowledge and our level of concern. For example, complexity science can show us how easily systems can become unstable, and where we must direct our efforts and attention. If a simple model demonstrates that a large technological network can be wiped out by only small changes, we can no longer remain blissfully ignorant of this fact.

 
