Overcomplicated


by Samuel Arbesman


  The same is true of our own creations. We are seeing the limits of our understanding of computing, transportation, medical devices, and so many other technologies we ourselves have made. So as we continue to construct incomprehensible innovations, it might be time to return to the way people thought in an earlier time—a time when it was taken for granted that there is knowledge no human can possibly attain.

  What form should this response take? As mentioned in the introduction, as our technologies become more complicated, and we lose the ability to understand them, our responses tend toward two extremes: fear and awe.

  Contemplating a fantastically intricate technological system, some of us are overwhelmed by its power and complexity, and respond with fear of the unknown. Others tend toward an almost religious reverence when faced with technology’s beauty and power. The video game designer and writer Ian Bogost has even suggested that replacing the term “algorithm” with the word “God” changes little of what is being said about technology in today’s discourse.

  But technology, while it suffuses our society, is not the product of a perfect and immaculate process. Technologies are kluges. They are messes cobbled together over time from many pieces, and while they are indubitably exciting, they do not merit unquestioning wonder or profound existential concern.

  Neither fear nor awe is a productive response; both cut off questioning and the potential for gaining even a hint of understanding. While some caution is necessary when dealing with anything that is more than moderately complicated, fear in the face of what we don’t understand abdicates the responsibility to delve deeper and understand what we can, even just a bit. Unfortunately, the other extreme—the worship of technology—creates the same problem. If we view an algorithm or technology as far more beautiful and impressive than it actually is, this reverence also cuts off further questioning.

  In between lies a third path: humility. We must have humility without reverence, and curiosity without fear. The computer scientist Edsger Dijkstra has even written of the “humble programmer” who respects “the intrinsic limitations of the human mind.” Even if we could eliminate our mental biases, or massively increase our brainpower, we are still ultimately finite beings. And in the face of this finitude, humility is different from both hubris and humiliation. Humility recognizes our own limitations but is not paralyzed by them, nor does it enshrine them. A humble approach to our technologies helps us strive to understand these human-made, messy constructions, yet still yield to our limits. And this humble approach to technology fits quite nicely with biological thinking. While at every moment an incremental approach to knowledge provides additional understanding of a system, this iterative process will always feel incomplete. And that’s okay.

  New York Times columnist David Brooks has noted, “Wisdom starts with epistemological modesty.” Humility, alongside an interest in the details of complex systems, can do what both fear and worship cannot: help us peer and poke around the backs of our systems, even if we never look them in the face with complete understanding. In many instances, an incomplete muddle of understanding may be the best that we can do. But it’s far better than nothing.

  Both reverence and fear tempt us to throw our hands in the air and give up. That we cannot afford to do. We must continue to strive to understand these systems. Humility simply means accepting that scientific triumphalism is misplaced: we can never achieve complete or perfect understanding. And if we accept that, perhaps no longer will our soul long for such knowledge, as Maimonides said. Then we can become more philosophical and less dismayed about our failures to understand. When even software experts recognize that some computer bugs are simply in the realm of “metaphysics,” it is time for all of us to reconcile ourselves with humility in the face of technology.

  In fact, humility and muddling through are noble choices when confronting our complex technologies. As role models we might take the subset of scientists who look at an organism’s genome, rife with sections that may not have any purpose whatsoever—having evolved through a bizarre accretive process much as technology did—and nevertheless see a “glorious mess.” I find the juxtaposition of “glorious” and “mess” profoundly biological, but also filled with humble admiration. Our technologies are messes, and we can never divine their entirety, but recognizing that they are “glorious messes” is a powerfully optimistic stance. Details and imperfect understanding can overwhelm us, or make us giddy with excitement. They may never lead to a profound understanding of the entire system, but that’s fine.

  John Gall, a retired pediatrician, is the author of a book called The Systems Bible. Originally titled General Systemantics, first published in 1975 and now in a revised and expanded third edition, it is a playful exploration of how to approach complex systems—though Gall uses the term more broadly, encompassing social systems as well as those technological systems we have constructed. The book includes maxims such as “The ghost of the old system continues to haunt the new,” “The system always kicks back,” and the Unawareness Theorem: “If you’re not aware that you have a problem, how can you call for help?” Gall’s rules and analyses of systems are insightful and fun. And, as you might expect from its original title, the book’s thesis is that systems are prone to antics—they do things we don’t expect, they bite back—and it’s quite hard to eliminate that behavior.

  As some of these maxims might suggest, Gall makes a number of points similar to those made here—systems accrete, expand “beyond human capacity to evaluate,” and are subject to unexpected behavior—but I am interested in a particular few. Gall notes that it is easier to work with what you have than to redesign something from scratch: the latter will likely cause more problems than you expect. And if you do feel that you have to create a whole new system, make it a small one, if possible. There are ways to mitigate the failures of systems, at least somewhat.

  Ultimately, Gall’s maxims and aphorisms seem to me to boil down to one perspective: humility in the face of systems that are so difficult to design, redesign, or rebuild from the ground up. These systems, no matter what their origin or function, will ultimately take similar unwieldy forms, and in our efforts to understand and control them, we must be comfortable with muddling through. When we fully grasp that our systems will always become complex, we will be better prepared to build them from the outset, and better able to recognize and even revel in the surprises and complications when they kick back, as they surely will.

  A humble approach to complex technology will serve us well. And one of the key features of this intellectual humility is an insight gleaned from biological thinking: that glimpses into the massive inner workings of these complex systems—little gateways into the machine—may be the best we can do, and that they can be enough.

  Glimpses Under the Hood

  When the designer Don Norman was backing up his computer to a server, he sat back and watched its progress, reading what it was doing at each step. At one point, Norman noticed that the computer program had reached the stage where it was “reticulating splines.” This phrase sounded complicated, and that was reassuring to Norman—this program must really know what it was doing. But Norman himself didn’t know what the phrase meant. He got curious, and after some research he discovered—as any good fan of SimCity 2000 would know—that this was actually an inside joke, a nonsensical phrase inserted into the game that only sounds like it means something. Ever since, it has cropped up in various games and other software.

  Think back to the last time you installed a new piece of software. Did you know what was going on? Did you clearly understand where various packages were being placed in the vast hierarchy of folders on your hard drive, and what bits of information were being modified based on the specific nature of your computer and its operating system?

  Unlikely. Rather, you monitored the progress of this installation by watching an empty rectangle slowly fill over time: a progress bar. This small interface innovation was developed by the computer scientist Brad A. Myers, who initially called these bars “percent-done progress indicators” when he created them as a graduate student. They seem to soothe users by providing a small window into an opaque process. Is a progress bar completely accurate? Probably not. Sometimes progress bars are almost completely divorced from the underlying process. But for the most part, a progress bar and other design decisions—such as a bit of text that describes what is happening during a software installation—can provide a reassuring glimpse into a vast and complicated process.
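
  To make the idea concrete, here is a minimal sketch, in Python, of a percent-done indicator wrapped around a hypothetical file-copying task. The file names and timing are invented for illustration; this is not Myers’s original design. Notice that the bar only reports how many steps have finished, not how long the remaining ones will take: a glimpse into the process rather than the truth about it.

    import sys
    import time

    def copy_files(filenames):
        """Pretend to copy each file, updating a percent-done indicator as we go."""
        total = len(filenames)
        for done, name in enumerate(filenames, start=1):
            time.sleep(0.1)                      # stand-in for the real work
            percent = 100 * done // total
            filled = percent // 5                # a 20-character bar
            bar = "#" * filled + "-" * (20 - filled)
            sys.stdout.write(f"\r[{bar}] {percent:3d}%  copying {name}")
            sys.stdout.flush()
        sys.stdout.write("\n")

    copy_files([f"photo_{i}.jpg" for i in range(1, 51)])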

  More and more, we have constructed user interfaces that abstract away complexity, or at least partially shield it from the user, bringing together the fields of complexity science and user interface design. Whether in our computers, our cars, or our appliances, these technologies lower a veil between us and how they operate. And rather than grapple with the increasingly byzantine tax code, many of us use the friendly user interface of TurboTax. Yet behind this software is an enormously complicated set of laws and regulations, rendered into the computer code of if-statements and exceptions.
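
  As a toy illustration of what rendering rules into if-statements might look like, here is a sketch in Python of a progressive tax calculation. The thresholds and rates are invented for this example, not drawn from any actual tax code, and a real program like TurboTax layers vastly more rules on top of a skeleton like this.

    def income_tax(taxable_income):
        """Toy progressive tax: each rule is just another if-statement.
        Thresholds and rates are made up for illustration."""
        tax = 0.0
        if taxable_income > 0:
            tax += min(taxable_income, 20_000) * 0.10
        if taxable_income > 20_000:
            tax += (min(taxable_income, 80_000) - 20_000) * 0.15
        if taxable_income > 80_000:
            tax += (taxable_income - 80_000) * 0.25
        # A real tax program piles on exceptions: filing status, credits,
        # phase-outs, and exceptions to the exceptions.
        return tax

    print(income_tax(50_000))  # 6500.0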

  But as long as we have small ways of maintaining some intuition of what is going on beneath the surface—even if it’s not completely accurate—we can avoid an unnerving discomfort with the unknown.

  My family’s first computer was the Commodore VIC-20, billed by its pitchman, Star Trek’s William Shatner, as “the wonder computer of the 1980s.” I have many fond memories of this antiquated machine. I used to play games on it with cassette tapes that served as primitive storage devices. One of the cassettes we owned was a Pac-Man clone that my brother and I played a lot. Instead of a yellow pie with a mouth, it featured racing cars.

  But we also had games whose code we typed in ourselves. While you could buy software for the VIC-20 (like the race-car game), a major way that people acquired software in those days was through computer code published in the pages of magazines. Want to play a fun skiing game? Then type out the computer program into your computer, line by line, and fire it up for yourself. No purchase necessary. These programs were common then, but no longer. The tens of millions of lines of code that make up today’s game software would fill far more than one magazine.

  Typing code into our computer brought us closer to the machine. I saw how bugs occurred—I have a memory of that skiing program creating graphical gibberish on one side of the screen, until the text was corrected—and I also saw that there was a logic and texture to computer programs. Today’s computer programs are mysterious creations delivered whole to the user, but the old ones had a legible structure.

  Later in the 1980s, my family abandoned Commodore for Apple, and I have used some kind of Macintosh ever since. To my childhood self, our first Mac was something incredible. I was entranced by the mouse, and the games, such as Cosmic Osmo, which offered rich, immersive realms that you could explore just by clicking. These early Macintoshes could even speak, converting text to speech in an inhuman monotone that delighted my family. The presentation Steve Jobs made introducing the Macintosh in 1984 is profoundly emotional and impressive to watch. And yet, something was lost in my family’s rush to embrace the Mac’s wonders. We became more distant from the machine. We see this trend continuing even today with the iPad, so slick and pristine that I don’t even know how its files are stored.

  However, I had HyperCard for our Mac. HyperCard was this strange combination of programming language and exploratory environment. You could create virtual cards, stitch them together, and add buttons and icons that had specific functionality. You could make fun animations and cool sounds and even connect to other cards. If you’ve ever played the classic game Myst, it was originally developed using HyperCard. HyperCard was like a prototypical series of web pages that all lived on your own computer, but it could also do pretty much anything else you wanted. For a kid who was beginning to explore computers, this visual authoring space was the perfect gateway to the machine.

  One program I built with HyperCard was a rudimentary password generator: it could make a random string you could use as a password, but it also had options to make the random passwords more pronounceable, and hence more memorable over the long term. It was simple, but definitely ahead of its time, in my unstudied opinion.
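
  That HyperCard stack is long gone, but the idea behind it is easy to sketch. Here is a rough Python rendering of the same notion (not the original HyperTalk): a fully random password alongside a “pronounceable” one that simply alternates consonants and vowels so it can be sounded out and remembered.

    import secrets
    import string

    CONSONANTS = "bcdfghjklmnprstvz"
    VOWELS = "aeiou"

    def random_password(length=12):
        """A fully random string: strong, but hard to remember."""
        alphabet = string.ascii_lowercase + string.digits
        return "".join(secrets.choice(alphabet) for _ in range(length))

    def pronounceable_password(length=12):
        """Alternate consonants and vowels so the result can be sounded out."""
        pools = (CONSONANTS, VOWELS)
        return "".join(secrets.choice(pools[i % 2]) for i in range(length))

    print(random_password())         # e.g. q7xk1bm0dv3a
    print(pronounceable_password())  # e.g. tavomirulena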

  The computer game designer Chaim Gingold calls gateways like HyperCard “magic crayons.” Like the crayon in the children’s book Harold and the Purple Crayon that allows the young hero to draw objects that immediately take on reality, magic crayons are tools that, in Gingold’s words, “allow non-programmers to engage the procedural qualities of the digital medium and build dynamic things.” Even in the Apple world, commonly viewed as sterilized of messy code and computational innards, HyperCard allowed access to the complex powerhouse of the digital domain. HyperCard provided me with the comfort to enter this world, giving me a hint of the possibilities of working under the hood.

  All complex systems that we interact with have different levels that we can examine, created in technology by the deliberate abstractions we construct and in nature by the abstracting powers of scale and evolution. In biology, we can zoom up from biochemical enzymes to mitochondria to cells to organs to whole creatures, even entire ecosystems, with each level providing different layers of insight. As we abstract up from one level to the next, we lose fine-grained control and understanding, but we are also able to better comprehend the larger-level system. In computer software, we can move up from individual bits and bytes to assembly language to higher-level computer code to the everyday user interface that allows us to click on, drag, and use a web browser. Each successive level brings us more functionality, but it also takes us further away from the underlying logic of the machine.

  Of course, as should be clear by now, it’s unlikely that that logic will ever be entirely comprehensible. But we should be able to glimpse under the hood a little. If we see our tablets and phones as mere polished slabs of glass and metal, performing veritable feats of magic, and have little clue what is happening beneath the surface or in their digital sinews, something is lost. In fact, this can cause problems: when our systems are so completely automated, we have little ability to respond when something goes wrong. This problem of being shielded from the inner workings of the technology around us has been called “concealed electronic complexity”: mind-boggling complexity lies within our devices but is entirely hidden from our view.

  In the 1960s, a component of the telephone system was designed in such a way that when it detected that it had failed, it simply connected the user to a wrong number. This redirected people to blame human error—users would think they had simply misdialed—rather than confront the fallibility of the technology itself. Without the knowledge of what was really happening, the person who dialed had a different sense of the system’s authority and mystery than she would have if she had seen it clearly as the complicated yet imperfect construction it was.

  We need glimpses under the hood to see, even if incompletely, what is going on. When these technologies exceed our ability to fully understand them, such glimpses will matter to the expert as well as to the average user of a technology. It’s not enough just for one person, or even a handful of individuals, to see under the hood and recognize the limits of our systems and ourselves. We can’t cede this responsibility. Each of us needs to pay attention to these glimpses. Without them, we drift away from a humble but vigilant intuition about these systems and toward reverence or fear. Being able to peek underneath the hood of technology isn’t just interesting or educational; it helps inoculate us against unhealthy perspectives toward our technologies. In the Entanglement, we need this protection more and more.

  But what if we can’t easily get glimpses under the hood? What if a system is so incredibly sophisticated that these little windows either are too difficult to construct or provide too little insight? There is another approach. Simulations are a way to provide us with the beginnings of intuition into how a complex technology works.

  While we can’t actually control the weather or understand it in all its nonlinear details, we can predict it reasonably well, adapt to it, and even prepare for it. Weather models are incredibly complicated, though each individual part is still designed to be understandable. We look to these models to plan our wardrobe and our activities of the day and week, but also to get a sense, even if an imperfect one, of how the atmosphere operates. And, of course, when the outdoors delivers us an unexpected blizzard or deluge, we manage as best we can.

  Just as we have weather models, we can begin to make models of our technological systems, even somewhat simplified ones. Playing with a simulation of the system we’re interested in—testing its limits and fiddling with its parameters, without understanding it completely—can be a powerful path to insight, and is a skill that needs cultivation.

  For example, the computer game SimCity, a model of sorts, gives its users insights into how a city works. Before SimCity, I doubt many outside the realm of urban planning and civil engineering had a clear mental model of how cities worked, and we weren’t able to twiddle the knobs of urban life to produce counterfactual outcomes. We probably still can’t do that at the level of complexity of an actual city, but those who play these types of games do have a better understanding of the general effects of their actions. We need to get better at “playing” simulations of the technological world more generally, teaching students how to play with some system, examining its limits and how it works, at least “sort of.” This play—tweaking a simulation of technological failure and seeing how it responds—can provide a greater comfort with large and unwieldy systems and can help us as we move forward through this world of increasingly complicated technology.
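
  What might that kind of play look like in practice? Here is a deliberately crude sketch in Python, a made-up model rather than any real system: a ring of components in which each failure has some chance of knocking out its neighbors. Rerunning it while twiddling the coupling knob, p_spread, gives a feel for how such a system can tip from isolated glitches into a cascade.

    import random

    def average_cascade(n_components=100, p_initial_failure=0.02,
                        p_spread=0.3, trials=500):
        """Toy model: components sit on a ring; a failed component may knock out
        each of its two immediate neighbors with probability p_spread."""
        total_failed = 0
        for _ in range(trials):
            failed = {i for i in range(n_components)
                      if random.random() < p_initial_failure}
            frontier = list(failed)
            while frontier:
                current = frontier.pop()
                for neighbor in ((current + 1) % n_components,
                                 (current - 1) % n_components):
                    if neighbor not in failed and random.random() < p_spread:
                        failed.add(neighbor)
                        frontier.append(neighbor)
            total_failed += len(failed)
        return total_failed / trials

    # "Twiddle the knobs" and watch how the system responds.
    for p in (0.1, 0.3, 0.5, 0.7, 0.9):
        print(f"p_spread={p}: {average_cascade(p_spread=p):.1f} components fail on average")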

 
