Algorithms to Live By

by Brian Christian


  Exponential Backoff is also a critical part of network security, where successive password failures in logging into an account are punished by an exponentially increasing lockout period. This prevents a hacker from using a “dictionary attack” against an account, cycling through potential password after password until eventually they get lucky. At the same time it also solves another problem: the account’s real owner, no matter how forgetful, is never permanently locked out after some arbitrary cutoff.
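
  For the programmatically inclined, a minimal sketch of such a lockout schedule might look like the following snippet. The one-second base delay, the one-day cap, and the lockout_seconds helper are illustrative assumptions, not any real system's policy.

    # Illustrative sketch: exponentially increasing lockout after failed logins.
    # The base delay, the cap, and the names here are assumptions for this example.
    BASE_LOCKOUT_SECONDS = 1          # wait after the first failure
    MAX_LOCKOUT_SECONDS = 24 * 3600   # optional cap so the wait stays finite

    def lockout_seconds(consecutive_failures: int) -> int:
        """How long the account stays locked after this many failures in a row."""
        if consecutive_failures <= 0:
            return 0
        delay = BASE_LOCKOUT_SECONDS * 2 ** (consecutive_failures - 1)
        return min(delay, MAX_LOCKOUT_SECONDS)

    # After 1, 2, 3, ... failures the wait doubles: 1 s, 2 s, 4 s, 8 s, ...
    # A dictionary attack slows to a crawl, yet the forgetful owner is never
    # permanently shut out.
    for failures in range(1, 11):
        print(failures, lockout_seconds(failures))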

  In human society, we tend to adopt a policy of giving people some finite number of chances in a row, then giving up entirely. Three strikes, you’re out. This pattern prevails by default in almost any situation that requires forgiveness, lenience, or perseverance. Simply put, maybe we’re doing it wrong.

  A friend of ours recently mused about a childhood companion who had a disconcerting habit of flaking on social plans. What to do? Deciding once and for all that she’d finally had enough and giving up entirely on the relationship seemed arbitrary and severe, but continuing to persist in perpetual rescheduling seemed naïve, liable to lead to an endless amount of disappointment and wasted time. Solution: Exponential Backoff on the invitation rate. Try to reschedule in a week, then two, then four, then eight. The rate of “retransmission” goes toward zero—yet you never have to completely give up.

  Another friend of ours agonized about whether to offer shelter and financial assistance to a family member with a history of drug addiction. She couldn’t bear to give up hope that he would turn things around, and couldn’t bear the thought of turning her back on him for good. But she also couldn’t bring herself to do all that it required to have him in her house—buying him clothes and cooking for him, reopening bank accounts for him, and driving him to work each morning—when at some mysterious and abrupt moment he would take all the money and disappear, only to call again several weeks later and ask to be forgiven and taken back in. It seemed like a paradox, a cruel and impossible choice.

  Exponential Backoff isn’t a magic panacea in cases like this, but it does offer a possible way forward. Requiring an exponentially increasing period of sobriety, for instance, would offer a disincentive to violate the house rules again. It would make the family member prove ever more assiduously that he was serious about returning, and would protect the host from the otherwise continuous stress of the cycle. Perhaps most importantly, the host would never have to tell her relative that she’d given up on him for good or that he was beyond redemption. It offers a way to have finite patience and infinite mercy. Maybe we don’t have to choose.

  In fact, the past decade has seen the beginnings of a quiet revolution in the way the justice system itself handles community supervision for drug offenders. That revolution is being spearheaded by a pilot program called HOPE, which uses the Exponential Backoff principles of the ALOHAnet—and which, in a striking coincidence, began at the birthplace of the ALOHAnet itself: Honolulu.

  Shortly after being sworn in to Hawaii’s First Circuit Court, Judge Steven Alm noticed a remarkable pattern. Probationers would repeatedly violate their probation terms, and circuit judges would routinely use their discretion to let them off with a warning. But at some point, perhaps after a dozen or more violations, the judge would decide to be strict, and assign the violator a prison sentence measured in years. Says Alm, “I thought, what a crazy way to try to change anybody’s behavior.” So Alm proposed almost exactly the opposite. In place of violation hearings scheduled a long time into the future, requiring uncertain judgment calls, and occasionally producing huge penalties, HOPE is based on immediate, predefined punishments that begin with just one day in jail and increase after each incident. A five-year study by the Department of Justice reported that HOPE probationers were half as likely as regular probationers to be arrested for a new crime or have their probation revoked. And they were 72% less likely to use drugs. Seventeen states have followed Hawaii’s lead and launched their own versions of HOPE.

  Flow Control and Congestion Avoidance

  The first efforts at computer networking focused on establishing reliable transmissions over unreliable links. These efforts proved to be so successful that a second concern immediately arose: making sure that an overloaded network could avoid catastrophic meltdown. No sooner had TCP solved the problem of getting data from point A to point B than it was confronted with the problem of gridlock.

  The most significant early warning came in 1986, on a line connecting the Lawrence Berkeley Laboratory and the UC Berkeley campus, which are separated by about the length of a football field. (At Berkeley, the space happens to be filled with an actual football field.) One day, the bandwidth of that line dropped abruptly from its typical 32,000 bits per second to just 40 bits per second. The victims, Van Jacobson at LBL and Michael Karels at UCB, “were fascinated by this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why things had gotten so bad.”

  Meanwhile, they heard murmurings from other networking groups across the country who were running into the same thing. Jacobson began looking into the underlying code. “Is there some mistake in the protocol?” he wondered. “This thing was working on smaller-scale tests, and then it suddenly fell apart.”

  One of the biggest differences between circuit switching and packet switching emerges in how they deal with congestion. In circuit switching, the system either approves a channel request, or denies it outright if the request cannot be accommodated. That’s why, if you’ve ever tried using a phone system during some peak time, you may have encountered the “special information tone” and message proclaiming that “all circuits are busy.”

  Packet switching is radically different. The phone system gets full; the mail system gets slow. There’s nothing in the network to explicitly tell a sender how many other senders there are, or how congested the network is at any given moment, and the amount of congestion is constantly changing. Therefore, the sender and receiver must not only communicate but metacommunicate: they need to figure out how fast the data should be sent. Somehow, assorted packet flows—without explicit management or coordination—must both get out of each other’s way and quickly take advantage of any newly available space.

  The result of Jacobson and Karels’s detective work was a revised set of flow control and congestion-avoidance algorithms—one of the biggest modifications to TCP in forty years.

  At the heart of TCP congestion control is an algorithm called Additive Increase, Multiplicative Decrease, or AIMD. Before AIMD kicks in, a new connection will ramp up its transmission rate aggressively: if the first packet is received successfully it sends out two more, if both of those get through it sends out a batch of four, and so on. But as soon as any packet’s ACK does not come back to the sender, the AIMD algorithm takes over. Under AIMD, any fully received batch of packets causes the number of packets in flight not to double but merely to increase by 1, and dropped packets cause the transmission rate to cut back by half (hence the name Additive Increase, Multiplicative Decrease). Essentially, AIMD takes the form of someone saying, “A little more, a little more, a little more, whoa, too much, cut way back, okay a little more, a little more…” Thus it leads to a characteristic bandwidth shape known as the “TCP sawtooth”—steady upward climbs punctuated by steep drops.
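
  A toy simulation makes the sawtooth easy to see. This is only a sketch of the idea, not TCP's actual implementation: the loss model (a drop becomes likelier as more packets are in flight) and all the constants below are assumptions made purely for illustration.

    import random

    def simulate(rounds: int = 50, seed: int = 0) -> list[int]:
        rng = random.Random(seed)
        window = 1          # packets "in flight" per round
        slow_start = True   # initial aggressive ramp-up phase
        history = []
        for _ in range(rounds):
            history.append(window)
            dropped = rng.random() < window / 100  # more in flight, more risk
            if dropped:
                window = max(1, window // 2)  # multiplicative decrease
                slow_start = False            # AIMD takes over for good
            elif slow_start:
                window *= 2                   # double until the first drop
            else:
                window += 1                   # additive increase
        return history

    # Printing the history shows the characteristic sawtooth:
    # steady climbs punctuated by halvings.
    print(simulate())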

  Why such a sharp, asymmetrical decrease? As Jacobson and Karels explain, the first time AIMD kicks in is when a connection has experienced the first dropped packet in its initial aggressive ramping-up phase. Because that initial phase involved doubling the rate of transmission with every successful volley, cutting the speed back by half as soon as there’s been a problem is entirely appropriate. And once a transmission is in progress, if it starts to falter again that’s likely to be because some new connection is competing for the network. The most conservative assessment of that situation—namely, assuming you were the only person using the network and now there’s a second person taking half the resources—also leads to cutting back by half. Conservatism here is essential: a network can stabilize only if its users pull back at least as fast as the rate at which it is being overloaded. For the same reason, a merely additive increase helps stabilize things for everyone, preventing rapid overload-and-recovery cycles.

  Though such a strict distinction between addition and multiplication is the kind of thing unlikely to be found in nature, the TCP sawtooth does find resonance in various domains where the idea is to take as much as one can safely get away with.

  In a serendipitous 2012 collaboration, for instance, Stanford ecologist Deborah Gordon and computer scientist Balaji Prabhakar discovered that ants appear to have developed flow control algorithms millions of years before humans did. Like a computer network, an ant colony faces an allocation problem in trying to manage its “flow”—in this case, the flow of ants heading out to forage for food—under variable conditions that may sharply affect the rate at which the ants make successful round-trips. And like computers on the Internet, ants must solve this shared problem without the benefit of a central decision maker, instead developing what Gordon calls “control without hierarchy.” It turns out the ants’ solution is similar, too: a feedback cycle where successful foragers prompt more to leave the nest, while unsuccessful returnees result in a diminishment of foraging activity.
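
  A deliberately crude sketch conveys the flavor of such a feedback loop, though it is emphatically not Gordon and Prabhakar's published model; the update rule and every constant below are invented for illustration only.

    import random

    # Toy feedback loop, loosely inspired by forager regulation: each returning
    # forager that found food nudges the outgoing rate up; each empty-handed
    # return nudges it down.

    def forage(steps: int = 30, seed: int = 1) -> list[float]:
        rng = random.Random(seed)
        rate = 5.0                                 # foragers sent out per step
        history = []
        for _ in range(steps):
            history.append(rate)
            success_prob = rng.uniform(0.2, 0.9)   # food availability varies
            successes = sum(rng.random() < success_prob for _ in range(int(rate)))
            failures = int(rate) - successes
            rate = max(1.0, rate + 0.5 * successes - 1.0 * failures)
        return history

    # The outgoing rate rises and falls with the (unobserved) food supply,
    # with no central controller anywhere in the loop.
    print(forage())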

  Other animal behavior also evokes TCP flow control, with its characteristic sawtooth. Squirrels and pigeons going after human food scraps will creep forward a step at a time, occasionally leap back, then steadily creep forward again. And it may be that human communications themselves mirror the very protocols that transmit them: every text message or email reply encourages yet another, while every unreturned message stanches the flow.

  More broadly, AIMD suggests an approach to the many places in life where we struggle to allocate limited resources in uncertain and fluctuating conditions.

  The satirical “Peter Principle,” articulated in the 1960s by education professor Laurence J. Peter, states that “every employee tends to rise to his level of incompetence.” The idea is that in a hierarchical organization, anyone doing a job proficiently will be rewarded with a promotion into a new job that may involve more complex and/or different challenges. When the employee finally reaches a role in which they don’t perform well, their march up the ranks will stall, and they will remain in that role for the rest of their career. Thus it stands to reason, goes the ominous logic of the Peter Principle, that eventually every spot in an organization will come to be filled by someone doing that job badly. Some fifty years before Peter’s formulation, Spanish philosopher José Ortega y Gasset in 1910 voiced the same sentiment. “Every public servant should be demoted to the immediately lower rank,” he wrote, “because they were advanced until they became incompetent.”

  Some organizations have attempted to remediate the Peter Principle by simply firing employees who don’t advance. The so-called Cravath System, devised by leading law firm Cravath, Swaine & Moore, involves hiring almost exclusively recent graduates, placing them into the bottom ranks, and then routinely either promoting or firing them over the following years. In 1980, the US Armed Forces adopted a similar “up or out” policy with the Defense Officer Personnel Management Act. The United Kingdom has likewise pursued what they call “manning control,” to great controversy.

  Is there any alternative, any middle path between the institutional stagnation of the Peter Principle and the draconian severity of the “up or out” system? The AIMD algorithm can offer just such an approach, since it is explicitly designed to handle the demands of a volatile environment. A computer network must manage its own maximum transmission capacity, plus the transmission rates of its clients, all of which may be fluctuating unpredictably. Likewise, in a business setting, a company has a limited pool of funds to pay for its operations, and each worker or vendor has a limited capacity for the amount of work they can do and the amount of responsibility they can handle. Everyone’s needs, capacities, and partnerships are always in flux.

  The lesson of the TCP sawtooth is that in an unpredictable and changing environment, pushing things to the point of failure is indeed sometimes the best (or the only) way to use all the resources to their fullest. What matters is making sure that the response to failure is both sharp and resilient. Under AIMD, every connection that isn’t dropping the ball is accelerated until it is—and then it’s cut in half, and immediately begins accelerating again. And though it would violate almost every norm of current corporate culture, one can imagine a corporation in which, annually, every employee is always either promoted a single step up the org chart or sent part of the way back down.

  As Laurence J. Peter saw it, the insidious Peter Principle arises in corporations because of “the first commandment of hierarchical life: the hierarchy must be preserved.” TCP, in contrast, teaches the virtues of flexibility. Companies speak of “flat” hierarchies and “tall” hierarchies, but they might consider speaking of dynamic ones. Under an AIMD system, no one is long anxious about being overtaxed, nor long resentful about a missed promotion; both are temporary and frequent correctives, and the system hovers near its equilibrium despite everything changing all the time. Perhaps one day we’ll speak not of the arc of one’s career, but rather of its sawtooth.

  Backchannels: Flow Control in Linguistics

  Looking into networking’s flow control makes it clear that upstream ACK packets not only acknowledge and confirm transmissions but also shape the pace and cadence of the entire interaction. This offers both a reminder of, and an insight into, how important feedback is to communication. In TCP, as we’ve seen, there’s no such thing as a one-way transmission: without consistent feedback, the sender will slow down almost immediately.

  Curiously, the rising awareness of the critical role of feedback in the field of networking mirrored an almost identical set of developments going on around the same time in the linguistics community. In the middle of the twentieth century, linguistics was dominated by the theories of Noam Chomsky, which considered language in its most perfect and ideal state—perfectly fluent, grammatical, uninterrupted sentences, as if all communication were written text. But starting in the 1960s and ’70s, a surge of interest in the practical aspects of spoken language revealed just how elaborate and subtle the processes are that govern turn-taking, interruption, and composing a sentence or story on the fly while being attuned to a listener’s reactions every step of the way. What emerged was a vision of even ostensibly one-way communication as a collaborative act. As linguist Victor Yngve would write in 1970, “In fact, both the person who has the turn and his partner are simultaneously engaged in both speaking and listening. This is because of the existence of what I call the back channel, over which the person who has the turn receives short messages such as ‘yes’ and ‘uh-huh’ without relinquishing the turn.”

  An examination of human “backchannels” opened a whole new horizon for the field of linguistics, prompting a complete re-evaluation of the dynamics of communication—specifically, the role of the listener. In one illustrative study, a team led by Janet Bavelas at the University of Victoria investigated what would happen when someone listening to a personal story got distracted: not what would happen to the listener’s comprehension, but what would happen to the story. With poor feedback, they discovered, the story falls apart.

  Narrators who told close-call stories to distracted listeners … told them less well overall and particularly poorly at what should have been the dramatic conclusion. Their story endings were abrupt or choppy, or they circled around and retold the ending more than once, and they often justified their story by explaining the obvious close call.

  We’ve all had the experience of talking to someone whose eyes drifted away—to their phone, perhaps—making us wonder whether our lackluster storytelling was to blame. In fact, it’s now clear that the cause and effect are often the reverse: a poor listener destroys the tale.

  Understanding the exact function and meaning of human backchannels continues to be an active area of research. In 2014, for instance, UC Santa Cruz’s Jackson Tolins and Jean Fox Tree demonstrated that those inconspicuous “uh-huhs” and “yeahs” and “hmms” and “ohs” that pepper our speech perform distinct, precise roles in regulating the flow of information from speaker to listener—both its rate and level of detail. Indeed, they are every bit as critical as ACKs are in TCP. Says Tolins, “Really, while some people may be worse than others, ‘bad storytellers’ can at least partly blame their audience.” This realization has had the unexpected side effect of taking off some of the pressure when he gives lectures—including, of course, lectures about that very result. “Whenever I give these backchannel talks, I always tell the audience that the way they are backchanneling to my talk right now is changing what I say,” he jokes, “so they’re responsible for how well I do.”

  Bufferbloat: It’s the Latency, Stupid

  Developing effective active queue management has been hampered by misconceptions about the cause and meaning of queues.

  —KATHLEEN NICHOLS AND VAN JACOBSON

  It was the summer of 2010, and like many parents, Jim Gettys was fielding frequent complaints from his children that the family wi-fi network was running slowly. Unlike most parents, though, Gettys had worked at HP, Alcatel-Lucent, the World Wide Web Consortium, and the Internet Engineering Task Force. He was literally the editor, in 1999, of the HTTP specification still in use today. So where most geek dads would look into the problem, Gettys really looked into the problem.

  As Gettys would explain to a roomful of Google engineers, with networking jargon giving way to an urgent and unmistakable conviction:

 
