Emergence
It’s conceivable that the software of today lies at the evolutionary foothills of some larger, distributed consciousness to come, like the Skynet network from the Terminator films that “became self-aware” on August 29, 1997. Certainly the evidence suggests that genuinely cognizant machines are still on the distant technological horizon, and there’s plenty of reason to suspect they may never arrive. But the problem with the debate over machine learning and intelligence is that it has too readily been divided between the mindless software of today and the sentient code of the near future. The Web may never become self-aware in any way that resembles human self-awareness, but that doesn’t mean the Web isn’t capable of learning. Our networks will grow smarter in the coming years, but smarter in the way that an immune system or a city grows smarter, not the way a child does. That’s nothing to apologize for—an adaptive information network capable of complex pattern recognition could prove to be one of the most important inventions in all of human history. Who cares if it never actually learns how to think for itself?
An emergent software program that tracks associations between Web sites or audio CDs doesn’t listen to music; it follows purchase patterns or listening habits that we supply and lets us deal with the air guitar and the off-key warbling. On some basic human level, that feels like a difference worth preserving. And maybe even one that we won’t ever be able to transcend, a hundred years from now or more. But is it truly a difference in kind, or is it just a difference in degree? This is the question that has haunted the artificial intelligence community for decades now, and it hits close to home in any serious discussion of emergent software. Yes, the computer doesn’t listen to music or browse the Web; it looks for patterns in data and converts those patterns into information that is useful—or at least aims to be useful—to human beings. Surely this process is miles away from luxuriating in “The Goldberg Variations,” or reading Slate.
But what is listening to music if not the search for patterns—for harmonic resonance, stereo repetition, octaves, chord progressions—in the otherwise dissonant sound field that surrounds us every day? One tool scans the zeros and ones on a magnetic disc. The other scans the frequency spectrum. What drives each process is a hunger for patterns, equivalencies, likenesses; in each the art emerges out of perceived symmetry. (Bach, our most mathematical composer, understood this better than anyone else.) Will computers ever learn to appreciate the patterns they detect? It’s too early to tell. But in a world where the information accessible online is doubling every six months, it is clear that some form of pattern-matching—all those software programs scouring the Net for signs of common behavior, relevant ideas, shared sensibilities—will eventually influence much of our mediated lives, maybe even to the extent that the pattern-seekers are no longer completely dependent on the commands of the masters, just as city neighborhoods grow and evolve beyond the direct control of their inhabitants. And where will that leave the software then? What makes music different from noise is that music has patterns, and our ears are trained to detect them. A software application—no matter how intelligent—can’t literally hear the sound of all those patterns clicking into place. But does that make its music any less sweet?
4
Listening to Feedback
Late in the afternoon of January 23, 1992, during a campaign stop at the American Brush Company in Claremont, New Hampshire, the ABC political reporter Jim Wooten asked then-candidate Bill Clinton about allegations being made by an ex-cabaret singer named Gennifer Flowers. While rumors of Clinton’s womanizing had been rampant among the press corps, Wooten’s question was the first time the young Democratic front-runner had been asked about a specific woman. “She claims she had a long-standing affair with you,” Wooten said with cameras running. “And she says she tape-recorded the telephone conversations with you in which you told her to deny you had ever had an affair.”
Wooten said later that Clinton took the question as though he’d been practicing his answer for months. “Well, first of all, I read the story. It isn’t true. She has obviously taken money to change the story, a story she felt so strongly about that she hired a lawyer to protect her good name not very long ago. She did call me. I never initiated any calls to her….” The candidate’s denials went on for another five minutes, and then the exchange was over. Clinton had responded to the question, but was it news? Across the country, a furious debate on journalistic ethics erupted: Did unproven allegations about the candidate’s sex life constitute legitimate news? And did it matter that the candidate himself had chosen to deny the allegations on camera? A cabaret singer making claims about the governor’s adulterous past was clearly tabloid material—but what happened when the governor himself addressed the story?
After two long hours of soul-searching, all three major television networks—along with CNN and PBS’s MacNeil/Lehrer show—chose not to mention Wooten’s question on their national news broadcasts, or to show any of the footage from the exchange. The story had emphatically been silenced by some of the most influential figures in all of mass media. The decision to ignore Gennifer Flowers had been unanimous—even at the network that had originally posed the question. Made ten or twenty years before, a decision of that magnitude could have stopped a story in its tracks (assuming the Washington Post and the New York Times followed suit the next morning). For the story to be revived, it would need new oxygen—some new development that caused it to be reevaluated. Without new news, the Flowers story was dead.
And yet the following day, all three networks opened with Gennifer Flowers as their lead item. Nothing had happened to the story itself: none of the protagonists had revealed any additional information; even Clinton’s opponents were surprisingly mute about the controversy. The powers that be in New York and Washington had decided the day before that there was no story—and yet here were Peter Jennings and Tom Brokaw leading their broadcasts with the tale of a former Arkansas beauty queen and her scandalous allegations.
How did such a reversal come to pass? It’s tempting to resort to the usual hand-wringing about the media’s declining standards, but in this case, the most powerful figures in televised media had at first stuck to the high road. If they had truly suffered from declining standards, the network execs would have put Jim Wooten on the first night. Something pushed them off the high road, and that something was not reducible to a national moral decline or a prurient network executive. Gennifer Flowers rode into the popular consciousness via the system of televised news, a system that had come to be wired in a specific way.
What we saw in the winter of 1992 was not unlike watching Nixon sweat his way through the famous televised debate of 1960. As countless critics have observed since, we caught a first glimpse in that exchange of how the new medium would change the substance of politics: television would increase our focus on the interpersonal skills of our politicians and diminish our focus on the issues. With the Flowers affair, though, the medium hadn’t changed; the underlying system had. In the late eighties, changes in the flow of information—and particularly the raw footage so essential to televised news—had pushed the previously top-down system toward a more bottom-up, distributed model. We didn’t notice until Jim Wooten first posed that question in New Hampshire, but the world of televised news had taken a significant first step toward emergence. In the hierarchical system of old, the network heads could willfully suppress a story if they thought it was best for the American people not to know, but that privilege died with Gennifer Flowers, and not because of lowered standards or sweeps week. It was a casualty of feedback.
*
It is commonplace by now to talk about the media’s disposition toward feeding frenzies, where the coverage of a story naturally begets more coverage, leading to a kind of hall-of-mirrors environment where small incidents or allegations get amplified into Major Events. You can normally spot one of these feedback loops as it nears its denouement, since it almost invariably triggers a surge of self-loathing that washes through the entire commentariat. These self-critical waters seem to rise on something like an annual cycle: think of the debate about the paparazzi and Princess Di’s death, or the permanent midnight of “Why Do We Care So Much About O.J.?” But the feedback loops of the 1990s weren’t an inevitability; they came out of specific changes in the underlying system of mass media, changes that brought about the first stirrings of emergence—and foreshadowed the genuinely bottom-up systems that have since flourished on the Web. That feedback was central to the process should come as no surprise: all decentralized systems rely extensively on feedback, for both growth and self-regulation.
Consider the neural networks of the human brain. On a cellular level, the brain is a massive network of nerve cells connected by the microscopic passageways of axons and dendrites. A flash of brain activity—thinking of a word, wrestling with a concept, parsing the syntax of the sentence you’re reading now—triggers an array of neuronal circuits like traffic routes plotted on the map of the mind. Each new mental activity triggers a new array, and an unimaginably large number of possible neuronal circuits go unrealized over the course of a human life (one reason why the persistent loss of brain cells throughout our adult years isn’t such a big deal). But beneath all that apparent diversity, certain circuits repeat themselves again and again. One of the most tantalizing hypotheses in neuroscience today is that the cellular basis of learning lies in the repetition of those circuits. As neurologist Richard Restak explains, “Each thought and behavior is embedded within the circuitry of the neurons, and … neuronal activity accompanying or initiating an experience persists in the form of reverberating neuronal circuits, which become more strongly defined with repetition. Thus habit and other forms of memory may consist of the establishment of permanent and semipermanent neuronal circuits.” A given circuit may initially be associated with the idea of sandwiches, or the shape of an isosceles triangle—and with enough repetition of that specific circuit, it marks out a fixed space in the brain and thereafter becomes part of our mental vocabulary.
Why do these feedback loops and reverberating circuits happen? They come into being because the neural networks of the brain are densely interconnected: each individual neuron contains links—in the form of axons and synapses—to as many as a thousand other neurons. When a given neuron fires, it relays that charge to all those other cells, which, if certain conditions are met, then in turn relay the charge to their connections, and so on. If each neuron extended a link to one or two fellow neurons, the chance of a reverberating loop would be greatly reduced. But because neurons reach out in so many directions simultaneously, it’s far more likely that a given neuron firing will wind its way back to the original source, thus starting the process all over again. The likelihood of a feedback loop correlates directly to the general interconnectedness of the system.
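That closing claim can be illustrated with a toy model (a deliberate simplification, not a neural simulation; the function names and parameter values here are invented for the example): wire up a random directed network in which every node links to k others, fire one node, relay the signal hop by hop, and count how often it winds its way back to its source. As k grows, so does the chance of a reverberating loop.

```python
# Toy model of the interconnectedness/feedback claim: in a random
# directed network where every node links to k others, how often does
# a signal launched from node 0 return to its source within a few hops?
import random

def returns_to_source(n, k, hops, rng):
    """Build a random k-out network, fire node 0, and relay the signal.

    Returns True if the charge winds back to node 0 within `hops` relays.
    """
    links = {node: rng.sample([m for m in range(n) if m != node], k)
             for node in range(n)}
    active = {0}
    for _ in range(hops):
        active = {target for node in active for target in links[node]}
        if 0 in active:
            return True
    return False

def loop_probability(n, k, hops=4, trials=500, seed=1):
    """Estimate the chance of a feedback loop over many random networks."""
    rng = random.Random(seed)
    hits = sum(returns_to_source(n, k, hops, rng) for _ in range(trials))
    return hits / trials

# Sparse wiring rarely loops; dense wiring almost always does.
for k in (1, 2, 8, 32):
    print(k, loop_probability(n=200, k=k))
```

Run it and the estimated loop probability climbs steadily with k, which is the point of the passage: the likelihood of feedback tracks the interconnectedness of the system, whether the nodes are neurons or news outlets.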
By any measure, the contemporary mediasphere is a densely interconnected system, even if you don’t count the linkages of the online world. Connected not just in the sense of so many homes wired for cable and so many rooftops crowned by satellite dishes, but also in the more subtle sense of information being plugged into itself in ever more baroque ways. Since Daniel Boorstin first analyzed the television age in his still-invaluable 1961 work, The Image, the world of media journalism has changed in several significant ways, with most of the changes promoting an increase of relays between media outlets. There are far more agents in the system (twenty-four-hour news networks, headline pagers, newsweeklies, Web sites), and far more repackagings and repurposings of source materials, along with an alarming new willingness to relay uncritically other outlets’ reporting. Mediated media-critique, unknown in Boorstin’s less solipsistic times, and formerly quarantined to early-nineties creations such as CNN’s Reliable Sources and the occasional Jeff Greenfield segment on Nightline, is now regularly the lead story on Larry King and Hardball. The overall system, in other words, has shifted dramatically in the direction of distributed networks, away from the traditional top-down hierarchies. And the more the media contemplates its own image, the more likely it is that the system will start looping back on itself, like a Stratocaster leaning against the amp it’s plugged into.
The upshot of all this is that—in the national news cycle at least—there are no longer any major stories in which the media does not eventually play an essential role, and in many cases the media’s knack for self-reflection creates the story itself. You don’t need much of an initial impulse to start the whole circuit reverberating. The Gennifer Flowers story is the best example of this process at work. As Tom Rosenstiel reported in a brilliant New Republic piece several years ago, the Flowers controversy blossomed because of a shift in the relationship between the national news networks and their local affiliates, a shift that made the entire system significantly more interconnected. Until the late eighties, local news (the six- and eleven-o’clock varieties) relied on the national network for thirty minutes of national news footage, edited according to the august standards of the veterans in New York. Local affiliates could either ignore the national stories or run footage that had been supplied to them, but if the network decided the story wasn’t newsworthy, the affiliates couldn’t cover it.
All this changed when CNN entered the picture in the mid-eighties. Since the new network lacked a pool of affiliates to provide breaking news coverage when local events became national stories, Ted Turner embarked on a strategy of wooing local stations with full access to the CNN news feed. Instead of a tightly edited thirty-minute reel, the affiliates would be able to pick and choose from almost anything that CNN cameras had captured, including stories that the executive producers in Atlanta had decided to ignore. The Flowers episode plugged into this newly rewired system, and the results were startling. Local news affiliates nationwide also had access to footage of Clinton’s comment, and many of them chose to jump on the story, even as the network honchos in New York and Washington decided to ignore it. “When NBC News political editor Bill Wheatley got home and turned on the eleven P.M. local news that night, he winced: the station NBC owned in New York ran the story the network had chosen not to air the same evening,” Rosenstiel writes. “By the next afternoon, even Jim Lehrer of the cautious MacNeil/Lehrer NewsHour on PBS told the troops they had to air the Flowers story against their better judgment. ‘It’s out of my hands,’ he said.”
The change was almost invisible to Americans watching at home, but its consequences were profound. The mechanism for determining what constituted a legitimate story had been reengineered, shifting from a top-down system with little propensity for feedback, to a kind of journalistic neural net where hundreds of affiliates participated directly in the creation of the story. And what made the circuit particularly vulnerable to reverberation was that the networks themselves mimicked the behavior of the local stations, turning what might have been a passing anomaly into a full-throttle frenzy. That was the moment at which the system began to display emergent behavior. The system began calling the shots, instead of the journalists themselves. Lehrer had it right when he said the Gennifer Flowers affair was “out of my hands.” The story was being driven by feedback.
*
The Flowers affair is a great example of why emergent systems aren’t intrinsically good. Tornadoes and hurricanes are feedback-heavy systems too, but that doesn’t mean you want to build one in your backyard. Depending on their component parts, and the way they’re put together, emergent systems can work toward many different types of goals: some of them admirable, some more destructive. The feedback loops of urban life created the great bulk of the world’s most dazzling and revered neighborhoods—but they also have a hand in the self-perpetuating cycles of inner-city misery. Slums can also be emergent phenomena. That’s not an excuse to resign ourselves to their existence or to write them off as part of the “natural” order of things. It’s reason to figure out a better system. The Flowers affair was an example of early-stage emergence—a system of local agents driving macrobehavior without any central authority calling the shots. But it was not necessarily adaptive.
Most of the time, making an emergent system more adaptive entails tinkering with different kinds of feedback. In the Flowers affair, we saw an example of what systems theorists call positive feedback—the sort of self-fueling cycles that cause a note strummed on a guitar to expand into a howling symphony of noise. But most automated control systems rely extensively on “negative feedback” devices. The classic example is the thermostat, which uses negative feedback to solve the problem of controlling the temperature of the air in a room. There are actually two ways to regulate temperature. The first would be to design an apparatus capable of blowing air at any specified temperature; the occupant of the room would simply select a setting and the machine would start blowing air cooled or heated to exactly that temperature. The problem with that system is twofold: it requires a heating/cooling apparatus capable of blowing air at precise temperatures, and it is utterly indifferent to the room’s existing condition. Dial up seventy-two degrees on the thermostat, and the machine will start pumping seventy-two-degree air into the room—even if the room’s ambient temperature is already in the low seventies.
The negative feedback approach, on the other hand, provides a simpler solution, and one that is far more sensitive to a changing environment. (Not surprisingly, it’s the technique used by most home thermostats.) Instead of pumping precisely calibrated air into the room, the system works with three states: hot air, cool air, and no air. It takes a reading of the room’s temperature, measures that reading against the desired setting, and then adjusts its state accordingly. If the room is colder than the desired setting, the hot air goes on. If it is warmer, the cool air flows out. The system continuously measures the ambient temperature and continuously adjusts its output, until the desired setting has been reached—at which point it switches into the “no air” state, where it remains until the ambient temperature changes for some reason. The system uses negative feedback to home in on the proper conditions—and for that reason it can handle random changes in the environment.
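The three-state logic described above fits in a few lines of code (a hypothetical sketch; the setpoint, tolerance, and heating rate are invented for the example):

```python
# A minimal sketch of negative-feedback temperature control.
# The thermostat never computes an exact output; it only compares
# a reading against the setpoint and picks one of three states.

DESIRED = 72.0   # target temperature, degrees F (example value)
DEADBAND = 0.5   # tolerance around the target, to avoid rapid toggling

def thermostat_state(ambient, desired=DESIRED, deadband=DEADBAND):
    """Return which of the three states the system should be in."""
    if ambient < desired - deadband:
        return "hot air"    # room too cold: heat
    if ambient > desired + deadband:
        return "cool air"   # room too warm: cool
    return "no air"         # within tolerance: idle

def simulate(ambient, steps=100, rate=0.5):
    """Drive a room toward the setpoint, one reading per step."""
    for _ in range(steps):
        state = thermostat_state(ambient)
        if state == "hot air":
            ambient += rate
        elif state == "cool air":
            ambient -= rate
        else:
            break  # setpoint reached; idle until the environment changes
    return ambient

print(simulate(60.0))  # a cold room is heated up toward the setpoint
print(simulate(80.0))  # a warm room is cooled down toward the setpoint
```

Note that the same loop handles both starting conditions without being told which one it faces: the continuous measure-compare-adjust cycle is what makes the negative-feedback design robust to random changes in the environment.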