by David Brin
Early attempts at systematic percolation are under way, however. Pascal Chesnais’s FishWrap is an electronic newspaper that mixes the editor’s personal choice of stories with encouragement for readers to tack on additional items they think may be of interest. Articles are then ranked according to the number of people who read each one. The process of collaborative attention moves according to a complex mixture of quality and passing enthusiasm, without subjecting the FishWrap community to strict homogenization of viewpoint. An element of serendipity and surprise remains.
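For concreteness, a ranking scheme of this kind is easy to sketch. The Python fragment below is a minimal illustration, not a description of FishWrap’s actual internals: the `Article` model, the editor-first layout, and the count of distinct readers are all assumptions layered onto the mix of editorial choice and rank-by-readership described above.

```python
# A minimal sketch of FishWrap-style collaborative ranking.
# The data model and ordering rule are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Article:
    title: str
    editor_pick: bool = False                    # chosen by the editor
    readers: set = field(default_factory=set)    # distinct readers so far

def record_read(article: Article, reader_id: str) -> None:
    """Count each reader once, so rank reflects distinct attention."""
    article.readers.add(reader_id)

def front_page(articles: list, slots: int = 10) -> list:
    """Editor picks lead; reader-added items rise by readership."""
    picks = [a for a in articles if a.editor_pick]
    rest = sorted((a for a in articles if not a.editor_pick),
                  key=lambda a: len(a.readers), reverse=True)
    return (picks + rest)[:slots]

news = [Article("Editor's lead", editor_pick=True), Article("Reader find")]
record_read(news[1], "alice")
record_read(news[1], "bob")
print([a.title for a in front_page(news)])
```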
There are potential drawbacks to such methods of ranking merit. Percolation may resemble a “free market” of creativity, but in any market some cheat, for example, by logrolling with friends, reviewing their own works, or “hacking” to raise their scores artificially. Some scandals we’ll see in future decades may make “payola” seem quaint and rather innocent.
Finally, what if the new era becomes too amorphous for any general sense of direction to coalesce out of several billion cantankerous individuals? If every person who says “right” is balanced by another who says “wrong,” the result may not be a rich amalgam, but a world where every meaning is canceled by its opposite. In other words, the great big stew of future culture may have no sense of up or down. No direction for the “best” to percolate toward. James Burke mulled over this possibility, worrying about an age when “it’s up to the new leaders of taste—that is, the entire population—to decide what the standards are. And I believe what will happen is the standards—that’s the old historical term—will disappear.”
Even those with a hankering for diversity might find such a world disorienting, and possibly lonely as well. So let’s consider how to avoid it.
Percolation may have drawbacks. Nevertheless, we are better off trying some innovative new ways for art and ideas to sift and rise by their own merit, allowing eclectic trends in our new renaissance to sort themselves out. A simple, semianarchic system of popular value could powerfully supplement hierarchies of media pundits and producers.
We are entering the age of mirages, illusions and make-believe. While some people are blinded by all-pervading noise, others acquire X-ray eyes, letting them see beyond all the old, traditional walls. For a while, this will create a golden time of opportunity for swindlers, blackmailers, and all kinds of cheaters. Then we will adapt.
M. N. PLANO
Credibility Ratings
Some years ago, writer-director Buck Henry illustrated “credibility ratings” through a skit on Saturday Night Live. Ostensibly, all the seats in the audience had been equipped with “attention monitors” that would make Henry’s television image diminish when viewers got bored, and grow when they were interested. As he droned on about the advantages of this technology, Henry’s face shrank and a worried expression took over ... until he shouted, “Sex!”
Abruptly, his image filled the screen. Thereafter, it stayed large so long as he pandered to the audience, telling them all the salacious, low-brow things he did not plan on talking about.
Of course there were no monitors in the seats. It was a spoof, and the audience guffawed appreciatively. But that may change. Picture a typical twenty-first-century television reporter coming on screen. Under his talking head, you see a numerical score. (Or several competing scores: one compiled by Consumer Reports, one by Nielsen, and others gathered directly from viewers, reacting in real time.) These little numbers show how trustworthy or believable customers find the product the video personality is trying to “sell,” whether merchandise, commentary, or news. Imagine these credibility ratings changing in real time. Envision perspiration popping on our ace authority’s brow as his score rapidly plummets before his eyes. Now picture the same thing happening to a politician, with such a figure flashing away at the bottom of her TelePrompter screen!
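What would such a live number look like under the hood? Here is a minimal sketch, assuming a simple 0-to-100 scale and an exponentially weighted running average so the displayed score visibly slides as reactions pour in. The class name, the scale, and the smoothing constant are all illustrative, not any real rating service’s method.

```python
# A minimal sketch of a real-time credibility score: viewers submit
# trust votes, and an exponentially weighted moving average lets the
# on-screen number respond quickly to the audience's current mood.

class CredibilityMeter:
    def __init__(self, alpha: float = 0.05, start: float = 50.0):
        self.alpha = alpha      # responsiveness of the running score
        self.score = start      # displayed value: 0 (distrust) to 100

    def vote(self, believable: bool) -> float:
        """Fold one viewer reaction into the running score."""
        target = 100.0 if believable else 0.0
        self.score += self.alpha * (target - self.score)
        return self.score

meter = CredibilityMeter()
for reaction in [True, True, False, False, False]:
    print(round(meter.vote(reaction), 1))   # watch the score slide
```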
The downside is obvious. Although it could serve to elevate the level of debate, it could also debase it terribly. One can imagine some popular figure asking followers to downgrade a competitor’s ratings, all at once. Or some senator playing to the mob, reciting whatever words the numbers say they want to hear. Later, it might lead to demarchy, a chilling form of democracy, in which television viewers watch shallow five-minute arguments on the tube and then vote yes or no with a button on their remote control, no longer delegating their authority to elected deliberators, but instead exercising sovereign power each night, deciding issues of the day after the most superficial forms of “debate.”
Will we find Buck Henry’s Saturday Night Live skit about slavery to instant audience reactions dismally prophetic? Does it illustrate the decadent, homogenized future awaiting us as soon as the low-class masses gain total control of content through high-speed feedback mechanisms?
Or did Henry’s satirical little play demonstrate something else? Perhaps that people already have a sense of humor and perspective about this very topic, and are willing to laugh at such tendencies in themselves?
It is the latter possibility that offers hope. No one said a transparent society would come without drawbacks, or challenges to the good judgment of twenty-first-century citizens.
The classic Mayan civilization (now long extinct) had a superbly pampered class of brilliant astrological futurists.
BRUCE STERLING
I never make predictions, especially about the future.
YOGI BERRA
A Predictions Registry
“The secrets of flight will not be mastered within our lifetime ... not within a thousand years.”
This prognostication, singularly famous for its irony, was reportedly uttered in 1901 by none other than Wilbur Wright. We can presume he said it during a foul mood, after some temporary setback. Smiling with the benefit of hindsight, we know Wilbur and his brother would prove the forecast wrong in just two years. Indeed, most attempts to divine the future seem so ineffective that it is a wonder we humans keep trying.
But there’s the rub. We do keep attempting to look ahead. In fact, it may be one of our species’ most salient features.
Oversimplifying a bit, many neuroscientists picture the human brain as having evolved through a process of layering. It still uses nearly all the same suborgans as the nervous system of a reptile, from cerebellum to hypothalamus. But atop those ancient circuits for action and emotion there later spread the sophisticated mammalian cortex, talented at visual and manipulative imagery. Upon this layer, primates laid down further strata in the frontal zones for more advanced styles of contemplation, such as planning sets of imminent actions some steps ahead. Finally, in humans, the prefrontal lobes appear to be the latest additions, perhaps just a few hundred thousand years old. When these tiny organs fail, following a lobotomy, for example, patients experience deficits that include lessened ability to meditate on the future. They no longer exhibit much curiosity, or worry, about tomorrow. In other words, they have lost something that makes us uniquely human.
For the unimpaired, no topic so captivates as the vista that lies ahead, in the future’s undiscovered country. One of our favorite pastimes is the thought experiment (Einstein and Mach called it Gedankenexperiment)—dwelling on some planned or imagined action, considering possible consequences. By exploring potential outcomes in the tentative world of our thoughts, we hope to cull the most obviously flawed of our schemes, perhaps improving our chances of success.
No conceivable power entices humans more than improving their accuracy at forecasting the future.
Now, in its pure form, prophecy is just a lot of hooey. If any psychic could do true divination, she would not be hawking her wares on late-night television, but would be a megabillionaire taking part in running the world. Yet the fact remains that we do spend a lot of time and money trying to improve our odds of being right about future events. Economists keep struggling to improve their conjectures about markets, shooting at an ever-moving target as those markets adjust rapidly to each new model. Stockbrokers, politicians, and diplomats all seem eager to project “what might happen next.” The intelligence community devotes immense efforts to forecasting the behavior of nations and other international players, to satisfy the demands of policymakers. Pollsters claim insight into the next wave of opinions and trends among common citizens. Commodities traders are called geniuses as long as their luck holds, until statistical chance catches up, and they lose the bank. It can be fascinating to realize how much of our economy is dedicated to variants of the same theme—people buying and selling predictions. When you get right down to it, almost any advice or decision made in a human context is a kind of bet, based on some guess about future outcomes.
Nowhere is this more true than in science. Philosopher Karl Popper held that prediction is the one true test of any theory. It is not enough to offer a hypothesis that explains past observations. To gain respect, your model must explore unknown territory, calculating, estimating, or otherwise foretelling observations not yet made! Only by exposure to potential falsification can a theory prove its worth and become accepted as a useful “working model” of the world. For instance, despite wishful yearnings, cold fusion was tested and disproved in the late 1980s by open-minded investigators who held fast to objective procedures—the same procedures that vindicated other “rebel” theories about black holes, punctuated evolution, and new treatments for AIDS. Some pragmatic forecasting tools, for example, probability theory and weather modeling, save countless lives and billions of dollars, while the hot new field of risk analysis is helping researchers understand how real humans act to preserve their own safety.
Two of the highest human virtues, honesty and skill, are routinely tested by making open, accountable assertions, then observing the effects of time. Few statements enhance credibility with a spouse, subordinates, adversaries, or colleagues more than the cheerful proposal, “Let’s check out your objections, and find out if I’m wrong.”
Alas, it’s one thing to predict a mass for the top quark, and test your theory by experiment. It is quite another to claim that you can divine future trends in culture, commerce, or politics, especially when the thing at stake is not a single reputation (like mine, in writing this book) but the future of a project, a company, or an army in the field. How will the Russian electorate react to the next expansion of NATO? Might the unstable North Korean regime ignite a desperate war? Should your consortium launch a communication satellite system, or will the falling price of fiber optics make such a venture untenable? Will more customers buy personal computers next year? Can harsh penalties deter crime? Will more people be lifted out of poverty via racial preferences, or by exposing them to the harsh discipline of the marketplace?
Each era seems to have its own fads regarding how best to do forecasts. In the 1960s and 1970s there was passionate interest in “Delphi polling,” which involved asking a large number of knowledgeable people about the likelihood of certain future events. The average of their opinions was thought for a while to have some unbiased validity, when in fact it simply reflected the notions that were most fashionable at the time. In one infamous example, the renowned Rand Corporation released a set of predictions that included reliable long-range weather forecasting and mind control by 1975; manipulation of weather, controlled fusion, and electronic organs for humans by 1985; and sea floor mines, gene correction, and intelligent robots by 1990.
Modern institutions of government and private capital are deeply concerned over the murkiness of their projections. Each summer many hold workshops, encouraging top-level managers to consult with experts, futurists, and even science fiction authors in pondering the long view. Yet the management of great enterprises ultimately comes down to the judgment (and guesswork) of directors, generals, and public officials.
Things may be worse than most leaders believe. Earlier in this book we referred to modern observers who think we have entered an era of unpredictability. In Out of Control, Kevin Kelly described how chaos theory and new notions of emergent properties mean that complex systems will tend to behave in unpredictable ways as tiny perturbations propagate through time, almost as if they are taking on a life of their own. Elsewhere we discuss how open criticism can ameliorate such problems. But can it solve the basic dilemma of unpredictability? Jeff Cooper, director of the Center for Information Strategy and Policy for Science Applications International Corp., contends that the very notion of prediction may become untenable in the years ahead, forcing us to rely on developing new skills of rapid evaluation and response in real time.
All of that may be true. Any effort at basing our forecasts on a firm foundation may be doomed to fracture as the ground keeps shifting underfoot. Yet we won’t stop trying, because that is one of the things humans do. We try to predict events and potential consequences of our actions. The desire to peer into the future is hard-wired in our brains. Even if chaos rends our best projections, we’ll keep trying.
In fact, new electronic tools may offer an alternative. Not a better way, but maybe a chance to improve the ways we already have.
Once, a junior State Department officer caused a ruckus by predicting that Saddam Hussein was planning to invade Kuwait. The fellow grew irritated with his bosses, and they with him, until they parted company.
A while later, Saddam invaded.
Was the young prophet vindicated? Did he get his old job back, with a promotion?
In real life, social skills count for a lot—almost as much as whom you know, or where you went to school. Why would any normal person choose to hire back a fellow whose presence each day would be a living reminder of how wrong you once were? It is easier to rationalize. (Maybe he was just lucky that time.) Besides, nobody keeps records of who was right, how often, or when.
Until now, that is.
Lately, modern media have begun (crudely) to keep track of predictive successes and failures, by making available to journalists the complete records of statements made by public figures. All through the late 1980s, for instance, the Board of Supervisors of Orange County, California, largely ignored John M. W. Moorlach when he criticized their risky strategy for investing public funds. Later, when the county went bankrupt in one of America’s biggest financial scandals, Moorlach’s earlier jeremiads appeared on journalists’ computer screens. He subsequently was hailed as a visionary.
The idea of a predictions registry may have originated when Sir Francis Galton (1822–1911) attempted to perform experiments statistically measuring the efficacy of prayer. (He discovered what skeptics now call the “placebo effect.”) In the 1970s, efforts were made to catalog predictions using the crude technique of mailing postcards to a post office box in New York City, but sorting through shoe boxes did not prove an efficient or comprehensive method of correlating results, and the effort collapsed.
The Internet has changed all that. For example, a “predictions market” has been set up by Robin Hanson, a researcher at the University of California at Berkeley. In his Web space, visitors bet against each other about future trends in science, much like Vegas odds makers, or gamblers on the Chicago commodities exchange. Winners are those whose guesses (or sage insights) prove correct most often. The step to a more general registry would be simple. Anyone claiming to have special foresight should be judged by a simple standard: success or failure.
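The betting mechanics need not be elaborate. Here is a minimal sketch of one plausible settlement rule, pari-mutuel in the Vegas style the passage invokes; Hanson’s actual market rules are not described here, so every name and detail below is an assumption.

```python
# A minimal sketch of pari-mutuel settlement in a predictions market:
# stakes on the losing side are divided among the winners in
# proportion to each winner's stake.

def settle(bets: dict, outcome: bool) -> dict:
    """bets maps bettor -> (predicted outcome, stake); returns payouts."""
    winners = {who: stake for who, (pred, stake) in bets.items()
               if pred == outcome}
    losing_pool = sum(stake for pred, stake in bets.values()
                      if pred != outcome)
    total_win = sum(winners.values())
    if total_win == 0:                  # nobody called it: refund all stakes
        return {who: stake for who, (_, stake) in bets.items()}
    return {who: winners.get(who, 0.0) * (1 + losing_pool / total_win)
            for who in bets}

print(settle({"ada": (True, 10), "ben": (True, 30), "cal": (False, 20)}, True))
# ada gets 15.0, ben 45.0, cal 0.0: the losing pool splits pro rata.
```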
The first use of such a registry might be to debunk psychics and social vampires who now prey on the gullible, by having skeptical volunteers score all their predictions, not just those they later choose to remember. Each forecast would get a specificity multiplier, if it gives names, places, and exact dates. By this standard, Jeane Dixon’s warnings that a youngish Democrat would be elected president in 1960 and die in office, and that Robert Kennedy would later be assassinated in California, would receive major credit—perhaps compensating for countless failures she swept under the rug. In contrast, all the vague arm wavings of Nostradamus would score near zero, no matter how often adherents claim success, since obscurity lets them be applied almost anywhere, any time.
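The scoring rule itself can be stated in a few lines. Below is a minimal sketch of specificity-weighted registry scoring along the lines just described; the three detail fields and their equal weighting are illustrative guesses, not a worked-out standard.

```python
# A minimal sketch of registry scoring with a specificity multiplier:
# every recorded forecast counts, hits and misses alike, and vague
# claims earn almost nothing even when they "come true."

def specificity(forecast: dict) -> float:
    """More pinned-down details (who, where, exactly when) -> higher multiplier."""
    details = sum(bool(forecast.get(k))
                  for k in ("names", "places", "exact_date"))
    return details / 3.0    # 0.0 for pure arm-waving, 1.0 for fully specific

def registry_score(forecasts: list) -> float:
    """Average specificity-weighted credit over ALL recorded forecasts."""
    if not forecasts:
        return 0.0
    credit = sum(specificity(f) for f in forecasts if f["came_true"])
    return credit / len(forecasts)

dixon = [{"names": True, "places": False, "exact_date": True, "came_true": True}]
nostradamus = [{"names": False, "places": False, "exact_date": False, "came_true": True}]
print(registry_score(dixon), registry_score(nostradamus))   # ~0.67 versus 0.0
```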
This is transparency in action. Just as citizens now rely on laws requiring truth in advertising and accurate product labeling, a time may come when we expect all would-be prophets to show accuracy scores before demanding our attention. Only there will be no need for government involvement, since predictions registries will be established privately, starting as amateur endeavors.
It could go beyond debunking scam artists to revealing anomalous positive scores by individuals who have a knack for being right noticeably more often than chance. However they achieve this—whether via clever models or unexamined intuition—society should take notice, for whatever insight their methods offer.
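Spotting such a knack is a straightforward statistical exercise. A minimal sketch: treat each binary call as a coin flip under the null hypothesis, and flag records whose hit rate would be wildly improbable by luck alone. The significance cutoff below is an arbitrary assumption.

```python
# A minimal sketch of flagging anomalously good forecasters: compute
# the probability of doing at least this well by pure chance.
from math import comb

def tail_prob(hits: int, tries: int, p: float = 0.5) -> float:
    """P(X >= hits) for X ~ Binomial(tries, p)."""
    return sum(comb(tries, k) * p**k * (1 - p)**(tries - k)
               for k in range(hits, tries + 1))

def noteworthy(hits: int, tries: int, cutoff: float = 1e-3) -> bool:
    """True when a record is too good to dismiss as luck."""
    return tail_prob(hits, tries) < cutoff

print(tail_prob(70, 100))    # ~4e-5: worth society's notice
print(noteworthy(55, 100))   # False: indistinguishable from luck
```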
The most important predictions are warnings. Earlier we talked about one of the key ironies of human nature, that criticism is the best-known antidote to error, yet individuals and cultures find it painful. Leaders are naturally inclined to snub critics. Where are those who heroically warned about the dangers of Chernobyl-style nuclear reactors? Brezhnev sent them to gulags. Now that Brezhnev is gone, are the heroes in positions of influence?
Life has no guarantees. The more complex our undertakings become, the more we’ll face unexpected repercussions. Edward Tenner’s book Why Things Bite Back: Technology and the Revenge of Unintended Consequences lists many well-meant endeavors that had disagreeable side effects.
Disagreeable, yes. But wholly unanticipated? In how many cases did someone warn against the very unpleasantness that eventually happened? Someone who might have seemed irritating at the time, and was pushed aside? Would it serve a useful purpose to grant high prediction scores after the fact, as consolation prizes to Cassandras whose original dire warnings were ignored?