Future Shock


by Alvin Toffler


  The incipient worldwide movement for control of technology, however, must not be permitted to fall into the hands of irresponsible technophobes, nihilists and Rousseauian romantics. For the power of the technological drive is too great to be stopped by Luddite paroxysms. Worse yet, reckless attempts to halt technology will produce results quite as destructive as reckless attempts to advance it.

  Caught between these twin perils, we desperately need a movement for responsible technology. We need a broad political grouping rationally committed to further scientific research and technological advance – but on a selective basis only. Instead of wasting its energies in denunciations of The Machine or in negativistic criticism of the space program, it should formulate a set of positive technological goals for the future.

  Such a set of goals, if comprehensive and well worked out, could bring order to a field now in total shambles. By 1980, according to Aurelio Peccei, the Italian economist and industrialist, combined research and development expenditures in the United States and Europe will run to $73 billion per year. This level of expense adds up to three-quarters of a trillion dollars per decade. With such large sums at stake, one would think that governments would plan their technological development carefully, relating it to broad social goals, and insisting on strict accountability. Nothing could be more mistaken.
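  The decade figure is simple arithmetic on Peccei's annual estimate:

\[
\$73\ \text{billion/year} \times 10\ \text{years} = \$730\ \text{billion} \approx \tfrac{3}{4}\ \text{trillion dollars per decade.}
\]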

  "No one – not even the most brilliant scientist alive today – really knows where science is taking us," says Ralph Lapp, himself a scientist-turned-writer. "We are aboard a train which is gathering speed, racing down a track on which there are an unknown number of switches leading to unknown destinations. No single scientist is in the engine cab and there may be demons at the switch. Most of society is in the caboose looking backward."

  It is hardly reassuring to learn that when the Organization for Economic Cooperation and Development issued its massive report on science in the United States, one of its authors, a former premier of Belgium, confessed: "We came to the conclusion that we were looking for something ... which was not there: a science policy." The committee could have looked even harder, and with still less success, for anything resembling a conscious technological policy.

  Radicals frequently accuse the "ruling class" or the "establishment" or simply "they" of controlling society in ways inimical to the welfare of the masses. Such accusations may have occasional point. Yet today we face an even more dangerous reality: many social ills are less the consequence of oppressive control than of oppressive lack of control. The horrifying truth is that, so far as much technology is concerned, no one is in charge.

  SELECTING CULTURAL STYLES

  So long as an industrializing nation is poor, it tends to welcome without argument any technical innovation that promises to improve economic output or material welfare. This is, in fact, a tacit technological policy, and it can make for extremely rapid economic growth. It is, however, a brutally unsophisticated policy, and as a result all kinds of new machines and processes are spewed into the society without regard for their secondary or long-range effects.

  Once the society begins its take-off for super-industrialism, this "anything goes" policy becomes wholly and hazardously inadequate. Apart from the increased power and scope of technology, the options multiply as well. Advanced technology helps create overchoice with respect to available goods, cultural products, services, subcults and life styles. At the same time overchoice comes to characterize technology itself.

  Increasingly diverse innovations are arrayed before the society and the problems of selection grow more and more acute. The old simple policy, by which choices were made according to short-run economic advantage, proves dangerous, confusing, destabilizing.

  Today we need far more sophisticated criteria for choosing among technologies. We need such policy criteria not only to stave off avoidable disasters, but to help us discover tomorrow's opportunities. Faced for the first time with technological overchoice, the society must now select its machines, processes, techniques and systems in groups and clusters, instead of one at a time. It must choose the way an individual chooses his life style. It must make super-decisions about its future.

  Furthermore, just as an individual can exercise conscious choice among alternative life styles, a society today can consciously choose among alternative cultural styles. This is a new fact in history. In the past, culture emerged without premeditation. Today, for the first time, we can raise the process to awareness. By the application of conscious technological policy – along with other measures – we can contour the culture of tomorrow.

  In their book, The Year 2000, Herman Kahn and Anthony Wiener list one hundred technical innovations "very likely in the last third of the twentieth century." These range from multiple applications of the laser to new materials, new power sources, new airborne and submarine vehicles, three-dimensional photography, and "human hibernation" for medical purposes. Similar lists are to be found elsewhere as well. In transportation, in communications, in every conceivable field and some that are almost inconceivable, we face an inundation of innovation. In consequence, the complexities of choice are staggering.

  This is well illustrated by new inventions or discoveries that bear directly on the issue of man's adaptability. A case in point is the so-called OLIVER (On-Line Interactive Vicarious Expediter and Responder; the acronym was chosen to honor Oliver Selfridge, originator of the concept) that some computer experts are striving to develop to help us deal with decision overload. In its simplest form, OLIVER would merely be a personal computer programmed to provide the individual with information and to make minor decisions for him. At this level, it could store information about his friends' preferences for Manhattans or martinis, data about traffic routes, the weather, stock prices, etc. The device could be set to remind him of his wife's birthday – or to order flowers automatically. It could renew his magazine subscriptions, pay the rent on time, order razor blades and the like.
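  In present-day terms, that "simplest form" can be sketched as a small program. The sketch below is purely illustrative; every name, field and rule in it is an assumption of this rendering, since the text specifies no mechanism:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch of the "simplest form" of OLIVER described above:
# a personal agent that stores small facts and handles minor recurring
# decisions. All names and rules are hypothetical, not Toffler's design.

@dataclass
class Oliver:
    owner: str
    preferences: dict = field(default_factory=dict)      # e.g. a friend's drink of choice
    reminders: dict = field(default_factory=dict)        # date -> note
    standing_orders: list = field(default_factory=list)  # routine tasks run without being asked

    def remember(self, key: str, value: str) -> None:
        """Store a small personal fact (a friend's preference, a traffic route)."""
        self.preferences[key] = value

    def recall(self, key: str) -> str:
        return self.preferences.get(key, "no record")

    def todays_reminders(self, today: date) -> list:
        """Surface anything the owner asked to be reminded of today."""
        return [note for when, note in self.reminders.items() if when == today]

    def run_standing_orders(self) -> list:
        """Carry out the routine minor decisions: renew subscriptions, pay rent, etc."""
        return [f"done: {task}" for task in self.standing_orders]

agent = Oliver(owner="the individual")
agent.remember("friend_drink:Tom", "martini")
agent.reminders[date(2025, 6, 1)] = "wife's birthday -- order flowers"
agent.standing_orders += ["renew magazine subscription", "pay rent", "order razor blades"]

print(agent.recall("friend_drink:Tom"))          # martini
print(agent.todays_reminders(date(2025, 6, 1)))  # ["wife's birthday -- order flowers"]
print(agent.run_standing_orders())
```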

  As computerized information systems ramify, moreover, OLIVER would tap into a worldwide pool of data stored in libraries, corporate files, hospitals, retail stores, banks, government agencies and universities. It would thus become a kind of universal question-answerer for him.

  However, some computer scientists see far beyond this. It is theoretically possible to construct an OLIVER that would analyze the content of its owner's words, scrutinize his choices, deduce his value system, update its own program to reflect changes in his values, and ultimately handle larger and larger decisions for him.

  Thus OLIVER would know how its owner would, in all likelihood, react to various suggestions made at a committee meeting. (Meetings could take place among groups of OLIVERs representing their respective owners, without the owners themselves being present. Indeed, some "computer-mediated" conferences of this type have already been held by the experimenters.)

  OLIVER would know, for example, whether its owner would vote for candidate X, whether he would contribute to charity Y, whether he would accept a dinner invitation from Z. In the words of one OLIVER enthusiast, a computer-trained psychologist: "If you are an impolite boor, OLIVER will know and act accordingly. If you are a marital cheater, OLIVER will know and help. For OLIVER will be nothing less than your mechanical alter ego." Pushed to the extremes of science fiction, one can even imagine pin-size OLIVERs implanted in baby brains, and used, in combination with cloning, to create living – not just mechanical – alter egos.
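  The "mechanical alter ego" stage admits a sketch as well: a model that infers the owner's values from proposals he has accepted or declined, then predicts his response to new ones. The features, weights and update rule below are hypothetical; the text names no mechanism:

```python
# Illustrative sketch only: a crude value model of the kind an "alter ego"
# OLIVER would need. It nudges per-feature weights toward choices the owner
# actually made, then scores new proposals against those learned weights.

class ValueModel:
    def __init__(self):
        self.weights = {}  # learned importance of each value dimension

    def update(self, situation: dict, accepted: bool, rate: float = 0.1) -> None:
        """Shift the model toward (or away from) the features of a real choice."""
        sign = 1.0 if accepted else -1.0
        for feature, strength in situation.items():
            self.weights[feature] = self.weights.get(feature, 0.0) + rate * sign * strength

    def would_accept(self, situation: dict) -> bool:
        """Predict the owner's response to a new proposal."""
        score = sum(self.weights.get(f, 0.0) * s for f, s in situation.items())
        return score > 0

model = ValueModel()
# Observed history: the owner supported a charity appeal, declined a dull dinner.
model.update({"charity": 1.0, "cost": 0.3}, accepted=True)
model.update({"social_obligation": 1.0, "novelty": 0.1}, accepted=False)
# Committee-meeting question: would the owner contribute to charity Y?
print(model.would_accept({"charity": 0.8, "cost": 0.5}))  # True
```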

  Another technological advance that could enlarge the adaptive range of the individual pertains to human IQ. Widely reported experiments in the United States, Sweden and elsewhere strongly suggest that we may, within the foreseeable future, be able to augment man's intelligence and information-handling abilities. Research in biochemistry and nutrition indicates that protein, RNA and other manipulable properties are, in some still obscure way, correlated with memory and learning. A large-scale effort to crack the intelligence barrier could pay off in fantastic improvement of man's adaptability.

  It may be that the historic moment is right for such amplifications of humanness, for a leap to a new superhuman organism. But what are the consequences and alternatives? Do we want a world peopled with OLIVERs? When? Under what terms and conditions? Who should have access to them? Who should not? Should biochemical treatments be used to raise mental defectives to the level of normals, should they be used to raise the average, or should we concentrate on trying to breed super-geniuses?

  In quite different fields, similar complex choices abound. Should we throw our resources behind a crash effort to achieve low-cost nuclear energy? Or should a comparable effort be mounted to determine the biochemical basis of aggression? Should we spend billions of dollars on a supersonic jet transport – or should these funds be deployed in the development of artificial hearts? Should we tinker with the human gene? Or should we, as some quite seriously propose, flood the interior of Brazil to create an inland ocean the size of East and West Germany combined? We will soon, no doubt, be able to put super-LSD or an anti-aggression additive or some Huxleyian soma into our breakfast foods. We will soon be able to settle colonists on the planets and plant pleasure probes in the skulls of our newborn infants. But should we? Who is to decide? By what human criteria should such decisions be taken?

  It is clear that a society which opts for OLIVER, nuclear energy, supersonic transports, macroengineering on a continental scale, along with LSD and pleasure probes, will develop a culture dramatically different from the one that chooses, instead, to raise intelligence, diffuse anti-aggression drugs and provide low-cost artificial hearts.

  Sharp differences would quickly emerge between the society that presses technological advance selectively, and that which blindly snatches at the first opportunity that comes along. Even sharper differences would develop between the society in which the pace of technological advance is moderated and guided to prevent future shock, and that in which masses of ordinary people are incapacitated for rational decision-making. In one, political democracy and broad-scale participation are feasible; in the other, powerful pressures lead toward political rule by a tiny techno-managerial elite. Our choice of technologies, in short, will decisively shape the cultural styles of the future.

  This is why technological questions can no longer be answered in technological terms alone. They are political questions. Indeed, they affect us more deeply than most of the superficial political issues that occupy us today. This is why we cannot continue to make technological decisions in the old way. We cannot permit them to be made haphazardly, independently of one another. We cannot permit them to be dictated by short-run economic considerations alone. We cannot permit them to be made in a policy vacuum. And we cannot casually delegate responsibility for such decisions to businessmen, scientists, engineers or administrators who are unaware of the profound consequences of their own actions.

  TRANSISTORS AND SEX

  To capture control of technology, and through it gain some influence over the accelerative thrust in general, we must, therefore, begin to submit new technology to a set of demanding tests before we unleash it in our midst. We must ask a whole series of unaccustomed questions about any innovation before giving it a clean bill of sale.

  First, bitter experience should have taught us by now to look far more carefully at the potential physical side effects of any new technology. Whether we are proposing a new form of power, a new material, or a new industrial chemical, we must attempt to determine how it will alter the delicate ecological balance upon which we depend for survival. Moreover, we must anticipate its indirect effects over great distances in both time and space. Industrial waste dumped into a river can turn up hundreds, even thousands of miles away in the ocean. DDT may not show its effects until years after its use. So much has been written about this that it seems hardly necessary to belabor the point further.

  Second, and much more complex, we must question the long-term impact of a technical innovation on the social, cultural and psychological environment. The automobile is widely believed to have changed the shape of our cities, shifted home ownership and retail trade patterns, altered sexual customs and loosened family ties. In the Middle East, the rapid spread of transistor radios is credited with having contributed to the resurgence of Arab nationalism. The birth control pill, the computer, the space effort, as well as the invention and diffusion of such "soft" technologies as systems analysis, all have carried significant social changes in their wake.

  We can no longer afford to let such secondary social and cultural effects just "happen." We must attempt to anticipate them in advance, estimating, to the degree possible, their nature, strength and timing. Where these effects are likely to be seriously damaging, we must also be prepared to block the new technology. It is as simple as that. Technology cannot be permitted to rampage through the society.

  It is quite true that we can never know all the effects of any action, technological or otherwise. But it is not true that we are helpless. It is, for example, sometimes possible to test new technology in limited areas, among limited groups, studying its secondary impacts before releasing it for diffusion. We could, if we were imaginative, devise living experiments, even volunteer communities, to help guide our technological decisions. Just as we may wish to create enclaves of the past where the rate of change is artificially slowed, or enclaves of the future in which individuals can pre-sample future environments, we may also wish to set aside, even subsidize, special high-novelty communities in which advanced drugs, power sources, vehicles, cosmetics, appliances and other innovations are experimentally used and investigated.

  A corporation today will routinely field test a product to make sure it performs its primary function. The same company will market test the product to ascertain whether it will sell. But, with rare exception, no one post-checks the consumer or the community to determine what the human side effects have been. Survival in the future may depend on our learning to do so.

  Even when life-testing proves unfeasible, it is still possible for us systematically to anticipate the distant effects of various technologies. Behavioral scientists are rapidly developing new tools, from mathematical modeling and simulation to so-called Delphi analyses, that permit us to make more informed judgments about the consequences of our actions. We are piecing together the conceptual hardware needed for the social evaluation of technology; we need but to make use of it.
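  Of the tools just mentioned, the Delphi procedure is the easiest to make concrete: experts estimate anonymously, see the group's pooled judgment, and revise over several rounds. In the minimal sketch below, the revision rule (each expert moves partway toward the median) is an assumption made for illustration:

```python
import statistics

def delphi(initial_estimates: list, rounds: int = 3, pull: float = 0.3) -> float:
    """Toy Delphi round-robin: anonymous estimates, statistical feedback, revision."""
    estimates = list(initial_estimates)
    for _ in range(rounds):
        median = statistics.median(estimates)
        # Each expert sees the group median and partially revises toward it.
        estimates = [e + pull * (median - e) for e in estimates]
    return statistics.median(estimates)

# Example: five experts forecast the years until some innovation diffuses widely.
print(delphi([5, 8, 10, 15, 30]))  # settles near the group median, 10
```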

  Third, an even more difficult and pointed question: Apart from actual changes in the social structure, how will a proposed new technology affect the value system of the society? We know little about value structures and how they change, but there is reason to believe that they, too, are heavily impacted by technology. Elsewhere I have proposed that we develop a new profession of "value impact forecasters" – men and women trained to use the most advanced behavioral science techniques to appraise the value implications of proposed technology.

  At the University of Pittsburgh in 1967 a group of distinguished economists, scientists, architects, planners, writers, and philosophers engaged in a day-long simulation intended to advance the art of value forecasting. At Harvard, the Program on Technology and Society has undertaken work relevant to this field. At Cornell and at the Institute for the Study of Science in Human Affairs at Columbia, an attempt is being made to build a model of the relationship between technology and values, and to design a game useful in analyzing the impact of one on the other. All these initiatives, while still extremely primitive, give promise of helping us assess new technology more sensitively than ever before.

  Fourth and finally, we must pose a question that until now has almost never been investigated, and which is, nevertheless, absolutely crucial if we are to prevent widespread future shock. For each major technological innovation we must ask: What are its accelerative implications?

  The problems of adaptation already far transcend the difficulties of coping with this or that invention or technique. Our problem is no longer the innovation, but the chain of innovations, not the supersonic transport, or the breeder reactor, or the ground effect machine, but entire inter-linked sequences of such innovations and the novelty they send flooding into the society.

  Does a proposed innovation help us control the rate and direction of subsequent advance? Or does it tend to accelerate a host of processes over which we have no control? How does it affect the level of transience, the novelty ratio, and the diversity of choice? Until we systematically probe these questions, our attempts to harness technology to social ends – and to gain control of the accelerative thrust in general – will prove feeble and futile.
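  Taken together, the four tests posed in this section amount to a screening checklist, and the "machinery for screening machines" proposed below could begin from something this simple. The scoring scale and threshold here are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class InnovationReview:
    """Score a proposed technology on the four tests, 0 (benign) to 5 (severe)."""
    name: str
    physical_side_effects: int    # 1) risk to the ecological balance
    social_cultural_impact: int   # 2) disruption of the social and psychological environment
    value_impact: int             # 3) pressure on the society's value system
    accelerative_impact: int      # 4) does it speed up processes we cannot control?

    def verdict(self, threshold: int = 12) -> str:
        total = (self.physical_side_effects + self.social_cultural_impact
                 + self.value_impact + self.accelerative_impact)
        return "hold for further study" if total >= threshold else "proceed, with monitoring"

sst = InnovationReview("supersonic transport", physical_side_effects=4,
                       social_cultural_impact=3, value_impact=2, accelerative_impact=4)
print(sst.name, "->", sst.verdict())  # hold for further study (total 13)
```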

  Here, then, is a pressing intellectual agenda for the social and physical sciences. We have taught ourselves to create and combine the most powerful of technologies. We have not taken pains to learn about their consequences. Today these consequences threaten to destroy us. We must learn, and learn fast.

  A TECHNOLOGY OMBUDSMAN

  The challenge, however, is not solely intellectual; it is political as well. In addition to designing new research tools – new ways to understand our environment – we must also design creative new political institutions for guaranteeing that these questions are, in fact, investigated; and for promoting or discouraging (perhaps even banning) certain proposed technologies. We need, in effect, a machinery for screening machines.

  A key political task of the next decade will be to create this machinery. We must stop being afraid to exert systematic social control over technology. Responsibility for doing so must be shared by public agencies and the corporations and laboratories in which technological innovations are hatched.

 
