The Coming of Post-Industrial Society

by Daniel Bell


  The simplest and perhaps most important advantage is the sheer amount of statistical data. In 1790, it was debated in the English Parliament whether the population of England was increasing or decreasing. Different people, speaking from limited experiential evidence, presented completely contradictory arguments. The issue was resolved only by the first modern census. An important corollary is the number of competent persons who can work with the data. As J. J. Spengler said, tongue not altogether in cheek, “Not only can economists devote to the population question more attention than they gave it in the past eighty years when they devoted only about 1 to 1½ percent of their articles to population, but there are many more economists to do the job. Today more economists are practicing than lived and died in the past four thousand years, and their number is growing even faster than the world’s population.” 47

  In simple terms, the more data you have (look at the total number of entries needed to build up a picture of the national product accounts), the easier it is to chart the behavior of variables and to make forecasts. Most, if not all, of our basic economic and social projections today are built around the concept of Gross National Product. Yet it is startling to realize how recent are the government gathering and publication of such macroeconomic data, going back only to Franklin D. Roosevelt’s budget message in 1944. And systematic technological forecasting is still in its infancy. As Erich Jantsch writes, in a comprehensive survey of technological forecasting for the Organization for Economic Cooperation and Development (OECD),

  The bulk of technological forecasting today is done without the explicit use of special techniques. ... The need for formal techniques was not felt until a few years ago. While the beginning of systematic technological forecasting can be situated at around 1950, with forerunners since 1945, the existence of a more widespread interest in special techniques first made itself felt about a decade later, in 1960, with forerunners already experimenting in the late 1950s. Now, in the mid-1960s, a noticeable interest is developing in more elaborate multi-level techniques and integrated models that are amenable to computer programing.48

  Most technological forecasting is still based on what an imaginative engineer or writer can dream up as possible. In 1892, a German engineer, Plessner, forecast technological developments (supercritical steam and metal vapor turbines) and functional capabilities (television and voice-operated typewriters) which were—and to some extent still are—far in the future. Arthur C. Clarke, who has made some of the more speculative forecasts in his serious science fiction, has argued that anything that is theoretically possible will be achieved, despite the technical difficulties, if it is desired greatly enough. “Fantastic” ideas, he says, have been achieved in the past and only by assuming that they will continue to be achieved do we have any hope of anticipating the future.49 Much of this kind of expectation is “poetry,” because little attention is paid to constraints, especially economic ones. Fantasy may be indispensable, but only if it is disciplined by technique. With his pixyish gift for paradox, Marshall McLuhan has said that the improvement of intuition is a highly technical matter.

  Much of the early impetus to disciplined technological forecasting came in the recognition of its necessity by the military and was pioneered by Theodor von Karman, the eminent Cal Tech scientist in the field of aerodynamics. His report in 1944 on the future of aircraft propulsion is often referred to as the first modern technological forecast.50 Von Karman later initiated the concentrated technological forecasting, at five-year intervals, of the U.S. Air Force and the technological forecasting in NATO. His innovations were fairly simple. As described by Jantsch: von Karman looked at basic potentialities and limitations, at functional capabilities and key parameters, rather than trying to describe in precise terms the functional technological systems of the future; he emphasized the evaluation of alternative combinations of future basic technologies—i.e. the assessment of alternative technological options; and he sought to place his forecasts in a well-defined time-frame of fifteen to twenty years.

  In this respect, as James Brian Quinn has pointed out, technological forecasts are quite similar to market or economic forecasts. No sophisticated manager would expect a market forecast to predict the precise size or characteristics of individual markets with decimal accuracy. One could reasonably expect the market analyst to estimate the most likely or “expected” size of a market and to evaluate the probabilities and implications of other sizes. In the same sense, as Mr. Quinn has put it,

  Except in immediate direct extrapolations of present techniques, it is futile for the forecaster to predict the precise nature and form of the technology which will dominate a future application. But he can make “range forecasts” of the performance characteristics a given use is likely to demand in the future. He can make probability statements about what performance characteristics a particular class of technology will be able to provide by certain future dates. And he can analyze the potential implications of having these technical-economic capacities available by projected dates.51

  The “leap forward” in the 1960s was due, in great measure, to the work of Ralph C. Lenz, Jr., who is in the Aeronautical Systems Division of the Air Force Systems Command at the Wright-Patterson Air Force Base in Ohio. Lenz’s small monograph, Technological Forecasting, based on a Master’s thesis done ten years before at MIT, is the work most frequently cited for its classification and ordering of technological forecasting techniques.52 Lenz divided the types of forecasting into extrapolation, growth analogies, trend correlation, and dynamic forecasting (i.e. modeling), and sought to indicate the applications of each type either singly or in combination. Erich Jantsch, in his OECD survey, listed 100 distinguishable techniques (though many of these are only variations in the choice of certain statistical or mathematical methods), which he has grouped into intuitive, explorative, normative, and feedback techniques.

  Of those broad approaches that have demonstrated the most promise, and on which the greatest amount of relevant work seems to have been done, four will be illustrated here: S-curves and envelope curves (extrapolation); the Delphi technique (intuitive); morphological designs and relevance tree methods (matrices and contexts); and the study of diffusion times, or the predictions of the rates of change in the introduction of new technologies that have already been developed.

  Extrapolation. The foundation of all forecasting is some form of extrapolation—the effort to read some continuing tendency from the past into a determinate future. The most common, and deceptive, technique is the straight projection of a past trend plotted on a line or curve. Linear projections represent the extension of a regular time series—population, or productivity, or expenditures—at a constant rate. The technique has its obvious difficulties. Sometimes there are “system breaks.” As I pointed out earlier, if one had computed agricultural productivity from the mid-1930s for the next twenty-five years at the rate it had followed for the previous twenty-five, the index (using 1910 as a base) would in 1960 have been about 135 or 140, instead of its actual figure of 400. In a different sense, if the rate of expenditures on research and development for the past twenty years were extrapolated in linear fashion for the next twenty, this would mean that by then, most of the Gross National Product would be devoted to that enterprise.
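
  To make the arithmetic of a straight-line projection and of a “system break” concrete, here is a minimal sketch; the 1935 index level and the annual increment are assumptions chosen only so that the projection lands near the 135–140 figure cited above, and the 400 figure for 1960 is the one quoted in the text.

```python
# Linear (constant-rate) extrapolation of a time series, and the error that a
# "system break" introduces. The 1935 index level and annual increment are
# assumed for illustration; only the 1960 figures come from the text.

def linear_projection(base_value, annual_increment, years):
    """Extend a series at a constant absolute increment per year."""
    return base_value + annual_increment * years

index_1935 = 118          # assumed level of the index (1910 = 100) by 1935
increment = 0.7           # assumed points per year over the previous 25 years

projected_1960 = linear_projection(index_1935, increment, 25)   # about 135
actual_1960 = 400                                               # cited in text

print(f"projected 1960 index: {projected_1960:.0f}, actual: {actual_1960}")
# The gap is the "system break": the postwar surge in agricultural
# productivity broke the prewar trend, so the straight-line projection
# undershoots badly.
```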

  Most economic forecasting is still based on linear projections because the rates of change in the economy seem to be of that order. In other areas, such as population or knowledge or sudden demand, where the growth seems to be exponential, various writers, from Verhulst to Price, have sought to apply S-shaped or logistic curves. The difficulty with these curves, as we have seen, is either that they presume a fixed environment, or that in an open environment they become erratic. Recently, however, particularly in the field of technological forecasting, writers have been attracted to the idea of “escalation”; that is, as each curve in a single trajectory levels off, a new curve “takes off” following a similar upward pattern.
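
  As a brief illustration of what an S-shaped projection assumes, the following sketch evaluates a Verhulst-style logistic curve; the ceiling, growth rate, and midpoint are arbitrary illustrative parameters, not drawn from any of the authors cited.

```python
import math

def logistic(t, ceiling, rate, midpoint):
    """Verhulst logistic (S-shaped) curve: near-exponential growth early on,
    leveling off as the value approaches a fixed ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Illustrative parameters only: a quantity saturating at 100 units,
# with its steepest growth around year 25.
for year in range(0, 51, 10):
    print(year, round(logistic(year, ceiling=100, rate=0.2, midpoint=25), 1))
# The weakness noted above: the ceiling is treated as fixed; in an open,
# changing environment the fitted curve can go badly astray.
```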

  The notion of “escalation” has been taken up in recent years by Buckminster Fuller, Ralph Lenz, and Robert U. Ayres to become the most striking, if not the most fashionable, mode of technological forecasting, under the name of “envelope-curve” extrapolation.53 In this technique, the best performance of the parameters of any particular invention (say, the speed of aircraft), or a class of technology, is plotted over a long period of time until one reaches the maximum limit of performance—which is called the envelope. There is an assumption here of a final fixed limit, either because it is an intrinsic theoretical limit (e.g. for terrestrial flight, 16,000 miles an hour, the point at which the increase of speed in flight sends a vehicle into atmospheric orbit), or because it is an extrinsic stipulation (e.g. because of rate of resource use, a trillion and a half GNP figure for the economy as a ceiling by 1985). Having stipulated a final saturation, one then plots previous escalations and presumed new intermediate escalations by the tangents along the “backs” of the individual curves. In effect, envelope curves are huge S-curves, made up of many smaller ones, whose successive decreases in the rate of growth occur as the curve approaches upper limits of intrinsic or extrinsic possibilities.
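
  Read as a procedure, the construction can be sketched as follows: take the best recorded performance of the parameter over time, treat the running record as the envelope, and cap it at the stipulated limit. Every data point in the sketch below is invented; only the 16,000-miles-an-hour ceiling echoes the example above.

```python
# Envelope construction as a running maximum over successive technologies.
# Every performance figure here is invented for illustration; only the
# 16,000 mph ceiling echoes the example in the text.
observations = [
    (1910, 100), (1920, 180), (1925, 170),   # first technology's small S-curve
    (1935, 400), (1945, 650),                # a successor "takes off"
    (1955, 1200), (1965, 2100),              # and another escalation
]

ceiling = 16000   # stipulated intrinsic limit for terrestrial flight

envelope = []
best_so_far = 0
for year, value in observations:
    best_so_far = max(best_so_far, value)        # the envelope never declines
    envelope.append((year, min(best_so_far, ceiling)))

print(envelope)
# Forecasting then means extending this envelope toward the ceiling, on the
# assumption that the next escalation (the next small S-curve) will arrive.
```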

  In other words, for any class of technology, one has to know, or assume, the absolute finite limits, and then estimate some regular rate of growth toward that limit. The fact that, at the existing moment, the engineering possibilities of moving beyond the present do not exist is, in and of itself, no barrier; it is assumed that the engineering breakthrough will occur.

  Envelope-curve analysis, as Donald Schon points out, is not, strictly speaking, a forecast of invention, but rather of the presumed effects of sequential inventions on some technological parameters.54 It assumes that since there is some intrinsic logic in the parameter—e.g. the increasing efficiency of external combustion energy conversion systems, the accelerating rate of increase of operating energy in high-energy physics particle accelerators, in aircraft power trend or speed trend curves (see Figure 3-1)—there will necessarily be an immanent development of the parameter. It also assumes that some invention inevitably will come along which will send the curve shooting up along the big S-curve.

  Robert U. Ayres of the Hudson Institute, the most enthusiastic proponent of envelope-curve extrapolation, has argued that the technique works, even when one extrapolates beyond the current “state of the art” in any particular field, because the rate of invention which has characterized the system in the past may be expected to continue until the “absolute” theoretical limits (e.g. velocity of light, absolute zero temperature) are reached for any particular parameter. One therefore should not judge the existing performance capacity of a parameter (e.g. operating energy in a particle accelerator) by the limits of a particular kind of component, but should look at the broad “macrovariable” in a historical context. By aggregating the particular types one can see its possible growth in a “piggy-back” jump which takes off from the previous envelope point.

  FIGURE 3-1

  Speed Trend Curve

  SOURCE: Courtesy Robert U. Ayres, Hudson Institute.

  Technological forecasters have claimed that in most instances it is useful to extrapolate from such envelope curves in terms of continued logarithmic growth, and to deviate from this assumption only when persuasive reasons are found for doing so. For example, forecasts projecting the existing trend at any time after 1930 would have produced a more accurate forecast about the maximum speed of aircraft than those based on the existing technological limitations. The singular point about envelope-curve extrapolation is that it deals not with individual technologies but with performance characteristics of “macrovariables.” Ayres remarks that the more disaggregative (component-oriented) the analysis, the more it is likely to be biased intrinsically on the conservative side. In fact, it is almost normal for the maximum progress projected on the basis of analysis of components to be, in effect, the lower limit on actual progress, because it assumes no new innovations will come to change the technology. By dealing with a single class of technology, upper limits of growth can be readily stated for envelope curves, although these cannot be given for individual techniques, since these are subject to innovation, substitutability, and escalation.
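
  A minimal sketch of what “continued logarithmic growth” amounts to: fit a straight line to the logarithm of the best recorded performance and extend it forward, deviating only when there is persuasive reason. The speed figures below are rough placeholders, not Ayres’s data.

```python
import math

# (year, top speed in mph) pairs -- rough placeholders, not the actual trend.
points = [(1910, 100), (1930, 350), (1945, 700), (1960, 2000)]

# Least-squares fit of log(speed) against year, i.e. assume exponential growth.
n = len(points)
xs = [year for year, _ in points]
ys = [math.log(speed) for _, speed in points]
x_bar, y_bar = sum(xs) / n, sum(ys) / n
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
intercept = y_bar - slope * x_bar

def projected_speed(year):
    """Naive continuation of the log-linear trend."""
    return math.exp(intercept + slope * year)

print(round(projected_speed(1970)))
# A component-level analysis would cap this projection at the limits of the
# existing engines; the envelope-curve argument is that such caps have
# historically proved too conservative.
```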

  Macrovariables have their obvious limits as well.55 Thus in the plotting of the maximum energy curve in a thermal power plant since 1700 (see Figure 3-2), the curve has escalated in typical fashion from 1 or 2 percent to about 44 percent where it stands today. Increases have occurred rather sharply, and have been of the order of 50 percent each time, but at steadily decreasing intervals. Ayres predicts, on this basis, a maximum operating efficiency of 55-60 percent around 1980. In view of the long lead time of a commercial power plant, many persons might doubt this prediction. But since efficiencies of this magnitude are feasible by several means (fuel cells, gas turbines, magnetohydrodynamics) which are being explored, such a forecast may be realizable. However, increases after 1980 would pose a question for the curve since only one improvement factor of 1.5 would bring efficiencies up to 90 percent and improvement beyond that at the same rate is clearly impossible.
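
  The arithmetic behind this ceiling can be made explicit. Starting from the roughly 44 percent figure quoted above and applying the historical improvement factor of about 1.5, the escalations exhaust themselves after one or two more jumps:

```python
# Successive escalations of thermal efficiency by the historical factor of
# roughly 1.5, capped at the physical ceiling of 100 percent. The starting
# value follows the ~44 percent figure quoted in the text.
efficiency = 0.44
factor = 1.5

for step in range(1, 4):
    efficiency = min(efficiency * factor, 1.0)
    print(f"after escalation {step}: {efficiency:.0%}")
# 44% -> 66% -> 99% -> capped: only one or two further jumps of the
# historical size are arithmetically possible, which is why the curve must
# flatten near its ceiling after the forecast 55-60 percent level.
```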

  It is important to understand the central logic of envelope-curve projections. What the method assumes is that, for any class of technology, there is somewhere a set of fixed limits (such as the absolute velocity of light). Then, having stipulated that outer point, they seek to estimate the intermediate trajectory of the technological development as a series of escalated S-curves moving toward that upper limit.

  FIGURE 3-2

  Efficiency of External Combustion Energy Conversion Systems

  SOURCE: From Energy for Man by Hans Thirring, copyright 1958 by Indiana University Press, published by Harper & Row Torchbooks. Reproduced by permission of Indiana University Press and George C. Harrap and Company.

  The weakness in the theory is, in part, the problem of any forecasting; namely, the choice of parameters and the estimation of their place in the curve, relative to the present, at which successive leveling-offs will occur, and relative to the presumed final limit, whether intrinsic or extrinsic. More generally, when one takes performance characteristics of variables (such as aircraft speed) there is no developed theory why progress should occur in this fashion, other than the argument that certain engineering parameters tend to grow logarithmically, or the crucial assumption that some invention will be forthcoming to produce the next escalation. As to the latter point, forecasters come more and more to rely on the observation, first made by William F. Ogburn and refined considerably by Robert K. Merton, that invention is a “multiple” or simultaneous affair. Because invention is, increasingly, an impersonal social process and not dependent on the genius of individual inventors, such inventions are responses to social need and economic demand. The assumption is made that where there is a demand, the new process will be found. But there may be little reason to assume that such inventions will appear “on schedule.”

  Intuitive technique. In most common-sense forecasting, the simplest procedure is to ask the expert, presumably the man who knows most and best about a field. The problem, of course, is who is an expert, how to determine the test of reliability for his forecasts, and, if experts differ, how to choose among them. To deal with this problem of “the epistemology of the inexact sciences,” Olaf Helmer, then a mathematician at the Rand Corporation, devised the “Delphi technique” as an orderly, planned, methodological procedure in the elicitation and use of expert opinion. The rudiments of the procedure are simple: it involves the successive questioning individually of a large panel of experts in any particular field and the arrival by confrontations at some range, or consensus, of opinion in later rounds. Together with Theodore Gordon, Helmer conducted a long-range forecasting study at Rand to test the efficacy of the method.56

  In the Rand forecasting study, six broad areas were selected for investigation—scientific breakthroughs, population growth, automation, space progress, probability and prevention of war, and future weapons systems—and a panel of experts was formed for each of the six areas.
  In the panel on inventions and scientific breakthroughs, individuals were asked by letter to list those innovations which appeared urgently needed and realizable in the next fifty years. A total of forty-nine possibilities were named. In the second round, again by letter, the panel was asked to estimate the fifty-fifty probability of the realization of each of the items within a specified time period. From the results, the median year and quartile range for each item were established. (Thus it was predicted that economically useful desalination of sea water would come between 1965 and 1980, with 1970 as the median year, and that controlled thermonuclear power would be available between 1978 and 2000, with 1985 as the median year.) In this second round the investigators found a considerable consensus for ten breakthroughs. They selected seventeen of the remaining thirty-nine for further probing. In a third round, the experts were asked to consider the probable time of these seventeen breakthroughs; if the individual opinion fell outside the range established by the middle 50 percent of the previous responses, the expert was asked to justify his answer. In the fourth round, the range of times was narrowed even further, thirty-one items were included in the final list on which reasonable consensus had been obtained, and majority and minority opinions were stated.
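
  The statistical core of a single Delphi round, as described here, can be sketched simply: collect each expert’s date estimate, report the median and the middle-50-percent (interquartile) range, and ask those who fall outside that range to justify their answers in the next round. The dates below are invented for illustration.

```python
import statistics

# One item, one Delphi round: each expert's estimate of the year at which an
# innovation reaches a fifty-fifty chance of realization. Dates are invented.
estimates = [1968, 1970, 1970, 1972, 1975, 1978, 1985, 2000]

# Quartiles of the panel's answers: lower quartile, median, upper quartile.
q1, median_year, q3 = statistics.quantiles(estimates, n=4)

# Experts outside the middle 50 percent are asked to justify their answers
# in the next round; the range typically narrows as rounds proceed.
outliers = [e for e in estimates if e < q1 or e > q3]
print(f"median {median_year}, middle 50% range {q1}-{q3}")
print("asked to justify next round:", outliers)
```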

  Laborious as all this may be, the panel technique was adopted for a double reason: it eliminates or lessens undue influence that would result from face-to-face discussion (e.g. the bandwagon effects of majority opinion, embarrassment in abandoning a publicly expressed opinion, etc.) and it allows a feedback, through successive rounds, which gives the respondents time to reconsider an opinion and to reassert or establish new probabilities for their choices.

 
