Narrative Economics


by Robert J. Shiller

Fortunately, these expectations were wrong; there was no recurrence of depression. Yes, there was a fatalistic fear that depression would return, but the angry narratives of the recent depressions had faded, including the angry narrative of profiteering that contributed to the post–World War I depression. That narrative just did not restart. In addition, the idea that prices should fall to 1913 levels no longer seemed realistic. The end of World War II was also a distraction that temporarily reduced attention to technological unemployment. Instead, a constellation of economic narratives after World War II began to suggest that it was all right to spend money now that the war was over. (We discuss profiteering and the expectation of lower prices in more detail in chapter 17.)

  Among these narratives was the story of the many expensive vacations that Americans were taking right after the war, which offset the frugality narratives of the Great Depression. “The greatest surge in travel in the history of the Americas” was on, and 1946, the year after the end of the war, was dubbed the “Victory Vacation Year.”2 Even a couple of years before the war ended, travel agents and vacation resorts in the Western Hemisphere had begun promoting the extravagant traveling victory vacation as a way for consumers to spend some of the wealth they’d socked away in government war bonds.

  When the vacations actually happened in 1946, the vacationers duly recorded them on new Ready-Mounts (35mm color slides) and stored those slides in a new case that complemented last year’s Christmas present, a slide projector.3 Also, consumers used home movie cameras (which had been mostly unavailable until the years after World War II) to create extensive travelogues. These slides and movies of the vacation, as well as of the new baby (that’s me, born in 1946), were shown to friends and relatives back home, spreading the sense of happy times and a patriotic feeling about the shared experience of spending extravagance.

  People also began to see their new optimism bolstered by their perceptions of others’ optimism. The baby boom, first noted in 1946, marked a big difference from the end of World War I, which was followed by a deadly influenza epidemic instead of a baby boom. The new optimistic stories after 1948 became a self-fulfilling prophecy, a term coined in 1948 by Robert K. Merton. A 1950 newspaper article asserted:

  With such an optimistic consensus as has developed at this year end, the forecasting itself can have the effect of helping to promote high activity.4

  But the question we must ask is this: Why did so many people in 1945, at the end of World War II, expect a postwar depression? And why did the intermittent recessions in the 1950s and 1960s interrupt the overall optimism? The answer must lie in good part in a Great Depression narrative that still had intermittent power in the postwar period: the same technological unemployment narrative but in mutated form.

  The Automation Recession Narrative

  The same “zero hour” labor-saving machinery narrative that appeared in 1929 reappeared in the second half of the twentieth century, but in mutated forms.

  The term singularity began to be used after Einstein published his general theory of relativity in 1915. The word denotes a situation in which some terms in the equations become infinite, and it was used to describe the astronomical phenomenon of what came to be called the black hole: a “singularity in space-time.” But later the glamorous term singularity came to be defined as the time when machines are finally smarter than people in all dimensions.
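
  To make “terms in the equations become infinite” concrete, a standard textbook illustration (my addition, not Shiller’s text) is the Schwarzschild solution of Einstein’s equations, whose curvature invariant

\[ R_{abcd}R^{abcd} = \frac{48\,G^{2}M^{2}}{c^{4}r^{6}} \]

diverges as the radius r approaches zero: the center of a black hole is, in this sense, a genuine singularity in space-time.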

  Such mutations in the economic narrative shifted attention from the muscles being replaced by electrical machines to the brain being replaced by artificial intelligence. The basic technological unemployment narrative is the same, but the examples have a wider scope. First, giant locomotives and electrical power equipment economized on human muscle power. After the mutation, the narrative focused on computers replacing human thinking. This mutation refreshed the narrative.

  The term automation differs from labor-saving in that automation suggests no one is near the production process, except perhaps a technician in a distant control room who presses buttons to start the process. Starting in the 1950s, automation was described not just as machines but as “machines running machines.”5 It suggests a process that runs by itself, with no one even paying attention.

  Around 1955, the word automation suddenly launched into an epidemic. There was considerable public worry that jobs would be replaced. Notably, electronic data processing began to run whole business operations. The new narrative was of a more wholesale replacement of human involvement in production than in the technological unemployment narrative of the 1920s and 1930s. The year 1956 saw the first “automation strike … fomented by fear of the push-button age.”6 Stories were told of an unimaginable leap forward in automation. This from 1956:

  Visitors to an Eastern manufacturing plant stared in amazement recently as they viewed a new type of factory in operation. While they watched, enormous sheets of steel were fed into a conveyor system. Then the steel traveled along 27 miles of conveyors, was worked over by 2,613 machines and tools, and emerged as brand-new refrigerators—packed, crated, and ready for shipment.

  What amazed the visitors was the fact that no human hands touched the machines or steel while two gleaming-white refrigerators were being produced each minute.

  They were seeing automation in action.7

  Automation was also seen as foretelling the imminent end of labor unions, which had stood up for workers’ rights in the past. It is impossible for labor to organize the machines.8

  Surveys of workers show a sudden shift around the time of the twin recessions of 1957–58 and 1960–61. Public opinion analyst Samuel Lubell, famous for his success at predicting election outcomes, wrote in 1959, during the slow economy between the two recessions:

  In the Spring of 1958 when I conducted a survey of how the public felt about the recession relatively few persons talked of automation, even as a cause of unemployment.

  Currently every third or fourth worker one interviews is likely to cite some case history, drawn from personal experience, of workers displaced by machinery.

  Often the tag line to these stories is the rueful comment, “Some men will never get back their jobs.” Some say, “It’s only the beginning.”

  The same gloomy prediction, “in two years a machine will be doing my job,” was voiced by an elevator operator on Staten Island, an accountant in Cleveland, a switchman in Youngstown and a railway clerk in Detroit.9

  The twin recessions, the severest since the Great Depression, may have been caused by reduced spending attendant on public fears about the future amidst the automation scare. The 1957–58 recession was then dubbed “the automation recession.”10

  The 1957 motion picture Desk Set,11 starring legendary actors Katharine Hepburn and Spencer Tracy, is set at a company about to acquire an IBM mainframe computer called Emerac. Hepburn plays the role of Bunny Watson, a super-knowledgeable reference librarian for the company. Tracy plays Richard Sumner, a computer engineer who is working on plans for the new computer. In the course of the movie, Richard falls in love with Bunny and proposes to her, amid tension over the fact that he is working to destroy her livelihood. The movie notes that an earlier computer has already automated payroll and eliminated many jobs in the payroll department. Tension builds when Emerac malfunctions and sends out pink slips firing not only Bunny but everyone else in the company. The mistake is later corrected.

  The film shows the computer taking over some of the functions of the company’s reference library by answering questions typed on a console. For example, Emerac is asked, “What is the total weight of the earth?” Emerac answers, “With or without people?” (I recently asked the voice-activated Google Assistant, OK Google, the same question, and it answered matter-of-factly: 5.972 × 10²⁴ kg.) Bunny then asks Emerac, “Should Bunny Watson marry Richard Sumner?” Emerac answers, “No,” perhaps suggesting that the computer was romantically involved with her creator. (I asked OK Google the same question, and it responded by directing me to a 2011 New Yorker article, “Is I-Pad the New Emerac?”)

  Extensive concern about the dangers of automation continued into the 1960s. In 1962, the Center for the Study of Democratic Institutions issued a report on cybernation (a word that started to take off as a synonym for automation but fizzled after the 1960s). The report concluded:

  Cybernation presages changes in the social system so vast and so different from those with which we have traditionally wrestled that it will challenge to their roots our current perceptions about the viability of our way of life. If our democratic system has a chance to survive at all, we shall need far more understanding of the consequences of cybernation.12

  In 1963, labor leader George Meany tied a demand for a thirty-five-hour workweek to concerns about automation. In 1964, during the presidential election campaign, US president Lyndon Johnson signed into law a bill creating the National Commission on Technology, Automation, and Economic Progress. The commission’s report13 was delayed until 1966, when the scare was mostly over.

  The 1957–66 automation scare dissipated rather quickly and stayed dormant for a number of years. In 1965, the Wall Street Journal ran a story by Alfred L. Malabre, Jr., titled “Automation Alarm Is Proving False.” The article noted that people in 1965 seemed simply to have forgotten about automation. Malabre found it interesting that automation wasn’t even mentioned at a major United Auto Workers convention in 1965. The article concluded, “The degree to which this pessimism pervaded the leading councils of labor, the campus, the Government and even management was, to say the least, extensive.”14

  Star Wars Stories

  The automation scare came roaring back to life in the 1980s. We’ve seen that narratives often recur in mutated forms. Sometimes the new narratives make use of new words, but sometimes an old word comes back. Figure 14.1 shows an enormous spike in use of the word automation in the early 1980s. Use of the word robot, coined in the 1920s, also shows an enormous spike in the early 1980s. One possible explanation: the contagiousness of robot stories was encouraged by the phenomenal success of home computer manufacturers Atari and Apple, which led people to believe that technical progress was accelerating. A company called The Robot Store began manufacturing and selling humanoid robots in 1983. These robots looked like people, and the company’s president predicted that between 10% and 20% of American households would own robots within two years.15 In fact, these devices were practically useless, and the product line flopped.

  Consistent with this observed spike of the word robot around 1980, we observe a sequence of very successful robot movies around the same time, showing how contagion can change over time and bring new viral stories with it. George Lucas’s Star Wars trilogy, a sequence of three movies that appeared between 1977 and 1983, featured the world’s most famous (to date) robots, R2-D2 and C-3PO. The American television cartoon feature The Transformers, which focused on the adventures of gigantic robots with the ability to transform themselves into vehicles and weaponry, aired from 1984 to 1987. Both of these series were accompanied by massive sales of children’s toy figures. Blade Runner (1982) and The Terminator (1984) were other successful robot films of that time.

  Of course, robots had appeared in movies long before the 1970s, and they continue to do so today. In fact, robots in movies precede even the word robot, coined by Čapek, the Czech playwright, which started to go viral in 1922. Early film robots (or automatons) were called dummies (as in The Dummy, 1917) or mechanical men (as in L’uomo meccanico, 1921). Many more robots appeared in movies after 1922, notably Futura in Fritz Lang’s 1927 Metropolis, which called a robot a Maschinenmensch, or machine-human. However, most films featuring robots were B-grade horror movies with wildly implausible and juvenile themes, analogous to space-aliens-destroy-the-world films that have had relatively little impact on public thinking.16 These mostly silly movies probably did not have much impact on economic activity except where they may have lent emotional color to fears about the automated future.

  Another spike in successful robot movies preceded the automation scare of 1957–64. Film robots of that era included Ro-Man in Robot Monster (1953), Tobor (robot spelled backward) in Tobor the Great (1954), Chani in Devil Girl from Mars (1954), the Venusian Robots in Target Earth (1954), Robby the Robot in Forbidden Planet (1956), Kronos in Kronos: Destroyer of the Universe (1957), the Colossus in The Colossus of New York (1957), and M.O.G.U.E.R.A. in The Mysterians (1957).

  A significantly mutated form of the automation narrative came back with the twin recessions of 1980 and 1981–82, when the unemployment rate reached into the double digits. The unemployment encouraged the thought that automation might again be responsible for the loss of jobs, an idea that must have fed back into reduced aggregate demand and even higher unemployment. In 1982, Andrew Pollack of the New York Times discerned a “new automation,” exemplified by the now very visible beginnings of automation of offices:

  Those affected so far by office automation have been mainly secretaries—who are still in short supply—and other clerical workers, whose tasks can be speeded by replacing typewriters with electronic word processors and filing cabinets with computerized storage systems. But new office automation systems are affecting management as well, because they give managers the ability to call up information out of the company computers and analyze it themselves, a function that once required a staff of subordinates and middle-level management.17

  Once again, a narrative went viral: we had reached a singularity that made all past experience with labor-saving machinery irrelevant and that might, right then, be producing a huge army of unemployed. “I don’t see where we can run to this time,”18 Pollack wrote. This viral narrative may well be the real reason that the twin 1980s recessions were so damaging.

  As Figure 14.1 shows, there was a third spike in use of the word automation around 1995. Once again, narratives surged that a singularity was at hand that made all past experience with labor-saving devices obsolete. In 1995, at the very beginning of the Internet boom, a narrative circulated about the advent of computer networks:

  Most economists think the ill-effects of automation are transitory, but a growing minority of their colleagues and many technologists think the current surge of technological change differs from anything seen before, for two reasons.

  First, tractors put only farmers out of work, and machine-tool automation only factory workers, but smart devices and computer networks can invade almost every job category involving computing, communicating or simple deduction. They can fill out and check mortgage-loan forms and transfer phone calls, and even allow cows to milk themselves without human assistance at microcontrolled milkers. No technology has ever been as protean, so unrestrained by physical limits, so capable of cutting huge swaths through unrelated industries such as banking, power utilities, insurance and telecommunications.

  Second, the power of devices and networks run by microprocessors and software is increasing at a rate never seen before, roughly doubling in performance every 18 months or so. Among other things, this trend leads to unprecedented reductions in the cost of microchip-based technology, allowing it to be used much more widely and rapidly.19
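
  To put the quoted doubling rate in perspective (a back-of-the-envelope illustration of mine, not from the 1995 article): performance that doubles every eighteen months grows by a factor of

\[ 2^{t/1.5} \]

after t years, so one decade of such doubling implies \( 2^{10/1.5} \approx 100 \), roughly a hundredfold improvement.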

  This new twist in the fear-of-automation narrative around 1995 did not immediately produce a recession. Most people were not moved to curtail spending because of it, and the world economy boomed. The dominant narratives in the 1990s seemed to be focused on the wonderful business opportunities brought by the coming new millennium. The automation narratives trailed off again in the 2000s, with the distractions of the dot-com boom, the real estate boom, and the world financial crisis of 2007–9. But the automation narratives are still with us, described by new catchphrases.

  The Dot-Com or Millennium Boom in the Stock Markets

  The Internet, first available to the public around 1994, launched a narrative of the amazing power of computers. Before the turn of the century, the Internet Age appeared to coincide with the coming of the new millennium in 2000, much talked about when it was an imminent future event. Dot-com stocks were the primary beneficiaries in the years leading up to 2000. During the market expansion from 1974 to 2000, stock prices rose more than twentyfold.20 The period marked the biggest stock market expansion in US history, and descriptions of the expansion suggested exactly that. (This story is beginning to be forgotten now, as it is being replaced by the narratives surrounding the mere threefold expansion following the world financial crisis of 2007–9, which are more contagious at the time of this writing.)
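
  As a rough annualized figure (my arithmetic, not in the text): a twentyfold rise over the twenty-six years from 1974 to 2000 corresponds to

\[ 20^{1/26} \approx 1.12, \]

that is, compound growth of about 12 percent per year.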

  Discussions of the stock market expansion in the last quarter of the twentieth century did not stress fears of being replaced by machines as a motive to buy dot-com stocks. Why not? People tend to speak more of the opportunity provided by investments in information-age inventions than of their personal feelings of inadequacy in the face of technological progress. But it appears that such feelings may have driven people’s motivation to be part of the dot-com phenomenon as stockholders of tech companies.

  Fears of the Singularity Gain Strength after the 2007–9 World Financial Crisis

  According to Google Trends, the latest wave of automation/technology-based fears began around 2016 and continues unabated at the time of this writing.

  How do we explain this recent surge in automation fears? To answer this question, we must consider the advent of Apple’s Siri, the iPhone app launched in 2011 that uses automatic speech recognition (ASR) and natural language understanding (NLU) to (attempt to) answer the questions you ask it.21 To many, Siri’s ability to talk, understand, and provide information looked like the advent of that long-awaited singularity when machines become as smart as, or smarter than, people. That same year, IBM presented its talking computer Watson as a competitor on the television quiz show Jeopardy!, and Watson beat the human champions who played against it. These have since been followed by Amazon Echo’s Alexa, Google’s “OK Google,” and other variations and improvements such as Alibaba’s Tmall Genie, LingLong’s DingDong, and Yandex’s Alice. These inventions were amazing; the time prophesied by Star Wars, The Transformers, and The Jetsons seemed finally to have arrived.

 
