The Glass Cage: Automation and Us
Some of the heavy spending on robots and other automation technologies in recent years may reflect temporary economic conditions, particularly the ongoing efforts by politicians and central banks to stimulate growth. Low interest rates and aggressive government tax incentives for capital investment have likely encouraged companies to buy labor-saving equipment and software that they might not otherwise have purchased.29 But deeper and more prolonged trends also seem to be at work. Alan Krueger, the Princeton economist who chaired Barack Obama’s Council of Economic Advisers from 2011 to 2013, points out that even before the recession “the U.S. economy was not creating enough jobs, particularly not enough middle-class jobs, and we were losing manufacturing jobs at an alarming rate.”30 Since then, the picture has only darkened. It might be assumed that, at least when it comes to manufacturing, jobs aren’t disappearing but simply migrating to countries with low wages. That’s not so. The total number of worldwide manufacturing jobs has been falling for years, even in industrial powerhouses like China, while overall manufacturing output has grown sharply.31 Machines are replacing factory workers faster than economic expansion creates new manufacturing positions. As industrial robots become cheaper and more adept, the gap between lost and added jobs will almost certainly widen. Even the news that companies like GE and Apple are bringing some manufacturing work back to the United States is bittersweet. One of the reasons the work is returning is that most of it can be done without human beings. “Factory floors these days are nearly empty of people because software-driven machines are doing most of the work,” reports economics professor Tyler Cowen.32 A company doesn’t have to worry about labor costs if it’s not employing laborers.
The industrial economy—the economy of machines—is a recent phenomenon. It has been around for just two and a half centuries, a tick of history’s second hand. Drawing definitive conclusions about the link between technology and employment from such limited experience was probably rash. The logic of capitalism, when combined with the history of scientific and technological progress, would seem to be a recipe for the eventual removal of labor from the processes of production. Machines, unlike workers, don’t demand a share of the returns on capitalists’ investments. They don’t get sick or expect paid vacations or demand yearly raises. For the capitalist, labor is a problem that progress solves. Far from being irrational, the fear that technology will erode employment is fated to come true “in the very long run,” argues the eminent economic historian Robert Skidelsky: “Sooner or later, we will run out of jobs.”33
How long is the very long run? We don’t know, though Skidelsky warns that it may be “uncomfortably close” for some countries.34 In the near term, the impact of modern technology may be felt more in the distribution of jobs than in the overall employment figures. The mechanization of manual labor during the Industrial Revolution destroyed some good jobs, but it led to the creation of vast new categories of middle-class occupations. As companies expanded to serve bigger and more far-flung markets, they hired squads of supervisors and accountants, designers and marketers. Demand grew for teachers, doctors, lawyers, librarians, pilots, and all sorts of other professionals. The makeup of the job market is never static; it changes in response to technological and social trends. But there’s no guarantee that the changes will always benefit workers or expand the middle class. With computers being programmed to take over white-collar work, many professionals are being forced into lower-paying jobs or made to trade full-time posts for part-time ones.
While most of the jobs lost during the recent recession were in well-paying industries, nearly three-fourths of the jobs created since then are in low-paying sectors. Having studied the causes of the “incredibly anemic employment growth” in the United States since 2000, MIT economist David Autor concludes that information technology “has really changed the distribution of occupation,” creating a widening disparity in incomes and wealth. “There is an abundance of work to do in food service and there is an abundance of work in finance, but there are fewer middle-wage, middle-income jobs.”35 As new computer technologies extend automation into even more branches of the economy, we’re likely to see an acceleration of this trend, with a further hollowing of the middle class and a growing loss of jobs among even the highest-paid professionals. “Smart machines may make higher GDP possible,” notes Paul Krugman, the Nobel Prize–winning economist, “but also reduce the demand for people—including smart people. So we could be looking at a society that grows ever richer, but in which all the gains in wealth accrue to whoever owns the robots.”36
The news is not all dire. As the U.S. economy gained steam during the second half of 2013, hiring strengthened in several sectors, including construction and health care, and there were encouraging gains in some higher-paying professions. The demand for workers remains tied to the economic cycle, if not quite so tightly as in the past. The increasing use of computers and software has itself created some very attractive new jobs as well as plenty of entrepreneurial opportunities. By historical standards, though, the number of people employed in computing and related fields remains modest. We can’t all become software programmers or robotics engineers. We can’t all decamp to Silicon Valley and make a killing writing nifty smartphone apps.* With average wages stagnant and corporate profits continuing to surge, the economy’s bounties seem likely to go on flowing to the lucky few. And JFK’s reassuring words will sound more and more suspect.
Why might this time be different? What exactly has changed that may be severing the old link between new technologies and new jobs? To answer that question we have to look back to that giant robot standing at the gate in Leslie Illingworth’s cartoon—the robot named Automation.
THE WORD automation entered the language fairly recently. As best we can tell, it was first spoken in 1946, when engineers at the Ford Motor Company felt the need to coin a term to describe the latest machinery being installed on the company’s assembly lines. “Give us some more of that automatic business,” a Ford vice president reportedly said in a meeting. “Some more of that—that—‘automation.’ ”37 Ford’s plants were already famously mechanized, with sophisticated machines streamlining every job on the line. But factory hands still had to lug parts and subassemblies from one machine to the next. The workers still controlled the pace of production. The equipment installed in 1946 changed that. Machines took over the material-handling and conveyance functions, allowing the entire assembly process to proceed automatically. The alteration in work flow may not have seemed momentous to those on the factory floor. But it was. Control over a complex industrial process had shifted from worker to machine.
The new word spread quickly. Two years later, in a report on the Ford machinery, a writer for the magazine American Machinist defined automation as “the art of applying mechanical devices to manipulate work pieces . . . in timed sequence with the production equipment so that the line can be put wholly or partially under push-button control at strategic stations.”38 As automation reached into more industries and production processes, and as it began to take on metaphorical weight in the culture, its definition grew more diffuse. “Few words of recent years have been so twisted to suit a multitude of purposes and phobias as this new word, ‘automation,’ ” grumbled a Harvard business professor in 1958. “It has been used as a technological rallying cry, a manufacturing goal, an engineering challenge, an advertising slogan, a labor campaign banner, and as the symbol of ominous technological progress.” He then offered his own, eminently pragmatic definition: “Automation simply means something significantly more automatic than previously existed in that plant, industry, or location.”39 Automation wasn’t a thing or a technique so much as a force. It was more a manifestation of progress than a particular mode of operation. Any attempt at explaining or predicting its consequences would necessarily be tentative. As with many technological trends, automation would always be both old and new, and it would require a fresh reevaluation at each stage of its advance.
That Ford’s automated equipment arrived just after the end of the Second World War was no accident. It was during the war that modern automation technology took shape. When the Nazis began their bombing blitz against Great Britain in 1940, English and American scientists faced a challenge as daunting as it was pressing: How do you knock high-flying, fast-moving bombers out of the sky with heavy missiles fired from unwieldy antiaircraft guns on the ground? The mental calculations and physical adjustments required to aim a gun accurately—not at a plane’s current position but at its probable future position—were far too complicated for a soldier to perform with the speed necessary to get a shot off while a plane was still in range. This was no job for mortals. The missile’s trajectory, the scientists saw, had to be computed by a calculating machine, using tracking data coming in from radar systems along with statistical projections of a plane’s course, and then the calculations had to be fed automatically into the gun’s aiming mechanism to guide the firing. The gun’s aim, moreover, had to be adjusted continually to account for the success or failure of previous shots.
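The gunners' prediction problem can be made concrete with a toy calculation. The sketch below, in Python, assumes a drastically simplified case: a target flying at constant velocity, a gun at the origin, a fixed shell speed, and no gravity, drag, or radar noise. The function name and parameters are illustrative inventions, not a description of the wartime predictor designs, which also had to smooth noisy tracking data and fold in the results of earlier shots.

```python
import math

def intercept_point(px, py, vx, vy, shell_speed):
    """Return the point to aim at so a shell fired now meets the target.

    The gun sits at the origin. (px, py) is the target's current position,
    (vx, vy) its velocity, all in consistent units (say meters and seconds).
    Returns None if the target cannot be intercepted.
    """
    # Find the flight time t at which the shell's reach (shell_speed * t)
    # equals the target's distance at time t; that reduces to a quadratic.
    a = vx * vx + vy * vy - shell_speed ** 2
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py

    if abs(a) < 1e-12:                  # target and shell speeds nearly equal
        if abs(b) < 1e-12:
            return None
        candidates = [-c / b]
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0:
            return None
        root = math.sqrt(disc)
        candidates = [(-b - root) / (2.0 * a), (-b + root) / (2.0 * a)]

    times = [t for t in candidates if t > 0]
    if not times:
        return None
    t = min(times)                      # earliest feasible intercept
    return (px + vx * t, py + vy * t)   # aim here, not at the current position

# A bomber 8 km out, crossing at 120 m/s, against an 800 m/s shell:
print(intercept_point(8000.0, 0.0, 0.0, 120.0, 800.0))
```

Even in this stripped-down form, the answer is an aim point well ahead of where the plane currently is, which is exactly the computation no gunner could perform fast enough by hand.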
As for the members of the gunnery crews, their work would have to change to accommodate the new generation of automated weapons. And change it did. Artillerymen soon found themselves sitting in front of screens in darkened trucks, selecting targets from radar displays. Their identities shifted along with their jobs. They were no longer seen “as soldiers,” writes one historian, but rather “as technicians reading and manipulating representations of the world.” 40
In the antiaircraft cannons born of the Allied scientists’ work, we see all the elements of what now characterizes an automated system. First, at the system’s core, is a very fast calculating machine—a computer. Second is a sensing mechanism (radar, in this case) that monitors the external environment, the real world, and communicates essential data about it to the computer. Third is a communication link that allows the computer to control the movements of the physical apparatus that performs the actual work, with or without human assistance. And finally there’s a feedback method—a means of returning to the computer information about the results of its instructions so that it can adjust its calculations to correct for errors and account for changes in the environment. Sensory organs, a calculating brain, a stream of messages to control physical movements, and a feedback loop for learning: there you have the essence of automation, the essence of a robot. And there, too, you have the essence of a living being’s nervous system. The resemblance is no coincidence. In order to replace a human, an automated system first has to replicate a human, or at least some aspect of a human’s ability.
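Stripped to its logic, that four-part structure is the skeleton of any closed-loop controller. The following is a deliberately minimal, hypothetical sketch in Python; the names, the proportional-adjustment strategy, and the toy actuator are invented for illustration and are not modeled on the wartime gun directors.

```python
def run_feedback_loop(read_sensor, send_command, target, gain=0.5, steps=20):
    """A bare-bones closed loop: sense, compute, command, and let feedback
    from the next sensor reading correct the following pass."""
    for _ in range(steps):
        measured = read_sensor()          # sensing mechanism: observe the world
        error = target - measured         # calculating machine: how far off are we?
        send_command(gain * error)        # communication link: drive the actuator
        # Feedback happens on the next pass: read_sensor() then reflects the
        # effect of this command, so each new correction accounts for it.

# A toy actuator whose state simply accumulates the adjustments it is sent.
state = {"value": 0.0}

def nudge(adjustment):
    state["value"] += adjustment

run_feedback_loop(read_sensor=lambda: state["value"], send_command=nudge, target=10.0)
print(round(state["value"], 3))           # settles close to the target of 10.0
```

Each pass senses the world, computes an error, issues a command, and lets the next reading report how well the command worked: the same loop of sensing, calculation, action, and correction described above.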
Automated machines existed before World War II. James Watt’s steam engine, the original prime mover of the Industrial Revolution, incorporated an ingenious feedback device—the fly-ball governor—that enabled it to regulate its own operation. As the engine sped up, it rotated a pair of metal balls, creating a centrifugal force that pulled a lever to close a steam valve, keeping the engine from running too fast. The Jacquard loom, invented in France around 1800, used steel punch cards to control the movements of spools of different-colored threads, allowing intricate patterns to be woven automatically. In 1866, a British engineer named J. Macfarlane Gray patented a steamship steering mechanism that was able to register the movement of a boat’s helm and, through a gear-operated feedback system, adjust the angle of the rudder to maintain a set course.41 But the development of fast computers, along with other sensitive electronic controls, opened a new chapter in the history of machines. It vastly expanded the possibilities of automation. As the mathematician Norbert Wiener, who helped write the prediction algorithms for the Allies’ automated antiaircraft gun, explained in his 1950 book The Human Use of Human Beings, the advances of the 1940s enabled inventors and engineers to go beyond “the sporadic design of individual automatic mechanisms.” The new technologies, while designed with weaponry in mind, gave rise to “a general policy for the construction of automatic mechanisms of the most varied type.” They paved the way for “the new automatic age.” 42
Beyond the pursuit of progress and productivity lay another impetus for the automatic age: politics. The postwar years were characterized by intense labor strife. Managers and unions battled in most American manufacturing sectors, and the tensions were often strongest in industries essential to the federal government’s Cold War buildup of military equipment and armaments. Strikes, walkouts, and slowdowns were daily events. In 1950 alone, eighty-eight work stoppages were staged at a single Westinghouse plant in Pittsburgh. In many factories, union stewards held more power over operations than did corporate managers—the workers called the shots. Military and industrial planners saw automation as a way to shift the balance of power back to management. Electronically controlled machinery, declared Fortune magazine in a 1946 cover story titled “Machines without Men,” would prove “immensely superior to the human mechanism,” not least because machines “are always satisfied with working conditions and never demand higher wages.” 43 An executive with Arthur D. Little, a leading management and engineering consultancy, wrote that the rise of automation heralded the business world’s “emancipation from human workers.” 44
In addition to reducing the need for laborers, particularly skilled ones, automated equipment provided business owners and managers with a technological means to control the speed and flow of production through the electronic programming of individual machines and entire assembly lines. When, at the Ford plants, control over the pace of the line shifted to the new automated equipment, the workers lost a great deal of autonomy. By the mid-1950s, the role of labor unions in charting factory operations was much diminished.45 The lesson would prove important: in an automated system, power concentrates with those who control the programming.
Wiener foresaw, with uncanny clarity, what would come next. The technologies of automation would advance far more rapidly than anyone had imagined. Computers would get faster and smaller. The speed and capacity of electronic communication and storage systems would increase exponentially. Sensors would see, hear, and feel the world with ever greater sensitivity. Robotic mechanisms would come “to replicate more nearly the functions of the human hand as supplemented by the human eye.” The cost to manufacture all the new devices and systems would plummet. The use of automation would become both possible and economical in ever more areas. And since computers could be programmed to carry out logical functions, automation’s reach would extend beyond the work of the hand and into the work of the mind—the realm of analysis, judgment, and decision making. A computerized machine didn’t have to act by manipulating material things like guns. It could act by manipulating information. “From this stage on, everything may go by machine,” Wiener wrote. “The machine plays no favorites between manual labor and white-collar labor.” It seemed obvious to him that automation would, sooner or later, create “an unemployment situation” that would make the calamity of the Great Depression “seem a pleasant joke.” 46
The Human Use of Human Beings was a best seller, as was Wiener’s earlier and much more technical treatise, Cybernetics, or Control and Communication in the Animal and the Machine. The mathematician’s unsettling analysis of technology’s trajectory became part of the intellectual texture of the 1950s. It inspired or informed many of the books and articles on automation that appeared during the decade, including Robert Hugh Macmillan’s slim volume. An aging Bertrand Russell, in a 1951 essay, “Are Human Beings Necessary?,” wrote that Wiener’s work made it clear that “we shall have to change some of the fundamental assumptions upon which the world has been run ever since civilization began.” 47 Wiener even makes a brief appearance as a forgotten prophet in Kurt Vonnegut’s first novel, the 1952 dystopian satire Player Piano, in which a young engineer’s rebellion against a rigidly automated world ends with an epic episode of machine-breaking.
THE IDEA of a robot invasion may have seemed threatening, if not apocalyptic, to a public already rattled by the bomb, but automation technologies were still in their infancy during the 1950s. Their ultimate consequences could be imagined, in speculative tracts and science-fiction fantasies, but those consequences were still a long way from being experienced. Through the 1960s, most automated machines continued to resemble the primitive robotic haulers on Ford’s postwar assembly lines. They were big, expensive, and none too bright. Most of them could perform only a single, repetitive function, adjusting their movements in response to a few elementary electronic commands: speed up, slow down; move left, move right; grasp, release. The machines were extraordinarily precise, but otherwise their talents were few. Toiling anonymously inside factories, often locked within cages to protect passersby from their mindless twists and jerks, they certainly didn’t look like they were about to take over the world. They seemed little more than very well-behaved and well-coordinated beasts of burden.
But robots and other automated systems had one big advantage over the purely mechanical contraptions that came before them. Because they ran on software, they could hitch a ride on the Moore’s Law Express. They could benefit from all the rapid advances—in processor speed, programming algorithms, storage and network capacity, interface design, and miniaturization—that came to characterize the progress of computers themselves. And that, as Wiener predicted, is what happened. Robots’ senses grew sharper; their brains, quicker and more supple; their conversations, more fluent; their ability to learn, more capacious. By the early 1970s, they were taking over production work that required flexibility and dexterity—cutting, welding, assembling. By the end of that decade, they were flying planes as well as building them. And then, freed from their physical embodiments and turned into the pure logic of code, they spread out into the business world through a multitude of specialized software applications. They entered the cerebral trades of the white-collar workforce, sometimes as replacements but far more often as assistants.