
Don't Be Evil


by Rana Foroohar


  This is a problem that affects not just sharing-economy companies and their workers, but a host of others, on- and offline, that use technology to monitor and control labor in ever more invasive ways. Amazon is well-known for its atrocious treatment of workers in its warehouses, which were included on the National Council for Occupational Safety and Health’s list of most dangerous places to work in the United States in 2018. Many Amazon workers report higher than average levels of stress and health problems as a result of constant digital monitoring.13 An investigation by The Guardian found accident and injury reports to be common. In one case, The Guardian reports, an injured worker was fired before the company would authorize medical treatment. Other injured workers were reportedly denied workers’ comp, or had their medical leave cut short—the predictable result of a management style that treats workers less like humans than robots.14

  I once interviewed a well-known AI scientist, Vivienne Ming, who was offered a job as chief scientist by Amazon, one that she turned down for exactly these reasons. Jeff Bezos apparently told Ming he wanted to hire her with the aim of running real-time experiments on how “technology could make people’s lives better,” she recalled. “What I decided in the end is that Jeff and I had different definitions of better … pretty profoundly different.” How so? “I’ll give an example,” Ming said, and proceeded to tell me about a patent the company had recently taken out on a little wristband that factory workers would wear, one that would buzz if they started reaching for the wrong package. “I’m thinking, I would never want to build that sort of thing!”15

  Even Starbucks, a company that’s been lauded for its treatment of workers (giving part-timers things like health insurance and offering to pay for online college education for all employees), has been dinged for its use of algorithmic scheduling software, which can wreak havoc on lives by forcing workers to be on call whenever store traffic grows, rather than being given set weekly or monthly schedules that they can work their lives around. After New York Times reporter Jodi Kantor did a front-page exposé about the topic in 2014, then-chairman Howard Schultz was forced to apologize and promise to clean up the company’s scheduling system.16 Yet, at Starbucks and, alas, at most other retailers, algorithmic scheduling has become the norm—just like “surge” pricing at Uber or Lyft.

  Clearly, the advent of the high-tech gig economy means different things to different kinds of workers. For the Uber driver or the delivery person, it may feel like a kind of neo-serfdom. They get no pension, health insurance, or worker-rights protection, and work at the mercy of metrics. Many of the drivers profiled in Rosenblat’s book struggle to make much more than minimum wage, after paying for their car, their gas, maintenance, self-employment taxes, and so on. Certainly, in my own interviews with Uber drivers, I’ve found that most see a tight trade-off between the benefits of their theoretical freedom and the fact that always-on technology can actually mean less flexibility than they might have in a higher-quality job. Many of the best-paying rides come in places and at times that may be inconvenient or stressful for them, and if they don’t accept they don’t get paid. Certainly, most don’t share in the equity value of the company that they have helped create.

  The result is that for a huge number of low-level workers (who represent the bulk of the gig economy), “you’ve got a labor market that looks increasingly like a feudal agricultural hiring fair in which the lord shows up and says, ‘I’ll take you, and you, and you today,’ ” says Adair Turner, chairman of the Institute for New Economic Thinking, one of many nonprofit groups studying the effect of companies like Uber on local economies.

  Turner’s conclusion, which mirrors that of a growing number of economists, is that the gig economy reduces friction in labor markets, meaning it solves a real need and creates convenience, but it also creates fragmentation that tends to work better for employers, who can leverage superior technology and information, than for workers. The fact that all the data is owned by Uber, and not the driver, and that drivers can’t see any of it, also creates a huge information asymmetry between workers and the company, as a study done by another nonprofit group, the Data & Society Research Institute, found.

  Drivers risk “deactivation” for canceling unprofitable fares and absorb the risk of unknown fares, “even though Uber promotes the idea that they are entrepreneurs who are knowingly investing in such risk.” These “entrepreneurial consumers,”17 as the Federal Trade Commission has described Uber drivers (buying into the company’s own language, which designates drivers as consumers of Uber’s value rather than producers of it), have no access to the wealth of data on consumers that allows the company itself to make such incredible profits. As Rosenblat points out, this asymmetry is similar to that enjoyed by other Big Tech firms, like Amazon, which can steer customers to more costly products through rankings, or Google, which promotes itself as a neutral arbiter of information even as PageRank’s algorithms remain inside the black box, with any biases that might be present known only to the company itself.18

  These strategies allow only “the fantasy that there are no more issues of power in the workplace,” said AFL-CIO policy director Damon Silvers in a Harvard Business School podcast on the future of work. “In reality, companies like Uber know more about their employees, and have a tighter grip on their behavior, than any steel or auto company ever did. In the absence of workers having collective power, digital technology, AI, and cheap surveillance technology will combine to make information advantages that accrue to employers…at a scale and intensity we’ve never seen before.”19

  Superstars Take All

  There’s no question that low-level gig workers—from handymen to yoga instructors to childcare providers—get the short end of the stick in the digital economy. But for highly educated professionals, the digital gig economy is pure upside: a way to earn more money in less time, in ever more flexible ways. Consider the life of a freelance management consultant. He or she may charge $10,000 a day per client, using cloud computing, smartphones, social networking platforms, and video conferencing to work anywhere, anytime, which makes it easy to earn a high six- or even seven-figure salary. Those same technologies reduce operational costs to pretty close to zero, given that the price of a virtual assistant based in India is negligible for this new cadre of high-end freelance worker, and the fact that he or she can work from home, or cheaply rent work spaces via membership schemes in companies like WeWork.

  The digital gig economy, it turns out, is no less bifurcated than the analog one. This is concerning, given that a spate of new research by various organizations, from McKinsey to the Organization for Economic Co-operation and Development, points to the fact that in the next ten to twenty years, the number of people working as freelancers, independent contractors, or part-time for multiple employers will increase dramatically. In the United States, 35 percent of the labor force is already working this way. If “freelance nation” is the future, the divides in this new world will only further exacerbate the winner-takes-all trend that has driven the polarized politics of the moment.

  The digital economy more broadly has already widened the gap between the haves and the have-nots, the winners being those who have the ability to access, control, and leverage technology—which is, in and of itself, connected to education, or, in other words, to money and class. As Harvard academics Claudia Goldin and Lawrence Katz outlined in their book The Race Between Education and Technology, technological advancements lift all boats only when people have the skills and access to utilize those advancements.20

  The networking platforms and software of this new digital economy are resulting in cheaper prices for consumers, cost reductions for employers, and higher wages for the most skilled and educated workers, who can do more highly paid work in less time. But they have also contributed to the concentration of wealth in fewer hands, in part because there is a large body of less-educated people who are left at the mercy of technology—and those who leverage it.21

  “Think of, say, how a top surgeon using cutting-edge video conferencing technology might now be able to do more consultations in many different countries with a wide variety of clients,” says James Manyika, director of the McKinsey Global Institute. “Compare that to a retail service worker whose life has been made chaotic by scheduling software that constantly changes his or her hours.”22

  It’s a powerful narrative that isn’t altogether new. In 1981, economist Sherwin Rosen published the paper “The Economics of Superstars,” which argued that technological disruptions gave disproportionate power to a few players in any given market. Television, for example, made it possible for the world’s highest paid athletes and pop stars to earn exponentially more than others in their fields. Rosen predicted that the rise of superstars would be bad for the bottom line of everyone else, and he was all too right.23 Today, labor’s share of the pie is at its lowest point in half a century. But Silicon Valley companies like Uber, Google, Apple, Facebook, and Amazon—as well as their top-tier workers—are enjoying the superstar effect in spades.24

  This divide has a massive and underexplored impact not just on individual gig workers, but on the economy at large. Many economists believe that one reason wage growth remains relatively flat is job-disrupting technology itself. Rob Kaplan, the head of the Dallas Fed, believes that technology—and in particular its penetration more broadly and deeply into non-tech industries—is a key reason we haven’t seen wages rising, despite unemployment being near pre–financial crisis lows. What’s more, he believes that the Trump corporate tax cuts only exacerbated the trend, as companies incentivized to spend capital on long-term investments put that capital into technology, not people.

  “I do about thirty to thirty-five CEO calls with people in and out of the tech sector each month, and it’s all about how non-tech firms are implementing technology [in the place of people].” Kaplan believes that we are going to see call centers, airline baggage handlers, reservations agents, and even car dealers replaced by technology in the near future.25

  The numbers are proving him right. Back in 1998, toward the end of the previous economic expansion, 48.3 percent of business investment went to new structures and industrial equipment (things like factories, machinery, and other brick-and-mortar infrastructure), and about 30 percent went into technology, such as information processing equipment and various types of intellectual property, according to data compiled by Daniel Alpert of Westwood Capital. In 2018, only 28.6 percent of all new investment went to structures and industrial equipment, while technology and intellectual property made up 52 percent.

  The difference highlights the shift away from physical investments and toward intangible ones—a trend we see not just in the United States, but also in other wealthy countries like the United Kingdom and Sweden, where investment into intangible assets now exceeds that of tangible ones. The problem is that new factories and machinery tend to create jobs, whereas investments in data processing equipment and software upgrades, which make up a big chunk of current tech-related spending, tend to be job killing, at least in the short term. That can change once workers are able to use the technology to increase their own productivity, as we’ve already learned. But such an outcome is possible only if education and skill levels keep ahead of the pace of technological change. Sadly, in the United States, education is falling woefully behind the digital revolution.26

  There are a few sectors, such as finance and information technology, that have seen wages grow. Yet they create relatively few jobs. Finance, for example, takes 25 percent of all corporate profits while creating only 4 percent of jobs. And while half of all American businesses that generate profits of 25 percent or more are tech companies, the tech giants of today—Facebook, Google, Amazon—create far fewer jobs than the big industrial groups of the past, like General Motors and General Electric, and fewer even than the previous generation of tech companies, such as IBM and Microsoft.

  Then there is the growing fear of white-collar job destruction at the hands of Big Tech. A recent study of global executives found that the majority believed they would be retraining or laying off two-thirds of their workforces in the future thanks to digital disruption.

  “I think the global professional middle class is about to be blindsided,” says Vivienne Ming, the AI expert I interviewed on the topic in 2018. Ming cites a recent competition at Columbia University between human lawyers and their artificial counterparts to see which group could spot the most loopholes in a series of nondisclosure agreements. “The AI found ninety-five percent of them, and the humans eighty-eight percent,” she says. “But the human took ninety minutes to read them. The AI took twenty-two seconds.” Game, set, and match to the robots. All of this is one reason why Ming is working with firms such as Accenture to figure out how they can retrain staff to do more creative jobs—the kind that incorporate human emotional intelligence with machine IQ—so that they won’t have to lay off hundreds of thousands of accountants, back-office sales staff, and even lower-level programmers in the future.27

  * * *

  —

  MEANWHILE, THE EVER-WIDENING gap between the winners and losers is reflected in employee pay as well. Consider that the most profitable 10 percent of U.S. businesses are eight times more profitable than the average company. (In the 1990s, that multiple was just three.) Workers in those super-profitable businesses are paid extremely well, but their competitors cannot offer anywhere near the same packages. Research from the Bonn-based Institute of Labor Economics shows that pay differences between—not within—companies are a major factor in the disparity in worker pay. Another piece of research, from the Centre for Economic Performance in London, shows that this pay differential between top-tier companies and everyone else is responsible for the vast majority of inequality in the United States.

  Unsurprisingly, these top sectors and top businesses that take so much of the economic pie tend to be the ones that are the most digitized. As the McKinsey Global Institute’s analysis of the haves and have-mores in digital America shows, industries that adopt more technology quickly are more profitable. Tech and finance sit at the top of that chart, whereas sectors that actually create the most jobs—such as retail, education, and government—remain woefully behind. That means you end up with a two-tiered economy: a top level that’s very productive, takes the majority of wealth, and creates few jobs, and a bottom one that stagnates.28

  There are large digital divides along geographic lines as well, which further exacerbate the winner-takes-all trend. For companies to exploit a more entrepreneurial, digitalized economy—whatever sector they operate in—they need access to high-speed broadband, which is three times as common in urban areas as in rural ones. There are even big gaps within individual cities. In New York, for example, 80 percent of residents in affluent Manhattan have access to broadband, while only 65 percent of residents in the poorer borough of the Bronx do.29 The result is a concentration of superstar companies—creating new jobs for superstar workers—in a handful of highly connected cities. Indeed, a 2016 report by the Economic Innovation Group revealed that seventy-five of America’s three-thousand-plus counties accounted for 50 percent of all new job growth. It’s a trend that snowballs, as the most talented job seekers are attracted to a handful of cities, driving up property prices and making it tougher for anyone who isn’t part of the superstar club to get in the door. This, of course, aggravates the rich-poor divide that is at the heart of partisan politics in the United States, and any number of other countries.30

  To understand the impact of all this, one has only to visit tech hotbeds like San Francisco or Seattle (or, overseas, places like Tel Aviv, Israel, or Shenzhen, China) and see not only spiraling home prices but the equally spiraling problems of homelessness. The one thing you won’t see, however, is average middle-class Americans, given that the basics of a middle-class life—housing, healthcare, and retirement savings—are no longer affordable on a middle-class income, thanks to the hordes of paper millionaires created by the tech firms, who increasingly run roughshod over local governments themselves. In Seattle, for example, the city council had proposed imposing a modest $500 per employee tax on local businesses to help address the city’s growing homelessness epidemic. But businesses like Starbucks and Amazon complained, and the tax was promptly dropped to $275.31 And in San Francisco, tech billionaires Jack Dorsey of Twitter, Stripe cofounder Patrick Collison, and Zynga founder Mark Pincus fought tooth and nail against a 2018 ballot proposition to tax companies with revenues of over $50 million a mere 0.5 percent in order to fund local housing and homelessness services. (The measure passed, and was subsequently challenged in court.)

  This was brought into focus in 2018 by Amazon’s well-publicized search for a second headquarters, which it had to undertake since its own growth was making further expansion in Seattle impossible: Even the company’s own workers were finding the inflated prices and unrelenting traffic unbearable. The initial winners were New York and Washington, D.C., which Amazon claimed to have picked on the basis of metrics like the quality of infrastructure, human capital, and transport, even though it rejected many cities that scored as well or better in some of those areas. The short list was heavy on locations represented by high-ranking U.S. senators and bids that included billions of dollars in tax credits and other subsidies (both of which NYC and D.C. offered).

  Politicians sold the deal, which was contentious in both places, to constituents based on the narrative that Amazon is a huge job creator. But New Yorkers weren’t buying it, and not without reason; research shows that communities that offer subsidies to lure big headquarters may see positive headlines and short-term gains, but the end result from an economic standpoint is almost always negative. One recent study found that 70 percent of such city subsidies fall into the category of property tax breaks and job creation tax credits, which means that the big companies pay less for their real estate, but human capital is undermined, because property taxes fund schools and various public services. In other words, the employers that demand skilled workers and good infrastructure are degrading the tax base that creates them. Yet such subsidies have tripled since the 1990s, which leaves states less prepared for economic downturns than they have been in years, thanks to growing municipal debt. Ultimately, it was public outrage over the amount of subsidies being paid by New York City to Amazon that killed the deal; following a spate of public protests, Jeff Bezos decided to pull the HQ offer and leave the city.

 
