The other reason to worry about the wage question comes from a deeper examination of the narrative that many of us as technologists have been telling so far. When we say, “No, don’t worry about it. We’re not going to replace jobs; machines are going to complement what people do,” I think this is true: our own MGI analysis suggests that 60% of occupations will only have about a third of their activities automated by machines, which means people will be working alongside machines.
But if we examine this phenomenon with wages in mind, it’s not so clear-cut, because we know that when people are complemented by machines, you can have a range of outcomes. If a highly skilled worker is complemented by a machine, the machine does what it does best, and the human is still doing highly value-added work to complement the machine, that’s a great outcome: the wages for that work will probably go up, productivity will go up, and it will work out well all round.
However, we could also have the other end of the spectrum, where the person is being complemented by a machine that does only 30% of the work, but that 30% is the value-added portion. What’s left over for the human being is deskilled or less complex, and that can lead to lower wages, because many more people can now do tasks that previously required specialized skills or a certification. Introducing machines into that occupation could therefore put pressure on wages in that occupation.
This idea of complementing work has a wide range of potential outcomes, and we tend to celebrate only one end of that spectrum and not talk as much about the other, deskilled end. This, by the way, also increases the challenge of reskilling on an ongoing basis as people work alongside ever-evolving and increasingly capable machines.
MARTIN FORD: A good example of that is the impact of GPS on London taxi drivers.
JAMES MANYIKA: Yes, that’s a great example, where the skill that limited the supply of labor was really “the Knowledge” of all the streets and shortcuts in the minds of the London taxi drivers. When GPS systems devalue that skill, what’s left over is just the driving, and many more people can drive and get you from A to B.
Another example, an older form of deskilling, is call center operators. It used to be that your call center person actually had to know what they were talking about, often at a technical level, in order to be helpful to you. Today, however, organizations have embedded that knowledge into the scripts that operators read. What’s left over, for the most part, is someone who can read a script. They don’t really need to know the technical details, at least not as much as before; they just need to be able to follow the script, unless they hit a real corner case, at which point they can escalate to a deep expert.
There are many examples of service work and service technician work, whether it’s in call centers or people physically showing up to do on-site repairs, where portions of that work are going through this massive deskilling, because the knowledge is embedded in technology, scripts, or some other way of encapsulating what’s required to solve the problem. In the end, what’s left over is something much more deskilled.
MARTIN FORD: So, it sounds like overall, you’re more concerned about the impact on wages than outright unemployment?
JAMES MANYIKA: Of course you always worry about unemployment, because you can always have a corner-case scenario that plays out and results in game over as far as employment is concerned. But I worry more about these workforce transition issues, such as skill shifts, occupational shifts, and how we will support people through these transitions.
I also worry about the wage effects, unless we evolve how we value work in our labor markets. In a sense, this problem has been around for a while. We all say that we value people who look after our children, and that we value teachers, but we’ve never quite reflected that in the wage structures for those occupations. This discrepancy could soon get much bigger, because many of the occupations that are likely to grow are going to look like that.
MARTIN FORD: As you noted earlier, that can feed back into the consumer-demand problem, which in turn dampens productivity and growth.
JAMES MANYIKA: Absolutely. That would create a vicious cycle that further hurts demand for work. And we need to move quickly. Reskilling and on-the-job training are so important, first of all, because skills are changing rapidly, and people are going to need to adapt just as rapidly.
We already have a problem. We have pointed out in our research that if you look across most advanced economies, spending on on-the-job training has been declining for the last 20 to 30 years. Given that on-the-job training is going to be a big deal in the near future, this is a real issue.
The other measure you can look at is what is typically called “active labor-market supports.” These are separate from on-the-job training; they are the kind of support you provide workers when they’re being displaced, as they transition from one occupation to another. This is one of the things I think we screwed up in the last round of globalization.
With globalization, one can argue all day long about how it was great for productivity, economic growth, consumer choice, and products. All true, except when you look at globalization through the worker lens; then it’s problematic. What didn’t happen effectively was providing support for the workers who were displaced. Even though the pain of globalization was highly localized in specific places and sectors, it was still significant and really affected many real people and communities. If you and your 9 friends worked in apparel manufacturing in the US in 2000, a decade later only 3 of those 10 jobs still existed, and the same is true if you and your 9 friends worked in a textile mill. Take Webster County, Mississippi, where a third of jobs were lost because of what happened to apparel manufacturing, which was a major part of that community. We can say this will probably work out at an overall level, but that isn’t very comforting if you’re one of the workers in these particularly hard-hit communities.
If we say that we’re going to need to support workers, both those who have been and those who will be dislocated through these work transitions, as they move from one job to another, one occupation to another, or one skill set to another, then we’re starting from behind. So, the worker transition challenges are a really big deal.
MARTIN FORD: You’re making the point that we’re going to need to support workers, whether they’re unemployed or they’re transitioning. Do you think a universal basic income is potentially a good idea for doing that?
JAMES MANYIKA: I’m conflicted about the idea of universal basic income in the following sense. I like the fact that we’re discussing it, because it’s an acknowledgment that we may have a wage and income issue, and it’s provoking a debate in the world.
My issue with it is that I think it misses the wider role that work plays. Work is a complicated thing: while it provides income, it also does a whole bunch of other stuff. It provides meaning, dignity, self-respect, purpose, community and social effects, and more. Moving to a UBI-based society may solve the wage question, but it won’t necessarily address these other aspects of what work brings. And I think we should remember that there will still be lots of work to be done.
One of the quotes that really sticks with me, and that I find quite fascinating, is from President Lyndon B. Johnson’s Blue-Ribbon Commission on “Technology, Automation, and Economic Progress,” which incidentally included Bob Solow. One of the report’s conclusions is that “The basic fact is that technology eliminates jobs, not work.”
MARTIN FORD: There’s always work to be done, but it might not be valued by the labor market.
JAMES MANYIKA: It doesn’t always show up in our labor markets. Just think about care work, which in most societies tends to be done by women and is often unpaid. How do we reflect the value of that care work in our labor markets and discussions on wages and incomes? The work will be there. It’s just whether it’s paid work, or recognized as work, and compensated in that way.
I like the fact that UBI is provoking the conversation about wages and income, but I’m not sure it solves the work question as effectively as other approaches might. I prefer concepts like conditional transfers, or some other way to make sure that we link wages to some kind of activity that reflects initiative, purpose, dignity, and other important factors. These questions of purpose, meaning, and dignity may in the end be what defines us.
JAMES MANYIKA is a senior partner at McKinsey & Company and chairman of the McKinsey Global Institute (MGI). James also serves on McKinsey’s Board of Directors. Based in Silicon Valley for over 20 years, James has worked with the chief executives and founders of many of the world’s leading technology companies on a variety of issues. At MGI, James has led research on technology and the digital economy, as well as on growth, productivity, and globalization. He has published a book on AI and robotics, another on global economic trends, as well as numerous articles and reports that have appeared in business media and academic journals.
James was appointed by President Obama as vice chair of the Global Development Council at the White House (2012-16) and by Commerce Secretaries to the US Commerce Department’s Digital Economy Board of Advisors and the National Innovation Advisory Board. He serves on the boards of the Council on Foreign Relations, John D. and Catherine T. MacArthur Foundation, Hewlett Foundation, and Markle Foundation.
He also serves on academic advisory boards, including those of the Oxford Internet Institute and MIT’s Initiative on the Digital Economy. He is on the standing committee for the Stanford-based 100 Year Study on Artificial Intelligence, a member of the AIIndex.org team, and a fellow at DeepMind.
James was on the engineering faculty at Oxford University and a member of the Programming Research Group and the Robotics Research Lab, a fellow of Balliol College, Oxford, a visiting scientist at NASA Jet Propulsion Labs, and a faculty exchange fellow at MIT. A Rhodes Scholar, James received his DPhil, MSc, and MA from Oxford in Robotics, Mathematics, and Computer Science, and a BSc in electrical engineering from University of Zimbabwe as an Anglo-American scholar.
Chapter 14. GARY MARCUS
It’s not clear to me that you get to the accuracy levels you need for driving in Manhattan simply by adding more data to these big data-driven systems. You might get to 99.99% accuracy, but if you do the numbers on that, that’s much worse than humans.
FOUNDER AND CEO, GEOMETRIC INTELLIGENCE (ACQUIRED BY UBER)
PROFESSOR OF PSYCHOLOGY AND NEURAL SCIENCE, NYU
Gary Marcus was Founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber, and is a professor of psychology and neural science at New York University, as well as the author and editor of several books, including The Future of the Brain and the bestseller Guitar Zero. Much of Gary’s research has focused on understanding how children learn and assimilate language. His current work is on how insights from the human mind can inform the field of artificial intelligence.
MARTIN FORD: You wrote a book, Kluge, about how the brain is an imperfect organ; presumably, then, you don’t think the route to AGI is to try to perfectly copy the human brain?
GARY MARCUS: No, we don’t need to replicate the human brain and all of its inefficiencies. There are some things that people do much better than current machines, and you want to learn from those, but there are lots of things you don’t want to copy.
I’m not committed to how much like a person an AGI system will look. However, humans are currently the only system that we know of that can make inferences and plans over very broad ranges of data and discuss them in a very efficient way, so it pays to look into how people are doing that.
The first book that I wrote, published in 2001, was titled The Algebraic Mind, and it compared neural networks with humans. I explored what it would take to make neural networks better, and I think those arguments are still very relevant today.
The next book I wrote was called The Birth of the Mind, and was about understanding how genes can build the innate structures in our mind. It comes from the Noam Chomsky and Steven Pinker tradition of believing that there are important things built into the mind. In the book, I tried to understand what innateness might mean in terms of molecular biology and developmental neuroscience. Again, I think the ideas there are quite relevant today.
In 2008 I published Kluge: The Haphazard Evolution of the Human Mind. For those who may not know, “kluge” is an old engineer’s term for a clumsy solution to a problem. In that book, I argued that in many ways the human mind is actually something like that. I examined discussions about whether humans are optimal (I think they’re clearly not) and tried to understand from an evolutionary perspective why we’re not optimal.
MARTIN FORD: That’s because evolution has to work from an existing framework and build from there, right? It can’t go back and redesign everything from scratch.
GARY MARCUS: Exactly. A lot of the book was about our memory structure, and how that compares to other systems. For example, when you compare our auditory systems to what’s theoretically possible, we come very close to optimal. If you compare our eyes to the theoretical optimum, we’re again close—given the right conditions, you can see a single photon of light, and that’s amazing. Our memory, however, is not optimal.
You could very quickly upload the complete works of Shakespeare to a computer, or in fact most of what has ever been written, and the computer won’t forget any of it. Our memories are nowhere near theoretically optimal, either in their capacity or in the stability of the memories we store. Our memories tend to blur together over time. If you park in the same lot every day, you can’t remember where you parked today, because you can’t keep today’s memory distinct from yesterday’s memory. A computer would never have trouble with that.
The argument I made in the book was that we could examine and understand why humans have such crummy memories in terms of what our ancestors needed from their memory. What they needed were mostly broad statistical summaries, like: “there’s more food up the mountain than down the mountain.” I don’t need to remember which individual days those memory traces came from; I just need the general trend that it’s more fertile up the mountain than down the mountain.
Vertebrates evolved that kind of memory, instead of what computers use, which is location-addressable memory, where every single piece of information is assigned a particular, stable location. That’s what allows you to store essentially unlimited information on a computer without blurring things together. Humans went down a different path in the evolutionary chain, and it would be very costly, in terms of the number of genes we would need to change, to rebuild the system from scratch around location-addressable memory.
It’s actually possible to build hybrids. Google is a hybrid, as it has location-addressable memory underneath and then cue-addressable memory, which is what we have, on top. That’s a much better system. Google can take reminder cues as we can, but then it has a master map of where everything is, so it serves up the right answer instead of arbitrarily distorting the answer.
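To make that distinction concrete, here is a minimal sketch in Python. It is my illustration rather than anything from the interview, and all the names (HybridMemory, remember, recall_by_cue) are hypothetical. It shows a location-addressable store underneath with a cue-based index layered on top, roughly in the spirit of the Google hybrid described above.

```python
class HybridMemory:
    """Sketch of a hybrid memory: a location-addressable store underneath,
    with a cue-based index on top (illustrative only)."""

    def __init__(self):
        self._store = []      # location-addressable: stable integer addresses
        self._cue_index = {}  # cue -> set of addresses, like a search index

    def remember(self, fact, cues):
        address = len(self._store)  # each memory gets a permanent address
        self._store.append(fact)
        for cue in cues:
            self._cue_index.setdefault(cue, set()).add(address)
        return address

    def recall_by_address(self, address):
        # Exact and stable: what goes into "box 972" stays in box 972.
        return self._store[address]

    def recall_by_cue(self, cue):
        # Cue-based lookup, but every hit is served from its exact
        # location, so nothing blurs or gets distorted on the way back.
        return [self._store[a] for a in sorted(self._cue_index.get(cue, set()))]


memory = HybridMemory()
memory.remember("parked on level 2, row F", {"car", "parking", "monday"})
memory.remember("parked on level 3, row A", {"car", "parking", "tuesday"})
print(memory.recall_by_cue("tuesday"))  # ['parked on level 3, row A']
```

Because the cue index resolves to exact addresses, this design gets both properties at once: human-style retrieval by reminder, and computer-style fidelity in what comes back.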
MARTIN FORD: Could you explain that in more detail?
GARY MARCUS: Cue-addressable memory is where memories are triggered or aided by other factors. There are crazy versions of this, like posture-dependent memory, where if you learned something standing up, you’ll remember it better if you try to recall it standing up than if you’re lying down. The most notorious one is state-dependent memory. For example, if you study for an exam while you’re stoned, you might actually be better off being stoned when you take the exam. I don’t suggest doing that; the point is that the state and the cues around you influence what you remember.
On the other hand, you can’t say, “I want memory location 317” or “the thing I learned on March 17, 1997.” As a human, you can’t pull things out the way a computer could. A computer has these indexes that are actually like a set of post-office boxes, and what is put in box number 972 stays there indefinitely, unless you deliberately tamper with it.
It doesn’t even appear that our brain has a handle on this. The brain does not have an internal addressing system to know where individual memories are stored. Instead, it seems like the brain does something more like an auction. It says, “Is there anything out there that can give me information about what I should be doing in a car on a sunny day?” What you get back is a set of relevant memories without knowing, at least consciously, where they are physically stored in the brain.
The problem is that sometimes they blur together, and that leads, for example, to problems with eyewitness testimony. You can’t actually keep the memory of what happened at a particular moment separate from what you thought about later, or what you saw on television or read in the newspaper. All these things blur together because they’re not distinctly stored.
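By contrast, a purely cue-addressable memory with no master map of locations blurs similar episodes together. The short Python sketch below, again my own hypothetical illustration, mimics the “auction” described above: every stored trace that shares a cue bids on the answer, so the habitual parking spot outbids today’s actual one.

```python
from collections import Counter

class CueOnlyMemory:
    """Sketch of a purely cue-addressable memory: traces have no addresses,
    so recall blends every trace that matches the cues (illustrative only)."""

    def __init__(self):
        self._traces = []  # (cues, detail) pairs, stored with no addresses

    def remember(self, cues, detail):
        self._traces.append((set(cues), detail))

    def recall(self, cues):
        # The "auction": every trace matching any cue bids on the answer,
        # weighted by how many cues it shares with the query.
        cues = set(cues)
        bids = Counter()
        for trace_cues, detail in self._traces:
            overlap = len(cues & trace_cues)
            if overlap:
                bids[detail] += overlap
        return bids.most_common()


memory = CueOnlyMemory()
for day in ["mon", "tue", "wed", "thu"]:
    memory.remember({"car", "parking", day}, "row F")  # the usual spot
memory.remember({"car", "parking", "fri"}, "row A")    # today's spot
print(memory.recall({"car", "parking"}))  # [('row F', 8), ('row A', 2)]
```

The broad statistical trend (“usually row F”) dominates the specific episode, which is exactly the behavior described here: useful for remembering where the food is, bad for eyewitness testimony.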
MARTIN FORD: That’s interesting.
GARY MARCUS: The first central claim of my book Kluge was that there are basically two kinds of memory and that humans got stuck with the one that’s less useful. I further argued that once we have that in our evolutionary history, it becomes astonishingly unlikely that you’re going to start from scratch, so you just build on top of that. This is like Stephen Jay Gould’s famous arguments about the panda’s thumb.