Architects of Intelligence
There are several critical questions like this that still need a fair amount of technical work and where we must make progress, rather than everybody simply rushing ahead and focusing on the upsides of applications for business and economic benefit.
The silver lining of all this is that groups and entities are emerging and starting to work on many of these challenges. A great example is the Partnership on AI. If you look at the agenda for the Partnership, you’ll see that a lot of these questions are being examined: questions about bias, about safety, and about these kinds of existential threats. Another great example is the work that Sam Altman, Jack Clark and others at OpenAI are doing, which aims to make sure all of society benefits from AI.
Right now, the entities and groups that are making the most progress on these questions have tended to be places that have been able to attract the AI superstars, which, even in 2018, is a relatively small group. That will hopefully diffuse over time. We’ve also seen relative concentrations of talent go to places that have massive computing power and capacity, as well as places that have unique access to lots of data, because we know these techniques benefit from those resources. The question is, in a world in which there’s a tendency for more progress to go to where the superstars are, where the data is available, and where the computing capacity is available, how do you make sure this continues to be widely available to everybody?
MARTIN FORD: What do you think about the existential concerns? Elon Musk and Nick Bostrom talk about the control problem or the alignment problem. One scenario is where we could have a fast takeoff with recursive improvement, and then we’ve got a superintelligent machine that gets away from us. Is that something we should be worried about at this point?
JAMES MANYIKA: Yes, somebody should be worrying about those questions—but not everybody, partly because I think the time frame for a superintelligent machine is so far away, and because the probability of that is fairly low. But again, in a Pascal’s-wager-like sense, somebody should be thinking about those questions; I just wouldn’t get society all whipped up about the existential questions, at least not yet.
I like the fact that a smart philosopher like Nick Bostrom is thinking about it; I just don’t think that it should be a huge concern for society as a whole just yet.
MARTIN FORD: That’s also my thinking. If a few think tanks want to focus on these concerns, that seems like a great idea. But it would be hard to justify investing massive governmental resources at this point. And we probably wouldn’t want politicians delving into this stuff in any case.
JAMES MANYIKA: No, it shouldn’t be a political issue, but I also disagree with people who say that there is zero probability of this happening and that no one should worry about it.
The vast majority of us shouldn’t be worried about it. I think we should be more worried about the more specific questions that are here now, such as safety, use and misuse, explainability, bias, and the questions around economic and workforce effects and the related transitions. Those are the bigger, more real questions that are going to impact society beginning now and running over the next few decades.
MARTIN FORD: In terms of those concerns, do you think there’s a place for regulation? Should governments step in and regulate certain aspects of AI, or should we rely on industry to figure it out for themselves?
JAMES MANYIKA: I don’t know what form regulation should take, but somebody should be thinking about regulation in this new environment. I don’t think we’ve got any of the tools or any of the right regulatory frameworks in place right now.
So, my simple answer would be yes, somebody should be thinking about what the regulation of AI should look like. But I think the regulation shouldn’t start with the view that its goal is to stop AI and put the lid back on Pandora’s box, or to hold back the deployment of these technologies and try to turn the clock back.
I think that would be misguided because first of all, the genie is out of the bottle; but also, more importantly, there’s enormous societal and economic benefit from these technologies. We can talk more about our overall productivity challenge, which is something these AI systems can help with. We also have societal “moonshot” challenges that AI systems can help with.
So, if regulation is intended to slow things down or stop the development of AI, then I think that’s wrong; but if regulation is intended to address questions of safety, questions of privacy, questions of transparency, and questions around the wide availability of these techniques so that everybody can benefit from them, then I think those are the right things for AI regulation to focus on.
MARTIN FORD: Let’s move on to the economic and business aspects of this. I know the McKinsey Global Institute has put out several important reports on the impact of AI on work and labor.
I’ve written quite a lot on this, and my last book makes the argument that we’re really on the leading edge of a major disruption that could have a huge impact on labor markets. What’s your view? I know there are quite a few economists who feel this issue is being overhyped.
JAMES MANYIKA: No, it is not overhyped. I think we’re on the cusp of a new industrial revolution. These technologies are going to have an enormous, transformative and positive impact on businesses, because of their efficiency, their impact on innovation, their impact on being able to make predictions and to find new solutions to problems, and in some cases their ability to go beyond human cognitive capabilities. To me, based on our research at MGI, the impact of AI on business is undoubtedly positive.
The impact on the economy is also going to be quite transformational, mostly because this is going to lead to productivity gains, and productivity is the engine of economic growth. This will all take place at a time when aging and other effects will create headwinds for economic growth. AI and automation systems, along with other technologies, are going to have this transformational and much-needed effect on productivity, which in the long term leads to economic growth. These systems can also significantly accelerate innovation and R&D, which leads to new products, services and even business models that will transform the economy.
I’m also quite positive about the impact on society, in the sense of being able to tackle the societal “moonshot” challenges I hinted at before. This could be a new project or application that yields new insights into a societal challenge, proposes a radical solution, or leads to the development of a breakthrough technology. It could be in healthcare, climate science, humanitarian crises or the discovery of new materials. This is another area my colleagues and I are researching, and it’s clear that AI techniques, from image classification to natural language processing and object identification, can make a big contribution in many of these domains.
Having said all of that, if you say AI is good for business, good for economic growth, and helps tackle societal moonshots, then the big question is—what about work? I think this is a much more mixed and complicated story. But I think if I were to summarize my thoughts about jobs, I would say there will be jobs lost, but also jobs gained.
MARTIN FORD: So, you believe the net impact will be positive, even though a lot of jobs will be lost?
JAMES MANYIKA: While there will be jobs lost, there’ll also be jobs gained. On the “jobs gained” side of the story, jobs will come from the economic growth itself, and from the resulting dynamism. There’s always going to be demand for work, and there are mechanisms, through productivity and economic growth, that lead to the growth of jobs and the creation of new jobs. In addition, there are multiple drivers of demand for work that are relatively assured in the near to mid term; these include, again, rising prosperity around the world as more people enter the consuming class, and so on. Another thing that will occur is something called “jobs changed,” and that’s because these technologies are going to complement work in lots of interesting ways, even when they don’t fully replace the people doing that work.
We’ve seen versions of these three ideas of jobs lost, jobs gained, and jobs changed before with previous eras of automation. The real debate is, what are the relative magnitudes of all those things, and where do we end up? Are we going to have more jobs lost than jobs gained? That’s an interesting debate.
Our research at MGI suggests that we will come out ahead, that there will be more jobs gained than jobs lost; this, of course, is based on a set of assumptions around a few key factors. Because it’s impossible to make precise predictions, we have developed scenarios around the multiple factors involved, and in our midpoint scenarios we come out ahead. The interesting question is, even in a world with enough jobs, what will be the key workforce issues to grapple with, including the effect on things like wages, and the workforce transitions involved? The jobs and wages picture is more complicated than the effect on business and the economy, in terms of growth, which, as I said, is clearly positive.
MARTIN FORD: Before we talk about jobs and wages, let me focus on your first point: the positive impact on business. If I were an economist, I would immediately point out that if you look at the productivity figures recently, they’re really not that great—we are not seeing any increases in productivity yet in terms of the macro-economic data. In fact, productivity has been pretty underwhelming, relative to other periods. Are you arguing that there’s just a lag before things will take off?
JAMES MANYIKA: We at MGI recently put out a report on this. There are a lot of reasons why productivity growth is sluggish, one reason being that in the last 10 years we’ve had the lowest capital intensity period in about 70 years.
We know that capital investment and capital intensity are among the things you need to drive productivity growth. We also know the critical role of demand—most economists, including here at MGI, have often looked at the supply side of productivity, and not as much at the demand side. We know that when you’ve got a huge slowdown in demand, you can be as efficient as you want in production, and measured productivity still won’t be great. That’s because the productivity measure has a numerator and a denominator: the numerator involves growth in value-added output, which requires that output is actually being soaked up by demand. So, if demand is lagging for whatever reason, that hurts growth in output, which brings down productivity growth, regardless of what technological advances there may have been.
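To make the numerator-and-denominator point concrete, here is a minimal illustrative sketch of the mechanism; the growth figures are invented for the example and are not MGI estimates:

```python
# Illustrative only: labor productivity measured as value-added output per unit
# of labor input. The growth rates below are invented to show the mechanism,
# not MGI estimates.

def productivity_growth(output_growth: float, labor_growth: float) -> float:
    """Approximate productivity growth as output growth minus labor-input growth."""
    return output_growth - labor_growth

# Same production efficiency in both cases, but weak demand soaks up less output:
strong_demand = productivity_growth(output_growth=0.030, labor_growth=0.010)
weak_demand = productivity_growth(output_growth=0.015, labor_growth=0.010)

print(f"Measured productivity growth with strong demand: {strong_demand:.1%}")
print(f"Measured productivity growth with weak demand:   {weak_demand:.1%}")
```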
MARTIN FORD: That’s an important point. If advancing technology increases inequality and holds down wages, so it effectively takes money out of the pockets of average consumers, then that could dampen down demand further.
JAMES MANYIKA: Oh, absolutely. The demand point is absolutely critical, especially when you’ve got advanced economies, where anywhere between 55% and 70% of the demand in those economies is driven by consumer and household spending. You need people earning enough to be able to consume the output of everything being produced. Demand is a big part of the story, but I think there is also the technology lag story that you mentioned.
To your original question, between 1999 and 2003 I had the pleasure of working with one of the academic advisors of the McKinsey Global Institute, Bob Solow, the Nobel laureate. We were looking at the last productivity paradox back in the late 1990s. In the late ‘80s, Bob had made the observation that became known as the Solow Paradox: that you could see computers everywhere except in the productivity numbers. That paradox was finally resolved in the late ‘90s, when we had enough demand to drive productivity growth, but more importantly, when very large sectors of the economy—retail, wholesale, and others—finally adopted the technologies of the day: client-server architectures and ERP systems. This transformed their business processes and drove productivity growth in very large sectors of the economy, which finally had a big enough effect to move the national productivity needle.
Now if you fast-forward to where we are today, we may be seeing something similar. If you look at the current wave of digital technologies, whether we’re talking about cloud computing, e-commerce, or electronic payments, we can see them everywhere, we all carry them in our pockets, and yet productivity growth has been very sluggish for several years now. But if you actually systematically measure how digitized the economy is today, looking at the current wave of digital technologies, the surprising answer is: not so much, actually, in terms of assets, processes, and how people work with technology. And these assessments of digitization don’t even include AI or the next wave of technologies yet.
What you find is that the most digitized sectors—on a relative basis—are sectors like the tech sector itself, media and maybe financial services. And those sectors are actually relatively small in the grand scheme of things, measuring as a share of GDP or as a share of employment, whereas the very large sectors are, relatively speaking, not that digitized.
Take a sector like retail and keep in mind that retail is one of the largest sectors. We all get excited by the prospect of e-commerce and what Amazon is doing. But the amount of retail that is done through e-commerce is only about 10%, and Amazon is a large portion of that 10%. Retail is a very large sector with many, many small- and medium-sized businesses. That already tells you that even in retail, one of the large sectors we’d think of as highly digitized, we really haven’t made much widespread progress yet.
So, we may be going through another round of the Solow paradox. Until we get these very large sectors highly digitized and using these technologies across business processes, we won’t see enough to move the national needle on productivity.
MARTIN FORD: So, you’re saying that globally we haven’t even started to see the impact of AI and advanced forms of automation yet?
JAMES MANYIKA: Not yet. And that gets to another point worth making: we’re actually going to need productivity growth even more than we can imagine, and AI, automation and all these digital technologies are going to be critical to driving productivity growth and economic growth.
To explain why, let’s look at the last 50 years of economic growth. For the G20 countries (which make up a little more than 90% of global GDP), average GDP growth over the last 50 years for which we have the data, between 1964 and 2014, was 3.5%. If you do classic growth-decomposition and growth-accounting work, it shows that GDP growth comes from two things: one is productivity growth, and the other is expansions in the labor supply.
Of the 3.5% of average GDP growth we’ve had in the last 50 years, 1.7% has come from expansions in the labor supply, and the other 1.8% has come from productivity growth over those 50 years. If you look to the next 50 years, the growth from expansions in the labor supply is going to come crashing down from the 1.7% that it’s been the last 50 years to about 0.3%, because of aging and other demographic effects.
So that means that in the next 50 years we’re going to rely even more than we have in the past 50 years on productivity growth. And unless we get big gains in productivity, we’re going to have a downdraft in economic growth. If we think productivity growth matters right now for our current growth, which it does, it’s going to matter even more for the next 50 years if we still want economic growth and prosperity.
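As a rough back-of-the-envelope check, the following sketch uses only the figures quoted above (not MGI’s underlying model) to show why productivity growth has to do more of the work going forward:

```python
# Growth accounting with the G20 figures quoted in the interview (1964-2014):
# GDP growth is approximately labor-supply growth plus productivity growth.

past_labor_growth = 1.7         # average % per year from labor-supply expansion
past_productivity_growth = 1.8  # average % per year from productivity growth
past_gdp_growth = past_labor_growth + past_productivity_growth  # 3.5%

# Looking ahead, labor-supply growth is expected to fall to roughly 0.3%.
future_labor_growth = 0.3

# If productivity growth stays at its historical rate, GDP growth drops:
future_gdp_growth = future_labor_growth + past_productivity_growth  # 2.1%

# Productivity growth needed to keep GDP growing at the historical 3.5%:
required_productivity_growth = past_gdp_growth - future_labor_growth  # 3.2%

print(f"Historical GDP growth: {past_gdp_growth:.1f}% per year")
print(f"Future GDP growth at unchanged productivity: {future_gdp_growth:.1f}%")
print(f"Productivity growth needed to match 3.5%: {required_productivity_growth:.1f}%")
```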
MARTIN FORD: This is kind of touching on the economist Robert Gordon’s argument that maybe there’s not going to be much economic growth in the future. (Robert Gordon’s 2017 book, The Rise and Fall of American Growth, offers a very pessimistic view of future economic growth in the United States.)
JAMES MANYIKA: Bob Gordon isn’t just saying there may not be much economic growth; he’s also questioning whether we’re going to have big enough innovations, comparable to electrification and other things like that, to really drive economic growth. He’s skeptical that there’s going to be anything as big as electricity and some of the other technologies of the past.
MARTIN FORD: But hopefully AI is going to be that next thing?
JAMES MANYIKA: We hope it will be! It is certainly a general-purpose technology like electricity, and in that sense it should benefit multiple activities and sectors of the economy.
MARTIN FORD: I want to talk more about the McKinsey Global Institute’s reports on what’s happening to work and wages. Could you go into a bit more detail about the various reports you’ve generated and your overall findings? What methodology do you use to figure out whether a particular job is likely to be automated and what percentage of jobs are at risk?
JAMES MANYIKA: Let’s take this in three parts: “jobs lost,” “jobs changed,” and then “jobs gained,” because there’s something to be said about each of these pathways.
In terms of “jobs lost,” there’s been a lot of research and many reports; speculating on the jobs question has become a cottage industry. At MGI, the approach we’ve taken is, we think, a little different in two ways. One is that we’ve conducted a task-based decomposition, so we’ve started with tasks, as opposed to starting with whole occupations. We’ve looked at something like 2,000 tasks and activities, using a variety of sources, including the O*NET dataset and other task-level datasets. The Bureau of Labor Statistics in the US tracks about 800 occupations, so we then mapped those tasks onto the actual occupations.
We’ve also looked at 18 different kinds of capabilities required to perform these tasks, and by capabilities, I’m talking about everything from cognitive capabilities, to sensory capabilities, to the physical motor skills that are required to fulfill these tasks. We’ve then tried to understand to what extent technologies are now available to automate those same capabilities, which we can then map back to our tasks to show which tasks machines can perform. We’ve looked at what we’ve called “currently demonstrated technology,” and what we’re distinguishing there is technology that has actually been demonstrated, either in a lab or in an actual product, not just something that’s hypothetical. By looking at these “currently demonstrated technologies,” we can provide a view into the next decade and a half or so, given typical adoption and diffusion rates.
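A highly simplified sketch of what such a task-based mapping could look like in code; the task, capability, and occupation names are hypothetical, and this illustrates the general approach rather than MGI’s actual methodology:

```python
# Hypothetical illustration of a task-based automation assessment.
# Each occupation is a bundle of tasks; each task requires capabilities;
# a task counts as automatable if currently demonstrated technology
# covers every capability it requires.

# Capabilities for which technology has been demonstrated (invented list).
demonstrated = {"image_recognition", "routine_physical", "data_retrieval"}

# Tasks mapped to the capabilities they require (invented examples).
task_capabilities = {
    "scan_items":       {"image_recognition", "routine_physical"},
    "answer_queries":   {"natural_language", "social_reasoning"},
    "stock_shelves":    {"routine_physical"},
    "negotiate_prices": {"social_reasoning"},
}

# Occupations mapped to the tasks they comprise (invented examples).
occupation_tasks = {
    "retail_clerk": ["scan_items", "answer_queries", "stock_shelves"],
    "buyer":        ["negotiate_prices", "answer_queries"],
}

def automatable(task: str) -> bool:
    """A task is automatable if all required capabilities are demonstrated."""
    return task_capabilities[task] <= demonstrated

for occupation, tasks in occupation_tasks.items():
    share = sum(automatable(t) for t in tasks) / len(tasks)
    print(f"{occupation}: {share:.0%} of tasks automatable with demonstrated tech")
```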