Why is there such a gap between the demand and supply of the highly educated in the United States, as suggested by the wage premium? US colleges and universities are, by every ranking measure, including the number of foreign students they attract, still the best in the world—for instance, according to Shanghai Jiao Tong University’s ranking of world universities, eight of the top ten, and sixteen of the top twenty, universities in the world in 2017 were in the United States.29 It is hard to argue that the problem lies with the quality of the universities, though, of course, not all are at the same level. Instead, the problem seems to be that too many students who enter college, especially those who do not complete high school and instead eventually earn a General Educational Development (GED) credential, are unprepared for higher studies and drop out before completing degrees.
In addition to inadequate preparation in school, though, the cost of tertiary education in the United States is high, and despite the availability of scholarships, student borrowing builds up quickly. This is especially the case if students have to take a number of remedial courses to come up to speed, which prolongs their stay in college and increases their eventual debt. Both inadequate preparation and high costs contribute to a high dropout rate. In 2015, only 55 percent of students who entered US colleges graduated with a degree. The graduation rate was much higher for US women, at 65 percent, than for men, at only 45 percent, with the lowest graduation rates in for-profit institutions.30
The problem in the United States thus seems to lie not in its universities but squarely in its schooling system, which was once the best in the world. Indeed, the inadequacy of schools—which we will see stems partly from the decline of economically mixed communities—may help explain the high college premium in the United States. If employers cannot trust that high school graduates know what they are supposed to have learned by the time they leave school, they may insist on a college degree just to be sure of basic skills. As we will see, there also seems to be an escalation in the credentials demanded for various jobs in the United States. With higher-than-warranted demand for job candidates with degrees and lower-than-desirable demand for candidates with high school diplomas, it is less surprising that the wage premium in the United States is higher than elsewhere despite the high average years of education.
THE ONE PERCENT AND THE WINNER-TAKE-MOST EFFECTS OF TECHNOLOGY
While incomes for those with a bachelor’s degree, especially in technology and engineering, have grown relative to the rest, incomes at the very top have truly exploded in a number of countries. As economists Thomas Piketty and Emmanuel Saez have documented in various studies, the top 1 percent of earners in the United States took only 8 percent of total income in 1970, but this grew to 18 percent by 2010.31 In the United Kingdom, starting from a similar share in 1970, the top 1 percent earned about 15 percent of total income by 2010. Such an explosion in the incomes of the rich has not happened in continental Europe.32 The top 1 percent have earned about 8 percent of total income in France each year since 1950, and about 11 percent in Germany over that period, with little variation. Japanese top income shares have remained relatively flat at about 8 percent.
We should not rule out the possibility of mismeasurement here. For example, many of the very rich in Europe have closely held firms and, because of high taxes, may be unwilling to pay profits out as dividends. The wealth of these individuals may build as undistributed profit grows, but it may not show up as income. Instead, it would show up in rising inheritance amounts, and indeed inheritance as a share of total wealth has been rising in Germany and France over the last few decades while it has been relatively flat in the United Kingdom.33 Thus top incomes may be understated in high-tax countries, and their rise may be a more general phenomenon across developed countries.34
The increase in top incomes is not because countries are dominated by the idle rich. Even for the richest 0.01 percent of Americans toward the end of the twentieth century, 80 percent of income consisted of wages and income from self-owned businesses, while only 20 percent consisted of income from financial investments.35 This is in stark contrast to the pattern in the early part of the twentieth century, when the richest got most of their income from property. The rich are now more likely to be the working self-made rich than the idle inheriting rich.
A recent study of tax returns from 2000 onward by my colleagues Owen Zidar and Eric Zwick, along with others, finds that the spurt in top incomes in the United States can be traced to the rising incomes of private business owners who manage their own firms.36 The majority of top earners receive business income, and tend to be owners of single-establishment, skill-intensive, midsized firms in areas like law, consulting, dentistry, or medicine. These firms tend to be twice as profitable per worker as other similar firms, and the rise in incomes appears to be driven by greater profitability rather than an increase in scale. The study finds that owners are typically at an age at which they take an active part in the business. The premature death of an owner cuts substantially into profitability, suggesting that their skills are critical to income generation. The authors conclude that the working rich remain central to rising top incomes even today.
In another study, of the four hundred wealthiest individuals in the United States (the Forbes 400), my colleague Steve Kaplan and Joshua Rauh of Stanford find that the Forbes 400 today are less privileged than those in the past, in that they are less likely to have been born wealthy.37 They did, however, get a good education when young (hence, they mostly come from upper-middle-class families), and they entered rapidly expanding and scalable industries like technology, finance, and mass retail.
Perhaps more important than hard work and a good education, technological change helps explain the rise in inequality at the very top—it has created a “winner-take-most” economy. When a farmer wants his fruit plucked, the more workers the better (until the orchard becomes overcrowded). Each worker contributes, no matter how unskilled, and can be paid according to the amount of fruit he picks. On the other hand, if the farmer wants to listen to music, one good fiddler is far preferable to ten mediocre ones. Furthermore, for such activities, the larger the accessible market, the more the performer will get paid.
As markets expand and become more integrated across the world, and communication becomes easier, the best singers and sportsmen can use myriad channels to reach households everywhere. While there is still some charm in watching a live performance by a moderately talented local artist in a small local theater, more of the household budget increasingly goes to watching supremely talented international superstars. Sherwin Rosen, the Chicago economist who first analyzed the growing superstar economy, noted that Elizabeth Billington, the star of the London Opera in the 1801 season, earned between £10,000 and £15,000.38 Adjusted for inflation, that would imply an income of between £680,000 and £1 million, or between $825,000 and $1.25 million, today. In comparison, Forbes reports that Taylor Swift, the top-earning music diva, pulled in $170 million in 2016, while Adele, the top UK female singer, grossed $80.5 million. Superstars earn far more today because, through technology, they reach beyond merely the audience in the London opera house into a global market—Taylor Swift’s hit single, “Shake It Off,” had 2.4 billion views on YouTube at the time of writing.
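To make the inflation adjustment explicit (a rough back-of-the-envelope check: the cumulative price multiplier of roughly 68 between 1801 and today, and the exchange rate of roughly $1.2 to $1.25 per pound, are assumptions implied by the figures above rather than official statistics):

£10,000 × 68 ≈ £680,000, and £15,000 × 68 ≈ £1 million; converted at $1.2 to $1.25 per pound, that is roughly $825,000 to $1.25 million.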
The “winner-take-most” structure has spread beyond the performance arts to a variety of occupations. With improvements in communication, corporations can be more effectively managed even as they get bigger and access larger markets—Julie Wulf and I find that the span of control for corporate CEOs, as measured by the number of direct reports, has been increasing.39 CEOs can manage more people, perhaps because much more communication and reporting can be routinized today, with the CEO able to act quickly on exceptions that are flagged up to her.
As corporate size increases, corporations also seek out the most capable suppliers of key inputs, magnifying the returns to small differences in talent. In corporate law, for example, international companies seek the same handful of lawyers to represent them in multi-billion-dollar lawsuits—why settle for anything less than the best when lawyer fees are small compared to the potential penalties for losing the suit? Differences in capability, even small ones, can now mean large differences in income. All this adds to the incomes of the very skilled or talented, who already benefit from the premium that skills command today.
How much of these superstar or top 1 percent effects is due to human responses to the liberalization and integration of markets, rather than to technological change alone? Probably some. The private sector’s typical reaction to an increase in competition, whether generated by shifts in technology or in policy, is first to become more efficient, and then to figure out ways to limit the competition. This pattern has indeed repeated itself in the liberalizations since the 1980s. While there are some differences between the Anglo-American economies and continental Europe, based on their different reform paths, practices ultimately spread. What follows relies heavily on studies in the United States, but the analysis applies more generally to developed countries.
THE PRIVATE SECTOR’S REACTION TO LIBERALIZATION
Both Ronald Reagan and Margaret Thatcher pushed back against the state. They believed this would imply a greater role for markets and ensure greater individual freedom. Rolling back state oversight did not free everyone, however. Just as too much government leads to privileges for some, so does too little. Moreover, in the fervid, evangelically individualistic environment they had unleashed, what was privately optimal for the individual could be detrimental to the community. Doctrinaire reform, as we will see, proved problematic.
A CHANGE IN ATTITUDES TOWARD PROFIT AND INCOMES
A stark example of the individualism that was being reasserted once more, partly as a reaction to the collectivist pressures that had dominated since the Depression, was the change in attitudes toward corporate profit and managerial incomes. In the postwar years of the expansionary state, the clamor in the United States for corporations to do more than simply focus on their business became louder. Influential commentators argued that corporations ought to work with the state to fulfill their corporate social responsibility, and some government officials in the 1960s even asked corporations to hold back price increases as their social contribution to the fight against inflation.
Economists who were drawn into this debate on the proper role of the corporation started by noting that the owners of the corporation, the shareholders, were the residual claimants; they were paid only after fixed claimants such as suppliers, workers, and creditors were paid. Given that they bore all the risk, economists argued, it was appropriate that they should have ownership and exercise control, and the corporation ought to be run in their interests.
What, though, were the owners’ interests in the large, professionally managed corporations with many dispersed small shareholders that now dominated the economy? With each shareholder owning a tiny fraction of the firm, whose interests should management, which itself had a tiny stake, focus on?
Milton Friedman was characteristically bold in his answer to these questions: “There is one and only one social responsibility of business—to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud.”40 Since profits are what go to shareholders, Friedman was saying management should maximize the value of the corporation’s shares, allowing each shareholder the maximum freedom to use her valuable shares to fund causes dear to her heart. Let her support the local football team in her neighborhood or donate to the firefighters’ fund if she chooses to, Friedman insisted; after all, it is her money, earned from bearing risk. Friedman’s dictum had an “invisible hand” aspect to it—by maximizing the value of the only claim to the corporation that was not fixed, management would not just be maximizing shareholder value but also the corporation’s value, and thus the corporation’s contribution to society. Friedman firmly rejected any role for the corporation in helping the state do its job, for example, in containing inflation, or in undertaking charitable activities, especially if it impinged on its profitability.
Friedman’s views had enormous influence, both in academia and outside. The notion that corporate social responsibility began and ended with the corporation maximizing shareholder value was very clear and was consistent with the growing ethic of individualism. Instead of being a sin, avarice was now a duty, in part because the objective could be spelled out clearly to firm management. With such straightforward marching orders, shareholders could evaluate performance without the noise, hypocrisy, and occasional self-aggrandizement introduced by social responsibility. The doctrine suggested three courses of action to put corporate management back on the right track.
First, management’s incentives should be aligned more closely with shareholder interests by paying management for performance, preferably in stock. This view became particularly influential when a 1990 study by Michael Jensen and Kevin Murphy found that for every $1,000 increase in shareholder wealth in the United States, the wealth of top management went up by only $3.25.41 The authors suggested it should be much more. Corporate chieftains obviously loved this message. Second, large activist shareholders ought to monitor firm management and push it to do the right thing for shareholders—a recent example was when the influential shareholders of the ride-hailing company Uber came together to depose the CEO, Travis Kalanick, whose aggressive management style and actions were apparently eroding Uber’s business prospects. Finally, there should be an active market for corporate control, where raiders could take over the management of underperforming corporations, even if existing management resisted. The raiders would gain from bringing in their own management and increasing share value in the most poorly managed firms, while the fear of hostile takeovers would discipline behavior in even the better-managed firms.
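To see just how weak that link was (a simple back-of-the-envelope reading of the Jensen-Murphy figure above, not part of the original study’s framing):

$3.25 ÷ $1,000 = 0.325 percent, or about a third of a cent of additional managerial wealth for every extra dollar of wealth created for shareholders.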
In a postwar world that had gotten used to gentle competition and easy profits, such a refocusing of management was indeed necessary in the more competitive liberalized environment. There were tremendous societal benefits if management increased profitability and reduced waste—and this would increase the long-run likelihood of the firm’s survival, to the benefit of all. Friedman’s assertion that the business of business was only business was a valuable corrective for corporations that had lost their way. However, Friedman’s dictum was theoretically valid in fewer situations than those to which it was applied, so the courses of action that benefited shareholders were not always beneficial for society. Moreover, his caveats were frequently overlooked, undermining his message. Most important, his dictum undermined public support for corporations, especially when some of its aberrant consequences were publicly highlighted.
Shareholders are residual claimants and all others are fixed claimants only in a somewhat textbook view of the corporation where all inputs to the firm are essentially like commodities, bought in competitive markets and paid for through explicit short-term contracts. In practice, not all inputs are commodities and not all contracts are short-term or explicit. For instance, corporations enter into implicit contracts with their employees in many ways. They often ask employees to go the extra mile—work overtime on an express order or staff a difficult position temporarily—with the promise that the company will make it up to them later. Employees who expect to be with a corporation for a long time also invest in acquiring company-specific skills and in building relationships with other corporate employees, investments that may have little value elsewhere but make the company work better. There is usually an understanding that the company will compensate the employee for these investments, even if there is no written contract to the effect, and thus no legal power for the employee to enforce compliance.
When a corporate raider takes over a company where most employees have already made such investments, repudiates these implicit contracts, eliminates jobs, and cuts wages, the raider benefits, as do shareholders. Workers take a large hit, though, and they, as well as future employees in the industry, may forever lose trust in management.
Harvard economists Andrei Shleifer and Larry Summers emphasized this point in the context of airline takeovers in the 1980s, after the industry was deregulated. When corporate raider Carl Icahn took over Trans World Airlines (TWA) in 1985, they argue, much of the value he squeezed out for shareholders came from abrogating wage agreements and renegotiating worker wages down.42 To the extent that workers were overpaid because of lax prior management and strong union bargaining, this was beneficial for shareholders, but unless lower costs led to lower ticket prices and more travel, it was a wash for society, since no additional value was created. To the extent that the renegotiation breached employee trust, we may all have been losers—it may have transformed airline workers from being customer-friendly and willing to go the extra mile for the airline to being suspicious of management, unhappy, transactional, and working only by the book. Even from the perspective of the corporation’s best interests, let alone society’s, shareholder value maximization may be inappropriate in some circumstances.
In a sense, the principle of maximizing shareholder value strips transactions of their corporate and social context. This is a good starting point for deciding whether a transaction is worth doing, and is particularly useful when custom and tradition obscure underlying economic rationales. However, transactions do take place in the real world with all its incompleteness and uncertainties. The richer noncontractual addenda—relationships, implicit contracts, promises, trust—often improve outcomes, and have to be added back to understand whether the transaction is still worth doing. To focus only on the contractual is to be myopic, and this can serve the corporation poorly.