Rifkin thinks the same revolution is happening at the level of knowledge—perhaps especially there, since the wide availability of knowledge is the fuel powering the rest of our economy. But while Rifkin’s collaborative vision might be appealing, the death of capitalism—and exploitative versions of it—is hardly near.
Take crowdsourcing as an example. Instead of an economy of skilled laborers who require resources to train, equip and compensate, crowdsourcing makes it possible for companies to distribute and generate knowledge without the expense of hiring such experts. This isn’t necessarily more “democratic.” But it is more capitalistic. Even some of the most active workers for Amazon’s Mechanical Turk make very little: two to five dollars an hour. This may seem reasonable if you think of such laborers as amateurs, doing such work in their “spare time.” But as Brabham has convincingly argued, the idea of the “amateur crowd” is largely a myth. Turkers working for Amazon are generally highly educated professionals working in areas of the world where financially rewarding employment for those skills is significantly scarcer than elsewhere (hence the attraction of Turk). In the case of InnoCentive, while it may be that nonspecialists are in many cases better solvers than those who self-identify as specialists in an area, this should not be taken to mean that the solvers are amateurs. Far from it: they are typically professional scientists. As Brabham sums it up: “these so-called amateurs are really outsourced professionals, and the products and media content that we are sold are not much different than the old products.”
That’s a key point. Crowdsourcing is really a type of outsourcing. And outsourcing knowledge production is as profitable as outsourcing anything else. It is simply a mistake to think that such outsourcing is making knowledge production more democratic. Indeed, the opposite seems to be the case: outsourced knowledge producers such as crowd workers are professionals without the protection of a profession—without, in short, basic labor rights. Crowd workers don’t own what they produce. Indeed, as Brabham notes, in some cases, designers working “on spec” give up the rights to their designs, thereby forfeiting any future income from their intellectual labor. This is a win for the companies that employ such labor, but it hardly seems a win for democracy. With a large enough network, it doesn’t matter if the individual nodes themselves have rights. If you have enough people, you still get similar results, and at a low wage. Cheap labor, good enough results. It is enough to make Sam Walton smile.
In short, the globalization of the economy of knowledge may be having some of the same effects as the globalization of the economy generally. One of the worst consequences of an unfettered, deregulated global economy is gross income inequality. This phenomenon isn’t simply a matter of some people making more than others. It concerns a structural fact: that only a few control half of all global resources. It is part of a larger pattern of financial injustice. But the unfettered global economy is not only increasing economic inequality, it is also encouraging epistemic inequality.7
To understand what I mean by epistemic inequality, let’s think first about the value of equality itself. Equality, like liberty, is a core value of democracy, but it is often misunderstood. When we say, with Locke and Jefferson, that all persons are “created equal,” we aren’t saying that we want everyone to be exactly the same, that we don’t want diversity in abilities or talents. What we mean is that we are equal in our basic rights as individuals and, in particular, equal in having a claim on access to various resources. Thus the value of epistemic equality: the idea that all persons have a basic claim to the same epistemic resources. An epistemic resource is a structure or institution that provides information and at least the basis for knowledge. Thus, epistemic inequality is the result of an unfair distribution of structural epistemic resources.
The most obvious example of an epistemic resource is education. The United Nations holds education to be a fundamental human right. Arguably it is a basis for many other rights, or at least necessary for one to fully enjoy those rights. Without a basic education, people are unable to fully participate in contemporary societies (or almost any society): it is difficult to hold a job, access healthcare or make informed democratic decisions.
The rise of Web 2.0 has made the Internet a similar epistemic resource. Thus the UN has argued recently that preventing access to the Internet is itself a violation of fundamental rights. According to a UN special report, societies have an obligation to recognize the “unique and transformative nature of the Internet not only to enable individuals to exercise their right to freedom of opinion and expression, but also a range of other human rights, and to promote the progress of society as a whole.”8 Consequently, blocking that access is harmful. The concept of epistemic equality allows us to explain that harm directly. Removing access to the Internet, whether by criminalizing participation in online activities or explicitly blocking content, is wrong simply because it is an infringement of epistemic equality.
Epistemic inequality increases between groups when there is unequal access among those groups to epistemic resources: libraries, the Internet, education. The most obvious, and urgent, reason for this is poverty. Epistemic resources like libraries and Internet access come after food, shelter and health; without the latter, the former are of little use. This is a simple point but one often underemphasized. The set of digital human beings is not equivalent to the set of human beings, period. Most people on the planet are not participating in the Internet of Things, and many have never participated in the glories of Web 2.0. The Collaborative Commons is a first world dream. That doesn’t make it a bad one. But predictions of the death of capitalism ignore the fact that much of the human population is exploited for its labor in order to make the 3-D printers and iPhones that we enjoy so much. And that fact isn’t going away in a world where the black carbon soot emitted by small cooking fires is still a contributing cause of climate change.
Another cause of epistemic inequality is closed politics. A political society that is, roughly speaking, “open”—one that has a diverse and independent media, that protects freedom of information and communication and that exercises little government censorship—is apt to be more epistemically equal than one that is not as open. It is an ongoing question to what degree any society is truly open. But one thing is clear: the more closed a political system happens to be, the more apt the people in charge are to keep the epistemic resources to themselves. And as we saw in the last chapter, the more tempted they will be to abuse those resources at the expense of average citizens.
Even in societies that are relatively open in the political sense (that is, to the degree that political rights of expression and communication are protected), epistemic inequality can occur simply if the Internet is not relatively free and open. This is a third obvious cause for a rise in epistemic inequality. Because even if legally you can access whatever you like on the Internet in a given society, access is unequal if its cost is prohibitive or stratified by levels of service.
This is why the battle going on over Net neutrality is so important. It is about epistemic equality. Net neutrality is the idea that governments and Internet service providers should treat all information flowing through the Net equally. In particular, companies shouldn’t be allowed to charge more for certain types of data. The argument on the other side is based on free market economics: when demand for a certain kind of traffic—say, Netflix or HBO GO—skyrockets, access to those services should cost more. At heart this is a debate about how to see the Internet. On one side are those who define the Internet as something that can itself be owned and profited from. On the other are those who see it as an epistemic resource, like education or public libraries. In that case, if you start limiting access, you not only contribute to epistemic inequality, you contribute to inequality, period.
Issues around Internet access are also why, as Rifkin himself acknowledges, we should worry about the monopolization of the Internet. “What does it mean when the collective knowledge of much of human history is controlled by the Google search engine? Or when Facebook becomes the sole overseer of a virtual public square, connecting the social lives of 1 billion people?”9 What it means is that the gatekeepers are back, and this time the gates, while now small, enclose more. And that, if we are not careful, could ultimately mean less epistemic equality.
So, even if knowledge is more “democratized” now—its production and distribution are more inclusive and more widely available—that means little in conditions of increasing epistemic inequality. If you are too poor and oppressed to access anything online, the digital wonders of the world mean nothing to you. The value of epistemic equality is the value of open and fair access to epistemic resources. But “access” here means more than just the ability to go to school or look things up on the Internet. It also means something more abstract but just as important: having the status of a full participant in the economy of knowledge.
To be a full participant in a monetary economy, you need to be more than just a laborer. Slaves are laborers, but their labor is not shared or exchanged by them; it is stolen from them. To be a true economic participant, you need to be someone who has the resources and willingness to participate in buying and selling. But more than that: you have to be recognized as such by others. Otherwise, you end up just trading with yourself. Likewise with the economy of knowledge. To participate in that economy, you need to be more than just a receptive knower and reasonable believer. You need to be seen or understood as such. Otherwise your epistemic labor will be ignored or exploited. You won’t be counted as a reasonable believer, as someone who can be trusted; you’ll suffer what the philosopher Miranda Fricker labels “epistemic injustice.”10
The history of racism in this country and many others is replete with examples of people being excluded from not only the monetary economy but the epistemic economy. In 1854, for example, the California Supreme Court infamously ruled that it was perfectly legal that “no Black or mulatto person, or Indian, shall be allowed to give evidence in favor of, or against a white man.” In writing the opinion, Chief Justice Hugh C. Murray pointed to what he thought was a slippery slope:
The same rule which would admit them to testify, would admit them to all the equal rights of citizenship, and we might soon see them at the polls, in the jury box, upon the bench, and in our legislative halls. This is not a speculation . . . but an actual and present danger.11
Murray, for all his terrifying racism, sees the very point at issue. To recognize a class of people as possible testifiers in a court of law is a slippery slope—because it grants them the status of a reasonable believer. It treats them as credible participants in the economy, and as such, as persons who have autonomy over their thoughts and actions. That’s one point that Fricker’s work has brought to the fore in recent discussion: epistemic injustice of this sort has crippling effects. Once you are no longer recognized as a possible credible source of information—even about yourself—then the dominating class will excuse itself for ignoring your basic rights.
Epistemic injustice of this sort has been much discussed by writers in postmodern critical theory. But the general drift there has been to abandon the category of “reasonable” or “justified” belief—to see these as inherently dominating categories. What is interesting and important in Fricker’s work is that she doesn’t see it this way. For her, abandoning standards of reasonableness would be giving up on the goal of epistemic equality. In short, what we need is not to abandon reasonableness but instead, in philosopher Lewis Gordon’s words, to “shift the geography of reason.”12 And that is the question we would be wise to ask with regard to our digital life as well: how is it contributing to, or inhibiting, that shift? A central cause for worry is the increasing fragmentation of reasons themselves. In the context of our present discussion, we might worry that this fragmentation doesn’t just have bad political effects. It has bad epistemic effects. It promotes epistemic inequality and a loss of intellectual autonomy. And that in turn can affect people’s ability to filter out bullshit—simply because their filtering is so one-sided.
Web 2.0 and the Internet of Things can be forces for democratic values. But we must not let our enthusiasm blind us to the existence of epistemic inequality, and the fact that its causes—racism, income inequality—pollute the infosphere just as much as they pollute the minds that make it up.
Walmarting the University
Standard procedure for university exams these days involves prohibiting the use of smartphones. As I was reminding my students of this recently, one of them joked that the university had better come up with policies on wearable tech like smart watches ASAP. We all laughed, but nervously, because he was right. And as another student noted, whatever that policy is, it is going to be outmoded by the time it is enacted—not just because universities are slow to adapt to change but because technology is moving so fast. While Google’s initial experiment with Glass may not have been successful, the idea isn’t going away; and one day that too may seem quaint, should something like neuromedia emerge.
That raises a question: if the Internet is available to you at the blink of an eye—and available in a way that seems like memory—then what are we testing for when giving exams? What, in general, is the point of higher education in the age of big data?
These questions come at a time when the idea of the university itself is often said to be in crisis—especially in the United States. In one sense, the American university system continues to flourish. American institutions of higher learning dominate world rankings, making up more than half of the top 100 and a large majority of the top ten. Go to any top research conference in the world and you’ll find many of the keynote speakers and top researchers there are from American universities. American institutions continue to lead in the production of scientific research in the best journals, and produce the most Nobel laureates. And students from across the world continue to come to the United States to study. In economic terms, university education continues to be one of America’s leading industries.
But at the same time, there is the increasing worry that we are in something of an education bubble, and that the model is no longer sustainable. The cost of a university education has risen at almost five times the rate of inflation since 1983. Thus it is not surprising that the amount of debt per student has so dramatically increased; two-thirds or more of students now take out loans.13 Private institutions routinely charge around $60,000 a year, and an “affordable” public institution, like my own, can cost more than $25,000. The explanations for these depressing facts vary, although it is clear that part of the matter is that state funding, on which both public and, to a lesser extent, private institutions have long depended, has dramatically decreased in the last three decades.14 Taxpayers, for good or for ill, no longer clearly favor paying for the epistemic equality brought about by public institutions—and public education, at all levels, is obviously a primary victim of this change in mentality. But whatever the explanation, it is hard to avoid the conclusion that something needs to change.
Starting around 2012, many pundits, and more than a few academic administrators, started forecasting that information technology was going to lead this change. In particular, the advent of MOOCs (Massive Open Online Courses) was thought to signal a shift to a different model of education. MOOCs are free (or mostly free) online courses, composed generally of video lectures, various forms of computer-enabled discussion forums and computerized grading. In the wake of several high-profile successes attracting thousands of students, startups and nonprofits promoting and hosting MOOCs, such as Coursera and edX, sprang up almost overnight. Universities began creating their own MOOCs. The anticipation, and the hype, ran high, with the president of edX, Anant Agarwal, declaring that MOOCs would reinvent education and “democratize education on a global scale.”15
MOOCs do indeed have much to offer. Many of the courses give people who would never otherwise have a chance to take a course taught by a world-renowned expert on a subject the ability to do so, and for free. In many cases, students can even receive college credit if they finish the course successfully. Already millions of people around the globe have taken advantage of this opportunity. As a result, it is hard not to see the MOOC as crashing the gates of the university and helping to promote epistemic equality. It is also simply edifying, as a friend of mine (a superstar teacher who designed and created a MOOC while at the University of Virginia) said to me. Few things are more inspiring than finding yourself talking philosophy to 80,000 people worldwide, from all income levels and backgrounds. Who can argue against free philosophy?
Not me. Yet only two years later, it is becoming clearer that, for all their many virtues, MOOCs are not exactly the revolutionary product they have been hyped to be. To see why, let’s go back to Rifkin. According to Rifkin, in the old model of higher education “the teacher was akin to the factory foreman, handing out standardized assignments that required set answers in a given time frame.”16 The old model was “authoritarian” and “top-down.” It emphasized lectures, was hierarchical in its power structure and privileged memorization over discussion. The new model emerging in the Collaborative Commons is more lateral, egalitarian and interdisciplinary.
Rifkin is certainly right that the old, old, old model of education has many of the features he describes. But the Mad Men era has been gone for some time now, and the shift to more discussion-oriented, problem-solving models of education began as far back as Dewey. And this was the result not of a technological shift but of a pedagogical one. This helps explain why many educators have been skeptical about using MOOCs as a replacement for, as opposed to an addition to, brick-and-mortar classroom teaching. Most MOOCs, after all, just are paradigm examples of the old model in action. They consist of lectures. Their methods of assessment are standardized. They privilege memorization over discussion. While those are not essential features of MOOCs, of course, the technology is only as innovative as we want it to be, and, right now, it seems as if we don’t want it to be that innovative. The fact that MOOCs are more like big lectures is why faculties at Amherst and Duke have rebelled against involving their institutions in MOOCs. Their point was not that there is something inherently wrong with making education free online—far from it. Their point was that the present models of MOOCs are simply extensions of what is already happening at universities worldwide: large classroom lecture-style courses. Pedagogically, many (although not all) MOOCs are not innovative; they are old school.