Smart Mobs


by Howard Rheingold


Remember the point-of-sale display at the IBM Almaden Research Center (Chapter 4) that observes consumers and tailors its message to what it learns about them? Increasingly, the most sophisticated privacy intrusions are instigated by merchants, not secret police. Merchants want personal information about people in order to tailor their products and pitches, and they are willing to spend money to gain customers. Smart mob technologies, because they sense and communicate what their users and wearers transact and experience, greatly increase the chances that consumers will voluntarily trade their privacy for enticements from merchants, from money to bargains to the latest, coolest, algorithmically recommended identity-signifiers.

If the day comes when millions of people go about their lives while wearing sensor-equipped wearable computers, the population itself could become a collective surveillant: Big Everybody. Steve Mann proposed that communities of wearable computer users would monitor, warn, and aid each other, creating virtual “safety nets” for voluntary affinity groups.10 Steven Feiner, who has pioneered “wearable augmented reality systems” at Columbia University, has proposed a chilling counter-scenario. Feiner asks what might happen in a future world of wearable computer communities if some organization offers individuals a small payment for continuous real-time access to their digital experience-stream.11 Individuals in Feiner’s scenario would have the power to protect their own personal privacy while displaying what they see of the rest of the world. Feiner conjectures that the enabling technology for peer-to-peer journalism also enables many-to-many surveillance.

  Massively parallel image and audio processing could make it possible to reconstruct a selected person’s activities from material recorded by others who have merely seen or heard that person in passing. Imagine a private two-person conversation, recorded by neither participant. That conversation might be reassembled in its entirety from information obtained from passersby, who each overheard small snippets and who willingly provided inexpensive access to their recordings. The price paid for such material, and the particular users to whom that price is offered, might even change dynamically, based on a user’s proximity to events or people of interest to the buyer at that time. This could make it possible to enlist temporarily a well-situated user who may normally refuse access at the regular price, or to engage a user’s wearable computer in a “bidding war” among competing organizations, performed without any need for the user’s attention.12
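
Feiner’s scenario is, at bottom, a distributed auction protocol. To make its dynamics concrete, here is a minimal sketch, in Python, of a proximity-priced bidding war of the kind he describes; the buyer names, the pricing formula, the reserve price, and every number here are invented for illustration and appear nowhere in Feiner’s paper.

```python
# Hypothetical sketch of Feiner's "bidding war" scenario: competing buyers
# dynamically price access to a wearer's recording stream according to the
# wearer's proximity to an event of interest, and the wearer's device
# accepts or declines without the wearer's attention. All names and
# numbers are illustrative, not drawn from the source.

from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class Wearer:
    wearer_id: str
    distance_m: float     # distance from the event of interest, in meters
    reserve_price: float  # minimum payment the wearer will accept

def proximity_bid(base_rate: float, distance_m: float) -> float:
    """A buyer's bid rises as the wearer gets closer to the event."""
    return base_rate * (1.0 + 100.0 / max(distance_m, 1.0))

def run_auction(wearer: Wearer,
                buyer_base_rates: Dict[str, float]) -> Optional[Tuple[str, float]]:
    """Highest proximity-weighted bid wins, if it clears the wearer's reserve."""
    bids = {buyer: proximity_bid(rate, wearer.distance_m)
            for buyer, rate in buyer_base_rates.items()}
    winner, best = max(bids.items(), key=lambda kv: kv[1])
    return (winner, round(best, 2)) if best >= wearer.reserve_price else None

if __name__ == "__main__":
    # A well-situated passerby who normally refuses access at the regular
    # rate may be enlisted the moment proximity drives a bid over the reserve.
    passerby = Wearer("w42", distance_m=5.0, reserve_price=0.50)
    print(run_auction(passerby, {"newsdesk": 0.05, "insurer": 0.04}))
    # -> ('newsdesk', 1.05)
```

The design point the sketch makes explicit is the one Feiner emphasizes: the negotiation runs entirely between machines, so a wearer who normally refuses access at the regular price can be enlisted the instant a bid clears their reserve, without ever being consulted.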

  The Osaka police took the first steps toward Feiner’s scenario in April 2002, when they opened a call-in line for citizens with 3G phones to send video of crimes they might witness.13

Sociologist Gary Marx was the first to describe a “surveillance society,” in which, “with computer technology, one of the final barriers to total control is crumbling.”14 Marx noted that the growing ability of computers to compile dossiers on individuals by piecing together countless tiny, otherwise harmless shards of information about transactions, medical conditions, buying habits, and demographic characteristics constituted a distinct class of “dataveillance,” distinguished from the traditional snooping methods of audio and visual recording by the ease with which it can be automated: “Computers qualitatively alter the nature of surveillance—routinizing, broadening, and deepening it.”15

  Surveillance technologies become a threat to liberty as well as dignity when they give one person or group power to constrain the behavior of others. Any inquiry into the relationship between social control, surveillance, power, and knowledge must contend with the historian-philosopher-psychologist-sociologist Michel Foucault, a fiercely cross-disciplinary thinker who was to surveillance what Darwin was to evolutionary biology. Foucault’s fundamental insight was that power not only belongs to the powerful but permeates the social world. He wrote that power “reaches into the very grain of individuals, touches their bodies and inserts itself into their actions and attitudes, their discourses, learning processes and everyday lives.”16

  As Einstein showed that space and time could be understood only in relation to each other, Foucault revealed the reciprocal connections between knowledge and power. In Foucault’s view, power is so strongly connected to knowledge that he often wrote them this way: “power/knowledge.” About the relationship of the two, Foucault stated: “Knowledge, once used to regulate the conduct of others, entails constraint, regulation and the disciplining of practice. Thus, there is no power relation without the correlative constitution of a field of knowledge, nor any knowledge that does not presuppose and constitute at the same time, power relations.”17

  Examining the history of punishment, Foucault focused on a change over recent centuries in the way societies treat criminals and the mentally ill. The age-old techniques of torture and execution or consignment to dungeons were replaced by more subtle and effective methods. Rational institutions and authoritative specialists—modern prisons and police, hospitals, asylums, psychiatrists, and doctors—helped order society more effectively than the threat of physical punishment.

“Discipline” was Foucault’s term for a mode of power/knowledge that included social welfare bureaucracy, armies and police forces, public education, and other practices that impose regular patterns on behavior and relationships. Foucault uses the word “discipline” to refer both to methods of control and to different branches of knowledge, for he saw knowledge specialization and social control as part of the same power/knowledge matrix. As an example of discipline and power/knowledge, Foucault cited the Panopticon (“all-seeing place”), an architectural design put forth by Jeremy Bentham in the late eighteenth century for prisons, insane asylums, schools, hospitals, and factories. Instead of employing the brutal and spectacular means used to control individuals under a monarchical state, the modern state needed a different sort of system to regulate its citizens. The Panopticon applied a form of mental, knowledge-based power through the constant observation of prisoners, each separated from the other and allowed no interaction. The Panoptic structure would allow guards to see continually into each cell from their vantage point in a high central tower, themselves unseen. This system of unobserved observation created a kind of knowledge in the mind of the inmate that was in itself a form of power. It isn’t necessary to constantly surveil people who believe they are under constant surveillance:

  The major effect of the Panopticon: to induce in the inmate a state of conscious and permanent visibility that assures the automatic functioning of power. So to arrange things that the surveillance is permanent in its effects, even if it is discontinuous in action; that the perfection of power should tend to render its actual exercise unnecessary; that this architectural apparatus should be a machine for creating and sustaining a power relation independent of the person who exercises it; in short, that the inmates should be caught up in a power situation of which they are themselves the bearers.18

The emergence of surveillance and social control institutions marked a historical transition to a system of disciplinary power in which every movement is supervised and all events recorded. The result of this surveillance of every part of life, by parents, teachers, employers, police, doctors, accountants, is acceptance of regulations and docility as part of the way every “normal” person thinks and behaves. Disciplinary methods systematically isolated and neutralized “the effects of counter-power that spring from [an organized group] and which form a resistance to the power that wishes to dominate it: agitations, revolts, spontaneous organizations, coalitions—anything that may establish horizontal conjunctions.”19 For Foucault, the real danger was not necessarily that individuals are repressed by this form of social order but that they are “carefully fabricated in it.”20

Power and counter-power sometimes combine with the human talent for cooperation rituals to create significant benefits. The rule of law, governance through social contracts, protection of civil rights, expansion of political enfranchisement, and evolution of cooperative enterprises (think of the Red Cross) demonstrate how power that goes around in the right circles can work to common advantage. Social communication—what people in cities, on the Internet, in smart mobs do—is the means by which power and counter-power coevolve. Since we climbed down from the trees and started hunting together, human groups have found numerous ways to cooperate for mutual benefit. We do so in the face of significant challenges, and when we succeed, we do so with the help of mutual monitoring and sanctioning. When I encountered Foucault’s attention to mutual monitoring and sanctioning as a way in which groups self-enforce conformity and suppress potential rebellion, I recalled how Ostrom and others highlighted mutual monitoring and sanctioning in communities that solve collective action dilemmas.

  Every social order, not just repressive ones, requires methods of mutual social control. The key question is whether populations of users can use what we now know about cooperation to drive power/knowledge to a higher level of democracy. Isn’t that exactly what happened when printing made literacy available to entire populations, not just to a tiny elite? If discipline did not include the capacity for changing itself, the cooperation and democracy that exist today would not have come into being after millennia of slavery, tyranny, and feudalism. More to the point is a question for the present generations: Can discipline change in the future? Can people use mobile communication, peer-to-peer and pervasive computing, location awareness, and social accounting systems to evolve a higher form of discipline, transforming the forces revealed by Foucault according to the principles revealed by Ostrom and Axelrod?

  Softened Time, Blurred Places, Colonized Lives

In addition to threats against liberty and privacy, critical questions about smart mobs arise in regard to the quality of life in an always-on, hyper-connected culture. A few of the most important questions about quality of life address ways that mobile and pervasive technology usage affects interpersonal relationships, the way individuals experience time, and the vitality of public spaces.

Technology critic Langdon Winner challenges the image of the “electronic cottage” that was promoted twenty years ago, at the dawn of the PC era.21 Although Winner accurately points out that progress in personal information technology has not alleviated traffic jams or emptied office buildings, I must note that I am sitting in my garden at this moment, typing these words on my laptop computer, connecting to the Internet by way of a wireless network. The lawn as floor and sky as ceiling definitely improve my life—and you don’t find me contributing to rush hour traffic. I agree with Winner, however, that infogadgetry has also made possible a way of life that is the very opposite of the pastoral vision of knowledge work in which telecommuters log in from their electronic cottages. Winner cited an anthropological study of Silicon Valley culture that concentrated on 450 people who “employ complex ecologies of electronic devices—cell phones, beepers, laptops, personal digital assistants, voice mail, personal Web pages,” noting:

  Preliminary findings reveal a world in which work has become everything, with electronic devices the glue that holds it all together. The people interviewed report that they are always on call. Through phone, beepers, email and the like, their time is totally interruptible. In the office, in their cars, and in their houses, the demands of work come pouring in. Work is so pervasive that conventional boundaries between work and home have all but collapsed.22

Winner observed that a “gnawing dilemma” in the lives of the hyper-informated Silicon Valleyites studied by the anthropologists was constant negotiation of communication access—seeking to maximize access to others while controlling the access others have to them.

The intervention of digital technology into social relationships carries another danger: people may start reacting to mechanical artifacts as if they were reacting to people, and badly designed communication devices may cause people to blame each other for the shortcomings of their machines. These psychological reactions, closely tied to the way the input and output aspects of devices (the “user interface”) are designed, are part of a discipline known as “social interface theory.” When I first heard about the findings of Stanford Professor Byron Reeves and Associate Professor Clifford Nass, I visited Nass at Stanford, where he explained his findings over lunch. Using experimental methods developed to study the ways people react to one another, Reeves and Nass substituted automatic devices such as computers, or even representations of people such as video images, for one of the parties in classic two-person social psychology experiments. They found that although people claim to know the difference between humans and machines, their cognitive, emotional, and behavioral responses to artificial representations of humans are identical to the reactions they have to real people.23 Humans evolved over a long time to pay close attention to other people, to the way people treat us, to facial expressions and tone of voice. We make social decisions and life-and-death decisions based on this human-centered response system we’ve evolved. Artificial representations of people and electronic mediators have come along only in the past one hundred years. Our artifacts might be in the information age, but our biology is still prehistoric.

I was reminded of what Jim Spohrer had told me of his experiences with pervasive computing when I visited him at IBM’s Almaden Research Center (Chapter 4): “Something startling happens when technology acts as if it is aware of the human user and responds to human behavior in context. Suddenly, magically, without artificial intelligence, things start seeming intelligent!” Other IBM researchers have noted the role of “social attribution errors” in the design of devices that mediate social interactions. If the social interfaces of such devices are not carefully designed, people mistake mechanical errors for human errors, distorting their assessments of other people or of social situations.24 The same researchers also pointed out that head-mounted displays, even Steve Mann’s sleek sunglasses, interfere with a key element of human social communication—eye contact.

Human discourse without eye contact has its dangers. Anyone who has experienced a misunderstanding via email or witnessed a flame war in an online discussion knows that mediated communications, lacking the nuances carried by eye contact, facial expression, or tone of voice, increase the possibility of conflicts erupting from misunderstandings. Given social interface theory, social attribution errors, and the possibility of miscommunication, the designers of smart mob technologies must, at the very least, pay heightened attention to the potential social side effects of their design decisions. And those who shape the norms of wearable communities by virtue of being the first generations of users must keep in mind that artificial mediation brings hazards as well as advantages to social communication.

Carried to its extreme, the experience of living in a world pervaded by context-aware devices could lead to what researcher Mark Pesce calls “technoanimism,” a “very dynamic relationship to the material world that to our eyes is going to look almost sacrilegious or profane.”25 Given Nass and Reeves’s discovery that our long evolution in the ages before movies and talking computers has not prepared us to react differentially to real humans and human simulations, we should prepare for startling emergent social behaviors when mobile communicators grow context aware and billboards customize their displays according to the demographics of the people who look at them. Widespread popular beliefs that computationally colonized objects are intelligent, even if they aren’t, could lead to unpleasant unintended consequences. If pervasive computation devices and anthropomorphic software agents lead people to confuse machines with humans, will people grow less friendly, less trusting, and less prepared to cooperate with one another? Or is it possible for informed designers to account for social attribution errors and create cooperation-amplifying technologies? Even if the potential design flaws of technoanimistic devices are overcome, and especially if they are overcome, what do we do about the way we spend more and more time interacting with devices?

Many of the threats to quality of life posed by mobile communications and pervasive computing seem to be linked to changes in the way people experience time. Leslie Haddon wrote: “Researchers examining changing time patterns have noted the paradox that while time budgets show that on average the amount of leisure time has increased, many people perceive that their lives have become busier.”26 Leopoldina Fortunati of the University of Trieste, analyzing the spread of mobile telephony and texting in Italy, observed: “Time is socially perceived as something that must be filled up to the very smallest folds,” thus eliminating “the positive aspects of lost time” that “could also fill up with reflection, possible adventures, observing events, reducing the uniformity of our existence, and so on.”27 While the Internet competes with television and with face-to-face communications in the home and workplace, smart mob technologies compete with attention to other people who are present in public places and with the users’ own idle time between home and work.

Mizuko Ito and her graduate students had noticed that the expected times of appointments for Tokyo youth had become fluid, and Norwegian researcher Rich Ling had observed a “softening of time” among texting youth in his part of the world.28 Ling and Yttri called the new way of arranging appointments “hyper-coordination.”29 Finnish researcher Pasi Mäenpää made the same forecast in writing that Kenny Hirschhorn, the futurist at European telecommunication giant Orange, and Natsuno-san at DoCoMo had made in conversation: the mobile telephone is evolving into a kind of remote control for people’s lives. Our technology-assisted and hyper-informated pace of living apparently now requires still more technology if the hyper-informated are to keep control of their own lives. And controlling our lives through mobile devices involves scheduling future activities:

 
