
The End of Absence: Reclaiming What We've Lost in a World of Constant Connection


by Michael Harris


  Recently, a band of scientists at MIT has made strides toward the holy grail of afficere—translating the range of human emotions into the 1s and 0s of computer code.

  • • • • •

  Besides the progress of chatbots, we now have software that can map twenty-four points on your face, allowing it to identify a range of emotions and issue appropriate responses. We also have Q sensors—bands worn on the wrist that measure your “emotional arousal” by monitoring body heat and the skin’s electrical conductance.

  But the root problem remains unchanged. Whether we’re talking about “affective computers” or “computational empathy,” at a basic level we’re still discussing pattern recognition technology and the ever more sophisticated terrain of data mining. Always, the goal is to “humanize” an interface by the enormous task of filtering masses of lived experience through a finer and finer mesh of software.

  Many of the minds operating at the frontier of this effort come together at MIT’s Media Lab, where researchers are busy (in their own words) “inventing a better future.” I got to know Karthik Dinakar, a Media Lab researcher who moonlights with Microsoft, helping them improve their Bing search engine. (“Every time you type in ‘Hillary Clinton,’” he told me, “that’s me.”)

  Dinakar is a handsome twenty-eight-year-old man with tight black hair and a ready smile. And, like Amanda Todd, he’s intimately acquainted with the harshness of childhood bullying. Dinakar was bullied throughout his teen years for being “too geeky,” and he would reach out online. “I would write blog posts and I would . . . well, I would feel lighter. I think that’s why people do all of it, why they go on Twitter or anywhere. I think they must be doing it for sympathy of one kind or another.”

  Compounding Dinakar’s sense of difference was the fact that he lives with an extreme variety of synesthesia; this means his brain creates unusual sensory impressions based on seemingly unrelated inputs. We’ve all experienced synesthesia to some degree: The brain development of infants actually necessitates a similar state of being. Infants at two or three months still have intermingled senses. But in rare cases, the situation will persist. If you mention the number “seven” to Dinakar, he sees a distinct color. “Friday” is always a particular dark green. “Sunday” is always black.

  Naturally, these differences make Dinakar an ideal member of the Media Lab team. A so-called geek with a brain hardwired to make unorthodox connections is exactly what a bastion of interdisciplinary academia most desperately needs.

  When Dinakar began his PhD work at MIT, in the fall of 2010, his brain was “in pain for the entire semester,” he says. Class members were told to come up with a single large project, but nothing came to mind. “I wasn’t interested in what others were interested in. There was just . . . nothing. I assumed I was going to flunk.”

  Then, one evening at home, Dinakar watched Anderson Cooper report on Tyler Clementi, an eighteen-year-old violin student at Rutgers University who had leapt from the George Washington Bridge and drowned in the Hudson River. Clementi’s dorm mate had encouraged friends to watch him kissing another boy via a secretly positioned webcam. The ubiquitous Dr. Phil appeared on the program, speaking with Cooper about the particular lasting power of cyberbullying, which does not disappear the way a moment of “real-life bullying” might: “This person thinks, ‘I am damaged, irreparably, forever.’ And that’s the kind of desperation that leads to an act of suicide. . . . The thought of the victim is that everybody in the world has seen this. And that everybody in the world is going to respond to it like the mean-spirited person that created it.” Dinakar watched the program, figuring there must be a way to stem such cruelty, to monitor and manage unacceptable online behavior.

  Most social Web sites leave it to the public. Facebook, Twitter, and the like incorporate a button that allows users to “flag this as inappropriate” when they see something they disapprove of. In the age of crowdsourced knowledge like Wikipedia’s, such user-driven moderation sounds like common sense, and perhaps it is.8 “But what happens,” Dinakar explains, “is that all flagging goes into a stream where a moderation team has to look at it. Nobody gets banned automatically, so the problem becomes how do you deal with eight hundred million users throwing up content and flagging each other?” (Indeed, Facebook has well over one billion users whose actions it must manage.) “The truth is that the moderation teams are so shockingly small compared with the amount of content they must moderate that there’s simply no way it can be workable. What I realized was that technology must help the moderators. I found that, strangely, nobody was working on this.”

  The most rudimentary algorithms, when searching for abusive behavior online, can spot a word like “faggot” or “slut” but remain incapable of contextualizing those words. For example, such an algorithm would flag this paragraph as bully material simply because those words appear in it. Our brains and our meaning, however, do not work in an “on” and “off” way. The attainment of meaning requires a subtle understanding of context, which is something computers have trouble with. What Dinakar wanted to deliver was a way to identify abusive themes. “The brain,” he told me, “is multinomial.” We think, in other words, by combining several terms in relation to one another, not merely by identifying particular words. “If I tell a guy he’d look good in lipstick,” says Dinakar, “a computer would not pick that up as a potential form of abuse. But a human knows that this could be a kind of bullying.” Now Dinakar just had to teach a computer to do the same.
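
  To make the limitation concrete, here is a toy sketch of the keyword approach. It is purely illustrative (the blocklist, the messages, and the Python are supplied here, not drawn from Dinakar's work), but it shows how a word-matching filter both over-flags innocent text and misses contextual taunts like the lipstick example.

```python
# Toy keyword filter, invented for illustration; not any real moderation system.
BLOCKLIST = {"faggot", "slut"}

def naive_flag(message: str) -> bool:
    """Flag a message if any blocklisted word appears, ignoring all context."""
    text = message.lower()
    return any(word in text for word in BLOCKLIST)

# Over-flags: a sentence that merely discusses a slur gets caught.
print(naive_flag("The word 'slut' is used to shame girls online."))  # True

# Under-flags: a contextual taunt containing no blocklisted word slips through.
print(naive_flag("Hey, you'd look really good in lipstick."))  # False
```

  The second case is exactly the kind of combination-of-terms judgment that, as Dinakar says, requires reading a message multinomially rather than word by word.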

  The solution came in the form of latent Dirichlet allocation (LDA), a complex language-processing model introduced in 2003 that can discover topics from within the mess of infinite arrangements of words the human brain spews forth. LDA is multinomial, like our brains, and works with what Dinakar calls “that bag-of-associations thing.” Dinakar began with a simple assumption about the bag of associations he was looking for: “If we try to detect power differentials between people, we can begin to weed out cases of bullying.”
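
  For readers curious what such a model looks like in practice, here is a minimal topic-modelling sketch using scikit-learn's LatentDirichletAllocation. It is a hedged illustration rather than Dinakar's actual pipeline: the four example messages, the choice of two topics, and the library itself are all assumptions made for this sketch.

```python
# Minimal LDA sketch; the corpus, topic count, and use of scikit-learn are
# assumptions for illustration. The book does not describe Dinakar's real setup.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "nobody likes you at school everyone laughs at you",
    "you would look so good in lipstick and a dress",
    "great game last night see you at practice tomorrow",
    "why do you even show up you are such a loser",
]

# LDA works on word counts: each document becomes a bag of word frequencies.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(documents)

# Each topic is a multinomial distribution over the vocabulary; each document
# is modelled as a mixture of topics ("that bag-of-associations thing").
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)

words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_words = [words[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {top_words}")
print(doc_topics.round(2))  # per-document topic mixture
```

  The point of the exercise is Dinakar's: messages about humiliation and power should load onto a different mixture of topics than messages about tomorrow's practice, even when no single forbidden word appears.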

  The work was barely under way when Dinakar received a letter from the Executive Office of the President of the United States. Would he like to come to a summit in D.C.? Yes, he said.

  There he met Aneesh Chopra—the inaugural chief technology officer of the United States—who was chairing the panel on cyberbullying that Dinakar had been invited to join. Three years later, Dinakar has the White House’s backing on a new project he’s calling the National Helpline, a combined governmental and NGO effort that means to, for the first time, begin dealing with the billions of desperate messages in bottles that teens are throwing online. The National Helpline incorporates artificial intelligence to analyze problems that are texted in, then produces resources and advice specific to each problem. It is one of the most humane nonhuman systems yet constructed. The effort is fueled in part by Dinakar’s frustration with the limits of traditional psychiatry, which is “mostly based on single-subject studies and is often very retrospective. They come up with all these umbrella terms that are very loosely defined. And there’s no data anywhere. There is no data anywhere. I think it’s a very peculiar field.”

  By contrast, Dinakar’s National Helpline—in addition to providing its automated and tailored advice—will amass an enormous amount of data, which will be stored and analyzed in a kind of e–health bank. “We’ll be analyzing every instance with such granularity,” says Dinakar. “And hopefully this will help psychiatry to become a much more hard science. Think about it. This is such an unexplored area. . . . We can mine photos of depressed people and get information on depression in a way that no one at any other point in history could have done.”

  The reduction of our personal lives to mere data does run the risk of collapsing things into a Big Brother scenario, with algorithms like Dinakar’s scouring the Internet for “unfriendly” behavior and dishing out “correction” in one form or another.9

  One Carnegie Mellon researcher, Alessandro Acquisti, has shown that in some cases facial recognition software can analyze a photo and within thirty seconds deliver that person’s Social Security number. Combine this with algorithms like Dinakar’s and perhaps I could ascertain a person’s emotional issues after snapping his or her photo on the street. The privacy issues that plague our online confessions are something Dinakar is aware of, but he leaves policy to the policy makers. “I don’t have an answer about that,” he told me. “I guess it all depends on how we use this technology. But I don’t have an answer as to how that should be.”10 I don’t think any of us do, really. In our rush toward confession and connection—all those happy status updates and geo-tagged photo uploads—rarely do we consider how thorough a “confession” we’re really making. Nor do we consider to what authority we’re doing the confessing. This is because the means of confession—the technology itself—is so very amiable. Dinakar is building a more welcoming online world, and it’s a good thing he is. But we need to remain critical as we give over so much of ourselves to algorithmic management.

  • • • • •

  In a sense, Dinakar and others at the Media Lab are still pursuing Alan Turing’s dream. “I want to compute for empathy,” Dinakar told me as our time together wound down. “I don’t want to compute for banning anyone. I just want . . . I want the world to be a less lonely place.” Of course, for such affective computing to work the way its designers intend, we must be prepared to give ourselves over to its care.

  How far would such handling by algorithms go? How cared for shall we be? “I myself can sometimes think in a very reactive way,” says Dinakar. He imagines that, one day, technologies like the software he’s working on could help us manage all kinds of baser instincts. “I’d like it if my computer read my e-mail and told me about the consequences when I hit a Send button. I would like a computer that would tell me to take five deep breaths. A technology that could make me more self-aware.”

  A part of me has a knee-jerk reaction against the management Dinakar is describing. Do we want to abstract, monitor, quantify ourselves so?

  Then I think again about the case of Amanda Todd and whether such online watchdogs might have helped her. Only one in six suicides is accompanied by an actual suicide note, but it’s estimated that three-quarters of suicide attempts are preceded by some warning signs—signs we hapless humans fail to act on. Sometimes the signs are explicit: Tyler Clementi updated his Facebook page to read, “Jumping off the GW Bridge sorry.” Sometimes the messages are more obscure: Amanda Todd’s video merely suggests deep depression. How much can be done when those warning signs are issued in the empathic vacuum of the Internet? Are we not obliged to try to humanize that which processes so much of our humanity? Dinakar’s software could help those who reach out directly to it—but here’s the rub: When we go online, we commit ourselves to the care of online mechanisms. Digital Band-Aids for digital wounds.

  We feed ourselves into machines, hoping some algorithm will digest the mess that is our experience into something legible, something more meaningful than the “bag of associations” we fear we are. Nor do the details of our lives need to be drawn from us by force. We do all the work ourselves.

  We all of us love to broadcast, to call ourselves into existence against the obliterating silence that would otherwise dominate so much of our lives. Perhaps teenage girls offer the ultimate example, projecting their avatars insistently into social media landscapes with an army of selfies, those ubiquitous self-portraits taken from a phone held at arm’s length; the pose—often pouting—is a mainstay of Facebook (one that sociologist Ben Agger has called “the male gaze gone viral”). But as Nora Young points out in her book The Virtual Self, fervent self-documentation extends far beyond the problematic vanities of teenage girls. Some of us wear devices that track our movements and sleep patterns, then post results on Web sites devoted to constant comparison; others share their sexual encounters and exercise patterns; we “check in” to locations using GPS-enabled services like Foursquare.com; we publish our minute-by-minute musings, post images of our meals and cocktails before consuming them, as devotedly as others say grace. Today, when we attend to our technologies, we elect to divulge information, free of charge and all day long. We sing our songs to the descendants of Alan Turing’s machines, now designed to consume not merely neutral computations, but the triumphs, tragedies, and minutiae of lived experience—we deliver children opening their Christmas presents; middle-aged men ranting from their La-Z-Boys; lavishly choreographed wedding proposals.

  There’s a basic pleasure in accounting for a life that, in reality, is always somewhat inchoate. Young discusses the “gold star” aspect of that moment when we broadcast ourselves: “Self-tracking is . . . revelatory, and consequently, for some of us at least, motivating.” In reality, life outside of orderly institutions like schools, jobs, and prisons is lacking in “gold star” moments; it passes by in a not-so-dignified way, and nobody tells us whether we’re getting it right or wrong. But publish your experience online and an institutional approval system rises to meet it—your photo is “liked,” your status is gilded with commentary. It’s even a way to gain some sense of immortality, since online publishing creates a lasting record, a living scrapbook. This furthers our enjoyable sense of an ordered life. We become consistent, we are approved, we are a known and sanctioned quantity.

  If a good life, today, is a recorded life, then a great life is a famous one. Yalda T. Uhls, a researcher at UCLA’s Children’s Digital Media Center, delivered a conference presentation in the spring of 2013 called “Look at ME,” in which she analyzed the most popular TV shows for tween audiences from 1967 to 2007. The post-Internet television content (typified by American Idol and Hannah Montana) had swerved dramatically from family-oriented shows like Happy Days in previous decades. “Community feeling” had been a dominant theme in content from 1967 to 1997; then, in the final decade leading up to 2007, fame became an overwhelming focus (it was one of the least important values in tween television in earlier years). Uhls points out that the most significant environmental change in that final decade was the advent of the Internet and, more to the point, platforms such as YouTube and Facebook, which “encourage broadcasting yourself and sharing aspects of your life to people beyond your face-to-face community. . . . In other words, becoming famous.”11 One recent survey of three thousand British parents confirmed this position when it found that the top three job aspirations of children today are sportsman, pop star, and actor. Twenty-five years ago, the top three aspirations were teacher, banker, and doctor.

  If the glory of fame has indeed trumped humbler ambitions, then the ethos of YouTube is an ideal medium for the message. Its tantalizing tagline: “Broadcast Yourself.”

  • • • • •

  We feel a strange duality when watching a YouTube video like Amanda Todd’s. The video is at once deeply private and unabashedly public. This duality seems familiar, though: The classic handwritten diary, secured perhaps with a feeble lock and key, shoved to the bottom of the underwear drawer, suggests an abhorrence of the casual, uninvited reader; but isn’t there also a secret hope that those confessions will be read by an idealized interloper? We desire both protection and revelation for our soul’s utterance. W. H. Auden wrote, “The image of myself which I try to create in my own mind in order that I may love myself is very different from the image which I try to create in the minds of others in order that they may love me.” But broadcast videos like Amanda Todd’s attempt to collapse those two categories. Bending both inward and outward, they confuse the stylized public persona and the raw private confession.

  What, then, is the material difference between making our confessions online, to the bewildering crowds of comment makers, and making our confessions in the calm and private cloister of a paper diary? What absence have we lost?

  When we make our confessions online, we abandon the powerful workshop of the lone mind, where we puzzle through the mysteries of our own existence without reference to the demands of an often ruthless public.

  Our ideas wilt when exposed to scrutiny too early—and that includes our ideas about ourselves. But we almost never remember that. I know that in my own life, and in the lives of my friends, it often seems natural, now, to reach for a broadcasting tool when anything momentous wells up. The first time I climbed the height of the Eiffel Tower I was alone, and when at last I reached the summit and looked out during a sunset at that bronzed and ancient city, my first instinct was not to take in the glory of it all, but to turn to someone next to me and say, “Isn’t it awesome?” But I had come alone. So I texted my boyfriend, long distance, because the experience wouldn’t be real until I had shared it, confessed my “status.”

  Young concludes The Virtual Self by asking us to recall that much of life is not “trackable,” that we must be open to “that which cannot be articulated in an objective manner or reduced to statistics.” It’s a caution worth heeding. The idea that technology must always be a way of opening up the world to us, of making our lives richer and never poorer, is a catastrophic one. But the most insidious aspect of this trap is the way online technologies encourage confession while simultaneously alienating the confessor. I wish, for example, I had just looked out at Paris and left it at that. When I gave in to “sharing” the experience, I fumbled and dropped the unaccountable joy that life was offering up. Looking back, it seems obvious that efficient communication is not the ultimate goal of human experience.

  Yet everywhere we seem convinced that falling trees do not make sounds unless someone (other than ourselves) can hear them. There was that one friend of mine who announced his mother’s cancer diagnosis on his Facebook wall, which shocked me and seemed utterly natural to others. There was another friend who posted online the story of his boyfriend dying of AIDS (he had refused to take his meds). It’s easy to say this is just about “shifting baselines.” But adopting a culture of public confession is more than that: It marks the devaluing of that solitary gift—reverie.

 
