The Numerati


by Stephen Baker


  Hermansen built a name-identifying company, Language Analysis Systems (LAS). Its immense mosaic of names, as they’re organized and spelled from one culture to another, embodies the labor of anthropologists and linguists, not computer scientists. But that expertise gets plugged into computers. In spring 2006, Hermansen sold LAS to IBM. Now it works closely with Jeff Jonas’s identity detection unit. Despite progress, Hermansen says, untangling global names will continue to confound us for generations. “My grandchildren could be working on this.”

  This doesn’t mean that techniques like the Next Friend analysis won’t be useful inside the NSA. But those multidisciplinary teams Schack talks about will need lots of guidance from the intelligence officer. With luck, that liberal arts grad will have mastered a foreign language or two. And a strong dose of anthropology can’t hurt when it comes to understanding the cultural biases baked into the selection of certain data.

  In the meantime, many tools and technologies that are being honed for national security may find receptive markets closer to home. It makes sense, doesn’t it? The intelligence agencies may have spotty data in their foreign files, with all sorts of tangled and duplicate names. The files that we inhabit, by contrast, are brimming with useful and intelligible data. Our records at work, for example, have clear names and schedules, and everyone works on the same e-mail system. Consider how Next Friend analysis could come in handy. Say a colleague leaves a job at your company. Who’s the person most likely to be affected by his departure? Will that person be the next to jump ship? Managers can intervene. On a darker note, what if a colleague is caught selling confidential information? Could a Next Friend analysis point to others who should be monitored? Scary perhaps, but look at the bright side: once the Numerati master these techniques on us, maybe they can use them to catch terrorists.

  BACK IN LAS VEGAS, I try that same quip on Jeff Jonas. He doesn’t find even a hint of humor in it. The way he sees it, the technology to monitor us, and to predict our behavior, will continue to march ahead endlessly. “Everybody is trying to compete,” he says, “whether it’s governments competing against each other, companies competing against each other, or governments competing against [terrorist] threats. And when you compete, you want to have the best human resources, the best minds, the best tools, and the best data. You always want more data, better tools, smarter people.” He leans forward. “When’s that going to end?” He answers his own question. “It’s not,” he says. And the machines that gather our data will continue to proliferate, with wireless sensors and cameras following our movements. This could mean casino-style surveillance in much of the rest of the world. “I’m thinking, man, when is this going to slow down?” Jonas says. “Is there any throttling mechanism? Whoa, whoa, whoa!”

  The throttling mechanisms that he’s talking about—locks and shields to defend our privacy—often don’t appear in the original technology. They’re tacked on later as fixes. Why so? The nature of innovation is first to create the breakthrough service or product. Control mechanisms, like privacy filters, come afterward. Jonas himself, he admits, was late to grasp the implications of his own inventions. There he was, building the identity systems that could become pillars of global surveillance and, he says, “Honestly, four or five years ago, I didn’t even know what the word privacy meant!”

  Here’s the prime danger he sees. Imagine that government data miners sift through the details of our lives but fail to uncover terror cells. Chances are they’ll push for more data, making the case that collecting it is a matter of national security. Mishandled or misunderstood, this hunt threatens to ransack our bedrooms and medicine cabinets, ripping away what privacy we still hold on to. It can implicate innocent people—“false positives” in data-mining lingo. Statistics, after all, point only to probabilities, not to truth.

  The damage can spread even further. Let’s say that terror sleuths, while failing to find terrorists, come across other interesting patterns in our data. Maybe one of us looks like a tax cheat. Another belongs to an informal e-mail network that includes pornographers. What then? Will privacy advocates and civil libertarians dare to defend suspected pedophiles? Or imagine that your accountant quietly runs an illegal side business as a bookie. Suddenly, says David Evans, CEO of Clairvoyance Corp., a data analysis company in Pittsburgh, “you get analyzed for any other data that support the relationship with the bookie. Where did that money come from? Were there withdrawals in cash? These are the statistics that could be used to create a case against you.” This, says Jonas, is why we need technology to protect our identities and policies that safeguard our rights. “We’re going to need some smart people in policy,” he says. Without sound oversight, we’re liable to get the worst of both worlds—a surveillance society that still fails to make us safe.

  Jonas says he didn’t understand these risks when he was building NORA. Then he began to see how, once the information about us is compiled in databases, it could be used in different and devious ways. He calls it “repurposing.” This late awakening has turned Jeff Jonas into a champion of privacy. He’s built a strong privacy-protection supplement for NORA. He originally named it ANNA, a play on “anonymity.” Now that it’s gone corporate, it’s been redubbed IBM Anonymous Resolution. It converts each identity into a long string of letters and numbers known as a “one-way hash.” Governments and companies can then search for connections—combing through the passenger list of a cruise ship, looking for the hashes of suspected terrorists. This system reduces the risk of data leaks. More important, no one sees the names until a match appears and the company gets a formal request to reveal the identities. With Jonas’s system, our most sensitive data is exposed to the search, but not attached to a person’s identity. But even as some of the Numerati come up with schemes to protect us, others are racing ahead in the data hunt for the next would-be bombers. They’re bound to learn lots about us in the process, more than most of us are ready to share. “We technologists had better spend a little more time thinking about what we’re creating,” Jonas says.
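  The hash-matching idea behind Jonas’s system can be sketched in a few lines. The code below is an illustrative toy, not IBM Anonymous Resolution: it assumes SHA-256 digests and a crude lowercase normalization, and the names are invented. Real deployments add salting and far subtler name matching.

```python
import hashlib

def one_way_hash(name: str) -> str:
    """Reduce an identity to an irreversible digest."""
    normalized = name.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Each party shares only digests, never names.
watchlist_hashes = {one_way_hash(n) for n in ["J. Smith", "A. Jones"]}
passenger_list = ["M. Garcia", "A. Jones", "K. Chen"]

# Matching happens on the digests; the clear names stay hidden
# until a match triggers a formal request to reveal them.
matches = [p for p in passenger_list if one_way_hash(p) in watchlist_hashes]
print(matches)  # -> ['A. Jones']
```

  Because the digest can’t be run backward, a leaked watchlist or passenger file reveals strings of gibberish rather than identities—which is the whole point.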

  Chapter 6

  * * *

  Patient

  I REMEMBER taking my mother, in her last year, to the doctor. One of us had to accompany her, to write down the new dosages of the eight or nine medicines she was taking. She was frail and a bit forgetful, and much too busy taking care of her blind 95-year-old husband to think much about herself. Her doctor asked questions. Does that hurt? Well, yes. And that? Um. Yes, a little. He scribbled these hazy answers in his pad.

  “Are you having trouble sleeping?” he asked.

  “No.” She was pleased to provide a definitive answer.

  I jumped in. “Mom, weren’t you up making cocoa in the middle of the night?” Well, she allowed, some nights were better than others. The doctor kept scribbling. I was already researching this book, and I remember wondering, What kind of data is this?

  Miserable data is Eric Dishman’s answer. From his research lab at Intel, just outside Portland, Oregon, Dishman is working feverishly to replace the fog, forgetfulness, and wishful thinking of human memory with minute-by-minute updates pouring in from electronic sensors. This 40-year-old anthropologist has a broad face, with a burr of dark hair on top and a smile stretching across the bottom. He’s an evangelist for a new approach to health care because he sees the status quo as untenable. His words spill out in torrents: “I took an unpaid leave to go deal with my wife’s grandmother, who fell and died from the fall,” he tells me, as five members of his team and I chow on Chinese takeout in a conference room. “Actually, she died from horrendous medical errors that occurred,” he says. “Then my grandfather fell. He didn’t die, but we put him into a nursing home in North Carolina. I’m not trying to be egocentric, but I’m a world-known expert in this field, and I couldn’t stop this from happening. I work at Intel, I make a decent living, I know the technology, I know every CEO in this industry, I can call senators on these issues.” And yet people in his own family fell victim to the very healthcare disasters he works to prevent. “I lose one relative, and another gets forced to move into a nursing home,” he says. “What happens if you don’t speak English, if you don’t have the kind of access that someone like me has? It’s terrifying.”

  He thinks that over the next generation, many of us will surround ourselves with the kinds of networked gadgets he and his team are building and testing. These machines will busy themselves with far more than measuring people’s pulse and counting the pills they take, which is what today’s state-of-the-art monitors can do. Dishman sees sensors eventually recording and building statistical models of almost every aspect of our behavior. They’ll track our pathways in the house, the rhythm of our gait. They’ll diagram our thrashing in bed and chart our nightly trips to the bathroom—perhaps keeping tabs on how much time we spend in there. Some of these gadgets will even measure the pause before we recognize a familiar voice on the phone.

  A surveillance society gone haywire? Personal privacy in tatters? Not at all, says Dishman. He predicts that many of us will deploy these sensors to spy on ourselves in order to live healthier, happier, and longer lives. We’ll do it, in other words, because we choose to. And we’ll come to learn about this technology, insidious as this may sound, by trying it out on the old people we love—especially those who live far away. Take it from me. As Dishman walks me through his lab, a year after the deaths of my parents (and only ten miles from their home), I look at each new sensor and think, “Boy, we could have used one of those.”

  Here’s an example. At that checkup I went to with my mother, the doctor told her to weigh herself every day and to keep a daily log of the results. This was important, he said, because a dramatic rise in her weight might indicate that her weakening heart was failing to pump fluids. He didn’t go into the details, but untreated, those fluids would fill her lungs and kill her. Later that afternoon, I bought an electronic scale. I knew, even as I showed it to her, that this plan was fatally flawed for at least three reasons: She wouldn’t always remember to weigh herself. She would find it very hard to tap the scale with enough authority to activate it. And even if she succeeded, she’d have a tough time reading the electronic display, which even I squinted to see. In short, even though the doctor needed updates and my mother was willing to furnish them, crucial sensor and recording components—my mother’s eyes and her memory—were not up to the job. (For my father, blind and increasingly immobile, daily weighing was nearly hopeless.)

  Dishman guides me toward a small tiled section of kitchen flooring. It’s a prototype, which he calls the “magic carpet.” Under each of the beige tiles are webs of weight sensors. If my mother’s kitchen had been equipped with a magic carpet, her every visit to the kitchen would have dispatched details on her changing weight along a wireless path from the tiles to her computer, and from there to the doctor’s office. From this thread of data, the doctor would not only be able to monitor her weight, but equally important, he could receive an alert if one day she failed to walk into the kitchen. That’s a fact worth knowing.

  I can just imagine my parents laughing at the extravagance of a magic carpet in their kitchen. It sounds like something from the Jetsons. Yet in the past half-century, while medical costs have skyrocketed, electronics have gone the other way. Back in the 1960s, when my parents were raising us, our doctor paid affordable house calls if we had the sniffles—and NASA spent millions of dollars for computers no more powerful than the battered cell phone in my pocket. Consider how dramatically things have changed. For the price of a single bottle of my mother’s heart medicine—$80—she could have bought a wireless network for their home. (Believe me, I encouraged her to. Their dial-up modem drove me batty.) She could have replaced her buggy old computer for the cost of one MRI. Near the end, my parents were spending about $180 a day for home nursing. For just a fraction of their monthly nursing bill, they could have thrown enough blinking sensors and networking gizmos into their house to record and transmit every step, bite, breath, word, and heartbeat in their Portland house.

  But who would have noticed this river of data? My parents, like many of us, had a hard time getting responses to one of the simplest and most definitive alerts imaginable: a phone call to the doctor’s office. If doctors are so understaffed that they struggle to return phone calls, I ask Dishman, how are they ever going to interpret data pouring in from magic carpets and countless other devices? “That’s precisely the point!” he says. The doctors are too busy. The gadgets by themselves don’t help much. It will be up to the Numerati to pore over the patterns of movements and speech and social interactions and then figure out what they mean. Only good math can sift through these floods of nearly meaningless data to provide doctors with specific alerts. This isn’t easy. In one Oregon study, people’s beds were wired to monitor their nightly movements and weight. One woman, researchers were startled to see, gained eight pounds between bedtime and breakfast. A dangerous accumulation of fluids? Time to call an ambulance? No. Her little dog had jumped on the bed and slept with her. Culling the pugs and corgis from the data will be up to the Numerati.
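  The baseline-and-anomaly math underneath such an alert can be illustrated with a toy sketch. This is a simplification of my own, not Intel’s algorithm: flag any nightly reading that strays several standard deviations from the sleeper’s recent history. The readings are invented. Notice that a crude rule like this flags the eight-pound dog just as loudly as a dangerous buildup of fluids; telling the two apart is exactly the hard part.

```python
from statistics import mean, stdev

# Hypothetical nightly bed-sensor weight readings, in pounds.
baseline = [132.0, 131.5, 132.4, 131.8, 132.1, 131.9, 132.2]

def is_anomaly(reading, history, threshold=3.0):
    """Flag a reading that strays more than `threshold` standard
    deviations from the recent baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(reading - mu) > threshold * sigma

print(is_anomaly(140.2, baseline))  # True: the night the dog slept over
print(is_anomaly(132.3, baseline))  # False: ordinary variation
```

  Distinguishing the dog from heart failure takes more context than one sensor can supply, which is why the raw alerts are only the starting point of the analysis.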

  Even the simplest of these algorithms must be customized. For some invalids, for example, it’s a red-light alert if they’re out of bed. Maybe they’ve fallen or are teetering in the hallway or fiddling with the stove. For healthier patients, it’s the failure to climb out of bed at the usual time that spells potential trouble.
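  In code, that customization might amount to little more than a per-patient rule table. The sketch below is hypothetical; the profiles, rules, and times are invented for illustration.

```python
from datetime import time

# Hypothetical per-patient alert rules; a real system would be far richer.
profiles = {
    "frail":   {"alert_when": "out_of_bed"},  # any exit may mean a fall
    "healthy": {"alert_when": "not_up_by", "deadline": time(9, 0)},
}

def check(profile, out_of_bed, now):
    """Apply a patient's customized rule to the current bed status."""
    rule = profiles[profile]
    if rule["alert_when"] == "out_of_bed":
        return out_of_bed
    if rule["alert_when"] == "not_up_by":
        return (not out_of_bed) and now >= rule["deadline"]
    return False

print(check("frail", True, time(3, 0)))      # True: an invalid up at 3 a.m.
print(check("healthy", False, time(10, 0)))  # True: no sign of rising by 10
```

  The same sensor reading—an empty bed—means opposite things for the two patients, which is why even the simplest algorithm has to start from a profile.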

  This analysis is still in its infancy. Think back to the Internet in the mid-1990s. As we learned to send e-mails and call up Web pages, we were creating data. But it took a few years for data-crunching companies like Tacoda, Umbria, and Google to learn how to analyze our clicks and search queries and blog posts, and build businesses around them. Dishman’s job now is to entice us to use his sensors. Only by hooking us up can he generate the streams of data that the Numerati feast on.

  The gadgets won’t make their way into many of our homes until they pass important tests. They must be easy to use and provide decent service, while also protecting at least a bit of our privacy. If these machines create confusion and frustration, they’ll wind up stacked in a closet, gathering dust, like that digital scale I bought. And if users have reason to fear snooping, from marketers, scam artists, or insurance companies, they’ll likely pull the plug. These are the challenges ahead for the electronics and software companies, such as Intel, Microsoft, and Google, that are rushing into the medical business.

  Dishman sees this march of medical monitors as inevitable. Aging societies around the world face exploding healthcare costs, especially as the jumbo generation born after World War II starts retiring in droves. It creates a market for automation, which Dishman spotted long ago. He worked to develop the science in the 1990s as part of a start-up financed by Paul Allen, the cofounder of Microsoft. But the key to getting these monitors into hundreds of millions of homes was to harness the power and reach of a global computing giant. He knocked on doors throughout the tech world, making the case that home computers would become, among many other things, home nursing stations. But those companies, he says, fretted about saddling their youthful brands with geriatrics. Intel finally relented. Dishman launched the home health division with just one colleague in 2001. Two years later, they issued a press release about predicting Alzheimer’s disease. The public response was immense. It was led by people just like me, Dishman says, who were avid for technology that could keep an eye on their elderly parents. Since then, his division has opened a research branch in Ireland and has carried out tests in more than 1,000 homes in 20 countries. He runs a national nonprofit for home health care, which involves 500 companies and universities.

  This push to develop electronics isn’t just a matter of replacing doctors and nurses with machines or using digital readings to supplement our faulty memories. Constant monitoring is bound to change the very nature of health care, eventually giving each one of us the kind of continuous health surveillance historically reserved for VIPs, such as vice presidents with heart disease, billionaires, and astronauts. This change, says Dishman, shifts the focus from after to before, from crisis response to prevention. If the Numerati get it right, they’ll note changes in our patterns of behavior long before we fall ill. They’ll know our typical daily routines when we’re in good health. Then, when they detect changes in our activities, they’ll figure out what we’re coming down with and start treating our maladies before we get them—or at least before we perceive them.

  In many ways, these promises of preventive care echo others coming from genomics laboratories, another growing empire for the Numerati. In universities and pharmaceutical labs around the world, computer scientists and computational biologists are designing algorithms to sift through billions of gene sequences, looking for links between certain genetic markers and diseases. The goal is to help us sidestep the diseases we’re most likely to contract and to provide each one of us with a cabinet of personalized medicines. Each one should include just the right dosage and the ideal mix of molecules for our bodies. Between these two branches of research, genetic and behavioral, we’re being parsed, inside and out. Even the language of the two fields is similar. In a nod to geneticists, Dishman and his team are working to catalog what they call our “behavioral markers.” The math is also about the same. Whether they’re scrutinizing our strands of DNA or our nightly trips to the bathroom, statisticians are searching for norms, correlations, and anomalies. Dishman prefers his behavioral approach, in part because the market’s less crowded. “There are a zillion people following biology,” he says, “and too few looking at behavior.” His gadgets also have an edge because they can provide basic alerts from day one. The technology indicating whether a person gets out of bed, for example, isn’t much more complicated than the sensor that automatically opens a supermarket door. But that nugget of information is valuable. Once we start installing these sensors, and the electronics companies get their foot in the door, the experts can start refining the analysis from simple alerts to sophisticated predictions—perhaps preparing us for the onset of Parkinson’s disease or Alzheimer’s.

 
