Solomon's Code


by Olaf Groth


  Both the “intelligent intuition” of the coaches and Wilkinson’s snap back into a state of flow illustrate how the subtlest changes in environment can recalibrate the performance of mind and body. If an athlete doesn’t feel comfortable in an environment, he or she will switch back to the emotional cortex and back to self-awareness, perceiving the world primarily through a lens of negative experiences and a threatened state, Neal explains. We can’t change that natural reaction, but we can learn to identify the situations that activate this response and, with training, refocus those thought patterns through a physical activity—say, stand-up comedy.

  Yes, an occasional piece of Neal’s training regimen takes place not in the gym or on the field, but in a comedy club. For Olympians who have never been on that big a stage, Neal and his colleagues create a similar environment by giving athletes an hour to prepare a comedy routine. They’re terrified; the idea of performing comedy for a roomful of people gives them a real sense of fear, but it also provides the coaches an opportunity to help them relax when put in that context. They begin to realize they can go into a threatened state, but they don’t need to fail. Over the course of their training, Neal says, they eventually get to a point where they want to do standup because they enjoy it. “And then we can use heart and other physiological data to show them, in hard data, how their performance is improving,” he says. “Heart rate variance data shows how their physiological response is changing, and once they see it they believe it even more.”

  This intricate, complex, and intractable relationship between our minds, bodies, and the world around us makes up human reality. Alan Jasanoff, a professor of biological engineering at the Massachusetts Institute of Technology, calls this the “cerebral mystique”—the idea that the connections between stimuli, emotions, and cognition are not discrete and separable functions.§§ Our brain should not be viewed as a computer analog, Jasanoff suggests, largely because what makes us human is the complex interplay between physical sensations, emotions, and cognition. Our bodies are part of cognition, so our physical environment, social context, and life experiences shape our self-perception and identity. From the early stages of our childhood through the last moments of our existence, we receive and integrate feedback from the people and institutions around us, and those physical and emotional stimuli become part of our cognition, our experience, and ultimately our values.

  Our environments teach us about right and wrong behaviors, good and bad performance, and standards of beauty and aesthetics. They shape who we are and how we move through the world. Social control mechanisms in our communities and organizations signal to us the acceptable norms for interaction. And in all these processes, our own notion of our identity meets, melds, and clashes with the perceptions of others we encounter. We shape and reshape our sense of self through a lifelong process, struggling to balance our power, values, and trust with one another and the institutions with which we interact.

  In the past, we retained a certain share of power over how we represented ourselves in the public sphere, but not always for the better. Politicians would portray themselves as great and visionary leaders, while we as voters sought to find out the truth about the men and women who wished to lead us—their integrity and flaws helping us gauge whether they were fit for office. But we also accepted a certain amount of ambiguity in the formation of character and public identity, especially for candidates we supported. We chose to forgive their errors or justify their successes as outcomes of complex, multifactor pictures. And we accepted them as part of a subjective narrative, one that appealed to us even if its parts were less than pure truth.

  However, the prevalence of personal data and artificial intelligence will change how we craft notions of self-image and self-awareness in the years to come. Smart machines will make us more objectively measurable, much like an athlete whose physiological, neurological, and environmental signals are gathered, read, and acted upon by coaches such as Neal. The emerging algorithms, deep neural networks, and other sophisticated AI systems can process dozens of data streams from a multitude of areas in our lives, assessing our overall performance, profile, or character traits. They might not always be right. Data can be biased, and people can change, but already these systems capture intricate facets of our existence and our environment that we don’t have the type of intelligence to fully process.

  Consider, for example, the ability of IBM Watson to conduct personality assessments based on your Twitter feeds, a capability IBM designed with the help of a panel of psychologists.¶¶ While the system, at the time of this writing, still had some kinks to work out, such as distinguishing between an original post and a retweet, the algorithms could conduct text and sentiment analysis of the feeds and deduce personality profiles. Already, human resources departments have deployed similar systems to detect happiness or discontent among workers who answer open-ended questions on annual employee surveys, according to the Society for Human Resource Management.## It’s not much of a stretch to imagine such systems applied across all of a person’s social media feeds and data streams, and from that picture Watson might assign labels such as introverted or extroverted, passionate or aggressive, open-minded or closed to the ideas of others. It’s an astoundingly simple step toward providing us with an external check, a mirror to see how other people might perceive us.
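
  To make the personality-profiling idea concrete, the following is a minimal, hypothetical sketch in Python of how keyword-based text analysis might turn a feed of posts into rough trait labels. It is not IBM’s Watson Personality Insights pipeline; the trait names, word lists, and thresholds are invented purely for illustration.

```python
# Illustrative sketch only: a toy trait scorer over social media posts.
# This is NOT IBM Watson Personality Insights; the word lists, trait names,
# and thresholds below are invented for demonstration.

from collections import Counter
import re

# Hypothetical lexicons pairing traits with indicative words.
TRAIT_LEXICONS = {
    "extroverted": {"party", "friends", "excited", "we", "together", "fun"},
    "introverted": {"quiet", "alone", "reading", "home", "thinking"},
    "open_minded": {"curious", "new", "learn", "explore", "perhaps", "maybe"},
    "aggressive": {"fight", "hate", "never", "wrong", "idiot"},
}

def tokenize(text: str) -> list[str]:
    """Lowercase the text and split it into simple word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def score_posts(posts: list[str]) -> dict[str, float]:
    """Return a rough per-trait score: matching words per 100 tokens."""
    tokens = [tok for post in posts for tok in tokenize(post)]
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return {
        trait: 100.0 * sum(counts[word] for word in lexicon) / total
        for trait, lexicon in TRAIT_LEXICONS.items()
    }

def label_profile(scores: dict[str, float], threshold: float = 1.0) -> list[str]:
    """Turn scores into coarse labels, keeping only traits above a cutoff."""
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [trait for trait, value in ranked if value >= threshold]

if __name__ == "__main__":
    feed = [
        "So excited to see all my friends at the party tonight!",
        "Quiet Sunday at home, reading and thinking.",
        "Curious to learn something new this week, maybe pottery?",
    ]
    scores = score_posts(feed)
    print(scores)
    print("Labels:", label_profile(scores))
```

  A real system would rely on trained language models rather than hand-built word lists, but the basic shape, signals in, profile labels out, is the same.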

  This raises plenty of opportunities and risks, not the least of which is how accurate a view of a person’s true persona such a system might provide. We heavily self-curate our portrayal of ourselves on social media—in part because most of our everyday actions and communications haven’t been publicly digitized yet, but also because we present ourselves differently from what we really are. At one point, while working as a vice president and senior fellow at Intel, Genevieve Bell and her team suggested the idea of integrating social media into set-top television boxes, so people could quickly and easily share what they were watching rather than typing it into their phones or laptops. In testing, users revolted because they regularly lied about what they were really doing, says Bell, who’s developing the new Autonomy, Agency and Assurance (3A) Institute at Australian National University. “Human beings spend most of their time lying, not in the big nasty sinful ways, but the ones that keep the peace and keep things ticking,” she says. “Systems don’t know how to do that, and humans don’t know how to live in a world without that. . . . If everybody just lived out their true impulses without control screens or masks, the world would be a terrible place.”

  Of course, as almost anyone on Twitter knows, people can still hide behind incomplete or fragmented digital identities and use that as a shield to unleash their basest impulses. Digital trolling has become commonplace over the past decade, spawning levels of vitriol that rarely occur in face-to-face interactions. The digital stalking, intimidation, and hate won’t dissipate any time soon, especially not in societies and on platforms that favor free speech. Yet, as the digitization of our physical world increases, we might see a ratcheting up of the tension between digital and real-life personas. Data streams will become more holistic, multifaceted, and comprehensive. Arguably, then, the picture of who we are will get triangulated, checked, and verified. How will this impact the ways we perceive ourselves and the ways others perceive us? How will this new blend of data-based assessments and human masks shape our consciousness? How much control will we still have over our self-awareness and self-image when we are constantly forced to look into the mirror of data-driven authenticity? When digitally objective historical data tells one story, but our aspirations tell another, how much influence can we exert over how we evolve and who we become? Might the evolving society then become more rational and honest, less prone to ego and misdirection, but also no longer a place for free self-definition?

  Already, thousands of people rely on the algorithmic matchmaking “wisdom” of online dating sites to find partners. At many of them, AI-driven platforms help cut through the self-promotion and clutter in personal profiles to home in on a more likely match. These sorts of systems might help us adjust our expectations of ourselves and others, so we’re disappointed less often and develop happier relationships. We might find partners who better fit our true nature, rather than hooking up with the people our egos and social conditioning guide us toward. Given the mediocre success rate of the self-guided approach to romantic love, a little help from an objective source might not be a bad thing, as those who grow up in arranged-marriage cultures sometimes argue.

  Yet, all this assumes that the AI platforms involved in the process get it right enough to build trust in the system and, by extension, between us as individual citizens of our communities. These systems and the people who develop them have yet to fully address concerns about bad data, human and algorithmic biases, and the tremendous complexity and nonlinearity of the human personality. The human personality is more like a piece of art than a piece of engineering, but computer scientists’ current training allows them to treat it only as the latter. What if the geeks get us wrong? What if the psychologists involved apply monocultural lenses to cross-cultural identity issues?

  Additionally, even when done perfectly from a code and data perspective, AI can only measure what it sees. Much of who we think we are lies in ideas we never speak, or that we keep in patterns so finely nuanced that AI can’t detect them. As Genevieve Bell notes, a small dose of self-deception can boost self-esteem, smooth our relationships with the outside world, and carry individuals and communities a long way toward success. But then, if the right balance of reality and delusion is key, how will AI capture that balance and the almost intangible timing of pivoting from one to the other? That could backfire if the AI doesn’t grasp those fine lines.

  Ultimately, a beneficially symbiotic relationship between our minds, bodies, and the environment—as well as with other forms of natural and man-made intelligence—relies on the AI agent balancing the tension along those tenuous lines. As humans, we need to maintain the power to make sure this high-wire act goes well. Otherwise our trust in each other will erode and our trust in AI will die before it can bear fruit.

  EMOTIONAL INTELLIGENCE AND TRUST

  Dr. Alison Darcy and her team refer to it as “he,” a fitting choice given just how intimate and attached “his” users can become. The character in question is Woebot, an AI-powered chatbot that provides psychological therapy services for its users. At first glance, it harkens back to the conversational banter users could have with the famous ELIZA, one of the earliest natural language processing computer programs, developed in the mid-1960s. Woebot goes considerably deeper in its engagement, though, and many users have developed a close bond with it. Their attachment might stem in part from just how lonely they say they are, a fact that surprised Darcy, a clinical psychologist at Stanford University and Woebot Labs’ founder and CEO. Even in social situations, users, especially young adults, report that they feel lonely. One colleague noted how often her adult patients would describe an empty and hollow feeling. Darcy and her team started seeing the same thing in Woebot’s generally younger clientele, who welcomed the five- to ten-minute moment of self-reflection that the mobile phone app could provide.

  Yet, Darcy stresses, they did not create Woebot to replace a professional therapist. The app merely checks in once a day with a lighthearted and friendly interaction, complete with emojis and banter. If you do say you’re feeling down, it will drop out of its jovial mode, which simulates empathy, and bring you back into the cognitive restructuring that’s at the core of cognitive behavioral therapy (CBT). Loosely speaking, CBT is based on the idea that what upsets someone is not the events in their world, but their perception of themselves and what that means about them. Thus, treating the root cause of those anxieties, a process called cognitive restructuring, requires active participation on the part of the patient. Although research shows it to be effective, only roughly half of psychotherapists employ the techniques, and fewer still do it well. A big part of the problem is keeping patients engaged, doing the homework they need to do to overwrite difficult thoughts with rational ones. “It’s a fair amount of work for the patient,” Darcy says, “but it really pays off and that’s empirically proven, so it does make sense to have a coach reminding you to do the work.”

  Woebot can only approximate a good, long-term CBT intervention and it’s not designed to replace the human counselor, but two things make it especially useful. For one, it’s available at any time, including for that midnight panic attack on Sunday. While not intended to provide emergency services, it can recognize dire situations and suggest other services, including an app that has an evidence-based track record of reducing suicidal behavior. In fact, Woebot Labs, which received an $8 million Series A funding round in March 2018, takes patient control of the app, their interactions with it, and their data so seriously that it allows users to turn off the alert words that usually trigger the app to challenge their suicidal thinking.

  The second element that makes Woebot so useful is the fact that “he” builds unusually strong bonds with “his” users, forging a working alliance with them and keeping them engaged with their therapy. While the folks at the company call Woebot “he” because they’re referring to the character it portrays, Darcy says, users also refer to it in remarkably personal terms. “They used relational adjectives,” she says. “They called it a friend.” In fact, she explains, one Stanford fellow’s study of Woebot’s “working alliance” measures found that it did not match what he’d previously seen in computer-human interactions, nor did it map onto human-to-human interactions. For her part, Darcy suggests it occupies a space in that gray area where people suspend disbelief, much in the same way we encourage kids to talk to their Elmo doll or pretend their toys are real. Yet, Woebot’s developers specifically avoid making it more lifelike or realistic. They want it to remain a distinct entity, clearly a therapeutic chatbot—something removed from simple friendship the way a psychiatrist or doctor maintains a professional demeanor and code of conduct. “A really good CBT therapist should facilitate someone else’s process,” she says, “not become part of it.”

  Similar types of trusting, empathetic, and symbio-intelligent relationships between bots and their users have proven remarkably effective in addressing a variety of mental health issues. Other psychotherapy bots, like SimSensei or ELIZA, have effectively treated US soldiers suffering from post-traumatic stress disorder (PTSD), providing a base-level therapy while also offering a degree of remove from a human therapist, with whom they might hesitate to interact. Under a watchful human eye, these AI-backed systems have penetrated something as innately personal and human as our mental health—and because of the regrettable stigma still attached to most mental health services, they might work better than human interaction in many cases.

  A young start-up that emerged from Carnegie Mellon University hopes the same symbiotic relationship between AI and humans can help address the opioid crisis that sank its teeth into American society in 2017. In some ways, Behaivior goes a half step beyond Woebot, more actively engaging in actions designed to nudge recovering addicts away from potential relapses. But it also takes a cue from Darcy and Co., making sure the control of the app remains in the hands of the user—something critical to establishing and keeping trust. Behaivior works in concert with wearable devices, such as Fitbits and mobile phones, to measure a range of factors about the recovering opioid addicts who use it. It draws on everything from geolocation to stress and other physiological metrics to identify when a user might be heading for a relapse. While still in initial testing when cofounder Ellie Gordon spoke with us, the AI-backed system detects nuanced patterns in the activity of individuals and across the range of its user base. When a factor or combination of factors signals the possibility of an imminent relapse—say, signs of stress and a return to the place where a person used to buy heroin—it triggers an intervention predefined by the user. For some, Gordon says, it’s a picture of them in the throes of drugs, looking gaunt and unhealthy. For others, it’s music. Many recovering parents select pictures of or messages from their kids. And sometimes they set up a connection to a sponsor from a twelve-step recovery program.
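
  To illustrate the kind of trigger logic the passage describes, the following is a minimal, hypothetical sketch in Python. It is not Behaivior’s actual system; the data fields, stress cutoff, and geofence radius are assumptions made for the example, and a real product would learn these patterns from data rather than hard-code them.

```python
# Illustrative sketch only: a toy rule-based relapse-risk trigger in the spirit
# of what the chapter describes. It is NOT Behaivior's actual system; the data
# fields, thresholds, and geofence radius are invented assumptions.

from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt
from typing import Optional

@dataclass
class Reading:
    heart_rate: int          # beats per minute from a wearable
    lat: float               # current latitude
    lon: float               # current longitude

@dataclass
class UserProfile:
    resting_heart_rate: int
    risky_places: list[tuple[float, float]]   # e.g., where the user once bought drugs
    intervention: str                          # predefined by the user: photo, song, sponsor call

def distance_km(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def relapse_risk(reading: Reading, profile: UserProfile) -> bool:
    """Flag risk when elevated stress coincides with proximity to a risky place."""
    stressed = reading.heart_rate > profile.resting_heart_rate * 1.3   # assumed cutoff
    near_risky_place = any(
        distance_km((reading.lat, reading.lon), place) < 0.25          # assumed 250 m geofence
        for place in profile.risky_places
    )
    return stressed and near_risky_place

def maybe_intervene(reading: Reading, profile: UserProfile) -> Optional[str]:
    """Return the user's chosen intervention if the risk rule fires, else None."""
    return profile.intervention if relapse_risk(reading, profile) else None

if __name__ == "__main__":
    user = UserProfile(resting_heart_rate=65,
                       risky_places=[(40.4406, -79.9959)],
                       intervention="show photo chosen by the user")
    now = Reading(heart_rate=92, lat=40.4410, lon=-79.9955)
    print(maybe_intervene(now, user))   # fires: elevated heart rate inside the geofence
```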

  The systems can pick up on some intriguing signals, such as cigarette usage. If users start smoking more often, Gordon explains, they’re likely regressing toward a high-craving state. Through sensors on a Fitbit or a similar device, Behaivior can measure the movement of arms and hands when users smoke. It’s not infallible, of course. A person might smoke with the hand not wearing the Fitbit, so collecting a wide array of less-than-obvious signals is critical. Recovering addicts relapse far more often than most outsiders realize, Gordon says. It’s not unheard of for addicts to overdose, go to the emergency room, get treated and discharged, call a dealer, and immediately overdose again in the bathroom of the very hospital that just treated them. Because of the relapse rate—six or seven falls aren’t uncommon before treatment sticks—Behaivior has started to gain traction with treatment centers. The founders hope to eventually extend that interest to insurance companies, which could save money on the expensive coverage of multiple treatments, especially in emergency rooms. For now, though, Behaivior remains in the development phase, with much to research, test, and prove. But its ideas have made it a finalist in the $5 million IBM Watson AI XPRIZE competition. The start-up’s AI systems help identify the nuanced behavioral patterns and data that signal potential relapses, but eventually the company could use such technologies to improve interventions in real time, learning on the fly to help pull people back from a potential relapse.

 
