
Know This


edited by John Brockman


  What a wonderful world it could be! But how to get there? How would the machines teach empathy, prospection, and correct agency attribution? Most likely, they would overhaul our education system. The traditional classroom setting would finally be demolished so that humans could be taught solely in the school of hard knocks: Machines would engineer everyday situations (both positive and negative) from which we would draw the right conclusions. But this would take a lot of time and effort. Perhaps the machines would realize that rather than expose every human to every valuable life lesson, they could distill down a few important ones into videos or even text: The plight of the underdog. She who bullies is eventually bullied herself. There’s just us. Do to others what. . . . Perhaps these videos and texts could be turned into stories, rather than delivered as dry treatises on morality. Perhaps they could be broken into small bite-sized chunks, provided on a daily basis. Perhaps instead of hypothetical scenarios, life lessons could be drawn from real plights suffered by real people and animals. Perhaps they could be broadcast at a particular time—say, 6:00 pm and 11:00 pm—on particular television channels, or whatever the future equivalent venue is.

  The stories would have to be changed daily to keep things “new.” And there should be many of them, drawn from all cultures, all walks of life, all kinds of people and animals, told from all kinds of angles to help different people empathize, prospect, and impute causes to effects at their own pace and in their own way. So, not “new” then, but “news.”

  Data Sets over Algorithms

  Alexander Wissner-Gross

  Inventor; entrepreneur; president and chief scientist, Gemedy

  Perhaps the most important news of our day is that data sets, not algorithms, might be the key limiting factor in the development of human-level artificial intelligence.

  At the dawn of the field of artificial intelligence, in 1967, two of its founders famously anticipated that solving the problem of computer vision would take only a summer. Now, almost a half century later, machine-learning software finally appears poised to achieve human-level performance on vision tasks and a variety of other grand challenges. What took the AI revolution so long?

  A review of the timing of the most publicized AI advances over the past thirty years suggests a provocative explanation: Perhaps many major AI breakthroughs have been constrained by a relative lack of high-quality training data sets and not of algorithmic advances. For example, in 1994, the achievement of human-level spontaneous speech recognition relied on a variant of a hidden Markov model algorithm published ten years earlier, but used a data set of spoken Wall Street Journal articles and other texts available only three years earlier. In 1997, when IBM’s Deep Blue defeated Garry Kasparov to become the world’s top chess player, its core NegaScout planning algorithm was fourteen years old, whereas its key data set of 700,000 Grandmaster chess games (known as “The Extended Book”) was only six years old. In 2005, Google software achieved breakthrough performance at Arabic- and Chinese-to-English translation based on a variant of a statistical machine-translation algorithm published seventeen years earlier, but used a data set with more than 1.8 trillion tokens from Google Web and News pages gathered the same year. In 2011, IBM’s Watson became the world Jeopardy! champion using a variant of the mixture-of-experts algorithm published twenty years earlier, but utilized a data set of 8.6 million documents from Wikipedia, Wiktionary, Wikiquote, and Project Gutenberg updated the year before. In 2014, Google’s GoogLeNet software achieved near-human performance at object classification using a variant of the convolutional neural network algorithm proposed twenty-five years earlier, but was trained on the ImageNet corpus of approximately 1.5 million labeled images and 1,000 object categories first made available only four years earlier. Finally, in 2015, Google DeepMind announced that its software had achieved human parity in playing twenty-nine Atari games by learning general control from video, using a variant of the Q-learning algorithm published twenty-three years earlier, but the variant was trained on the Arcade Learning Environment data set of over fifty Atari games made available only two years earlier.

  Examining these advances collectively, the average elapsed time between key algorithm proposals and corresponding advances was about eighteen years, whereas the average elapsed time between key data-set availabilities and corresponding advances was less than three years, or about six times faster, suggesting that data sets might have been limiting factors in the advances. In particular, one might hypothesize that the key algorithms underlying AI breakthroughs are often latent, simply needing to be mined out of the existing literature by large, high-quality data sets and then optimized for the available hardware of the day. Certainly, in a tragedy of the research commons, attention, funding, and career advancement have historically been associated more with algorithmic than data-set advances.
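  To make the arithmetic concrete, here is a brief Python sketch (my own tabulation, not the author's) that lists the six advances above with the quoted ages of their key algorithms and data sets and recomputes the averages and their ratio.

```python
# Illustrative tabulation of the six AI advances cited in the essay, pairing
# each with the age (in years) of its key algorithm and of its key data set at
# the time of the breakthrough. The numbers are those quoted in the text; the
# table and calculation are an added illustration, not the author's own code.

advances = [
    # (advance, year, algorithm age, data-set age)
    ("Human-level spontaneous speech recognition",       1994, 10, 3),
    ("Deep Blue defeats Kasparov",                        1997, 14, 6),
    ("Arabic- and Chinese-to-English translation",        2005, 17, 0),
    ("IBM Watson wins Jeopardy!",                         2011, 20, 1),
    ("GoogLeNet near-human object classification",        2014, 25, 4),
    ("DeepMind human parity on twenty-nine Atari games",  2015, 23, 2),
]

algo_ages = [a for _, _, a, _ in advances]
data_ages = [d for _, _, _, d in advances]

mean_algo = sum(algo_ages) / len(algo_ages)  # about 18.2 years
mean_data = sum(data_ages) / len(data_ages)  # about 2.7 years

print(f"Mean algorithm age at breakthrough: {mean_algo:.1f} years")
print(f"Mean data-set age at breakthrough:  {mean_data:.1f} years")
print(f"Ratio: {mean_algo / mean_data:.1f}x")
# The ratio comes out between six and seven, in line with the essay's
# "about six times faster".
```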

  If correct, this hypothesis would have foundational implications for future progress in AI. Most important, the prioritized cultivation of high-quality training data sets might allow an order-of-magnitude speed-up in AI breakthroughs over purely algorithmic advances. For example, we might already have the algorithms and hardware that will enable machines in a few years to write human-level long-form creative compositions, complete standardized human examinations, or even pass the Turing Test, if only we trained them with the right writing, examination, and conversational data sets. Additionally, the nascent problem of ensuring AI friendliness might be addressed by focusing on data-set rather than algorithmic friendliness—a potentially simpler approach.

  Although new algorithms receive much of the public credit for ending the last AI winter, the real news might be that prioritizing the cultivation of new data sets and research communities around them could be essential to extending the present AI summer.

  Biological Models of Mental Illness Reflect Essentialist Biases

  Bruce Hood

  Professor, School of Experimental Psychology, University of Bristol; author, The Domesticated Brain

  In 2010, in England alone, mental illness was estimated to cost the economy over £100 billion. Around the same time, the cost in the U.S. was estimated at $318 billion annually. It is important that we do what we can to reduce this burden. However, we have mostly been going about it the wrong way, because the predominant models of mental illness do not work. They are mostly based on the assumption that there are discrete underlying causes, but this approach to mental illness reflects an essentialist bias we readily apply when trying to understand complexity.

  Humans are, of course, complex biological systems, and the way we operate requires sophisticated interactions at many levels. Remarkably, it has taken more than a century of research and effort to recognize that when things break down, they involve multiple systems of failure; yet, until the last couple of years, many practitioners in the West’s psychiatric industry have been reluctant to abandon the notion that there are qualitatively distinct mental disorders with core causal dysfunctions. Or at least that’s how the treatment regimes seem to have been applied.

  Ever since Emil Kraepelin, at the end of the 19th century, advocated categorizing mental illnesses into distinct disorders with specific biological causes, research and treatment have focused on building classification systems of symptoms as a way of mapping the terrain for discovering root biological problems and corresponding courses of action. This medical-model approach led to the development of clinical nosology and accompanying diagnostic manuals such as the Diagnostic and Statistical Manual of Mental Disorders (DSM), whose fifth edition was published in 2013. However, that year the National Institute of Mental Health announced that it would no longer be funding research projects that relied solely on DSM criteria. This is because the medical model lacks validity.

  A recent analysis by Denny Borsboom in the Netherlands revealed that 50 percent of the symptoms of the DSM are correlated, indicating that co-morbidity is the rule, not the exception, which explains why attempts to find biological markers for mental illness either through genetics or imaging have proved largely fruitless. It does not matter how much better we build our scanners or refine our genetic profiling: Mental illness will not be reducible to Kraepelin’s vision. Rather, new approaches consider symptoms as sharing causal effects rather than arising from an underlying primary latent variable.
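  A small simulation sketch in Python (an added illustration, not Borsboom's analysis) makes the distinction concrete: it generates symptom data under a single-latent-cause model and under a simple linear network model in which symptoms influence one another directly, and both produce the correlated symptoms that co-morbidity statistics pick up.

```python
# Illustrative comparison of two generative stories for co-occurring symptoms:
# a single latent cause versus a network of mutual influence. The symptom
# names, sample size, and coefficients are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # simulated individuals
k = 4     # symptoms, e.g. low mood, insomnia, fatigue, anxiety

# Latent-variable model: one underlying severity factor drives every symptom.
severity = rng.normal(size=n)
latent_symptoms = 0.8 * severity[:, None] + rng.normal(scale=0.6, size=(n, k))

# Network model: no common cause; each symptom is linearly nudged by the
# others (X = XW + E), solved here in closed form as X = E(I - W)^-1.
W = 0.15 * (np.ones((k, k)) - np.eye(k))  # symmetric coupling, no self-loops
E = rng.normal(size=(n, k))
network_symptoms = E @ np.linalg.inv(np.eye(k) - W)

# Both models yield clearly positive inter-symptom correlations, so observed
# co-morbidity alone cannot tell them apart.
print(np.corrcoef(latent_symptoms.T).round(2))
print(np.corrcoef(network_symptoms.T).round(2))
```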

  It’s not clear what will happen to the DSM, as there are vested financial interests in maintaining the medical model, but in Europe there is a notable shift toward symptom-based approaches to treatment. It is also not in our nature to consider human complexity other than with essentialist biases. We do this for race, age, gender, political persuasion, intelligence, humor, and just about every dimension we use to describe someone—as if these attributes were at the core of who they are.

  The tendency of the human mind is to categorize the world—to carve nature up at its joints, as it were. But in reality, experience is continuous. The boundaries we create are more for our benefit than a reflection of any true existing structures. As complex biological systems, we evolved to navigate the complex world around us and thus developed an ability to represent it in the most useful way, as discrete categories. This is a fundamental feature of our nervous system, from the input of raw sensory signals to the output of behavior and cognition. Forcing nature into discrete categories optimizes the processing demands and the number of needed responses, so it makes sense from an engineering perspective. The essentialist perspective continues to shape the way we go about building theories to investigate the world. Maybe it’s the best strategy when dealing with unknown terrain—assume patterns and discontinuities with broad strokes before refining your models to reflect complexity. The danger lies in assuming that the frameworks you construct are real.

  Neuroprediction

  Abigail Marsh

  Associate professor of psychology, Georgetown University

  The Cartesian wall between mind and brain has fallen. Its disintegration has been aided by the emergence of a wealth of new techniques in collecting and analyzing neurobiological data, including neuroprediction, which is the use of human-brain imaging data to predict how the brain’s owner will feel or behave in the future. The reality of neuroprediction requires accepting the fact that human thoughts and choices are a reflection of basic biological processes. It has the potential to transform fields like mental health and criminal justice.

  In mental health, advances in identifying and treating psychopathology have been limited by existing diagnostic practices. In other fields of medicine, new diagnostic techniques such as genetic sequencing have led to targeted treatments for tumors and pathogens and major improvements in patient outcomes. But mental disorders are still diagnosed as they have been for 100 years—using a checklist of symptoms derived from a patient’s subjective reports or a clinician’s subjective observations. This is like trying to determine whether someone has leukemia or the flu based on subjective evaluations of weakness, fatigue, and fever. The checklist approach not only makes it difficult for a mental-health practitioner to determine what afflicts a patient—particularly if the patient is unwilling or unable to report his symptoms—but also provides no information about what therapeutic approach will be most effective.

  In criminal justice, parallel problems persist in sentencing and probation. Making appropriate sentencing and probation decisions is hampered by the difficulty of determining whether a given offender is likely to re-offend after being released. Such decisions, too, are based on largely subjective criteria. Some who would likely not recidivate are detained for too long, and some who will recidivate are released.

  Neuroprediction may yield solutions to these problems. One recent study found that the relative efficacy of different treatments for depression could be predicted from a brain scan measuring metabolic activity in the insula. Another found that predictions about whether paroled offenders would recidivate were improved using a brain scan that measured hemodynamic activity in the anterior cingulate cortex. Neither approach is ready for widespread use yet, in part because predictive accuracy at the individual level is still only moderate, but inevitably they—or improvements on them—will be.
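  The workflow behind such studies can be sketched in a few lines of Python. Everything below is a schematic placeholder (synthetic data, an invented region-of-interest measure, a plain logistic regression), not either study's actual pipeline; the point is only to show what moderate individual-level predictive accuracy looks like in practice.

```python
# Schematic neuroprediction sketch: extract a per-person brain measure, fit a
# predictive model, and evaluate individual-level accuracy out of sample.
# All data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 300

# Hypothetical predictors: mean activity in a region of interest plus age,
# both standardized. The ROI signal is only weakly related to the outcome,
# which is what makes individual-level prediction hard.
roi_activity = rng.normal(size=n)
age = rng.normal(size=n)
X = np.column_stack([roi_activity, age])

# Hypothetical binary outcome (e.g. re-offending after release), with a modest
# contribution from the ROI signal on top of noise.
logits = -0.5 + 0.6 * roi_activity + 0.2 * age
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

auc = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {auc.mean():.2f}")
# Typically lands in the mid-0.6s with these settings: clearly better than
# chance for groups, but far from decisive for any single individual.
```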

  This would be an enormous advance in mental health. Currently, treatment outcomes for disorders like depression remain poor; up to 40 percent of depressed patients fail to respond to the first-line treatment, the selection of which still relies more or less on guesswork. Using neuroprediction to improve this statistic could dramatically reduce suffering. Because brain scans are expensive and their availability limited, however, there would be disparities in access.

  Neuroprediction of crime presents a different scenario, as its primary purpose would be to improve outcomes for society (less crime, fewer resources spent on needless detentions) rather than for the potential offender. It’s hard to imagine this becoming accepted practice without a shift in our focus away from retribution and toward rehabilitation. By furthering understanding of the biological basis of persistent offending, neuroprediction may itself help bring about that shift. Regardless, neuroprediction, at least the beta version of it, is here. Now is the time to consider how to harness its potential.

  The Thin Line Between Mental Illness and Mental Health

  Joel Gold

  Psychiatrist; clinical associate professor of psychiatry, NYU School of Medicine; co-author (with Ian Gold), Suspicious Minds

  The line between mental illness and mental health is thin, and it is discomfiting for many of us to contemplate this fact. We prefer to imagine a nice thick wall between us, the “Well,” and them, the “Mad.”

  In one episode of The Simpsons, Homer is psychiatrically hospitalized by mistake. His hand is stamped “Insane.” When his psychiatrists come to believe he’s not and release him, they stamp his hand “Not Insane.” But sanity is not binary; it’s a spectrum on which we all lie. Overt madness might be hard to miss, but what is its opposite? There is clear evidence that large numbers of people who have no psychiatric diagnosis and are not in need of psychiatric treatment experience symptoms of psychosis, notably hallucinations and delusions. A recent study in JAMA Psychiatry surveyed more than 30,000 adults from nineteen countries and found that 5 percent had heard voices at least once in their life. Most of these people never developed full-blown psychosis of the type observed in a person with, say, schizophrenia. An older study reported that 17 percent of the general nonclinical population had experienced psychosis at some point.

  It gets even more slippery: It isn’t always clear if an experience is psychotic or not. Why is someone who believes that the U.S. government is aware of alien abductions deemed not delusional but merely a conspiracy theorist, yet someone who believes that he himself has been abducted by aliens is likely considered delusional?

  The psychosis continuum has important clinical ramifications. Unfortunately, that is news to many mental-health practitioners. It’s easy to see the neurobiological parallels between antidepressant medication improving mood, anxiolytic medication reducing panic, and antipsychotic medication ameliorating hallucinations. But ask a psychiatrist about providing psychotherapy to people suffering from these symptoms and, again, the wall comes up. At least here in New York City, many people with depression and anxiety seek relief in therapy. Very few of those with psychosis are afforded its benefits, despite the fact that therapy works in treating psychotic symptoms. And here is where the lede has been buried.

  Cognitive behavioral therapy (CBT)—one of the most practiced forms of therapy—while commonly applied to mood, anxiety, and a host of other psychiatric disorders, also works with psychosis. This might seem inherently contradictory. By definition, a delusion is held despite evidence to the contrary. You aren’t supposed to be able to talk someone out of a delusion. If you could, it wouldn’t be a delusion, right? Surprisingly, this is not the case.

  And here we return to our thin line. Early in CBTp (CBT for psychosis), the therapist “normalizes” the psychotic experiences of the patient—perhaps going so far as to offer his own strange experiences—thereby reducing stigma and forging a strong therapeutic bond with the patient, who is encouraged to see himself not as “less than” his doctor but further along the spectrum (the continuum model). The patient is then educated as to how stressors like child abuse or cannabis use can interact with preexisting genetic risk factors and is encouraged to reflect on the effects his life experiences might have on his symptoms (the vulnerability-stress model). Finally, the therapist reviews an Activating event, the patient’s Belief about that event, and the Consequences of holding that belief (the ABC model). Over time, the clinician gently challenges the belief, and ultimately patient and doctor together reevaluate it. CBTp can be applied to hallucinations as well as to delusions.

  CBTp has about the same therapeutic benefit as the older antipsychotic medication chlorpromazine (Thorazine) and the newer antipsychotic olanzapine (Zyprexa). This does not mean, of course, that people shouldn’t take antipsychotic medication when appropriate; they certainly should. The reality, however, is that many do not, and it’s not hard to understand why. These medications, while often life-saving (for the record, I have prescribed antipsychotics thousands of times), often have adverse effects. Impaired insight (the ability to reflect on one’s inner experiences and to recognize that one is ill) is also a significant impediment to medication adherence.

 
