Lamar couldn’t allow Agatha’s mother to threaten his plans, so he arranged an intricate ruse to dispose of her. Knowing that if he attempted to murder her, the precogs would predict it, Lamar paid a contract killer to murder Agatha’s mother. As anticipated, this was predicted and prevented by Precrime. But as soon as the killer-to-be had been hauled off, Lamar re-enacted the planned murder, this time succeeding.
Because Lamar’s act was so close to the attempted murder, images of his actions from the precogs were assumed to be part of the thwarted killing. And because Agatha’s precognition wasn’t quite in step with that of the two other precogs, it was treated as a minority report. In this way, using the very system he’d created to bring an end to murder, Lamar pulled off the perfect murder—or so he thought. But as Anderton got closer to realizing that Lamar had staged Agatha’s mother’s murder, Lamar realized that, in order to protect Precrime, Anderton also needed to be eliminated. And he would have succeeded, had Anderton’s estranged partner not put two and two together and freed Anderton from his halo-induced purgatory.
Things come to a head in the movie as Anderton publicly broadcasts Agatha’s minority report of Lamar killing her mother. In doing so, he presents Lamar with a seemingly impossible choice: kill Anderton (as the precogs are predicting) and validate the program, but be put away for life in the process; or don’t kill him, and in doing so, demonstrate a fatal flaw in the program that will result in it being terminated.
In the end, Lamar opts for a third option and kills himself. In doing so, he saves Anderton, but still reveals a flaw in the system, which had predicted that Anderton would die at his hand. As a result, Precrime is dismantled, and the precogs are allowed to live as full a life as is possible.
Minority Report is a fast-paced, crowd-pleasing, action sci-fi thriller of the caliber you’d expect from its director, Steven Spielberg. But it also raises tough questions around preemptive action based on predicted criminal behavior, as well as predestination, human dignity, and the dangers of being sucked in by seemingly beneficial technologies. It presents us with a world where technology has seemingly made people’s lives safer, but at a terrible cost that isn’t immediately obvious. And it shines a searing spotlight on the question of “should we” when faced with a seductive technology that ultimately threatens to place society in moral jeopardy.
The “Science” of Predicting Bad Behavior
In March 2017, the British newspaper The Guardian ran an online story with the headline “Brain scans can spot criminals, scientists say.”33 Unlike in Minority Report, the scanning was carried out using a hefty functional magnetic resonance imaging (fMRI) machine, rather than genetically altered precogs. But the story seemed to suggest that scientists were getting closer to spotting criminal intent before a crime had been committed, using sophisticated real-time brain imaging.
In this case, the headline vastly overstepped the mark. The original research used fMRI to see if brain activity could be used to distinguish knowingly criminal behavior from merely reckless behavior.34 It did this by setting up a somewhat complex situation, where volunteers were asked to take a suitcase containing something valuable through a security checkpoint while undergoing a brain scan. But to make things more interesting (and scientifically useful), their actions and choices came with financial rewards and consequences.
Each participant was first given $6,000 in “play money.” They were then presented with one to five suitcases, just one of which contained the thing of value. If they decided not to carry anything through the checkpoint, they lost $1,500. If they decided to carry a suitcase, it cost them $500. And if they dithered about it, they were docked $2,500.
Having selected a suitcase, if they chose the one with the valuable stuff inside and they weren’t searched by security, they got an additional $2,500—jackpot! But if they were searched and found to be carrying, they were fined $3,500, leaving them with a mere $2,000. On the other hand, if they weren’t carrying, they suffered no penalties, whether they were searched or not.
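To make the payoff structure easier to follow, here’s a toy sketch of the arithmetic as described above. The dollar amounts are taken from the study description; the function name and structure are illustrative, not from the original paper (and the $2,500 penalty for dithering is left out for simplicity).

```python
# Toy sketch of the suitcase experiment's payoff arithmetic.
# Amounts come from the study description; names are illustrative.

def payoff(carries: bool, has_valuables: bool, searched: bool) -> int:
    """Return a participant's final balance in play money."""
    balance = 6_000  # starting stake
    if not carries:
        return balance - 1_500  # declining to carry costs $1,500
    balance -= 500  # carrying a suitcase costs $500
    if has_valuables:
        if searched:
            balance -= 3_500  # caught carrying: fined $3,500
        else:
            balance += 2_500  # got away with it: $2,500 bonus
    # carrying an empty suitcase brings no further gain or loss
    return balance

print(payoff(carries=True, has_valuables=True, searched=False))  # jackpot: 8000
print(payoff(carries=True, has_valuables=True, searched=True))   # caught: 2000
```

Laid out this way, the incentive is clear: carrying the loaded suitcase is the only way to come out ahead, but it’s also the only way to suffer a serious loss.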
The point of this rather elaborate setup was that there were financial gains (at least with the fake money being used) involved with the choices made, and the implication that carrying a suitcase stuffed with valuable goods was dangerous (you could be fined if discovered carrying), but financially lucrative if you got away with it.
To mix things up further, some participants only had the choice of carrying the loaded suitcase (thus possibly getting $8,000), or declining to take part in such a dodgy deal and walking away with just $2,000. The participants who took a chance here were knowingly participating in questionable behavior. For the rest, it was a lottery whether they picked the loaded suitcase or not, meaning that their actions veered toward being more reckless, and less intentional. By simultaneously studying behavior and brain activity, the researchers were able to predict what state the participants were in—whether they were intentionally setting out to engage in behavior that maybe wasn’t legitimate, or whether they were just feeling reckless.
The long and short of this was that the study suggested brain activity could be used to indicate criminal intent, and this is what threw headline writers into a clickbait frenzy. But the research was far from conclusive. In fact, the authors explicitly stated that “it would be absurd to suggest, in light of our results, that the task of assessing the mental state of a defendant could or should, even in principle, be reduced to the classification of brain data.” They also pointed out that, even if these results could be used to predict the mental state of a person while committing a crime, the person would have to be inside an fMRI scanner at the time, which would be tricky.
Despite the impracticality of using this research to assess the mental state of people during the act of committing a crime, media stories around the study tapped into a deep-seated fascination with predicting criminal tendencies or intent—much as Veris Prime’s Truth Index does. Yet this is not a new fascination, and neither is the use of science to justify its indulgence.
In the early nineteenth century, a very different “science” of predicting criminal tendencies was all the rage: phrenology. Phrenology was an attempt to predict someone’s character and behavior from the shape of their skull. As understanding of how the brain works developed, the practice became increasingly discredited. Sadly, though, it laid a foundation for assumptions that traits which appear to be common to people of “poor character” are also predictive of their behavior—a classic case of correlation erroneously being confused with causation. And it foreshadowed research that continues to this day to connect what someone looks like with how they might act.
Despite its roots in pseudoscience, the ideas coming out of phrenology were picked up by the nineteenth-century criminologist Cesare Lombroso. Lombroso was convinced that physical traits such as jaw size, forehead slope, and ear size were associated with criminal tendencies. His theory was that these and other traits were throwbacks to earlier evolutionary ancestors, and that they indicated an innate tendency toward criminal behavior.
It’s not hard to see how attractive these ideas might have been to some, as they suggested criminals could be identified and dealt with before breaking the law. With hindsight, it’s easy to see how misguided and malevolent they were, but at the time, many people bought into them. It would be nice to think that this way of thinking about criminal tendencies was a short and salutary aberration in humanity’s history. Sadly, though, it paved the way to even more divisive forms of pseudoscience-based discrimination, including eugenics.
In the 1900s, discrimination that was purportedly based on scientific evidence shifted toward the idea that the quality or “worth” of a person is based on their genetic heritage. The “science” of eugenics—and sadly this is something that many scientists at the time supported—suggested that our genetic heritage determines everything about us, including our moral character and our social acceptability. It was a deeply flawed concept that, nevertheless, came with the same seductive idea that, if we know what makes people “bad,” we can remove them from society before they cause a problem. What is heartbreaking is that these ideas coming from academics and scientists gained political momentum, and ultimately became part of the justification for the murder of six million Jews, and many others besides, in the Holocaust.
These days, I’d like to think we’re more enlightened, and that we don’t fall prey so easily to using scientific flights of fancy to justify how we treat others. Unfortunately, this doesn’t seem to be the case.
In 2011, three researchers published a paper suggesting that you can tell a criminal from someone who isn’t one (and, presumably by inference, spot someone who is likely to engage in criminal activities) by what they look like.35 In the study, thirty-six students in a psychology class (thirty-three women and three men) were shown mug shots of thirty-two Caucasian males. They were told that some were criminals, and they were asked to assess—from the photos alone—whether each person had committed a crime; whether they’d committed a violent crime; if it was a violent crime, whether it was rape or assault; and if it was non-violent, whether it was arson or a drug offense.
Within the limitations of the study, the participants were more likely to correctly identify criminals than incorrectly identify them from the photos. Not surprisingly, perhaps, this led to a slew of headlines along the lines of “Criminals Look Different From Non-criminals” (this one from a blog post on Psychology Today36). But despite this, the results of the study are hard to interpret with any degree of certainty. It’s not clear what biases may have been introduced, for instance, by having the photos evaluated by a mainly female group of psychology students, or by only using photos of white males, or even whether there was something associated with how the photos were selected and presented, and how the questions were asked, that influenced the results.
The results did seem to indicate that, overall, the students were successful in identifying photos of convicted criminals in this particular context. But the study was so small, and so narrowly defined, that it’s hard to draw any clear conclusions from it. However, there is a larger issue at stake with this and similar studies, and this is the ethical issue with carrying out and publicizing the results of such research in the first place. Here, the very appropriateness of asking if we can predict criminal behavior brings us back to the earlier study on intent versus reckless behavior, and to the underlying premise in Minority Report.
The assumption that someone’s behavioral tendencies can be predicted from no more than what they look like, or how their brain functions, is a slippery slope. It assumes—dangerously so—that behavior is governed by genetic heritage and upbringing. But it also opens the door to a better-safe-than-sorry attitude to law and order that considers it better to restrain someone who might demonstrate socially undesirable behavior than to presume them innocent until proven guilty. And it’s an attitude that takes us down a path where we assume that other people do not have agency over their destiny. There is an implicit assumption here that how we behave can be separated out into “good” and “bad,” and that there is consensus on what constitutes these. But this is a deeply flawed assumption.
What the behavioral research above is actually looking at is someone’s tendency to break or bend agreed-on rules of socially acceptable conduct, as these are codified in law. These laws are not an absolute indicator of good or bad behavior. Rather, they are a result of how we operate collectively as a social species. In technical terms, they establish normative expectations of behavior, which simply means that most people comply with them, irrespective of whether they have moral or ethical value. For instance, in most cultures, it’s accepted that killing someone should be punished, unless it’s in the context of a legally sanctioned war or execution (although many societies would still consider this morally reprehensible). This is a deeply embedded norm, and most people would consider it to be a good guide of appropriate behavior. The same cannot be said of “norms” surrounding homosexual acts, though, which were illegal in the United Kingdom until 1967, and are still illegal in some countries around the world, or others surrounding LGBTQ rights, or even women’s rights.
When social norms are embedded within criminal law, it may be possible to use physical features or other means to identify “criminals” or those likely to be involved in “criminal” behavior. But are we as a society really prepared to take preemptive action against people who we arbitrarily label as “bad”? I sincerely hope not. And here we get to the crux of the ethical and moral challenges around predicting criminal intent. Even if we can predict tendencies from images alone—and I am highly skeptical that we can gain anything of value here that isn’t heavily influenced by researcher bias and social norms—should we? Is it really appropriate to be asking if we can predict, simply from how someone looks, whether they are likely to behave in a way that we think is appropriate or not? And is it ethical to generate data that could be used to discriminate against people based on their appearance?
Using facial features to predict tendencies puts us way down the slippery slope toward discriminating against people because they are different from us. Thankfully, this is an idea that many would dismiss as inappropriate these days. But, worryingly, our interest in relating brain activity to behavioral traits—the high-tech version of “looks like a criminal”—puts us on the same slippery slope.
Criminal Brain Scans
Unlike photos, functional magnetic resonance imaging allows researchers to directly monitor brain activity, and to do it in real time. It works by tracking blood flow to different parts of the brain, and using this to pinpoint which parts of someone’s brain are active at any one point in time.
One of the beauties of fMRI is that it can map out brain activity as people are thinking about and processing the world around them. For instance, it can show which parts of a subject’s brain are triggered if they’re shown a photo of a donut, if they are happy, or sad, or angry, or what their brain activity looks like if they’re given the opportunity to take a risk.
fMRI has opened up a fascinating window into how we think about and respond to our surroundings, and in some cases, what we think. And it’s led to some startling revelations. We now know, for instance, that we often unconsciously decide what we’re going to do several seconds before we’re actually aware of making a decision.37 Recent research has even indicated that high-resolution fMRI scans on primates can be used to decode what the animals are seeing.38 The researchers were, quite literally, reading these primates’ minds.
This is quite incredible science. And not surprisingly, it’s leading to a revolution in understanding how our brains operate. This includes developing a better understanding of how certain brain behaviors can lead to debilitating medical conditions. It’s also leading to a deeper understanding of how the mechanics of our brain determine who we are, and how we behave.
That said, there’s still considerable skepticism around how effective a tool fMRI is and how robust some of its findings are. It’s also fair to say that some of these findings challenge deeply held beliefs about many of the things we hold dear, including the nature of free will, moral choice, kindness, compassion, and empathy. These are all aspects of ourselves that help define who we are as individuals. Yet, with the advent of fMRI and other neuroscience-based tools, it sometimes feels like we’re teetering on the precipice of realizing that who we think we are—our sense of self, or our “soul” if you like—is merely an illusion of our biology.
This in itself raises questions over the degree to which neuroscience is racing ahead of our ability to cope with what it reveals. Yet the reality is that this science is progressing at breakneck speed, and that fMRI is allowing us to dive ever deeper behind our outward selves—our facial features and our easily observed behaviors—and into the very fabric of the organ that plays such a role in defining us. And, just like phrenology and eugenics before it, it’s opening up the temptation to interpret how our brains operate as a way to predict what sort of person we are, and what we might do.
In 2010, researchers provided a group of subjects with advice on the importance of using sunscreen every day. At the same time, the subjects’ brain activity was monitored using fMRI. It’s just one of many studies that are increasingly trying to use real-time brain activity monitoring to predict behavior.
In the sunscreen study, the subjects were asked how likely they were to take the advice they were given. A week later, researchers checked in with them to see how they’d done. Using the fMRI scans, the researchers were able to predict which subjects were going to use sunscreen and which were not. But more importantly, the researchers discovered that the scans were better at predicting how the subjects would behave than the subjects’ own self-assessments were. In other words, the researchers knew their subjects’ minds better than the subjects did.39
Research like this suggests that our behavior is determined by measurable biological traits as much as by our free will, and it’s pushing the boundaries of how we understand ourselves and how we behave, both as individuals and as a society. And, while science will never enable us to predict the future in the same way as Minority Report’s precogs, it’s not too much of a stretch to imagine that fMRI and similar techniques may one day be used to predict the likelihood of someone engaging in antisocial and morally questionable behavior.
But even if predicting behavior based on what we can measure is potentially possible, is this a responsible direction to be heading in?
Films from the Future Page 8