by Jared Cohen
It takes only one mistake or weak link to compromise an entire network. A Navy SEAL Team Six member we talked with described a top al-Qaeda commander who was exceptionally cautious around technology, always swapping phones and rarely speaking for very long. But while he was careful with his professional life, he was careless with his social one. At one point, he called a cousin in Afghanistan to say he planned to attend a wedding. That one misstep gave authorities enough information to find and capture him. Unless a terrorist is acting completely alone (which is rare), and with perfect online discipline (even rarer), there is a very good chance that somewhere in the chain of events leading up to a planned attack, he will compromise himself in some way. There are simply too many ways to reveal oneself, or be revealed, and that is encouraging when contemplating the future of counterterrorism.
Of course, amid all of the smart and savvy cyber terrorists there will be dumb ones, too. In the trial-and-error period of connectivity growth, there will be plenty of demonstrations of inexperience that might seem laughable to those of us who grew up with the Internet. Three years after the Canadian journalist Amanda Lindhout was kidnapped in Somalia—she was held for fifteen months by al-Shabaab extremists and finally released for a hefty ransom—her former captors contacted her on Facebook, issuing threats and inquiring about more money. Some were dummy accounts, set up with the sole purpose of harassing her further, but others appeared to be genuine personal Facebook accounts. It seems unlikely that the terrorists understood the degree to which they’d exposed themselves—not just their names and profiles, but everyone they were connected to, everything they’d written on their own and others’ Facebook pages, what websites they’d “liked,” and so on. Each such exposure, of course, will represent a teachable moment for other extremists, enabling them to avoid the same errors in the future.
It is estimated that more than 90 percent of people worldwide who have mobile phones keep them within three feet of themselves twenty-four hours a day. There is no reason to believe this won’t be true for extremists. They might adopt new routines that help protect them—like periodically removing the battery from their phones—but they won’t stop using them altogether. This means that counterterrorist raids by militaries and law enforcement will result in better outcomes: capture the terrorist, capture his network. Interrogations post-capture will remain important, but each device used by a terrorist—mobile phones, storage drives, laptops and cameras—will be a potential gold mine. Commandeering a captured terrorist’s devices with the rest of his network unaware will lead his cohorts to unwittingly disclose sensitive information or locations. Additionally, the devices might contain content that can be used to expose hypocrisy in a terrorist’s public persona, as American officials did when they revealed that the computer files taken from Osama bin Laden’s compound contained a large stash of pornographic videos. Of course, once this vulnerability becomes apparent, more sophisticated terrorists will combat it by having an abundance of technology with misleading information on it. Deliberately storing personal details about rivals or enemies on devices that find their way into the hands of law enforcement will be a useful form of sabotage.
No Hidden People Allowed
As terrorists develop new methods, counterterrorism strategists will adapt accordingly. Imprisonment may not be enough to contain a terror network. Governments may determine, for example, that it is too risky to have citizens “off the grid,” detached from the technological ecosystem. To be sure, in the future, as now, there will be people who resist adopting and using technology, people who want nothing to do with virtual profiles, online data systems or smartphones. Yet a government might suspect that people who opt out completely have something to hide and are thus more likely to break laws, and as a counterterrorism measure, it will build the kind of “hidden people” registry we described earlier. If you don’t have any registered social-networking profiles or mobile subscriptions, and online references to you are unusually hard to find, you might be considered a candidate for such a registry. You might also be subjected to a strict set of new regulations that includes rigorous airport screening or even travel restrictions.
In a post-9/11 world, we can already see signs that even countries with strong civil-liberties foundations are willing to jettison citizen protections in favor of systems that enhance homeland surveillance and security. That trend will only accelerate. After some cyber-terrorist successes, it will be easier to persuade people that the sacrifices involved—essentially, a heightened level of governmental monitoring of online activity—are worth the peace of mind they will bring. The collateral damage in this scenario, of course, besides the persecution of a small number of harmless hermits, is the danger of occasional abuse or poor judgment by government stewards. This is yet another reason it will be so important to fight for privacy and security in the future.
The push-pull between privacy and security in the digital age will become even more prominent in the coming years. The authorities responsible for locating, monitoring and capturing dangerous individuals will require massive, highly sophisticated data-management systems to do so. Despite everything individuals, corporations and dedicated nonprofit groups are doing to protect privacy, these systems will inevitably include volumes of data about non-terrorist citizens—the questions are how much and where. Currently, most of the information that governments collect on people—their addresses, ID numbers, police records, mobile-phone data—is siloed in separate places (or is not even digitized yet in some countries). Its separation ensures a degree of privacy for citizens but creates large-scale inefficiencies for investigators.
This is the “big data” challenge that government bodies and other institutions around the world are facing: How can intelligence agencies, military divisions and law enforcement integrate all of their digital databases into a centralized structure so that the right dots can be connected without violating citizens’ privacy? In the United States, for example, the FBI, State Department, CIA and other government agencies all use different systems. We know computers can find patterns, anomalies and other relevant signifiers much more efficiently than human analysts can, yet bringing together disparate information systems (passport information, fingerprint scans, bank withdrawals, wiretaps, travel records) and building algorithms that can efficiently cross-reference them, eliminate redundancy and recognize red flags in the data is an incredibly difficult and time-consuming task.
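To make the cross-referencing task concrete, here is a minimal sketch, in Python, of merging records from separate silos by a shared identifier and applying a simple red-flag rule. Every dataset, field name and rule in it is hypothetical, invented for illustration rather than drawn from any real system; actual platforms must cope with inconsistent identifiers, partial matches and legal limits on what may be joined at all.

```python
# A toy illustration of integrating siloed records and flagging matches.
# All identifiers, fields and the red-flag rule are hypothetical.
from collections import defaultdict

def merge_by_id(*datasets):
    """Combine records from separate silos into one profile per shared ID."""
    profiles = defaultdict(dict)
    for dataset in datasets:
        for record in dataset:
            profiles[record["id"]].update(record)
    return profiles

def find_red_flags(profiles, watchlist):
    """Return IDs whose merged profile matches a simple (toy) red-flag rule."""
    flagged = []
    for pid, profile in profiles.items():
        if pid in watchlist and profile.get("recent_travel") == "border_region":
            flagged.append(pid)
    return flagged

# Hypothetical silos: passport, travel and banking records.
passports = [{"id": "A123", "name": "Example Person"}]
travel = [{"id": "A123", "recent_travel": "border_region"}]
bank = [{"id": "A123", "large_withdrawal": True}]

profiles = merge_by_id(passports, travel, bank)
print(find_red_flags(profiles, watchlist={"A123"}))  # -> ['A123']
```

Even this toy version hints at why the real problem is hard: the merge only works because every silo happens to use the same clean identifier, an assumption that rarely holds in practice.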
Difficult does not mean impossible, however, and all signs point toward these comprehensive integrated information systems’ becoming the standard for modern, wealthy states in the near future. We had the opportunity to tour the command center for Plataforma México, Mexico’s impressive national crime database and perhaps the best model of an integrated data system operating today. Housed in an underground bunker in the Secretariat of Public Security compound in Mexico City, this large database integrates intelligence, crime reports and real-time data from surveillance cameras and other inputs from agencies and states across the country. Specialized algorithms can extract patterns, project social graphs and monitor restive areas for violence and crime as well as for natural disasters and other civilian emergencies. The level of surveillance and technological sophistication of Plataforma México that we saw is extraordinary—but then, so are the security challenges that Mexican authorities face. Therein lies the challenge looking ahead: Mexico is the ideal location for a pilot project like this because of its entrenched security problems, but once the model has been proven, what is to stop other states with less justifiable motivations from building something similar? Other governments can just as easily play the security card and insist that such a sophisticated platform is necessary.
In the early 2000s, following the September 11 terrorist attacks, something similar was proposed in the United States. The Defense Department set up the Information Awareness Office and green-lit the development of a program called Total Information Awareness (TIA). Pitched as the ultimate security apparatus to detect terrorist activity, TIA was designed and funded to aggregate all “transactional” data—including bank records, credit-card purchases and medical records—along with other bits of personal information to create a centralized and searchable index for law enforcement and counterterrorist agencies. Sophisticated data-mining technologies would be built to detect patterns and associations, and the “signatures” that dangerous people left behind would reveal them in time to prevent another attack.
As details of the TIA program leaked out to the public, a range of vocal critics emerged from both the right and the left, warning about the potential costs to civil liberties, privacy and long-term security. They zeroed in on the possibilities of abuse of such a massive information system, branding the program “Orwellian” in scope. Eventually, a congressional campaign to shut TIA down resulted in a provision to deny all funds for the program in the Senate’s 2004 defense appropriations bill. The Information Awareness Office was shuttered permanently, though some of its projects later found shelter in other intelligence agencies in the government’s sprawling homeland-security sector.
Fighting for privacy is going to be a long, important struggle. We may have won some early battles, but the war is far from over. In general, the logic of security trumps privacy concerns. Political hawks merely need to wait for a serious public incident to find the political will and support to push their demands through, steamrolling over the considerations voiced by the doves, after which the lack of privacy becomes normal. With integrated information platforms like these, adequate safeguards for citizens and civil liberties must be firmly in place from the outset, because once a serious security threat appears, it is far too easy to overstep. (The information is already there for the taking.) Governments operating surveillance platforms will surely violate the restrictions placed on them (by legislation or legal ruling) eventually, but in democratic states with properly functioning legal systems and active civil societies, those errors will be corrected, whether that means penalties for the perpetrators or new safeguards put in place.
Serious questions remain for responsible states. The potential for misuse of this power is terrifyingly high, to say nothing of the dangers introduced by human error, data-driven false positives and simple curiosity. Perhaps a fully integrated information system, with all manner of data inputs, software that can interpret and predict behavior, and humans at the controls, is simply too powerful for anyone to handle responsibly. Moreover, once built, such a system will never be dismantled. Even if a dire security situation were to improve, what government would willingly give up such a powerful law-enforcement tool? And the next government in charge might not exhibit the same caution or responsibility with its information as the preceding one. Such totally integrated information systems are in their infancy now, and to be sure they are hampered by various challenges, like achieving consistent data gathering, that limit their effectiveness. But these platforms will improve, and there is an air of inevitability around their proliferation in the future. The only remedies for potential digital tyranny are to strengthen legal institutions and to encourage civil society to remain active and wise to potential abuses of this power.
A final note on digital content as we discuss its uses in the future: As online data proliferates and everyone becomes capable of producing, uploading and broadcasting an endless amount of unique content, verification will be the real challenge. In the past few years, major news broadcasters have shifted from using only professional video footage to including user-generated content, like videos posted to YouTube. These broadcasters typically add a disclaimer that the video cannot be independently verified, but the act of airing it is, in essence, an implicit verification of its content. Dissenting voices may claim that the video has been doctored, or is somehow misleading, but those claims, when they are registered at all, get a fraction of the attention and are often ignored. The trend toward trusting unverified content will eventually spur a movement toward more rigorous, technically sound verification.
Verification, in fact, will become more important in every aspect of life. We explored earlier how the need for verification will come to shape our online experiences, requiring better protections against identity theft, with biometric data changing the security landscape. Verification will also play an important role in determining which terrorist threats are actually valid. To avoid identification, most extremists will use multiple SIM cards, multiple online identities and a range of obfuscating tools to cover their tracks. The challenge for law enforcement will be finding ways to handle this information deluge without wasting man-hours on red herrings. Having “hidden people” registries in place will reduce this problem for authorities but will not solve it.
Because the general public will come to prefer, trust, depend on or insist on verified identities online, terrorists will make sure to use their own verified channels when making claims. And there will be many more ways to verify the videos, photos and phone calls that extremist groups use to communicate. Sharing a photograph of hostages with fresh daily newspapers will become an antiquated practice—the photo itself is the proof of when it was taken. Through digital forensic techniques, like examining digital watermarks and embedded metadata, experts can verify not only when a photo was taken but where and how.
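As a deliberately simplified illustration of the metadata side of such forensic work, the sketch below uses the Pillow imaging library (our choice for the example, not anything named in the text) to read a photo’s EXIF tags. The filename is hypothetical, and a real investigation would also examine watermarks, sensor noise and signs of editing, since metadata alone can be stripped or forged.

```python
# A minimal sketch of metadata-based photo checking with Pillow.
# The filename is hypothetical; EXIF data may be absent or falsified.
from PIL import Image, ExifTags

def summarize_exif(path):
    """Print the EXIF tags most useful for establishing when and how a photo was taken."""
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        name = ExifTags.TAGS.get(tag_id, tag_id)  # map numeric tag IDs to readable names
        if name in ("DateTime", "Make", "Model", "Software", "GPSInfo"):
            print(f"{name}: {value}")

if __name__ == "__main__":
    summarize_exif("hostage_photo.jpg")  # hypothetical file
```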
This emphasis on verified content, however, will require terrorists to make good on their threats. If a known terrorist does not do so, the subsequent loss of credibility will hurt his and his group’s reputation. If al-Qaeda were to release an audio recording proving that one of its commanders survived a drone attack, but forensic computer experts using voice-recognition software determined that someone else’s voice was on the tape, it would weaken al-Qaeda’s position and embolden its critics. Each verification challenge would chip away at the grandiose image that many extremist groups rely on to raise funds, recruit and instill fear in others. Verification can therefore be a tremendous tool in the fight against violent extremism.
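The voice-recognition check described above amounts to comparing a questioned recording against known recordings of the claimed speaker. The toy sketch below shows only that compare-and-threshold logic, with a crude spectral fingerprint standing in for a real speaker-embedding model; the waveforms, threshold and fingerprint are placeholders for illustration, not an actual forensic method.

```python
# A toy illustration of speaker verification as "embed and compare".
# The spectral fingerprint below is a stand-in for a real speaker model.
import numpy as np

def embed(waveform):
    """Return a normalized magnitude spectrum as a crude voice fingerprint."""
    spectrum = np.abs(np.fft.rfft(waveform))
    return spectrum / (np.linalg.norm(spectrum) + 1e-9)

def same_speaker(reference, questioned, threshold=0.9):
    """Compare two recordings; the threshold is an arbitrary placeholder."""
    similarity = float(np.dot(embed(reference), embed(questioned)))
    return similarity >= threshold, similarity

# Hypothetical waveforms; in practice these would be decoded audio files.
rng = np.random.default_rng(0)
reference = rng.standard_normal(16000)
questioned = rng.standard_normal(16000)
print(same_speaker(reference, questioned))
```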
The Battle for Hearts and Minds Comes Online
While it’s true that effective hackers and computer experts will enhance terror groups’ capabilities, the broad foundation of recruits will, like today, be basic foot soldiers. They’ll be young and undereducated, and they’ll have grievances that extremists exploit to their own advantage. We believe that the most pivotal shift in counterterrorism strategy in the future will not concern raids or mobile monitoring, but instead will focus on chipping away at the vulnerability of these at-risk populations through technological engagement.
An estimated 52 percent of the world’s population is under the age of thirty, and the vast majority of them are what we could call “socioeconomically at risk,” living in urban slums or poorly integrated immigrant communities, in places with unreliable rule of law and limited economic opportunity. Poverty, alienation, humiliation, lack of opportunity and mobility, and just simple boredom make these young populations highly susceptible to the influence of others. Set against a backdrop of repression and in a subculture that promotes extremism, their grievances foster their radicalization. This is as true for undereducated slum kids as it is for university students who see no opportunities awaiting them on the other side of their degrees.
At Google Ideas, we’ve studied radicalization around the world, particularly with an eye toward the role that communication technologies can play.4 It turns out that the radicalization process for terrorists is not very different from what we see with inner-city gangs or other violent groups, like white supremacists. At our Summit Against Violent Extremism in June 2011, we brought together more than eighty former extremists to discuss why individuals join violent organizations, and why they leave them. Through open dialogue with the participants, who, between them, represented religious extremists, violent nationalists, urban gangs, far-right fascists and jihadist organizations, we learned that similar motivations exist across all these groups, and that religion and ideology play less of a role than most people think. The reasons people join extremist groups are complex, often having to do more with the absence of a support network, the desire to belong to a group, to rebel, to seek protection or to chase danger and adventure.
There are far too many young people who share these sentiments. What’s new is that large numbers of them will air their grievances online in ways that, intentionally or not, advertise them to terrorist recruiters. What radicalized youth seek through virtual connections grows out of their experience in the physical world—abandonment, rejection, isolation, loneliness and abuse. We can figure out a great deal about them in the virtual world, but in the end, real de-radicalization requires group meetings and a lot of support, therapy and meaningful alternatives in the physical world.