In 2008, before the age of social media and mobile computing, I had an opportunity to travel to Lankaran, a small town in Azerbaijan near the Iranian border. The mayor took me to a school, a cement structure with drab, spartan classrooms. The kids we encountered wore the standard uniform of their millennial counterparts around the world—torn jeans, baggy pants, T-shirts, baseball caps.
I led a conversation in which the kids expressed great curiosity about the United States and what it was like to live as a Muslim there. I asked them about their lives, their hopes for the future, and their thoughts about current events. I also asked where they went to learn about their religion. I was expecting that they would cite their parents, relatives, or teachers. They had another answer: the internet.
“How do you know that the people who are answering your questions online know what they’re talking about?” I asked.
“Well, we know because friends tell us to go to certain websites— that the people on there are good. They know.”
“But you don’t really know that these people really are legitimate scholars, right?”
The teens were quick to jump in. “We see they are popular, and they seem like they know. So, we believe these people are good.”
We wrapped up the discussion, and I went on to meet with a number of teachers at the school. What they told me was disturbing. They said that these kids, who were so quick to believe what they saw online, were changing their behavior in dramatic ways, such as beginning to fast on certain holidays that, traditionally, were not observed by locals. “For these kids, it’s all about Google,” one of the teachers told me. “Whatever Google tells them, they do. It’s beyond our control.”
We all know that millennials are the “wired generation,” and we’ve been inundated with media reports about how extremists are seducing kids online. With tens of thousands of tweets produced each day by extremists in various languages, social media has clearly become a serious and dangerous means of indoctrinating youth into violent ideologies. Yet as we dissect the constantly changing so-called Islamic State recruiting machine, what many people don’t realize is that the group’s online success in recent years is part of a much larger set of technological habits that have developed since 9/11.
Even in less tumultuous times, adolescents of all ethnicities have a pretty rough time of it. These are the years when we begin to craft adult identities, when we question certitudes we grew up with as children, and, as the psychologist Erik Erikson and others have theorized, when some of us experience painful “crises” of identity. But at the exact moment when hundreds of millions of Muslim millennials were passing through this developmental phase, a horrific criminal attack on the world’s superpower made these normal struggles with identity unfathomably more difficult, more charged, more painful. Every day since 9/11, Muslim youth have been bombarded by negative news, messages, and imagery about their heritage. It is constant, inescapable. As a kid, what do you do with that?
Muslim youth have gone online for answers, seeking information about Islam and opportunities to share their religious beliefs. Religious exploration no longer means going to the old man with the longest beard and highest hat. Rather, it means going to Sheikh Google. In their innocence, they have found a dangerous space largely dominated by conservative interpretations of Islamic texts.
The solution isn’t to ban Muslim millennials from the internet, or to censor harsh interpretations of Islam we dislike. We must do something else: find ways to project more credible, moderate voices online. And we can do this through an approach called “countering violent extremism” (CVE), which focuses on strengthening local communities to resist extremist ideas, exposing youth to alternative ideas about identity and belonging, and putting a social, mental, and cultural system in place to support these efforts. If impressionable youth encounter a multiplicity of ideas and not just conservative ones, they’ll have a healthier perspective on their religion and apply a more critical gaze.
I only came to appreciate the importance of the identity crisis affecting Muslim millennials because of the unusual opportunities I had to visit diverse Muslim communities, first in Europe in 2007 and 2008 and afterward around the world. I conducted this travel over an eight-year period, during my work on the National Security Council, at the State Department, and on the Homeland Security Advisory Council.
It’s not that the Muslims I met during my journeys weren’t going to the mosque or to religious school—they were. But they weren’t accepting the messages they heard at face value. And while the Islam pushed by extremists was of an ultraconservative sort, it was the local Muslim leadership that was frequently seen by youth as the conservative party, because they were the ones who seemed unwilling to accept that, post-9/11, what was needed was reinvention and adaptation.
In 2011, when I visited the American Center in Jakarta, I spoke with a group of teens who liked to go online themselves to learn about their religion, because their parents were too “old-fashioned.” Whereas in previous decades, the mosque’s call to prayer might have reminded their parents that it was time to pray, these teens could now rely on their smartphones. The technology helped kids feel that they were experiencing religion in a current, cutting-edge way—they were staying abreast of global trends affecting their generation.
There’s another reason technology appealed so strongly to Muslim youth, and continues to appeal: it allows them a customized experience of religion. Millennials of all faiths are used to maintaining control over their destinies in every area of life. They have grown up with technology that allows them to watch and listen to whatever they want, and with a vast array of consumer products customized to their individual needs. I consistently found them patching together new, individualized versions of Islam out of disparate ideas, beliefs, and practices.
The Sheikh Google phenomenon does not turn every young Muslim it touches into a budding extremist. But some young people are more likely than others to embrace curated expressions of religion and extremist ideology. As technology entrepreneur Shahed Amanullah, who formerly served as my senior advisor for technology and new media, explains, extremists are “particularly successful” appealing to youth who “don’t have a strong rooting in the religion”—youth who strongly desire to “be” Muslim, but who are cloudy on the specifics of the rules and traditions. “That’s the perfect target for extremists,” he says.
Much has been written about the recruiting efforts of the so-called Islamic State and other groups, so let me just make a few basic points. These efforts remain massive and growing. In addition, extremist groups continue to expand aggressively into new mediums oriented toward youth, including multiplayer video games and cartoons. The so-called Islamic State posts in multiple languages, and makes use of high-end production techniques, using services such as JustPaste, WhatsApp, and SoundCloud to get its message out. In 2015, the group was disseminating as many as 90,000 tweets a day. It even had a twenty-four-hour “Jihadi Help Desk” system in place, utilizing YouTube and Twitter to help train would-be terrorists in how to evade detection by the authorities.
Even as conservative and extremist views have abounded on the Web, certain elements of the medium have made it much harder for millennials to engage with it critically. The ethos of the open internet is that search results are determined not by careful evaluation of content so much as by popularity. Cascading comments might add context, but sifting through them can efface the immediacy that makes online research so compelling. When I’ve spoken with Muslim youth, I’ve often observed an alarming failure to scrutinize the ideas they encounter online. If a given image or posting had hundreds of likes or a given imam had tens of thousands of followers, or if a friend had retweeted a picture or an article, that was often enough for them.
Alienated and defensive Muslims often circulated and promoted conspiracy theories involving America, Jews, and 9/11. Kids would tell me that it wasn’t Al Qaeda who brought down the twin towers, but the Jews. Or they would say that 9/11 never even happened; Jewish executives in Hollywood created the whole thing. (The number of non-Muslim Americans who believe the 9/11 attacks were an inside job is also shockingly large.) When I asked kids how they knew about these notions, they would frequently say, “I read about it online.” If they read it online, it was real.
Eli Pariser has written about how platforms that offer a personalized Web experience create “filter bubbles” that seal us in and prevent us from grappling with opposing views. Driven by our prior online behavior, highly sophisticated algorithms, potentially drawing on thousands of pieces of data, present us with content we are likely to find incredibly compelling. Disseminators of fake news appeal to our emotions, in effect disabling our rational faculties. Pervasive educational gaps only deepen the problem.
Of course, we also create our own personal media bubbles online through our preferences, settings, and other choices. A young man I met in Finland in 2008 told me that he sought to avoid thinkers who advocated greater gender equality and the ability of Muslim women to work outside the home. When I challenged him, he asserted that women don’t work under “authentic Islam.” I replied that the Prophet’s wife had been a wealthy businesswoman who had hired Muhammad before he became a prophet. My young interlocutor shook his head. “That doesn’t have any relevance for today, and I don’t want to hear people who put forward such arguments. They are feminists.”
The phenomenon of Sheikh Google points us to a number of important measures that we should adopt to help us win against the extremists. By “we,” I don’t simply mean government, but society at large, and large technology companies in particular.
Technology executives will often argue that their products and services are neutral, and that bad people—extremists of all kinds—are turning them to evil purposes. They are merely media platforms, they say, not publishers.
But over the last few years, in response to international pressure, large technology companies have announced a range of actions, including, as one media report put it, “suspending accounts, using artificial intelligence to identify extreme content, hiring more content moderators, developing and supporting counter-speech campaigns, and creating a shared industry database of hashes—unique digital fingerprints—for violent terrorist images.” In June 2017, Facebook, Twitter, Microsoft, and YouTube announced the creation of the “Global Internet Forum to Counter Terrorism,” a new “global working group to combine their efforts to remove terrorist content from their platforms.” Google, meanwhile, has announced that it will spend a million British pounds to “promote innovation in combatting hate and extremism” in the United Kingdom, funneling some of this money to established nongovernmental organizations.
All of these examples, and more, represent welcome moves—and one hopes, the beginning of a sustained and serious effort to confront extremist ideology online. What would such an effort entail? Well, quite simply, a lot more than what those companies have currently offered.
Technology companies need to devote substantial resources—both financial and nonfinancial—to disrupting the entire process by which kids become radicalized online. We need technology companies to go all in on countering violent extremism.
If a youth is running a search for “Muslim prayer,” what if she were directed away from sites about the so-called Islamic State version of how to pray and act as a Muslim, and toward sites that emphasize the diversity of Muslim traditions and beliefs? If a young person were searching for the meaning of a quote from the Qur’an, what if he were given precise, curated, educated, peer-generated content that taught him how to analyze the Qur’an and its scholarly interpretations?
To those who might dismiss such proposals, regarding them as tantamount to censorship, we should remember that the First Amendment doesn’t protect all speech. Community standards factor in—and those standards change over time. In 2004, Al Jazeera gave Osama bin Laden valuable minutes to air his views before the network’s viewers. Today no network would do the same for so-called Islamic State leader Abu Bakr al-Baghdadi, because we now appreciate the gravity of the extremist threat.
Technology companies might reach youth with CVE efforts in a variety of ways. More sophisticated software could detect extremist content as it is posted and provide warning systems to users. Companies could also put systems in place to alert parents that their children have downloaded videos with extremist content. They could monitor usage of their sites and produce regular public reports about online hate speech. They could support free workshops for parents and kids, educating them not just about online dangers, but about how algorithms work to keep their kids and parents inside echo chambers. Google has already invested in the development of a highly innovative digital literacy and engagement program in the United Kingdom called Be Internet Citizens, developed and run by the Institute for Strategic Dialogue. Rolled out across youth centers, the program equips young people to understand how technology and propaganda—filter bubbles, us-and-them messaging—can dupe and divide us. We need to expand this program and others like it around the world.
Government can play a role in encouraging tech companies to step up. One option might be to reward companies that take specific steps to make the virtual world safer, offering tax incentives and various forms of public recognition. As regards the latter, we might create a scoring system for technology companies, awarding high ratings to those that take specific actions. More broadly, government must endeavor to serve as a partner for technology companies, aiding and guiding their efforts.
Inevitably, some companies will continue to resist what they see as government interference. Yet there is a fair and productive middle ground that respects the tech sector’s historical autonomy, while still allowing us to make inroads on CVE. To find this middle ground, government should put its money where its mouth is, sharing the burden with tech companies and refraining from taking an overly adversarial stance.
If we don’t address the Sheikh Google phenomenon far more directly and aggressively than we have to date, we’ll see more extremist content across platforms, in multiple languages. We’ll see more kids lured onto encrypted channels that the authorities can’t monitor. We’ll see extremists using video games and gaming-related platforms like Twitch to target youth. We’ll see many more attacks broadcast online in real time, including “appointment viewing” incidents in which terrorists announce attacks at an undisclosed location and invite viewers to tune in.
Farah Pandith, F95, H18, is a former American diplomat and served as the first-ever Special Representative to Muslim Communities. She is a senior fellow with the Future of Diplomacy Project at the Belfer Center for Science and International Affairs at the Harvard Kennedy School as well as an adjunct senior fellow at the Council on Foreign Relations. This essay has been adapted from a chapter called “Sheikh Google” in her new book, How We Win: How Cutting-Edge Entrepreneurs, Political Visionaries, Enlightened Business Leaders, and Social Media Mavens Can Defeat the Extremist Threat. Copyright 2019 by Farah Pandith. Reprinted by permission of HarperCollins.