
WHEN ARTIFICIAL INTELLIGENCE IS NOT YET INTELLIGENT ENOUGH: MAKING THE CASE FOR STRATEGIC ALTERNATIVE-NARRATIVES TO COMBAT

EXTREMISM IN THE UNITED STATES ONLINE

Master of Arts in Law and Diplomacy

Submitted by Marina Shalabi April 20, 2018

Capstone Advisor: Carolyn Gideon, Ph.D.


Abstract

Technology and social networks have changed the way we communicate both on and offline. In a world where tech platforms are also exploited by extremists looking to radicalize and recruit the next promising cohort, questions arise about how we can best detect, prevent, and counter this threat online. This study examines both the opportunities and the limitations of U.S. domestic efforts to counter extremism in the online space. The primary research question addressed in this study is: what narrative components and outreach strategies are best suited to decrease the demand for ISIS-inspired extremist content online? While the focus of this study is ISIS-inspired narratives and counter-narratives, it is critical to acknowledge the presence of a wide-ranging spectrum of extremist ideologies domestically in the United States, all of which must be treated differently. In light of the large volume of extremist content online, governments across the globe have placed pressure on technology companies to remove material from their platforms. To that end, companies have turned to artificial intelligence to scale the speed of removal. These automated efforts alone, however, are not enough to effectively counter extremist content online. This paper utilizes scholar Lawrence Lessig's theoretical communication framework to conclude that regulating online extremist content well requires a combination of laws, norms, market prices, and technology. The focus of this study is specifically the role of changing norms by creating effective counter-narratives. Successful counter-narratives need 1. targeted strategic messages, 2. credible messengers, and 3. appropriate mediums used to scale. As new and emerging communication technologies extend their reach, so too do new opportunities to mitigate the spread of radicalization online.


Table of Contents

I. Introduction…………………………………………………………………...…………5
    A. Background and Motivation
    B. Research Question
    C. Methodology

II. Theoretical Foundations…………………………………………………………..…….10
    A. Laws and Regulations
    B. Norms
    C. Markets
    D. Technology

III. Literature Review………………………………………………………………………18
    A. Background: How Technology Facilitates Radicalization
    B. Legislation: Regulating Harmful Speech and Content Online
    C. Changing Norms: Countering the Narrative Domestically Online
    D. Markets: Profiting from Extremist Advertisements, Profits, and Extremism
    E. Artificial Intelligence and Leveraging the Internet's Architecture

IV. Changing the Norms………………………………………………………………..…30
    A. The Message
    B. The Messenger
    C. The Medium

V. Case Studies……………………………………………………………………………41
    A. Institute for Strategic Dialogue: One-to-One Messaging
    B. Average Mohamed
    C. University of Maryland Peer2Peer

VI. Policy Discussions and Recommendations……………………………………………..49

VII. Conclusion & Further Research…………………………………………………...…….53

VIII. Appendix……………………………………………………………………………….55
    A. Radicalization by the Numbers
    B. Analytical Framework and Analysis: The Messenger
    C. Measuring Effectiveness

IX. Bibliography……………………………………………………………………………60


[Figure: What role did the internet play in the individual's radicalization? State of habitation and level of internet radicalization for ISIS-inspired cases of radicalization, by state. Source: PIRUS (Keshif), accessed April 2, 2018.1]

The dataset covers the period 1948-2016 and was coded using entirely public sources of information. The figure above measures the role the internet played in an individual's radicalization (ISIS-inspired) by location. The higher the score, the greater the likelihood that the internet played a role in an individual's radicalization in that state.

1 "Profiles of Individual Radicalization in the United States - PIRUS (Keshif)." START.umd.edu. http://www.start.umd.edu/profiles-individual-radicalization-united-states-pirus-keshif
Scoring: 0 = No known role of the internet in the individual's radicalization; 1 = Internet played a role but was not the primary means of radicalization (e.g., internet resources were used to reaffirm or advance pre-existing radical beliefs); 2 = Internet was the primary means of radicalization for the individual (e.g., initial exposure to ideology and subsequent radicalization occurred online); -99 = Unknown (filtered); -88 = Not Applicable (radicalization occurred before 1995) (filtered)


I. INTRODUCTION

A. Background and Motivation

On October 31st 2017, 29-year-old Sayfullo Saipov rammed a truck into innocent civilians on a

busy lower Manhattan bike path, killing eight. How did he get there? On his cell-phone alone,

investigators found 90 videos and 3,800 images of beheadings, shootings, and bomb-making

instructions, among other pro-ISIS propaganda in his possession.2 Saipov's collection of ISIS

material is not a standalone phenomenon. While technology and social media platforms have in

many ways revolutionized how the modern world communicates day-to-day, their use

also enables the exchange of vast amounts of harmful material online. Now more than ever, we can

reach more people in more places with unparalleled ease and speed. In a similar way, however,

extremists have weaponized the same platforms, websites, messaging services, and search engines

alike in a digital media campaign to recruit and mobilize the next generation of sympathizers

to their cause.

If history is any guide, regardless of the physical environment, extremist thoughts do not simply

vanish. As the US-led coalition against ISIS recently claimed, 98% of territory once held by the

group in Iraq and Syria has now been recaptured.3 Platforms like Twitter, Facebook, Telegram, and

WhatsApp are able to keep the group’s ideas alive regardless of location. As one article points out,

“Decades of border disputes, violent conflict, and shifting refugee populations have left millions of

2 Solon, Olivia. “Sayfullo Saipov Had 90 Isis Videos on His Phone. Has the Fight against Online Extremism Failed?” The Guardian, Guardian News and Media, 4 Nov. 2017, www.theguardian.com/us-news/2017/nov/04/sayfullo-saipov-isis-online-propaganda-new-york-terrorism. 3 “Islamic State and the Crisis in Iraq and Syria in Maps.” BBC News, BBC, 28 Mar. 2018, www.bbc.com/news/world-middle-east-27838034.


Muslims without a clear national identity. ISIL’s virtual caliphate offers them citizenship free from

terrestrial constraints, which can be accessed from anywhere in the world.”4 This virtual flexibility

allows the diminishing physical caliphate to live beyond borders and to become more populated

than ever. From June to December 2017, for example, the Program on Extremism analyzed and

tracked pro-ISIS channels on the encrypted messaging application Telegram. During that period,

findings show that ISIS supporters shared over 48,000 photos, along with thousands more documents, videos,

links, and audio recordings.5 In response, critics and government officials called on governments to

draft legislation that punishes, with heavy fines and censure, those platforms that fail to comply.6

Studies show that the internet acts as a facilitator in the radicalization process, but radicalization cannot

be attributed to the internet alone. Research shows that the internet may enhance opportunities to

become radicalized, for example by surfacing extremist material that aligns with a user's existing beliefs.7

Technology giants such as Facebook, Twitter, Google, and Microsoft have received a wave of

criticism over their failure to remove extremist content from their platforms in a timely fashion.

In part as a response to this criticism, in June 2017, Facebook, Microsoft, Twitter, and

YouTube announced the formation of the Global Internet Forum to Counter Terrorism to address

the use of their platforms to promote extremist propaganda.8 While revolutionary technology has made

unprecedented efficiency and speed possible, it has also given rise to pressing security challenges.

4 “#Virtual Caliphate.” Center for a New American Security, www.cnas.org/publications/reports/virtual-caliphate. 5 “Telegram Tracker”. Program on Extremism. Fall 2017. https://extremism.gwu.edu/sites/g/files/zaxdzs2191/f/Telegram%20Tracker%20Fall%202017%20(5).pdf 6 Patrikarakos, David. “Social Media Spreads Terrorism and Propaganda. Police It.” Time, Time, 2 Nov. 2017, time.com/5008076/nyc-terror-attack-isis-facebook-russia/. 7 “Social and News Media, Violent Extremism, ISIS and Online Speech: Research Review.” Journalist's Resource, 24 May 2016, journalistsresource.org/studies/society/social-media/social-media-violent-extremism-isis-online-speech-research-review. 8 “Facebook, Microsoft, Twitter and YouTube Announce Formation of the Global Internet Forum to Counter Terrorism.” Microsoft on the Issues, 27 June 2017, blogs.microsoft.com/on-the-issues/2017/06/26/facebook-microsoft-twitter-youtube-announce-formation-global-internet-forum-counter-terrorism/.


Today, technology companies and governments around the world have struggled to balance their

support for free expression and the need to address extremist content online.

While policymakers and strategists alike are seeking ways to counter extremist content in the

online space, much of the effort to date has placed enormous emphasis on restrictive content

removal. Measures such as these, however, at best temporarily contain the issue at hand.

Content removed from one website can quickly appear on another. Given the speed of uploads and

limited resources for law enforcement, it is important to consider alternative ways in which we can

counter extremism by limiting both the supply of extremist content and its demand. Any effort to

find solutions must look beyond traditional methods of pointed laws and forceful content removal

and towards more sustainable policies that seek to address the problem at the source: radicalization.


B. Research Question

Technology companies, governments, and practitioners have struggled to address the issue of

extremist content online. It is imperative, then, to understand what could, and perhaps should, be done

to balance security concerns, privacy, and free speech. This thesis aims to provide a sample

framework to develop best practices in the field of counter-narratives to analyze the most strategic

ways practitioners can regulate and address extremist content online in the United States. This study

will investigate the ways in which strategic communications and narratives can shift norms and

appeal in cyberspace to answer the following primary research question: What narrative

components and outreach strategies are best suited to decrease the demand for ISIS-

inspired extremist content online?

C. Methodology

In identifying the most strategic communication tools and strategies to counter extremism online, a

combination of theoretical, research, and analytical frameworks will be used to

supplement quantitative and qualitative data on radicalization and counter-narratives. First, in order

to understand the regulatory environment in which this issue falls, I will begin by using Lessig’s

framework of internet regulation to assess the theoretical foundations of how cyberspace is

regulated through the lens of 1. Laws, 2. Norms, 3. Markets, and 4. Architecture. Next, I will

specifically focus on regulating through “norms” by investigating how alternative and counter-

narratives can shift a user’s behavior online. In this section, I will conduct an in-depth analysis of

three narrative components and outreach strategies to determine best practices for each: 1. Message,

2. Messenger, and 3. Medium. While examples of domestic counter-narrative research and programs

are limited, this study will conduct three comparative case studies to understand the

applicability of the suggested methodologies in action. The study will conclude by providing


recommendations to government officials and practitioners on approaches to consider in efforts to

counter the demand for extremist content online.


II. Theoretical Foundations: Laws, Norms, Markets, and Architecture

To begin answering how best to create alternative narratives online to shift the appeal of

extremist content, an understanding of the regulatory environment in cyberspace is required. A

sound framework to use in grasping this environment is grounded in Lawrence Lessig’s theoretical

foundations defined in his book Code and Other Laws of Cyberspace: Version 2.0. A framework that

incorporates both opportunities and challenges faced in communication, legal regulation, soft

power, and psychological influence is needed to explain a fuller picture of this complex topic. To

start, Lessig argues that there are four primary ways to regulate one's behavior online: 1. Laws, 2.

Norms, 3. Architecture (or Technology), and 4. Markets. A combination of these theoretical

frameworks can help us conceptualize how best to constrain the production and consumption of

extremist content in cyberspace.


A. Laws and Regulations:

First, practitioners must consider legal regulations as a starting foundation in strategizing

options to limit harmful content online. Lessig defines the term legal regulability as the “capacity of

a government to regulate behavior within its proper reach.”9 The author outlines several ways in

which law regulates in cyberspace: copyright, defamation, and obscenity, for example, continue to be

restricted both by governments and technology companies. The author argues that in order to

understand how well law regulates, several questions concerning a user's identity must be

answered, including who someone is and where they are. Addressing these questions could help us

answer who, what, where, and how to best regulate. But how much regulation should be acceptable

without encroaching on privacy and free speech? In regulating public broadcasting, for example, the

Federal Communications Commission was authorized by Congress to regulate content, where only

the licensed had authority to speak.10 To what extent, however, should the same regulations that

apply to television, news, and radio media apply in cyberspace?

Many could argue that perhaps censoring harmful material online could be the definitive

answer to the presence of extremist content on the web. Lessig, however, criticizes such arguments,

asserting that we must not rely on the government’s role in restricting speech alone, “...between two

solutions to a particular speech problem, one that involves the government and suppresses speech

narrowly, and one that doesn’t involve the government but suppresses speech broadly, constitutional

values should tilt us to favor the former...First amendment values should lead us to favoring a

speech regulation system that is thin and accountable, and in which the government’s action or

inaction leads only to the suppression of speech the government has a legitimate interest in

9 Lessig, Lawrence. Code and Other Laws of Cyberspace: Version 2.0. Basic Books, 2006. p.126. 10 Lessig, Lawrence. Code and Other Laws of Cyberspace: Version 2.0. Basic Books, 2006. p.271.


suppressing.”11 We must consider this tension in regulation between freedom of speech and

suppression as a guiding principle for the challenges that lie ahead in this domain. While legally

limiting content in cyberspace can significantly impact which material a user gains access to, there

remains a range of alternative means by which practitioners can decrease the consumption of

extremist content online.

A thorough theoretical understanding of this delicate balance behind the use of

communication and technology in the “public interest” perhaps could be a useful framework in

understanding legitimate policy options. Scholar Rebecca MacKinnon goes into depth on the

concept of speech regulation for the public-interest. In her book Consent of the Networked, the

author argues that “in most countries, the primary way of holding companies accountable to the

public interest is by passing laws.”12 In Dr. MacKinnon’s argument, laws and regulations for the

public interest are a means by which we can approach regulating harmful behavior online.

MacKinnon argues that while regulation is clearly needed, privacy legislation remains a concern for

companies that fear the government's role in regulation. This issue highlights only a fraction of

the tensions among privacy, free speech, and regulation. MacKinnon further articulates her concerns

regarding government’s role in regulation, arguing that there is “no country on earth whose

government has not in some way sought to extend its power through private networks.” 13 The

author cites several trade-offs in balancing surveillance against crime, terrorism, and cyber-

attacks.14 Dr. MacKinnon’s arguments on government’s role in legislation and the gap between the

reality of technological innovation and legislation are indeed what stands at the center of a heated

debate in light of vast technological advancements.

11 Lessig, Lawrence. Code and Other Laws of Cyberspace: Version 2.0. Basic Books, 2006. p.255. 12 MacKinnon, Rebecca. Consent of the Networked: the Worldwide Struggle for Internet Freedom. Basic Books, 2013.P 175 13 MacKinnon, Rebecca. Consent of the Networked: the Worldwide Struggle for Internet Freedom. Basic Books, 2013.P 175 14 MacKinnon, Rebecca. Consent of the Networked: the Worldwide Struggle for Internet Freedom. Basic Books, 2013.P 175


B. Norms

In light of high costs associated with enforcing legal regulations, Lessig’s model of regulation

turns to norms. According to the author, optimal protection in cyberspace often contains a mix of

“public law and private fences.”15 How, then, could we use private fences to limit the appeal and

consumption of harmful material online? Norms, as described by Lessig’s model to regulate internet

behavior online, could be used here as a proxy for limiting the appeal of violent extremist content by

providing alternative narratives. Changing the norms can also be used to regulate behavior in

cyberspace. According to Lessig, social norms are similar to law in that they are effective rules

imposed by society. These rules are normative constraints organized through members of a

community. Norms can constrain through social stigma imposed by the community for example. In

using this foundational framework in cyberspace, technology can both undermine and support

norms.16 As an example, Lessig adds, "Talk about Democratic politics in the alt.knitting newsgroup,

and you open yourself to flaming...a set of understandings constrain behavior, again through the

threat of ex post sanctions imposed by a community.”17 Similar to these normative constraints

placed by community members in online forums, it is important to realize the ways in which

practitioners can leverage these private fences to normalize behaviors online.

But creating norms online is not a minor task. The author describes a set of challenges in

creating norms in places where sense of community is absent. Lessig emphasizes that in places

where the community is not self-enforcing, norms are supplemented by laws. Adding a legal

dimension to community building goals can often lead to escalated tension, however. Other

15 Lessig, Lawrence. Code and Other Laws of Cyberspace: Version 2.0. Basic Books, 2006. P.123. 16 Lessig, Lawrence. Code and Other Laws of Cyberspace: Version 2.0. Basic Books, 2006. p.124. 17 ibid


challenges that arise in creating norms online include difficulties in deciphering identity in

cyberspace. Anonymity and lack of identification online for example, can make it more challenging

to create community and thus enforce norms.

While Lessig describes the concept of changing norms in both real space and cyberspace, scholars

like Joseph Nye discuss the importance of influence through a different lens. Nye's theoretical

foundations underlying the concept of “soft power” through shaping someone’s preference are

critical in conceptualizing shifting the appeal of extremist narratives online. To Nye, influencing

attitude as an alternative to hardline tactics is an indirect path to getting what we want, “soft power

rests on the ability to shape the preference of others”. Here, soft power is not merely to influence or

persuade. The author argues that simply put, “soft power is attractive power.”18 Persuasion then, is

argued to take a variety of shapes from creating shared values, culture, policies, and institutions. In

an attempt to address successful attempts to counter extremism online, soft power and shaping

preferences are important tools to enhance intrinsic appeal. How can attraction be used to positively

shift someone’s preferences positively?

Using soft power in cyberspace centers upon a critical understanding of the importance of

information resources to power. As the author describes, “Defined behaviorally, cyber power is the

ability to obtain preferred outcomes through use of the electronically interconnected information

resources of the cyber domain."19 What makes using soft power in cyberspace even more attractive

for Nye is its significantly low barriers to entry, the large volume of players, and the opportunity for

concealment. While this could encourage small states to play significant roles, it can also allow

18 “The Benefits of Soft Power.” HBS Working Knowledge, hbswk.hbs.edu/archive/the-benefits-of-soft-power. 19 Nye, & Harvard Univ Cambridge MA Belfer Center FOR Science International Affairs. (2010). Cyber Power.


malicious non-state actors to gain entry and leverage at a low cost. As Nye cites, “Al Qaeda videos

on the internet designed to recruit people to their cause are another case of soft power being used to

change people from their original preferences or strategies.”20 While we must recognize the

opportunities surrounding the use of soft power in cyberspace, we must also reconcile the parallel

vulnerabilities and asymmetries that may result.

C. Markets

Lawrence Lessig argues that markets can constrain through prices in both real space and

cyberspace. The author looks at the different mechanisms by which resources are transferred from

one person to the next online. Cyberspace is arguably one of the least regulated domains, and markets

in the technology industry are among the least regulated of any sector. Companies like Twitter and

Facebook are able to set the price of a product, the level of access a user has, and the “terms of

service" that govern it all. As Lessig describes, the constraints of the market exist "because of an

elaborate background of law and norms defining what is buyable and sellable, as well as rules of

property and contract for how things may be bought and sold." Lessig illustrates the

constraints of markets with a reference to cigarette usage, where markets and the

price of cigarettes constrain one’s ability to smoke. In this case, a change in price will also change the

constraint on buying cigarettes.

Could changing the price of access on Facebook, Twitter, and YouTube, for example,

change the demand? A first step to an answer should incorporate technology companies’ pricing

structures—where advertisements, not necessarily user fees, cover expenses. Lessig cites certain

examples of this in pointing out that advertisers do reward popular sites, implying a certain market

20 ibid


structure that aims to gather popular support and market opportunity. Indeed, when considering

how this theoretical concept of market restraint pans out in the online domain, it is also imperative

to recall that increasing the cost of platform usage to members in this case is largely contrary to

operating structures for platforms that have offered their services globally to millions for free. In

theory, however, increasing the price of a product, in this case, the cost of a user’s profile, would

reduce traffic and, by proxy, consumption of harmful material. The people, that is, the user profiles, are the

product and the profit for many of these companies. Changing the "price", in this case the

structure of advertisement profits, could, as with cigarettes, constrain one's ability to gain

access to harmful content.

D. Technology/Architecture

Code, according to Lessig, is the software and hardware that constitutes cyberspace and

implements protocols of regulation. Beyond the structure of legal, normative, and pricing

constraints, Lessig argues that the internet’s architecture or design also constrains what is feasible

and allowable as it chooses the terms upon which a user enters in cyberspace. The author argues that

while laws, norms, and markets constrain through judgment, "architectural constraints have

their effect until someone stops them.”21 Architecture in many ways according to Lessig is a form of

law, i.e., it determines what set of options one has to regulate. But setting those choices (creating the

code) is the first step. In cyberspace, one must understand how different codes are set up to regulate

behavior. For Lessig, "code is law": different sets of code, using a mixture of soft power and hard

power architecture, regulate behavior in cyberspace. Lessig argues that the way in which code

regulates and who the actual coders are (including who controls the types of codes they write) are

21 Lessig, Lawrence. Code and Other Laws of Cyberspace: Version 2.0. Basic Books, 2006. p.33


important in understanding the intersection between laws and technology in regulating

behavior online.

Given the ways in which technology and architectures build the foundational basis for code

in cyberspace, what steps can be taken to address harmful behavior online? As Lessig asserts,

regulability depends at least in part on identification. To know exactly who did what, when, and

where, Lessig cites three familiar technologies used to clarify one’s persona online: 1. Identity

(attributes such as name, sex, and location), 2. Authentication (thumbprint readers), and 3.

Credential (virtual wallets). The more we know about one’s identity the better our ability to regulate

that person’s behavior, as “on the internet, it is both easy to hide that you are a dog and hard to

prove that you are not.”22 While that example may seem extreme for some, it portrays a vivid image

of how obscured one's identity can be behind a screen, alluding to the difficulty that underlies

its regulation. An absence of self-authenticating facts reduces the regulability of

behavior in cyberspace. Lessig's explanations of both the opportunities and challenges that remain in

constraining and regulating cyberspace must inform any effort that seeks to address the

consumption of extremist material online.

22 Lessig, Lawrence. Code and Other Laws of Cyberspace: Version 2.0. Basic Books, 2006. p.33


III. Literature Review

A. Background: How Technology Facilitates Radicalization:

While the Internet has been characterized as an embodiment of the free marketplace of ideas, it

has also been identified as a catalyst in facilitating radicalization. It is important to note that

radicalization does not occur in a vacuum. Physical and virtual social networks are indeed drivers

that can lead to radicalization. Radicalization, according to The Center for the Prevention of

Radicalization Leading to Violence is the “process whereby people adopt extremist belief systems—

including the willingness to use, encourage or facilitate violence—with the aim of promoting an

ideology, political project or cause as a means of social transformation.”23 Studies show that face-to-

face interactions are often essential components of the radicalization and mobilization processes.

According to studies, however, the internet can encourage the radicalization of individuals,

including those who lack geographic access to key extremist figures.24 In a study by the

University of Maryland on individuals radicalized in the United States, the internet played at least

some known role in the radicalization process for 87.2% of ISIS-inspired extremists (Appendix

Section A). Before drafting an effective response to shift the appeal of extremist content online, we

must note that the path to violent extremism is not linear and can take a variety of shapes. To this

end, an effective response must be targeted specifically towards the persons engaging with extremist

content and their unique set of characteristics and behaviors.

As the face of extremism and terrorism continues to change rapidly, practitioners must

meticulously examine the internet as a means to facilitate radicalization and terrorism. While this

23 Ratna Ghosh, W.Y. Alice Chan, Ashley Manuel & Maihemuti Dilimulati (2016) Can education counter violent religious extremism?, Canadian Foreign Policy Journal, 23:2, 117-133, DOI: 10.1080/11926422.2016.1165713 24 Radicalization Dynamics: A Primer, Gang Enforcement. June 2012. http://www.gangenforcement.com/uploads/2/9/4/1/29411337/radicalization_process.pdf


study noted earlier that the internet may encourage the radicalization of individuals, there is a need to

understand the capacity of its role in an individual’s radicalization. To start, it is critical to examine

the ways in which the internet is used as a catalyst for recruiting supporters. A study exploring the

numerous ways the internet is used to radicalize underlined three primary ways it promotes dangers

in cyberspace:25 1. The internet illustrates and reinforces troubling messages, giving potential

recruits access to material that supports misguided views. 2. The internet provides

recruiters with low-risk means of connecting with potential new members to promote extremist

ideologies. 3. The internet establishes an environment of echo chambers and hateful

viewpoints that encourages violent behavior. These findings must guide strategists to develop a plan

to counter this threat online.

Alongside an analysis of the internet's role as a facilitator of extremist content, an analysis of the role of

technology companies as providers of these services needs to take place simultaneously. From the ability

to take instant photographs to posting videos that can go viral, technology companies hold

significant capabilities and allow users to use their platforms both for good and destructive

purposes. Technology companies also have the ability to display certain content to a specific user

depending on their demographics. As one article puts it, “They’ve become kind of more like

governments than companies with the amount of money they have, with the kind of power they

have over democracy in society.”26 As these companies have the power to choose when, where, and

how to influence actors that use their platforms, in what ways could we leverage this capability to

regulate extremism online?

25 Syed, Hasnain. “Safe Spaces Initiative.” Safe Spaces Initiative: Tools for Developing Healthy Communities - a Campaign by the Muslim Public Affairs Council (MPAC), www.mpac.org/safespaces/. 26 “How 5 Tech Giants Have Become More Like Governments Than Companies.” NPR, NPR, 26 Oct. 2017, www.npr.org/2017/10/26/560136311/how-5- tech-giants- have-become- more-like- governments-than- companies.


B. Legislation: Regulating Harmful Speech and Content Online

There are numerous legal frameworks that help guide internet content regulation in the

United States. Several guiding bodies, including both the Federal Communications Commission

(FCC) and the Federal Trade Commission (FTC), have incorporated provisions in their

mandates to regulate internet privacy, content, and beyond. For the purposes of this study, four

particular areas should be understood in greater depth to appreciate the current challenges in

countering extremism online: 1. Privacy, 2. Censorship, 3. Free Speech, and 4. Counter-messaging to

a domestic audience. This is by no means an exhaustive analysis of the legal frameworks that

guide regulation, but simply a glimpse into the legal regulatory environment that guides

conversations on governing extremist content online. Efforts to legally counter extremism online in

the United States intersect a range of delicate topics such as privacy and free

speech, creating immense challenges for policymakers drafting legislation. Understanding the

operating environment that surrounds these constraints will help illuminate the

limitations faced in addressing a subject as complex as harmful and extremist content on the

internet.

U.S. legislation to address extremist propaganda online has generally erred on the side of free

speech and expression. Rightfully so: cracking down on speech is a delicate and

thorny issue with tremendous stakes. In comparison, European allies like France and Germany

can hold Internet Service Providers (ISPs) legally accountable for extremist and terrorist

propaganda while the U.S. does not. While the Federal government strongly encouraged social

media companies to remove terrorist propaganda from their platforms, the U.S. government has

largely left this space free from heavy regulation. For a statement to be illegal, it must contain a

direct and credible threat against an individual, organization, or institution. This makes legal action


incredibly challenging due to biases in interpreting what is deemed a “credible threat” and what

could fall under the realms of “free speech”.

Nonetheless, many have argued that in order to effectively address extremist content legally

online, we must prohibit and remove it. According to studies, censoring the internet is “rarely

effective, except in the most repressive countries, which have full control over Internet access and

devote massive resources to policing its use.” 27 In Federal courts, judges have often erred on the

side of caution by protecting free speech.28 As one author notes, "even critics of what may be called

'free speech absolutism' concede 'the First Amendment is so central to our self-conception' that it

has come to define what being American means.”29 Regulating content through censorship or

removal will inevitably be limited in efforts to counter extremism domestically online. Censorship

and content removal, then, are not a panacea, given the domestic legal limitations constraining

government from acting to remove harmful speech and materials in the online space.

Difficulties in balancing censorship and free speech are far from the only challenges the U.S.

government faces in the domestic context of addressing extremism online. While the government is

allowed to message outwardly to foreign audiences to “counter” extremist narratives, it is

constrained from doing so at home. The Smith-Mundt Act (formally the United

States Information and Educational Exchange Act of 1948) is frequently cited as an example of legislation that

restricts the U.S. government from disseminating information to influence domestic public opinion, citing

concerns about government propaganda. Below is a brief excerpt from the body of this bill30:

27 Neumann, Peter R. “Options and Strategies for Countering Online Radicalization in the United States.” Studies in Conflict & Terrorism, vol. 36, no. 6, 2013, pp. 431–459 28 Neumann, Peter R. “Options and Strategies for Countering Online Radicalization in the United States.” Studies in Conflict & Terrorism, vol. 36, no. 6, 2013, pp. 431–459 29 ibid 30 Smith Mundt Act of 2010


(a) No funds authorized to be appropriated to the Department of State shall be used for the purpose of influencing public opinion or propagandizing in the United States. The provisions of this section shall not prohibit or delay the Department from responding to inquiries about its operations, policies, programs, or program material, or making such available to members of the media, public, or Congress.

Legal regulations to limit the consumption of harmful and extremist content online are quite limited

domestically. Actors like the Global Engagement Center are charged with leading U.S. government

efforts to counter terrorist propaganda. These efforts, however, cannot be used to disseminate or counter this

information at home due to domestic restrictions.31 As exemplified by the Smith-Mundt Act, legal

restrictions on disseminating information online (even to counter extremism) prevent the U.S.

government from acting preventatively as an outward-facing messenger in this space. In light of

these restrictions, it is critical to consider alternatives to public law in an effort to find the best

strategies to counter the spread of extremist content online.

C. Changing Norms: Alternative Narratives

As discussed above, efforts to simply decrease the supply of extremist content online are not

enough to counter extremism. We must also work in parallel to decrease the demand through

providing alternative narratives. In 2013, scholar Peter Neumann conducted a study on online

radicalization and strategies for countering extremist propaganda in the United States. The article

argues that there are three ways to address online radicalization. It concludes that various

approaches aimed at regulation including restriction of freedom of speech and removing online

content are the least effective. Neumann argues that governments should play a greater role in

reducing the appeal of extremist online content by encouraging civic challenges to address

31 Author Interview with staff from the Global Engagement Center July, 2017.


extremism.32 A policy report published by the International Centre for the Study of Radicalization

and Political Violence concluded that any strategy that hopes to counter online extremism and

radicalization must create an “environment in which the production and consumption of such

materials become not just more difficult…as well as less desirable." The report highlights the

importance of creating a strategy that encompasses methods to regulate or deter both

consumers and producers. The report concludes with a four-pronged strategy: 1. deterring producers, 2.

empowering online communities, 3. reducing the appeal, and 4. promoting positive messages to

counter extremism. The article exemplifies a central problem in the field: artificial intelligence will

not solve everything. While most governments have focused on calling for technology solutions to the issue

of radicalization, technology alone could perhaps be counterproductive. Authors of the report

conclude that "radicalization is largely a real-world phenomenon that cannot be dealt with simply [by]

'pulling the plug'."33 As policy-makers look ahead, it is critical, then, to look at alternative ways

to offset extremist narratives.

In order to understand concerns regarding integrating an appealing alternative narrative to

counter ISIS and al-Qaeda narratives, the Hedayah center and the International Centre for Counter-

Terrorism -The Hague organized a roundtable in June 2014 to bring together approximately twenty-

five leading experts on countering extremism from a variety of different countries. At the conclusion

of the meeting, organizers developed a report to outline best practices in constructing counter-

narrative frameworks against violent extremism. The report highlighted six predominant different

types of counter-narratives used in the field at the time: 1. Positive/alternative narratives 2. Strategic

counter-narratives 3. Ethical counter-narratives 4. Ideological and religious counter-narratives 5.

32 Neumann, Peter R. “Options and Strategies for Countering Online Radicalization in the United States.” Studies in Conflict & Terrorism, vol. 36, no. 6, 2013, pp. 431–459., 33 Neumann, Peter, Stevens, Tim. Countering Online Radicalisation: A Strategy for Action. http://icsr.info/wp-content/uploads/2012/10/1236768491ICSROnlineRadicalisationReport.pdf


Tactical counter-narratives and 6. Humor and sarcasm. These narratives need to identify the right

message, incorporate a local and credible messenger, and utilize the appropriate medium

accordingly. As the report suggests, “Attractive alternative narratives can contribute to the

prevention of radicalization and recruitment if they are delivered to the target audience by trusted

sources.”34 It is critical, then, to incorporate narratives rooted in credible messengers.35

While later sections will describe narrative components and strategies in greater detail, it is

critical to note that successful strategies should incorporate strategic messages, credible messengers,

and appropriate mediums. The U.S. government and private funders should play the role of

facilitator rather than orchestrator in promoting online voices for countering extremism.

Incorporating just the right message, messenger, and medium to persuade the target audience

through alternative narratives is key to strategies that hope to have any effective impact.

D. Markets: Profiting from Extremist Advertisements, Profits, and Extremism

At the core of a successful business is often financial profitability, and this concept is no

exception for technology companies’ operating structures. The market and business environment

often serves as a guide for technology companies' ethical and business practices. Many

believe that beyond profits, businesses such as technology companies have a moral responsibility to

remove extremist content from their platforms. These ideals, however, are rarely legally enforced

domestically in the United States, leaving the issue of harmful material in the hands of social media

companies. Many cite British Prime Minister Theresa May as having a more hands-on approach in

the United Kingdom. Prime Minister May told companies such as Google, Facebook, and Twitter to

34 “Developing Effective Counter-Narrative Frameworks for Countering Violent Extremism.” ICCT, icct.nl/publication/developing-effective-counter-narrative-frameworks-for-countering-violent-extremism/. 35 “Developing Effective Counter-Narrative Frameworks for Countering Violent Extremism.” ICCT, icct.nl/publication/developing-effective-counter-narrative-frameworks-for-countering-violent-extremism/.


take down terrorist-linked content and websites within two hours or face heavy fines. But the British

Prime Minister does not stand alone in pressing technology companies to do more

to remove extremist content online.36

Technology companies hold tremendous power over both content and its governance. A

Pew Research Center poll shows that a vast majority of Americans now own a cellphone or a smart-

phone. According to the poll, 94% of Americans between the ages of 18-29 own a smartphone.37

Technology has transformed parts of everyday life and technology companies made it possible to

use their platforms for media, business, government, and social interaction. With access to massive

amounts of data using artificial intelligence and machine learning, technology companies are able to

alter algorithms to shift the way in which users receive information. This capability has already been

deployed on a range of topics such as suicide prevention, hate speech, and extremism.38 Where, when,

and how technology companies choose to deploy these abilities remains critical in conceptualizing

the opportunities and choices that lie ahead.

Markets are important in deciphering what content and material is accessible to whom and

for what price. Technology platforms have increasingly self-regulated content from extremists and

otherwise, i.e., the companies’ governing bodies get to decide what material is available and for

whom. Companies such as Facebook, Twitter, Microsoft, and Google have developed the tools and

policies to both add and remove material from their platforms. Very little research has been done to

date that effectively evaluates the impact of companies' self-regulation on violent extremism. Due

36 McCann, Christopher Hope; Kate. “Google, Facebook and Twitter Told to Take down Terror Content within Two Hours or Face Fines.” The Telegraph, Telegraph Media Group, 19 Sept. 2017, www.telegraph.co.uk/news/2017/09/19/google-facebook-twitter-told-take-terror-content-within-two/. 37 “Mobile Fact Sheet.” Pew Research Center: Internet, Science & Tech, 5 Feb. 2018, www.pewinternet.org/fact-sheet/mobile/. 38 Solon, Olivia, and Sam Levin. “How Google’s Search Algorithm Spreads False Information with a Rightwing Bias.” The Guardian, Guardian News and Media, 16 Dec. 2016, www.theguardian.com/technology/2016/dec/16/google-autocomplete- rightwing-bias- algorithm-political- propaganda.


to substantial government pressure that requires technology platforms to address extremist content,

companies have chosen in large part to self-regulate by incorporating stricter user terms of service

that generally pivot them against extremism, violence, and hate speech. According to one study,

“they (companies) often reserve the right to take down or refuse to distribute such content, while

pledging to not disclose user information so as to respect their privacy (except in cases of harm done

to others or legitimate requests by the authorities)."39 Self-regulation practices, however, are not

standardized across the market. What violates the terms of service for one platform may be allowed

on another.

Markets have also taken it upon themselves to shape their users’ experience, including in the

field of counter- and alternative-narratives. One Google-backed startup, Moonshot CVE, for

example, works to redirect users to different messages in hopes of "nudging" their behavior in a

positive direction. In a program termed the Redirect Method, a partnership between Jigsaw and

Moonshot CVE uses targeted advertising for users who search for keywords associated with extremist

content. The program then directs those users to a curated YouTube library of anti-extremist

videos.40 The market mechanism that allows this process to take place is YouTube's AdWords

targeted advertising, most of which is otherwise used for profit. The ads link to both Arabic

and English YouTube channels that include pre-existing videos collected by Jigsaw. Such materials

include testimonials from former extremists, imams denouncing ISIS, and footage of the group's dysfunctional

governance inside northern Syria and Iraq. The Redirect Method connects people to authentic

credible messengers using marketing techniques such as targeted advertisements, providing an

39 “UNESCO Releases New Research on Youth and Violent Extremism on Social Media.” UNESCO, 6 Dec. 2017, en.unesco.org/news/unesco-releases-new-research-youth-and-violent-extremism-social-media. 40 “The Redirect Method.” The Pilot, redirectmethod.org/pilot/.


example of a framework that could prove to be effective in engaging the target audience.41
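
To make the mechanism concrete, the following Python sketch illustrates the redirect logic in its simplest form: a search query is checked against a watch-list of extremist-associated keywords and, on a match, an ad pointing to a curated playlist of counter-content is returned. The keyword phrases, playlist URL, and function name are illustrative assumptions, not the actual Jigsaw or Moonshot CVE implementation.

    # Minimal sketch of a Redirect Method-style keyword trigger (illustrative only; not the
    # actual Jigsaw/Moonshot CVE system). On a keyword match, the "ad" served points to a
    # curated playlist of counter-narrative videos instead of organic search results.
    from typing import Optional

    WATCHLIST = {"join the caliphate", "hijrah travel guide"}                 # assumed example phrases
    CURATED_PLAYLIST_URL = "https://example.org/counter-narrative-playlist"   # placeholder URL

    def redirect_ad_for(query: str) -> Optional[str]:
        """Return a redirect ad URL if the query matches the watch-list, otherwise None."""
        normalized = query.lower().strip()
        if any(phrase in normalized for phrase in WATCHLIST):
            return CURATED_PLAYLIST_URL
        return None

    print(redirect_ad_for("hijrah travel guide 2017"))    # -> curated playlist URL
    print(redirect_ad_for("weather in washington dc"))    # -> None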

E. Artificial Intelligence and Leveraging the Internet's Architecture

In 2015, for example, Terrence McNeil of Ohio was charged with soliciting the killings

of U.S. service members over Facebook, Twitter, and Tumblr. He was found in possession of

photos on his Facebook account praising the death of a Jordanian pilot who was burned alive by

ISIS. In light of similar controversies, Facebook responded by enhancing technological capabilities

and machine learning, such as image-matching to compare newly uploaded extremist images

to ones that have already been removed due to violations of its terms of service.42

As the fight against terrorism and extremism is constantly evolving in a multidimensional

space, so too is the technology and artificial intelligence capability that could potentially counter it. A

London-based firm, ASI Data Science, for example, created algorithms that purport to detect

and block extremist content online.43 Experts and practitioners alike have called for both

governments and technology companies to rethink their response to extremist narratives online.

This comes as technology companies find themselves at the heart of evidence that

extremists and terrorists have used their platforms to further their cause.

With reach in almost every corner of the globe, companies have the power to filter content

they deem unfit for their platforms. With the availability of public data on platforms, evolving

technologies have also made possible the creation of tools that work alongside social

41 “The Redirect Method.” The Pilot, redirectmethod.org/pilot/#results. 42 “Facebook, Microsoft, Twitter and YouTube Team up to Fight Terrorist Propaganda.” Los Angeles Times, Los Angeles Times, 5 Dec. 2016, www.latimes.com/business/technology/la-fi-tn-internet-terrorism-20161205-story.html. 43 Rodriguez, Ojel L. “The Fight against Extremism Is Turning to a Fight against Freedom.” Washington Examiner, Washington Examiner, 25 Feb. 2018, www.washingtonexaminer.com/the-fight-against-extremism-is-turning-to-a-fight-against-freedom/article/2649871.


media algorithms to detect harmful content. Hany Farid, a professor at Dartmouth College,

developed an algorithm named eGlyph based on software that helps detect online images of child

exploitation. This algorithm is reportedly able to identify and remove known extremist material on

the internet. When extremist content is found, eGlyph is activated. “The algorithm scans the billions

of uploads to social media platforms every day. When it finds a matching fingerprint, it reports its

findings to social media platforms."44 According to the founder, this technology helps flag

content, urging tech companies to take action and "enforce their own terms of service

more efficiently."45
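
The fingerprint-matching workflow described above can be illustrated with a deliberately simplified stand-in: compute a digest of each upload and check it against a store of digests of material already confirmed as extremist. eGlyph itself relies on robust (perceptual) hashing that tolerates re-encoding and cropping; the exact-match SHA-256 digest and function names below are assumptions used only to show the flag-and-report flow.

    # Simplified sketch of matching uploads against known extremist material (illustrative only).
    # eGlyph uses robust/perceptual hashing; an exact SHA-256 digest is used here purely to show
    # the flag-and-report workflow and would miss even slightly altered copies.
    import hashlib

    KNOWN_FINGERPRINTS = set()     # digests of material already confirmed and removed

    def fingerprint(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def register_known_content(data: bytes) -> None:
        KNOWN_FINGERPRINTS.add(fingerprint(data))

    def screen_upload(data: bytes) -> bool:
        """Return True (report to the platform for action) when an upload matches known material."""
        return fingerprint(data) in KNOWN_FINGERPRINTS

    register_known_content(b"bytes of a previously removed propaganda video")
    print(screen_upload(b"bytes of a previously removed propaganda video"))   # True
    print(screen_upload(b"bytes of an unrelated upload"))                     # False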

Technology platforms are also taking significant strides to self-regulate their content. In a report

written by Monika Bickert, Facebook's head of global policy management, and Brian

Fishman, Facebook’s counterterrorism policy manager, the authors stressed that while it is difficult

to monitor Facebook's two billion global users, who write in more than eighty different languages,

the company is taking the following steps to counter extremist content online:46

1. Content removal: Image matching to prevent formerly removed content from being

re-uploaded.

2. Artificial Intelligence: Experimenting with “language understanding” to recognize harmful

content using text signals

3. Algorithms: To find clusters of extremist content and accounts

4. Audience analysis: tracking users that persistently violate rules.

44 “How CEP's EGLYPH Technology Works.” Counter Extremism Project, 8 Dec. 2016, www.counterextremism.com/video/how-ceps-eglyph-technology-works. 45 ibid 46 Heilweil, Rebecca. “5 Ways Facebook Uses Artificial Intelligence To Counter Terrorism.” Forbes, Forbes Magazine, 16 June 2017, www.forbes.com/sites/rebeccaheilweil1/2017/06/15/5-ways-facebook-uses-artifical-intelligence-to-counter-terrorism/#1020ab3d49e6.


5. Privacy and Encrypted channels: Investigation of encrypted messaging apps like

WhatsApp that offer end-to-end encrypted services.
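
As a toy illustration of the "language understanding" experiment listed above (step 2), the sketch below trains a small text classifier on a handful of hand-labeled phrases and scores new posts for human review. It is a generic text-signal classifier under assumed toy data and thresholds, not Facebook's production system.

    # Toy text-signal classifier (illustrative only; not Facebook's production system).
    # Trains on a few hand-labeled phrases, then scores new posts so high-scoring ones
    # can be queued for human review.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = [                              # assumed toy training data
        "join us and take up arms for the caliphate",
        "the brothers who carried out the attack are heroes",
        "recipe for a quick weeknight dinner",
        "highlights from last night's basketball game",
    ]
    train_labels = [1, 1, 0, 0]                  # 1 = extremist signal, 0 = benign

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(train_texts, train_labels)

    for post in ["take up arms, brothers", "what a great basketball game last night"]:
        score = model.predict_proba([post])[0][1]   # probability of the extremist-signal class
        print(f"{score:.2f}  {post}")               # posts above a chosen threshold would be flagged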

The extent to which artificial intelligence and the internet's architecture play a role in providing

both opportunities and limitations in countering extremism online can be demonstrated in Mark

Zuckerberg's April 2018 testimony before the Senate's Commerce and Judiciary Committees.

Facebook’s chief executive explained that before A.I. (Artificial Intelligence), people could “share

what they wanted, and then, if someone in the community found it to be offensive or against our

policies, they'd flag it for us, and we'd look at it reactively.”47 For Zuckerberg, however, this issue is

now taking new shape as A.I. tools continue to develop. But even for Zuckerberg’s team of more

than 20,000 people working to secure content, some problems are easier to address with artificial

intelligence than others.48 Hate speech, for example, similar to content deemed “extremist” in the

online space, is “one of the hardest, because determining if something is hate speech is very

linguistically nuanced…”49 While Zuckerberg did claim success in the ability to automate a system

that is able to remove approximately 99% of ISIS and Al-Qaeda content on Facebook, the CEO

demonstrated that A.I. tools are not yet linguistically nuanced enough to accurately address different types of

content such as hate speech, especially in different languages. The CEO added that

he is optimistic it could happen in the next five to ten years. For now, however, artificial intelligence

is “just not there yet.”50

47 Government, Transcript courtesy of Bloomberg. “Transcript of Mark Zuckerberg's Senate Hearing.” The Washington Post, WP Company, 10 Apr. 2018, www.washingtonpost.com/news/the-switch/wp/2018/04/10/transcript-of-mark-zuckerbergs-senate-hearing/?utm_term=.6112207e6ec1. 48 ibid. 49 ibid. 50 ibid


IV. Changing the Norms

Regulating and removing harmful material from the web is only a part of the solution. This

effort must be met with counter and alternative narratives to decrease the appeal of extremist

content. Experts say that when “chased from Facebook, terrorists set up shop elsewhere.”51 If

content chased from one platform appears on another, one must strategize, then, how best to

decrease its appeal. This study argues that the most strategic way to counter extremism online uses

a mix of three communication strategies and components: 1. a skillfully crafted message targeting a

specific audience 2. A credible and relatable messenger and 3. The appropriate medium to scale

content with appropriate quantity and quality. These three primary layers of creating effective

alternative-narratives should be deployed together in combination with laws, markets, and

algorithmic regulations to decrease the appeal of harmful content online. Findings from this study

can help inform policy makers, practitioners, and technology companies how best to address the

intricate problem of extremism online.

51 Guynn, Jessica. “Facebook Says Artificial Intelligence Has Sped up Removal of Terrorist Content.” USA Today, Gannett Satellite Information Network, 29 Nov. 2017, www.usatoday.com/story/tech/2017/11/28/facebook-says-artificial-intelligence-has-sped-up-removal-terrorist-content/903615001/.


A. The Message:

A challenge that remains in the field of Countering Violent Extremism is efficiently creating a

message that aligns with the interests of the target audience. The target audience, in this case, is the one that

is identified as most at risk. Audience targeting goes hand-in-hand with creating a strategic

message. Delivering a carefully and creatively crafted message to the right audience is an essential

component of creating an alternative-narrative. An interdisciplinary approach to audience-targeting

using communication and business practices can extend to this domain and help provide a

foundational framework with which to counter extremist propaganda online. The section below

describes how best to formulate a strategic message based on audience demographics and

psychographics.

Crafting a strategic message towards a specific audience is a big task that needs to incorporate

demographic data specific to the target audience. Demographics can include characteristics such as

age, gender, and location but must be coupled with psychographics that relate to the audience’s

interests, attitudes, and inspirations. This allows messages to be carefully crafted in a way that aligns

with an audience's motivations. While defining psychographics can often be subjective, it is critical

to incorporate the interests, activities, and opinions that appeal to the target user.

Incorporating this information is critical in analyzing the effective appeal a message could have on

the target audience. After all, the purpose here is to influence or persuade the audience to believe in

a different and more compelling idea.

Identifying persuasive elements of story-telling and messaging is another essential component of

persuading a target population. As scholars Heath and Heath argue, there are certain principles that


help identify the increased appeal of a message. The authors argue that persuasive messages are

simple, unexpected, concrete, and credible, include emotional content, and contain personal

stories.52 In sum, after choosing a target audience by careful analysis of both demographics and

psychographics, alternative messages to counter extremism online need to incorporate elements of

persuasive messaging such as personal stories as described by Heath and Heath in order to create

alternative norms and shape a user’s preference in cyberspace.
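To make the pairing of demographics and psychographics concrete, a minimal sketch of how such an audience profile might be represented is shown below; the field names, example values, and the message_matches check are illustrative assumptions rather than part of any dataset or tool used in this study.

    from dataclasses import dataclass, field

    @dataclass
    class AudienceProfile:
        """An illustrative profile pairing demographics with psychographics (all values assumed)."""
        # Demographics: observable characteristics of the target audience
        age_range: tuple = (15, 25)
        locations: list = field(default_factory=lambda: ["Minneapolis", "Seattle"])
        # Psychographics: interests, attitudes, and motivations (inherently subjective)
        interests: list = field(default_factory=lambda: ["gaming", "short-form video"])
        preferred_platforms: list = field(default_factory=lambda: ["YouTube", "Instagram"])

    def message_matches(profile: AudienceProfile, message_tags: set) -> bool:
        """Rough check that a draft message speaks to at least one stated interest."""
        return bool(message_tags & set(profile.interests))

    profile = AudienceProfile()
    print(message_matches(profile, {"short-form video", "sports"}))  # True: the message aligns with an interest

In practice, the psychographic fields would come from qualitative research rather than fixed lists, which is why the framework treats them as subjective inputs.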

Audience Analysis and Targeting

While there are no strict rules for creating narratives, a critical component of successful

practices includes messaging that incorporates a thorough understanding of the target audience.53

Identifying a target audience should focus on specific communities that are considered vulnerable

both demographically and geographically. Extremist organizations hope to recruit sympathizers to

their cause, urging experts, policy makers, and civil society alike to target users beyond

demographics and interests and towards an approach that also considers how far along one is on the

radicalization spectrum. According to interviews with practitioners, this population is called “fence-

sitters." The term "fence-sitters" refers to those who are not yet convinced. As one report on

Countering Terrorist Narratives claims, “In particular, a key target audience for counter-narrative

efforts should be fence sitters, individuals who are showing an active interest in extremism (or are

being targeted by recruiters) but who are still undecided and have yet to mobilize.”54

52 Heath, Chip, and Dan Heath. Made to Stick: Why Some Ideas Survive and Others Die. Woonjing ThinkBig Co., 2007. 53 “Strong Cities Network.” ISD, http://strongcitiesnetwork.org/wp-content/uploads/2016/05/Briefing-Paper-1.pdf 54 Countering Terrorist Narratives. Best Practices from Around the Globe. http://www.marshallcenter.org/MCPUBLICWEB/mcdocs/files/College/F_PTSS/coi_reports/coi_report_countering_terrorist_narratives.pdf


A strategic approach in efforts to counter extremism online is to incorporate "fence-

sitters" into a preventative strategy. Fence-sitters are those who are sympathetic to extremist

narratives and somewhat engaged in the online radical community, but not yet motivated to act in

their own violent jihad. The goal at this stage, according to one study, is to "use the power of

scripture to delegitimize the radical narrative."55 Counter-narratives should address those very

recruitment mechanisms and be shaped preventatively to do so. The International

Center for the Study of Violent Extremism, for example, designed an approach to target fence-

sitters. These might be individuals who like both ISIS-affiliated pages and pages that disseminate

liberal and democratic views.56 One article analyzing intervention opportunities for counter-

narratives underlines the importance of targeting “fence sitters”, “...counter-narratives have to

exploit the vulnerabilities of “fence sitters” by providing credible alternatives. Condemning ISIS or

emphasizing liberal values is not effective in terms of countering ISIS’s propaganda. What is more

effective is to draw upon “fence sitters’” vulnerabilities by offering them an alternative to ISIS.”57

In looking at demographics of ISIS-inspired individuals in the United States, practitioners

can use information on an individual’s demographics such as age, education, work history, location,

and marital status, for example, as a way to create a message that reflects the demographic

trends of this population. The University of Maryland collected data on radicalized

individuals from 1948-2016, coded entirely from public sources of information. The figure

below represents some demographic data of ISIS-inspired radicalized individuals in the United

55 Helmus, Todd C., et al. Promoting Online Voices for Countering Violent Extremism. RAND Corporation, 2013. 56 Speckhard, Anne. “Anne Speckhard.” ICSVE, 8 Mar. 2018, www.icsve.org/research-reports/fighting-isis-on-facebook-breaking-the-isis-brand-counter-narratives-project/. 57 “Countering the Narrative: Understanding Terrorist's Influence and Tactics, Analyzing Opportunities for Intervention, and Delegitimizing the Attraction to Extremism.” Small Wars Journal, smallwarsjournal.com/jrnl/art/countering-the-narrative-understanding-terrorist’s-influence-and-tactics-analyzing-opportun.


States.58 For the purposes of this study, I focused on one of the demographic variables presented in

this dataset: Age. As Figure 1 below demonstrates, 77 out of 149 ISIS-inspired radicalized

individuals in the United States were between 15-25 years of age. This finding indicates that a much

younger, vulnerable population is at risk of radicalization, particularly on the internet. My qualitative

interviews with experts, practitioners, and policy makers also corroborated these findings,

indicating that "Generation Z" (individuals born after 1995) was the youngest and oftentimes

most vulnerable population to radicalization. The section below outlines a sample analysis of target

audience demographics and psychographics in the United States: Generation Z. What do

demographics and technology usage of Generation Z tell practitioners about the way in which a

message could appeal to their interests?

58 " Profiles of Individual Radicalization in the United States - PIRUS (Keshif) | START.umd.edu. http://www.start.umd.edu/profiles-individual-radicalization-united-states-pirus-keshif


Figure 1 Demographics of ISIS Affiliates in the United States out of 149 individuals: Age at Exposure59

Case Study in Audience Targeting: Generation Z

Generation Z, the cohort born after the mid-1990s, is characterized primarily by

its increased racial, ethnic, and religious diversity, its technology use, and the volume at which it is

entering the workforce. These trends should prompt policy-makers to harness both opportunities

and challenges that intersect technology, demographics, security, and society. With the spread of the

use of social media and online platforms also comes national security concerns. Young Gen-Z’ers,

for example, could also be the target of recruitment by violent extremists seeking support for their

radical ideologies. According to research, youth aged 13 – 18 in particular are being actively engaged

in extremist activities including online communication with known extremists who are often

recruiting charismatic young people.60 These factors indicate concerns and potential risk for

Generation Z’ers that are often characterized by their use of technology. A study on demographic

59 " Profiles of Individual Radicalization in the United States - PIRUS (Keshif) | START.umd.edu. http://www.start.umd.edu/profiles-individual-radicalization-united-states-pirus-keshif 60 Written by Neal Keny-Guyer, Chief Executive Officer, Mercy Corps. “How Can We Stop Young People Joining Extremist Groups?” World Economic Forum, www.weforum.org/agenda/2015/10/how-can-we-stop-young-people-joining-extremist-groups/.


trends of Generation Z can help policy-makers and strategists understand how to craft messages

that appeal to a younger and more tech-savvy demographic.

Generation Z currently makes up 25.9% of the United States population and is considered

the largest generational cohort demographically, and perhaps the one most critical to pay attention to.61

Generation Z is predicted to comprise one-third of the U.S. population by 2020,62 making it a

critical population to study in efforts to preventatively present alternatives to challenging extremist

narratives online. A report by the US Census Bureau highlights the increasingly diverse

demographic characteristics of America's youth cohort. The report

predicts that by 2060, 64% of children will belong to an ethnic or racial minority.63 While marketers

and business strategists were able to strategically market to this changing youthful market,

policymakers seem to lag behind.

Currently, as technology plays a bigger role in connecting more youth to harmful material

like extremist content online, an in-depth study of trends surrounding their technology-use is

essential. YouTube, Snapchat, WhatsApp, Telegram, Signal and Twitter are among several

platforms in high demand by this generation. Research shows that an estimated 96% of Generation

Z owns a smartphone. Generation Z is considered "digitally native," a term defined as an

"individual who was born after the widespread adoption of digital technology."64 Digital

native is a catch-all term for children who grew up using technology. One survey of

this population showed that social media plays a critical role in shaping identity amongst these

61 Beall, George. “8 Key Differences between Gen Z and Millennials.” The Huffington Post, TheHuffingtonPost.com, 5 Nov. 2016, www.huffingtonpost.com/george-beall/8-key-differences-between_b_12814200.html. 62 Beall, George. “8 Key Differences between Gen Z and Millennials.” The Huffington Post, TheHuffingtonPost.com, 5 Nov. 2016, www.huffingtonpost.com/george-beall/8-key-differences-between_b_12814200.html. 63 Sandra L. Colby and Jennifer M. Ortman. “Library.” Projections of the Size and Composition of the U.S: 2014-2060, 3 Mar. 2015, www.census.gov/library/publications/2015/demo/p25-1143.html. 64 “What Is a Digital Native? - Definition from Techopedia.” Techopedia.com, www.techopedia.com/definition/28094/digital-native.


young people. Many Gen Z’ers gravitate towards social media as a primary tool to connect with

peers. The study showed that 42% of this generation said that social media had a direct impact on

how they felt about themselves. Policy makers, businesses, and strategists alike must pay close

attention to changes in the use of digital technology to better adapt to future shifts. Increasing the

breadth of communication and interactions through technology to engage this generation is

essential in adapting to new realities.

B. The Messenger:

For any message to have a chance of persuasive appeal, a target

audience must find the messenger credible. Unfortunately, however, messengers often lack the

technical skills and capacity to develop appealing content. As mentioned earlier, a credible

messenger with legitimacy is key. An analytical framework exploring who can serve as such a messenger was created

based on research, best practices, and qualitative interviews (Appendix Section B). As studies show,

for example, former violent extremists and victims are considered credible messengers in providing

alternative narratives and testimonies that could potentially divert radicals from becoming violent

extremists.65 How do they compare to other types of messengers? As the analytical framework

shows, delivery is key. A CSIS report, for example, argues that “strategic communications efforts

will only be effective if they are organic, embedded in local peer networks, delivered by credible

messengers, and articulate a positive vision for society.”66 While we must encourage technology

65 Avis, William Robert. “The role of online/social media in countering violent extremism in East Africa”. http://www.gsdrc.org/wp-content/uploads/2016/06/HDQ1380.pdf 66 “Turning Point.” Turning Point | Center for Strategic and International Studies, 14 Nov. 2016, www.csis.org/features/turning-point.


companies to help provide resources necessary to deliver these messages, working on creating a

compelling product from a credible messenger is essential.67

Findings from the Analytical Framework

An analytical framework (Appendix Section B) was developed to explore who could act as a

credible messenger to provide alternative and counter messages to counter ISIS-inspired extremism

online. The framework incorporates a comparative case study analysis of the type of messenger and

the ways in which messages and mediums were used. Efforts to evaluate effectiveness in this study

use the Institute for Strategic Dialogue's metrics for evaluation: 1. Awareness, 2. Engagement, and 3.

Impact (Appendix Section C).

When looking at “One-to-one, or peer mentors”, “former” extremists, “community groups

or religious institutions", and the "U.S. or Western government", the analytical framework found

that well-constructed counter-narratives from direct one-to-one engagement, former extremists,

community organizations, and select government sponsored partnerships are key to influence target

audience awareness and engagement in hopes of shaping preference towards behavioral change.

These messengers are in the strongest and most strategic position to address extremism with

credible and compelling alternative narratives for use at such an urgent time.

67 ibid


C. The Medium:

With a messenger and message in mind, efforts to create effective alternative narratives are

not complete until there is an effective medium used to promote these messages to scale. According

to a study by Omar Ashour at the University of Exeter, “After building the message and

coordinating with the messengers, publicizing and propagating both of them becomes crucial. After

all, many of the battles won by violent extremists were on media fronts."68 An accurate understanding

of how, and on which platforms, the target audience spends time online is essential to selecting the proper

medium to offer a credible alternative narrative.

An effective medium to be utilized for counter and alternative narratives should take into

consideration context, content, timing, and the process by which information is disseminated. A

medium must accurately reflect the target audience. As discussed in earlier sections, demographics

and psychographics are key components of the type of message needed. This information must also

be translated across decisions on the type of messenger and platform used. A report by the

Department of Homeland Security asserts that counter-narrative messages must be “available in the

spaces (physical and virtual) that are frequented by the target audience in order for it to have any

effect.”69

Effective alternative and counter-narratives should be able to utilize all resources available

throughout the relevant mediums to capture the most strategic message that fits the target

audience’s core interests. For young, tech-savvy audiences like Generation Z, messaging should

68 Ashour, Omar. “Online De-Radicalization? Countering Violent Extremist Narratives: Message, Messenger and Media Strategy.” Perspectives on Terrorism, www.terrorismanalysts.com/pt/index.php/pot/article/view/128/html. 69 “Developing Effective Counter-Narrative Frameworks for Countering Violent Extremism.” ICCT, icct.nl/publication/developing-effective-counter-narrative-frameworks-for-countering-violent-extremism/.


include high-quality images and short videos as a means to forge a more powerful connection.

The section below outlines best practices in selecting the most appropriate medium using business

and marketing principles. The principles outlined here can help couple the type of platform

with the outreach features that could be appealing for the target audience70 71:

Selecting the Most Appropriate Medium

Platform | Type of Platform | Outreach Features

Facebook | Social network; content and news sharing | Facebook Groups and Facebook Ads; Facebook ads are considered highly effective for business marketing and beyond.

Twitter | Social network; content and news sharing | Hashtags, quotes, statistics, photos, polls, and short videos; attractive content needs to come in 140 characters or less.

YouTube | Videos | Quality videos with crisp, clear content.

Instagram | Media sharing | Stories, captions, and mentions of other popular users; hashtags may also be used.

WhatsApp | Messaging | One-to-one messaging; encrypted and private.

Telegram | Messaging | One-to-one messaging; encrypted and private.

Reddit | Discussion boards | Link ads and text ads are the two promoted post types; continuous sharing of relevant content and commentary on the platform.

Quora | Discussion boards | Deep user research with brutally honest opinions.

Counter and alternative-narratives must build on ways in which ISIS has capitalized on media and

platform use. ISIS had a carefully structured media strategy, contributing critically to its success in

recruitment. Practitioners hoping to counter similar propaganda must maintain a carefully structured

media platform, such as YouTube channels that are rich and diverse with creative content. It is

important to incorporate effective marketing tools as a means to garner the attention, support, and

interest of the target audience.

70 Egan, Karisa. "The Difference Between Facebook, Twitter, Linkedin, Google , YouTube, & Pinterest." IMPACT: Inbound Marketing Strategy, Advice, and Agency, www.impactbnd.com/blog/the-difference-between-facebook-twitter-linkedin-google-youtube-pinterest. 71 Curtis, and Hootsuite. "10 Types of Social Media and How Each Can Benefit Your Business." Hootsuite Social Media Management, 20 June 2017, blog.hootsuite.com/types-of-social-media/.

V. Case Studies

For the purpose of this study, we will consider the relationship between awareness,

engagement, and behavioral impact using three case studies to evaluate the interplay between

message, messenger, and medium. Three case studies below are used to analyze alternative-narratives

to evaluate best practices, measures, and lessons learned in countering extremist content online: 1.

The Institute for Strategic Dialogue’s One-to-one Facebook Messaging campaign under the Against

Violent Extremism Network (messenger: one-to-one and formers), 2. Average Mohamed

(messenger: community led organization), and 3. The University of Maryland’s “Operation

Genovese” (messenger: government sponsored in private-public partnership). These case studies

were selectively chosen as a means to explore a diverse set of the impact different messengers can

have in countering extremist content online. While the case studies will specifically focus on the role

of the messenger, they will reference successful impact to reach a wider audience using a specific

message and medium. Measures used to evaluate the impact of these programs are explained in greater

detail in the Appendix (Appendix Sections B and C).


Case Study 1: Institute for Strategic Dialogue and Against Violent Extremism Network's One-to-One Peer Mentorship via Facebook's "Pay to Message" Function

The Against Violent Extremism (AVE) Network is a global network of former extremists

and survivors of violent extremism selected by Google Ideas. The program is managed by the

Institute for Strategic Dialogue (ISD) in a unique public-private partnership. Intervention providers

from The Against Violent Extremism Network aim to bridge the gap between on and offline

interventions. To start, the project team identified ten former extremists within the network to test

methodologies across geographies. The proposed project aimed to test the viability of a method to

directly message those identified as openly expressing extremist sentiment online in an effort to

dissuade those individuals from embracing violent extremism. AVE’s nuanced use of technology to

connect, exchange, disseminate, and influence extremists makes this case study an excellent

example of the use of both one-to-one and "former" messengers.

In using the analytical framework exemplified here to evaluate effectiveness, the program appears to

be quite impactful. In measuring awareness, the program was able to extend its reach successfully to

the target audience. Of the profiles confirmed at risk, over 60% of the messages sent one-to-one by

former extremists were seen by the candidate. In evaluating engagement, one-on-one conversations

with anonymous formers held a 44% sustained engagement rate, compared with a 69% rate

for known providers. This implies an important lesson learned: anonymous profiles could be

far less effective and elicit fewer responses than known providers. In measuring the program's impact

on behavioral change, it also appears that this program exhibited some tremendous potential.
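As a rough illustration of how awareness and sustained-engagement rates of this kind could be computed from intervention logs, the sketch below assumes one record per outreach attempt; the field names and sample entries are hypothetical and do not reflect ISD's actual data or schema.

    # A minimal sketch, assuming hypothetical per-outreach records; not ISD's actual data.
    outreach_log = [
        # each record: was the message seen, did the candidate keep replying, was the provider anonymous
        {"seen": True,  "sustained": True,  "anonymous": False},
        {"seen": True,  "sustained": False, "anonymous": True},
        {"seen": False, "sustained": False, "anonymous": True},
    ]

    def rate(records, predicate):
        """Share of records satisfying the predicate (0.0 if the list is empty)."""
        return sum(predicate(r) for r in records) / len(records) if records else 0.0

    awareness = rate(outreach_log, lambda r: r["seen"])  # ISD reports over 60% for confirmed at-risk profiles
    engagement_known = rate([r for r in outreach_log if not r["anonymous"]], lambda r: r["sustained"])
    engagement_anon = rate([r for r in outreach_log if r["anonymous"]], lambda r: r["sustained"])
    print(awareness, engagement_known, engagement_anon)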

The use of qualitative measures was quite critical in measuring success for this dimension.

One candidate, for example, refused help in the beginning of the one-to-one intervention stating “I


don’t need no help, I like me for me.”72 The intervener in this case decided not to pressure the

candidate. One week later, however, this candidate reached out to the intervention provider

requesting help by saying, “I need help if you can help me.” In measuring behavioral impact, this

request for help could be used as a proxy for a positive behavioral shift. Overall, in evaluating this

program's awareness, engagement, and impact, it is considered to be highly successful in reaching

and engaging with the target audience towards positive change. This program was able to reach the

identified target audience using one to one intervention. To date, the program has engaged 723

radicalized individuals.73 As the study demonstrates, peer to peer and one-on-one messaging could

be powerful tools used in efforts to address violent extremism online.

Case Study 2: Average Mohamed (Messenger: Community Organization)

Average Mohamed is a non-profit organization that uses animation and videos to counter

extremist ideologies. The organization was created and founded by Mohamed Amin Ahmed, a

Somali American who migrated to the United States approximately 20 years ago. The organization

believes that the solution to many issues faced today is simple: dialogue.74 According to the

organization’s website, the goal is to “give average parents, kids, and clergy talking points to counter

falsehood propagated by extremists, racists, and others who promote intolerance.”75 The Average

Mohamed campaign consisted of five short (1-2 minute) videos with a goal of reaching young

Somali Muslims between the ages of 14-25 living in the communities with high Somali Muslim

populations including Minneapolis, San Diego, Seattle, and Washington.

The campaign's metrics exceeded expectations on awareness, engagement, and

impact. In a study that compared Average Mohamed to two other counter-narrative programs

72 “One to One.” ISD, www.isdglobal.org/programmes/communications-technology/one-to-one/. 73 “One to One.” ISD, www.isdglobal.org/programmes/communications-technology/one-to-one/. 74 “About Us.” Average Mohamed, www.averagemohamed.com/about-us.html. 75 ibid


(Harakat-ut-Taleem and ExitUSA), Average Mohamed's campaign on Twitter achieved more video

views than the other two counter-narrative campaigns combined.76 In sum, this campaign’s

awareness metrics met and exceeded its measurement of success in comparison to similar

community-led efforts. The campaign’s engagement metrics were equally as impressive. In

measuring the total engagement for both video views and engagements on Facebook, YouTube, and

Twitter, the campaign totaled 10,810 engagements (constructive, neutral, and antagonistic).

Twitter garnered the largest number of total engagements for Average Mohamed at 7,354. Facebook

metrics also saw tremendous increase in page likes showing significant connection between

successfully incorporating the most appropriate messenger, message, and medium for a specific

target audience. At the completion of the campaign, Average Mohamed's Facebook page gained 733

likes, increasing sevenfold from the beginning of the program. Average Mohamed also received

the most Facebook comments, with 305 made on the content; 66% of these were

supportive.
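To show how such engagement figures translate into simple summary measures, the short sketch below works directly from the numbers reported above; the calculation itself is illustrative and is not the evaluators' methodology.

    # Derived directly from the figures reported above; structure is illustrative only.
    total_engagements = 10_810
    twitter_engagements = 7_354
    twitter_share = twitter_engagements / total_engagements  # roughly 68% of all engagements

    facebook_comments = 305
    supportive_share = 0.66
    supportive_comments = round(facebook_comments * supportive_share)  # roughly 201 supportive comments

    print(f"{twitter_share:.0%}", supportive_comments)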

Qualitatively, comments on YouTube also received a substantially positive response. For

example, one user commented: “So much respect for you brother, taking time to address these issue,

believe me when I say this, your message will get far not only to non-Muslim but Muslim who have

forgotten their religion.”77 Engagement metrics imply that this program was able to utilize its local

and community-authorized credibility towards disrupting extremist narratives and promoting

alternatives. It is programs like these that exemplify the successful use of soft power and influence

towards decreasing the appeal of harmful narratives online.

76 Impact of Counter-Narratives Online https://www.isdglobal.org/wp-content/uploads/2016/08/Impact-of-Counter-Narratives_ONLINE_1.pdf 77 Impact of Counter-Narratives Online https://www.isdglobal.org/wp-content/uploads/2016/08/Impact-of-Counter-Narratives_ONLINE_1.pdf


While measuring Average Mohamed’s behavioral impact proved to be challenging, the

campaign achieved meaningful educational gains by positively increasing users' knowledge. For the

purposes of this study, this change in knowledge could perhaps be considered a proxy for positive

impact. According to this program’s evaluation, Average Mohamed’s videos inspired Muslims to

debate pressing topics including the role of gender in Islam as well as balancing multiple identities.

In the comments section, for example, audience members discussed topics of

democracy and Islam, Islam and the West, as well as women’s role in Islam. Indeed, this campaign’s

findings were consistent with research on successful prevention measures to counter extremism,

where much emphasis was placed on investing in community led prevention by enabling civil society

efforts such as Average Mohamed to detect and disrupt radicalization and recruitment to extremist

ideologies and narratives. In sum, the Average Mohamed campaign strategically met its awareness,

engagement, and impact metrics to demonstrate that community partners and organizations are a

credible source of counter-messaging when providing insightful content that is specific to a

particular target audience.

Case Study 3: University of Maryland Peer2Peer

According to a recent Department of Homeland Security fact sheet, the government has

legal and credibility limitations in creating content to counter terrorist ideologies.78 Due to research

identifying the U.S. government’s limited credibility to create counter-narratives, for the purposes of

this study, a brief look at the role of government not as a creator but as a sponsor of a public-private

campaign can provide lessons learned to counter extremist narratives with credible actors. During

the Spring of 2017, the University of Maryland created the program “It Just Takes One” as part of a

78 Factsheet: A Comprehensive U.S. Government Approach to Countering Violent Extremism. https://www.dhs.gov/sites/default/files/publications/US%20Government%20Approach%20to%20CVE-Fact%20Sheet_0.pdf


competition funded by the Department of Homeland Security and administered by EdVenture

Partners.

While the primary messenger in this case was not the U.S. government, this case study is an

example of U.S. government sponsored efforts to counter violent extremism domestically online.

Within this program, the University of Maryland was given guidance, financial support, and

resources to create a game called Operation Genovese, where their mission was to develop a global

network of mobilized bystanders.79 To accomplish this mission, the group produced educational

content on the signs of radicalization and proper intervention methods, and promoted bystander

empowerment rhetoric to its target audience. The game challenged players to discover the warning

signs of radicalization and take appropriate actions to help “stop a terrorist attack.” Indeed the

group's efforts returned significant awareness and engagement metrics. The group generated 385,000

impressions on Facebook and thousands of views on campaign promotional material. While

its following on Twitter remained limited, the group's campaign saw a steady increase over time. This

implies that it could be possible for a campaign to see successful impact on one platform but not on

another and still be considered “impactful”.

While it was challenging to measure the program’s resulting behavioral impact with

bystanders, the group did manage to create a program that has the ability to shape depth of

knowledge of when and how to act as a bystander. A Twitter follower, for example, shared her story

and explained that this campaign had “given her the confidence to intervene in a workplace dispute

when she otherwise would not have done so. She now feels more empowered to be an active

79 It Just Takes One. University of Maryland. https://www.dropbox.com/s/eq0srfy6u8jyieu/S17_P2P_DHS_1st.pdf?dl=0


bystander in the future.”80 While the program did not have the opportunity to scale its efforts, it did

increase awareness of their target audience.

One perhaps fascinating lesson learned in this case study is on the limitations of artificial

intelligence. During interviews with one of the members of the University of Maryland’s program

team, a member explained to me that the counter-narrative campaign they created was taken down

from both Google's and Apple's app stores after they uploaded the application, with both platforms citing

“hate-speech”.81 With hours to go and a deadline to submit their project to the Department of

Homeland Security, the team emailed Google frantically. Below is a portion of the team’s

correspondence with Google Play Developer Support82:

“My app has been flagged for hate speech. While the app does contain characters who say hateful things, one of the purposes of the app is to try to stop people from saying such things. As described in the app description, the game is a text based adventure game about de-radicalizing potential terrorists. People associating with extremist views can often hold hateful beliefs and say hateful things.”

The team had a valid argument; after all, this was an awareness campaign to teach users how to

recognize the signs of extremism. The team then received the following correspondence from the

Google Play team on their request to reinstate their Application83:

“After further review, your app will not be reinstated because it violates violence, hate speech and sensitive events provisions of the Content Policy:

Our policy states: Hate Speech: We don't allow content advocating against groups of people based on their race or ethnic origin, religion, disability, gender, age, nationality, veteran status, sexual orientation, or gender identity.”

80 ibid 81 Marcella Goldring. Interview with Author. Washington, D.C. August 4, 2017. 82 June One UMD. Email to the author on October 15, 2017. 83 Just ONE UMD. Correspondence with Google Play Developer Support on Appeal for Reinstatement. June 9, 2017.


Indeed, the group's app, which was meant to raise awareness for the good of countering extremism,

was automatically removed from platforms for violating the terms of service. While the full details of

what algorithms were used to flag this application are unknown, it can perhaps provide an example

of how efforts to counter extremism and even hate speech using artificial intelligence are just not

intelligent enough yet.


VI. Recommendations

Between June and December 2017, I interviewed numerous entrepreneurs, experts,

practitioners, government personnel, and students working on programs to counter extremism both

on and offline. This section will outline recommendations based on the research presented above

and unattributed qualitative interviews.

Recommendation 1: Artificial intelligence will not solve it all; a comprehensive framework is needed to provide alternative narratives.

Extremists use online tools in a plethora of ways including online chatting, booking flights,

saving pictures, watching videos, raising money, and planning malicious activities to name a few.

Policymakers have responded by calling on technology companies to create algorithms to curb the

spread of extremist narratives online. This response, however, is unfortunately simply reactionary

and may only offer temporary solutions. Practitioners and experts have criticized censorship and

offered recommendations to include comprehensive but proactive approaches to address extremism

in the online space. As artificial intelligence and technology continue to advance, policy makers,

experts, and practitioners alike should be able to move beyond responding to threats towards a more

preventative posture that supports local communities in addressing extremist narratives by

equipping them with the technical skills necessary to use an effective message, messenger, and

medium.

We must acknowledge that today we stand at the center of a heated debate on the role of

technology companies and government in regulating content online. While some may think that

artificial intelligence can perhaps solve the issue at hand by automatically removing harmful content

from the web, it is by no means a panacea to reduce the demand of extremist material. For a

sustainable solution to take place, we must work on providing alternative narratives to reduce the


appeal of extremism and harmful speech. In addressing such a heated debate on the role of content

regulation online, we need a balanced approach between content removal, free speech, and security.

We must first recognize that there are limits to any hope of removing all extremist content online.

Matthew Rice, Acting Chief Digital Officer at the Countering Violent Extremism Task Force

(Department of Homeland Security) explains, “When you look at the question of content removal, it

obviously has a short-term tactical advantage but long term, it is whack-a-mole. Long-term, that is

not a solution."84 In considering these limitations, we must get ahead of the curve by preventatively

incorporating innovative solutions to an incredibly complex and delicate problem.

Recommendation 2: Target at-risk youth and "fence-sitters"

Campaigns aiming to provide alternative narratives need to focus on specific target

audiences to craft a core message that aligns with their interests. As studies show, youth are

considerably at risk of recruitment by violent extremist organizations seeking to influence young

minds and recruit them to engage in violence. ISIS recruiters are skilled at disseminating powerful

messages. Any effort that hopes to compete must provide youth with an even more compelling

message.

In addition to youth, counter-narratives should target fence-sitters. As one article describes

it, fence-sitters are simply those individuals that are looking for answers to certain controversial

questions in their lives. Experts should be ready to engage with this population but not necessarily

tell them what to do. When fence-sitters turn to the internet for answers, it is critical to incorporate

messages that provide alternative narratives that answer those critical questions. While

understanding that targeting fence-sitters is a worthwhile place to start on the spectrum of

radicalization, we must have a thorough understanding of the demographics of the persona being targeted.

84 "Breakout Session: Counter-Messaging from a Practitioner Perspective." YouTube, YouTube, 29 Mar. 2017, www.youtube.com/watch?v=-dPGPc7GO_M.

Recommendation 3: A credible messenger is necessary, and government is not the right

player.

Government should not act as a messenger of alternative-narratives but rather as a facilitator of

credible local and community-led efforts. Private institutions should also consider new opportunities

to increase funding and engagement to help promote credible voices and initiatives to counter

extremist online presence. According to an interview with Zahed Amanullah of the Institute for

Strategic Dialogue, a central part of creating effective counter-narratives is finding a credible voice

and particularly one that could have been affected by extremism in the past. According to Mr.

Amanullah, however, “that alone does not mean that the story will change people’s hearts and

minds." What is needed is an effective strategy that is able to translate creative messages

from credible voices into something that will be able to impact the target audience. “There is a need

for preventative messaging and we need to be able to scale.” In order to be able to do so, however,

community partners like Mr. Amanullah's Against Violent Extremism Network need resources.

Messengers set to counter extremist propaganda should be credible, authentic, and local so

as to mirror the interests of the target audience. Government representatives, for example, are

oftentimes deemed untrustworthy in the communities where extremists recruit, making government-

led efforts to counter propaganda challenging and oftentimes ineffective. Formers, on the other

hand, can give unparalleled insight into one’s experience down the path of radicalization. The best

messengers of an anti-radicalization message, therefore, are trusted individuals who share social

capital with individuals at risk.


Recommendation 4: Scale efforts to disseminate alternative-narratives using tech-savvy mediums that mirror the target audience’s interests.

Once a message has been crafted, it is critical to choose the correct medium to disseminate

the information to the right target audience. We must then consider in what ways we can scale our

operation towards our target. The medium of the content should reflect the interests of the audience

i.e., if the audience uses Facebook, then video content is a great way of capturing their interest.

Counter-speech campaigns could be made up of one or more mediums, e.g., videos, text, images,

online literature, audio recordings or comics. As one government official told me, “Online and

Offline (counter-messaging) strategies are different. For Online efforts, we are able to identify

people by their actions. We are able to look at data and understand the choices but we need to be

able to scale these programs.”85 A combination of tech-savvy and creative videos, images, and

content on YouTube, Facebook, and Twitter, for example, need to be used appropriately with the

target audience’s core interests in mind.

85 Personal Interview. In-person, July 8, 2018.


VII. Conclusion and Further Research

The battle against extremism is too costly to be fought on the battlefield in Iraq and Syria

alone. Policymakers and practitioners are at a crossroads and must choose when and how best to act

to balance both privacy and security on and offline. The primary research question for this study

asked: What narrative components and outreach strategies are best suited to decrease the

demand for ISIS-inspired extremist content online? Laws, norms, markets, and architecture regulate

the world of cyberspace. While laws, markets, and architecture reduce the supply of extremist

content online, we must jointly grasp the incredibly powerful tool of changing norms to reduce

the demand in parallel.

After an in-depth analysis of the regulatory environment, this thesis concludes that three

narrative components and strategies are best suited to decrease the demand for ISIS-inspired

extremist content online: 1. a strategic message, 2. a credible messenger, and 3. an appropriate medium.

In the hunt to reduce the demand for extremism online, practitioners must recognize both the

opportunities and constraints that lurk ahead. As this study demonstrated, the internet is

characterized as a facilitator of extremist content with videos, images, and content emerging to

influence users to adopt radical ideologies. While this study primarily focused on ISIS-inspired

extremism in the U.S., extremists of a range of ideologies could leverage technology to further their

ideas.

I must note here that there is a gap in research and evaluation on program effectiveness and

measurable behavioral impact for counter-narrative campaigns. The lack of data on measuring

effectiveness makes it quite challenging to grasp best practices for future programs. Counter-


messaging to address extremism online has seen substantial evaluation challenges. At a 2017 George

Washington University panel, Clara Tsao, the Chief Technology Officer at the Countering Violent

Extremism Task Force (Department of Homeland Security) said, "Counter-messaging and CVE

online has a huge evaluation problem. DARPA has been working with their quantitative crisis

response system to really start evaluating the success of these campaigns and whether a like actually

translates into sentiment change.”86

In light of incredibly adaptive technologies and developments, there are critical decisions to

be made in how best to create an alternative that is able to rigorously compete in the war of ideas in

cyberspace. Strategically crafted alternative narratives must be created with intentionality and

creativity towards a specific target audience such as Generation Z or "fence-sitters". Credible messengers

like one-to-one peer mentors, former extremists, and religious leaders are needed in disseminating

messages to the right audience. Last but not least, we need to flood the marketplace of ideas on the

Internet with alternative narratives that are strategically formulated and scaled up for our target

audience. We must remember, while terrorist organizations like ISIS may physically disappear, their

ideology does not. This battle is only won by capturing the hearts and minds of those most at

risk.

86 “Breakout Session: Counter-Messaging from a Practitioner Perspective.” YouTube, YouTube, 29 Mar. 2017, www.youtube.com/watch?v=-dPGPc7GO_M.


VIII. APPENDIX

A. Radicalization by the Numbers

Radicalization by the Numbers: University of Maryland START's PIRUS database on online radicalization in the United States.

This study uses the University of Maryland's PIRUS dataset. PIRUS, or Profiles of Individual Radicalization in the United States, measures individual information on the demographics, attributes, and radicalization process of over 1,800 violent and non-violent extremists. The dataset covers the period between 1948-2016 and was coded using entirely public sources of information to understand domestic radicalization from an "empirical and scientifically rigorous perspective."

For the purposes of this study, I looked at the following attributes to measure the level of Internet radicalization in the United States, as described in Question 1: What is the level of Internet radicalization among individuals inspired by or affiliated with the Islamic State? To do so, I set the following filters on the dataset measuring individual radicalization in the United States:

1. Filter 1: Terrorist Group Affiliation/Inspiration- Islamic State of Iraq and the Levant

2. Filter 2: Internet Radicalization: the Internet was the primary means of radicalization87


Level of Internet Radicalization Among ISIS-Inspired Individuals

Out of the 149 pro-ISIS individuals returned by these filters, the Internet played at least some known role in radicalization for 87.2% of ISIS affiliates.

87" Profiles of Individual Radicalization in the United States” - PIRUS (Keshif) | START.umd.edu.

http://www.start.umd.edu/profiles-individual-radicalization-united-states-pirus-keshif
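As a minimal sketch of how such filters could be applied to a PIRUS-style export, the snippet below assumes a CSV with hypothetical column names (group_affiliation, internet_radicalization); the actual PIRUS variable names and codings may differ.

    import pandas as pd

    # A minimal sketch assuming a hypothetical export with these column names;
    # the real PIRUS variable names and codings may differ.
    df = pd.read_csv("pirus_export.csv")

    # Filter 1: affiliation with or inspiration by the Islamic State
    isis = df[df["group_affiliation"] == "Islamic State of Iraq and the Levant"]

    # Filter 2: the Internet played at least some known role in radicalization
    online_role = isis[isis["internet_radicalization"] > 0]

    share = len(online_role) / len(isis)
    print(f"{len(isis)} ISIS-affiliated individuals; internet role share: {share:.1%}")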


B. Analytical Framework and Analysis: The Messenger

Measure 1: Awareness.

Sub-question: Is this actor able to quantitatively reach the proper number of individuals identified from the target audience? While this could best be visualized by quantitative measurements such as "impressions" or "reach," qualitative observations such as knowledge of the existence of such resources are also important.

One-to-one, peer mentors. Observation: One-to-one or peer mentors are able to contact the target audience via social media such as Facebook, Twitter, and YouTube. On Facebook, one-to-one interventions would be able to create small-scale "impressions," for example. On Twitter, one-to-one interventions can be measured by how many times a link has been shared publicly from a private conversation.

"Former" extremists. Observation: Formers would be effective on a small scale. In a targeted intervention, the former is able to contact an individual who sees the message, opens the email, or reads the material. There should be a high degree of "awareness" from formers, as the target audience would ideally consider them experienced.

Community groups or religious institutions. Observation: Religious leaders and community groups would be able to generate large numbers of impressions and reach within the target audience based on their credibility within the community. Religious leaders and institutions that do not have an online presence or following need to display "reach" and "impression" amounts comparable to their following.

U.S./Western government. Observation: With increased technological capabilities, more material would ideally be able to reach its target population. Hundreds and thousands of impressions and retweets from the target audience would result if this actor were a successful messenger.


Measure 2: Engagement

Sub-question: Is this messenger able to sustain either positive or neutral engagement with the target audience?

One-to-one, peer mentors. Observation: In one-to-one, peer-mentorship activities, engagement would be sustained and constructive. Messengers would be able to communicate with the target audience more than once in a positive manner in an online conversation. The frequency of one-to-one conversation would rise with time (the longer one knows the mentor, the more frequent the conversations and engagement).

"Former" extremists. Observation: Formers would be able to engage the target audience, where users would largely have questions about their former experience. This could come initially in a Q&A format prior to expanding the relationship towards impact.

Community groups or religious institutions. Observation: This actor would be able to sustain engagement and interactions with the target audience, as it has the credibility, authority, and institutional resources with ties to the community to engage actors positively. The target audience would be able to reach out, ask questions, and engage with content, such as sharing links on Twitter and YouTube.

U.S./Western government. Observation: This actor would be able to engage the target audience by increasing engagement such as likes, shares, and retweets. The target audience would be reached without difficulty given the vast array of resources at its disposal.


Measure 3: Behavioral or attitude impact

Sub-question: Is this actor able to sustain engagement towards a positive direction and towards behavioral impact?

One-to-one, peer mentors. Observation: Peer mentors in this space would be able to measurably influence the target audience to take action (for example, to stop sharing extremist material or delete content). Peer or one-to-one mentors would be able to exert a small but significant impact towards slow but steady change in knowledge. This can be seen in new material being shared or sent by a user's account.

"Former" extremists. Observation: Formers would be able to lend the credibility of their stories and former experience to help one move towards behavioral change. The target audience for this dimension would be observed changing their interaction and conversation with formers from questions regarding former experience towards seeking exit strategies or help.

Community groups or religious institutions. Observation: Community groups or religious leaders would be able to sow the seeds of doubt by changing a user's behavior or depth of knowledge in a positive direction. We would notice the target audience decrease engagement with harmful material and instead seek community or religious institutional assistance. Impact at this level could also be new knowledge that informs behavior.

U.S./Western government. Observation: Behavioral impact can be observed if the target audience stops or decreases engagement with harmful material, such as commenting on or sharing extremist propaganda, as a result of a government-sponsored counter-narrative program. New knowledge that could move one towards behavioral change could also be considered impactful.


C. Measuring Effectiveness

In order to understand a campaign's overarching goals and progress, the Institute for Strategic

Dialogue defined certain metrics that can be drawn into three categories: 1. Awareness, the extent to which content reaches the intended target audience (for example, impressions and views); 2. Engagement, the extent to which the audience interacts with the content in a sustained, constructive, or neutral manner (for example, likes, shares, comments, and conversations); and 3. Impact, evidence of attitudinal or behavioral change resulting from that engagement.


IX. Bibliography

1. "Profiles of Individual Radicalization in the United States - PIRUS (Keshif)." Profiles of Individual Radicalization in the United States - PIRUS (Keshif) | START.umd.edu. http://www.start.umd.edu/profiles-individual-radicalization-united-states-pirus-keshif

2. Solon, Olivia. “Sayfullo Saipov Had 90 Isis Videos on His Phone. Has the Fight against

Online Extremism Failed?” The Guardian, Guardian News and Media, 4 Nov. 2017, www.theguardian.com/us-news/2017/nov/04/sayfullo-saipov-isis-online-propaganda-new-york-terrorism.

3. “Islamic State and the Crisis in Iraq and Syria in Maps.” BBC News, BBC, 28 Mar. 2018, www.bbc.com/news/world-middle-east-27838034.

4. “#Virtual Caliphate.” Center for a New American Security, www.cnas.org/publications/reports/virtual-caliphate.

5. “Telegram Tracker”. Program on Extremism. Fall 2017. https://extremism.gwu.edu/sites/g/files/zaxdzs2191/f/Telegram%20Tracker%20Fall%202017%20(5).pdf

6. Patrikarakos, David. “Social Media Spreads Terrorism and Propaganda. Police It.” Time, Time, 2 Nov. 2017, time.com/5008076/nyc-terror-attack-isis-facebook-russia/.

7. “Social and News Media, Violent Extremism, ISIS and Online Speech: Research Review.” Journalist's Resource, 24 May 2016, journalistsresource.org/studies/society/social-media/social-media-violent-extremism-isis-online-speech-research-review.

8. “Facebook, Microsoft, Twitter and YouTube Announce Formation of the Global Internet Forum to Counter Terrorism.” Microsoft on the Issues, 27 June 2017, blogs.microsoft.com/on-the-issues/2017/06/26/facebook-microsoft-twitter-youtube-announce-formation-global-internet-forum-counter-terrorism/.

9. Lessig, Lawrence. Code and Other Laws of Cyberspace: Version 2.0. Basic Books, 2006. p.126.

10. Lessig, Lawrence. Code and Other Laws of Cyberspace: Version 2.0. Basic Books, 2006. p.271.

11. Lessig, Lawrence. Code and Other Laws of Cyberspace: Version 2.0. Basic Books, 2006. p.255.

12. MacKinnon, Rebecca. Consent of the Networked: the Worldwide Struggle for Internet Freedom. Basic Books, 2013.P 175

13. MacKinnon, Rebecca. Consent of the Networked: the Worldwide Struggle for Internet Freedom. Basic Books, 2013.P 175

14. MacKinnon, Rebecca. Consent of the Networked: the Worldwide Struggle for Internet Freedom. Basic Books, 2013.P 175

15. Lessig, Lawrence. Code and Other Laws of Cyberspace: Version 2.0. Basic Books, 2006. P.123.

16. Lessig, Lawrence. Code and Other Laws of Cyberspace: Version 2.0. Basic Books, 2006. p.124.

17. ibid
18. "The Benefits of Soft Power." HBS Working Knowledge, hbswk.hbs.edu/archive/the-benefits-of-soft-power.


19. Nye, & Harvard Univ Cambridge MA Belfer Center FOR Science International Affairs. (2010). Cyber Power.

20. ibid
21. Lessig, Lawrence. Code and Other Laws of Cyberspace: Version 2.0. Basic Books, 2006. p.33
22. Lessig, Lawrence. Code and Other Laws of Cyberspace: Version 2.0. Basic Books, 2006. p.33
23. Ratna Ghosh, W.Y. Alice Chan, Ashley Manuel & Maihemuti Dilimulati (2016) Can education counter violent religious extremism?, Canadian Foreign Policy Journal, 23:2, 117-133, DOI: 10.1080/11926422.2016.1165713

24. Radicalization Dynamics: A Primer, Gang Enforcement. June 2012. http://www.gangenforcement.com/uploads/2/9/4/1/29411337/radicalization_process.pdf

25. Syed, Hasnain. “Safe Spaces Initiative.” Safe Spaces Initiative: Tools for Developing Healthy Communities - a Campaign by the Muslim Public Affairs Council (MPAC), www.mpac.org/safespaces/.

26. “How 5 Tech Giants Have Become More Like Governments Than Companies.” NPR, NPR, 26 Oct. 2017, www.npr.org/2017/10/26/560136311/how-5- tech-giants- have-become- more-like- governments-than- companies.

27. Neumann, Peter R. “Options and Strategies for Countering Online Radicalization in the United States.” Studies in Conflict & Terrorism, vol. 36, no. 6, 2013, pp. 431–459

28. Neumann, Peter R. “Options and Strategies for Countering Online Radicalization in the United States.” Studies in Conflict & Terrorism, vol. 36, no. 6, 2013, pp. 431–459

29. ibid
30. Smith Mundt Act of 2010
31. Author Interview with staff from the Global Engagement Center, July 2017.
32. Neumann, Peter R. "Options and Strategies for Countering Online Radicalization in the United States." Studies in Conflict & Terrorism, vol. 36, no. 6, 2013, pp. 431–459.
33. Neumann, Peter, Stevens, Tim. Countering Online Radicalisation: A Strategy for Action. http://icsr.info/wp-content/uploads/2012/10/1236768491ICSROnlineRadicalisationReport.pdf

34. “Developing Effective Counter-Narrative Frameworks for Countering Violent Extremism.” ICCT, icct.nl/publication/developing-effective-counter-narrative-frameworks-for-countering-violent-extremism/.

35. “Developing Effective Counter-Narrative Frameworks for Countering Violent Extremism.”

ICCT, icct.nl/publication/developing-effective-counter-narrative-frameworks-for-countering-violent-extremism/.

36. McCann, Christopher Hope; Kate. “Google, Facebook and Twitter Told to Take down

Terror Content within Two Hours or Face Fines.” The Telegraph, Telegraph Media Group, 19 Sept. 2017, www.telegraph.co.uk/news/2017/09/19/google-facebook-twitter-told-take-terror-content-within-two/.

37. “Mobile Fact Sheet.” Pew Research Center: Internet, Science & Tech, 5 Feb. 2018, www.pewinternet.org/fact-sheet/mobile/.

38. Solon, Olivia, and Sam Levin. “How Google’s Search Algorithm Spreads False Information with a Rightwing Bias.” The Guardian, Guardian News and Media, 16 Dec. 2016,

39. www.theguardian.com/technology/2016/dec/16/google-autocomplete- rightwing-bias- algorithm-political- propaganda.


40. “UNESCO Releases New Research on Youth and Violent Extremism on Social Media.” UNESCO, 6 Dec. 2017, en.unesco.org/news/unesco-releases-new-research-youth-and-violent-extremism-social-media.

41. "The Redirect Method." The Pilot, redirectmethod.org/pilot/.
42. "The Redirect Method." The Pilot, redirectmethod.org/pilot/#results.
43. "Facebook, Microsoft, Twitter and YouTube Team up to Fight Terrorist Propaganda." Los Angeles Times, 5 Dec. 2016, www.latimes.com/business/technology/la-fi-tn-internet-terrorism-20161205-story.html.

44. Rodriguez, Ojel L. “The Fight against Extremism Is Turning to a Fight against Freedom.” Washington Examiner, Washington Examiner, 25 Feb. 2018, www.washingtonexaminer.com/the-fight-against-extremism-is-turning-to-a-fight-against-freedom/article/2649871.

45. “How CEP's EGLYPH Technology Works.” Counter Extremism Project, 8 Dec. 2016, www.counterextremism.com/video/how-ceps-eglyph-technology-works.

46. ibid
47. Heilweil, Rebecca. "5 Ways Facebook Uses Artificial Intelligence To Counter Terrorism." Forbes, Forbes Magazine, 16 June 2017, www.forbes.com/sites/rebeccaheilweil1/2017/06/15/5-ways-facebook-uses-artifical-intelligence-to-counter-terrorism/#1020ab3d49e6.

48. Government, Transcript courtesy of Bloomberg. “Transcript of Mark Zuckerberg's Senate Hearing.” The Washington Post, WP Company, 10 Apr. 2018, www.washingtonpost.com/news/the-switch/wp/2018/04/10/transcript-of-mark-zuckerbergs-senate-hearing/?utm_term=.6112207e6ec1.

49. ibid.
50. ibid.
51. ibid
52. Guynn, Jessica. "Facebook Says Artificial Intelligence Has Sped up Removal of Terrorist Content." USA Today, Gannett Satellite Information Network, 29 Nov. 2017, www.usatoday.com/story/tech/2017/11/28/facebook-says-artificial-intelligence-has-sped-up-removal-terrorist-content/903615001/.

53. Heath, Chip, and Dan Heath. Made to Stick: Why Some Ideas Survive and Others Die. Woonjing ThinkBig Co., 2007.
54. "Strong Cities Network." ISD, http://strongcitiesnetwork.org/wp-content/uploads/2016/05/Briefing-Paper-1.pdf
55. Countering Terrorist Narratives. Best Practices from Around the Globe. http://www.marshallcenter.org/MCPUBLICWEB/mcdocs/files/College/F_PTSS/coi_reports/coi_report_countering_terrorist_narratives.pdf

56. Helmus, Todd C., et al. Promoting Online Voices for Countering Violent Extremism. RAND Corporation, 2013.

57. Speckhard, Anne. “Fighting ISIS on Facebook: Breaking the ISIS Brand Counter-Narratives Project.” ICSVE, 8 Mar. 2018, www.icsve.org/research-reports/fighting-isis-on-facebook-breaking-the-isis-brand-counter-narratives-project/.

58. “Countering the Narrative: Understanding Terrorist's Influence and Tactics, Analyzing Opportunities for Intervention, and Delegitimizing the Attraction to Extremism.” Small Wars Journal, smallwarsjournal.com/jrnl/art/countering-the-narrative-understanding-terrorist’s-influence-and-tactics-analyzing-opportun.


59. " Profiles of Individual Radicalization in the United States - PIRUS (Keshif) | START.umd.edu. http://www.start.umd.edu/profiles-individual-radicalization-united-states-pirus-keshif

60. " Profiles of Individual Radicalization in the United States - PIRUS (Keshif) | START.umd.edu. http://www.start.umd.edu/profiles-individual-radicalization-united-states-pirus-keshif

61. Keny-Guyer, Neal (Chief Executive Officer, Mercy Corps). “How Can We Stop Young People Joining Extremist Groups?” World Economic Forum, www.weforum.org/agenda/2015/10/how-can-we-stop-young-people-joining-extremist-groups/.

62. Beall, George. “8 Key Differences between Gen Z and Millennials.” The Huffington Post, TheHuffingtonPost.com, 5 Nov. 2016, www.huffingtonpost.com/george-beall/8-key-differences-between_b_12814200.html.

63. Beall, George. “8 Key Differences between Gen Z and Millennials.” The Huffington Post, TheHuffingtonPost.com, 5 Nov. 2016, www.huffingtonpost.com/george-beall/8-key-differences-between_b_12814200.html.

64. Colby, Sandra L., and Jennifer M. Ortman. Projections of the Size and Composition of the U.S. Population: 2014 to 2060. U.S. Census Bureau, 3 Mar. 2015, www.census.gov/library/publications/2015/demo/p25-1143.html.

65. “What Is a Digital Native? - Definition from Techopedia.” Techopedia.com, www.techopedia.com/definition/28094/digital-native.

66. Avis, William Robert. “The role of online/social media in countering violent extremism in East Africa”. http://www.gsdrc.org/wp-content/uploads/2016/06/HDQ1380.pdf

67. “Turning Point.” Turning Point | Center for Strategic and International Studies, 14 Nov. 2016, www.csis.org/features/turning-point.

68. ibid.
69. Ashour, Omar. “Online De-Radicalization? Countering Violent Extremist Narratives: Message, Messenger and Media Strategy.” Perspectives on Terrorism, www.terrorismanalysts.com/pt/index.php/pot/article/view/128/html.

70. “Developing Effective Counter-Narrative Frameworks for Countering Violent Extremism.” ICCT, icct.nl/publication/developing-effective-counter-narrative-frameworks-for-countering-violent-extremism/.

71. Egan, Karisa. “The Difference Between Facebook, Twitter, LinkedIn, Google+, YouTube, & Pinterest.” IMPACT: Inbound Marketing Strategy, Advice, and Agency, www.impactbnd.com/blog/the-difference-between-facebook-twitter-linkedin-google-youtube-pinterest.

72. Curtis. “10 Types of Social Media and How Each Can Benefit Your Business.” Hootsuite Social Media Management, 20 June 2017, blog.hootsuite.com/types-of-social-media/.

73. “One to One.” ISD, www.isdglobal.org/programmes/communications-technology/one-to-one/.

74. “One to One.” ISD, www.isdglobal.org/programmes/communications-technology/one-to-one/.

75. “About Us.” Average Mohamed, www.averagemohamed.com/about-us.html.
76. ibid.
77. Impact of Counter-Narratives Online. https://www.isdglobal.org/wp-content/uploads/2016/08/Impact-of-Counter-Narratives_ONLINE_1.pdf


78. Impact of Counter-Narratives Online https://www.isdglobal.org/wp-content/uploads/2016/08/Impact-of-Counter-Narratives_ONLINE_1.pdf

79. Factsheet: A Comprehensive U.S. Government Approach to Countering Violent Extremism. https://www.dhs.gov/sites/default/files/publications/US%20Government%20Approach%20to%20CVE-Fact%20Sheet_0.pdf

80. It Just Takes One. University of Maryland. https://www.dropbox.com/s/eq0srfy6u8jyieu/S17_P2P_DHS_1st.pdf?dl=0

81. ibid.
82. Marcella Goldring. Interview with Author. Washington, D.C. August 4, 2017.
83. June One UMD. Email to the author on October 15, 2017.
84. Just ONE UMD. Correspondence with Google Play Developer Support on Appeal for Reinstatement. June 9, 2017.
85. “Breakout Session: Counter-Messaging from a Practitioner Perspective.” YouTube, YouTube, 29 Mar. 2017, www.youtube.com/watch?v=-dPGPc7GO_M.
86. Personal Interview. In-person, July 8, 2018.
87. “Breakout Session: Counter-Messaging from a Practitioner Perspective.” YouTube, YouTube, 29 Mar. 2017, www.youtube.com/watch?v=-dPGPc7GO_M.
88. “Profiles of Individual Radicalization in the United States – PIRUS (Keshif).” START.umd.edu, http://www.start.umd.edu/profiles-individual-radicalization-united-states-pirus-keshif