Turning the Tables on Attackers
Disrupting Advanced Email Attacks by ‘Modeling Good’


Executive Summary

After two decades and billions spent adding layers of security controls, the rate at which organizations are being defrauded and breached has never been higher. In 95% of security breaches, cyber criminals use a targeted email attack as the initial point of entry.1 Many CISOs have been stuck in a valiant but defensive battle as they combat an infinite number of attack variations. For every new cyber attack, security providers have created a defense based on identifying indicators of malicious activity, and cyber criminals have evaded detection by avoiding use of those indicators.

What if you could turn the tables on cyber criminals and shift from fighting a series of defensive battles on attackers’ terms to actively securing yourself against all targeted attacks on your terms? Two fundamental shifts in thinking are required to make that transition.

First, you have to change your approach from implementing controls that try to detect bad activity to controls that model good, trusted communications. The factor that provides the most signal in distinguishing between malicious and legitimate email is the identity of the sender and the level of trust associated with that sender. The common thread among all targeted email attacks is that attackers impersonate a trusted entity. If you can prevent identity deception, you can reliably stop current and future types of targeted attacks.

Second, it is critical to recognize that the most common target and the most vulnerable part of your business are your employees, customers, and partners – the humans who make decisions every day that can result in financial losses, security breaches, and brand damage. Unfortunately, research has shown it is impossible to reliably train humans to differentiate between malicious and legitimate communication.2

This whitepaper discusses why the predominant security paradigm fails and how to turn the tables on the criminals, and explains Agari’s solution to the problem of malicious email. The Agari Email Trust Platform is the only system that models authentic, trustworthy communications to protect humans from being deceived by cyberattacks such as phishing, ransomware, and business email compromise – including targeted attacks with no attachments or URLs in the emails. Agari’s technology analyzes more than two trillion emails per year to identify the characteristics of authentic, trusted communications and protect you against everything else.


1 Verizon Data Breach Investigations Report, 2017, http://www.verizonenterprise.com/verizon-insights-lab/dbir/2017/

2 Deanna Caputo, Shari Lawrence Pfleeger, Jesse D. Freeman, and M. Eric Johnson, “Going Spear Phishing: Exploring Embedded Training and Awareness,” IEEE Security & Privacy, 2014, https://www.computer.org/cms/Computer.org/ComputingNow/pdfs/IEEESecurityPrivacy-SpearPhishing-Jan-Feb2014.pdf


Today’s Security Industry: Blindly in Love with a Flawed Security Paradigm

As unwanted email has transitioned from annoyance to malice, and from large scattershot batches to small targeted campaigns, the principal detection method has remained the same: detecting known bad, whether by volume, sender identity, or content. Detection of known bad URLs, for example, is commonly used to identify and block phishing messages. While useful to limit the impact of large-scale campaigns, this approach doesn’t address the problem of spear phishing, where the odds of even learning about offending URLs are low – and the chances of doing so while it still matters are negligible. Criminals routinely circumvent filters by frequently modifying their URLs, making identification of bad a constant catch-up game. Similarly, criminals evade signature-based malware detection (a form of blacklisting) by using crypters to periodically generate never-seen-before instances of known malware threats.
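To see why this is a losing race, consider a minimal sketch of a URL blacklist check; the domains and the URL_BLACKLIST entries below are purely hypothetical:

```python
from urllib.parse import urlparse

# Hypothetical blacklist of URLs that have already been reported and catalogued.
URL_BLACKLIST = {
    "badsite.example/login",
    "stolen-creds.example/verify",
}

def is_known_bad(url: str) -> bool:
    """Return True only if this exact host/path has been seen and reported before."""
    parsed = urlparse(url)
    return f"{parsed.netloc}{parsed.path}" in URL_BLACKLIST

# A previously reported URL is blocked...
print(is_known_bad("http://badsite.example/login"))    # True
# ...but the same campaign on a freshly registered domain sails through.
print(is_known_bad("http://badsite-2.example/login"))  # False
```

Any URL the defenders have not yet seen and catalogued passes cleanly, which is exactly the situation in a targeted campaign.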

The failure of traditional security technologies is evident – not just from the explosive growth of email-centric crime syndicates and entire national economies supported by email-based crime – but also from the perspective of the security concerns felt in board rooms and within governments. Business Email Compromise (BEC), one of the most prominent types of targeted email attacks, grew by 2,370% between early 2015 and late 2016, according to the FBI.3 As the email threat evolved, the blacklisting paradigm became obsolete...but was not abandoned. Email-based attacks have turned into an existential threat to many organizations, which are currently unprotected as the security industry has stubbornly clung to a paradigm that can’t hope to address the evolving threat of targeted email attacks leveraging identity deception.

To understand what prompted the security industry to rely on chasing the bad, we need to look back at the history of online abuse.

The History of Email Security: Chasing the Bad

When large-scale spam first hit the scene in 1994, service providers did their best to fight back, with AOL deploying countermeasures that were equal parts anomaly detection and blacklisting, backed by large teams that manually catalogued the attacks. While this reactive approach could only shorten the duration of any given attack, it was a tolerable strategy during a time when low per-recipient losses dictated the need for tremendous batches of emails. A paradigm was born.

Why Chasing Bad Is a Game That Can’t Be Won

The attack-countermeasure cycle that arose in response to the blacklisting paradigm began with bulk spam written in simple ASCII text, countered by anti-spam engines looking for indicators of badness such as keywords or known offending IP addresses.


The blacklisting paradigm was developed to fight unwanted commercial emails: spam.

3 FBI Public Service Announcement, “Business E-mail Compromise E-mail Account Compromise the 5 Billion Dollar Scam”, May 4, 2017, https://www.ic3.gov/media/2017/170504.aspx.


This forced spammers to replace the keywords with other terms and move to new IP addresses, which in turn resulted in the spam filter policies being updated again. While these early attempts at blocking spam worked, the successes were always temporary. In addition to not rejecting all spam (i.e., resulting in false negatives), these methods were known to cause collateral damage by rejecting large amounts of legitimate emails (false positives). This was the beginning of a reactive process that would span decades, and which still continues today.

The same battle was fought in the antivirus world. As viruses, trojans, and worms were deployed, whether in the context of email or not, the defense was to detect patterns – whether in the code itself or in what the code did – corresponding to signature detection and behavioral detection, respectively. Again, criminals responded by modifying the appearance of the threats. Today, there are off-the-shelf tools, such as “crypters,” sold on the dark web that automate the shape-changing and help attackers evade detection. In response, antivirus companies push out signature updates at a dizzying pace, attempting to keep up with the attackers and their modifications.

Recognizing the limitations of antivirus signatures, vendors have started to respond to malware threats by also using advanced threat protection (ATP) sandboxes. While still looking for badness, the sandboxes observe executables in isolated environments to limit the potential damage of payloads. In response, attackers increased their sophistication by creating sandbox-aware malware, which can recognize when it is being observed in a sandbox and withhold its malicious behavior to avoid detection.


Similarly, phishing turned mainstream about 15 years ago, prompting the need to address this additional threat. Defined as an attempt to obtain sensitive information (usernames, passwords, and credit card details – and, indirectly, money), phishing emails are disguised as communications from trustworthy entities. While phishing attacks began as poorly spelled, unprofessional-looking emails, the attempts have become more polished over time.4

Today, even IT administrators have difficulty distinguishing legitimate emails from phish. In fact, the average click-through rate on phishing emails is 20%,5 and over the last year 37% of companies were victims of a successful phishing attack.

Security technologies responded to the threat of phishing emails by blacklisting URLs seen in reported phishing emails, in turn causing phishers to replace the URLs in subsequent phishing campaigns.


4 Osterman Research, Securing Office 365, 2017, https://www.agari.com/resources/whitepapers/office-365-research-report/

6 Fortune, “Facebook and Google Were Victims of $100M Payment Scam,” 2017, http://fortune.com/2017/04/27/facebook-google-rimasauskas/

5 Security Intelligence, “Lowering Susceptibility to Phishing Emails,” 2015, https://securityintelligence.com/news/employee-training-lowers-susceptibility-to-phishing-emails-report-finds/


Without Artifacts, Chasing Bad Fails Completely

As we have described, the traditional security paradigm is based on the idea of detecting known bad – whether this is expressed as a word, an IP address, an attachment, or a URL. This makes malicious emails that don’t contain one of these types of artifacts difficult to blacklist, which is the exact problem now posing a whole new type of challenge to the security industry.

The poster child among malicious emails with no artifact is the BEC attack, a targeted attack in which attackers impersonate trusted colleagues of their intended victims, requesting funds transfers or sensitive data. One BEC scheme, revealed in 2017, targeted both Facebook and Google: a Lithuanian man posing as a supplier to the two companies collected over $100 million over the course of the scam.6 BEC attacks have grown dramatically in the last few years, with the FBI reporting 2,370% growth from 2015 to 2016. The growth is due to both the spectacular profit opportunities for criminals and the failure of traditional security technologies. BEC emails are very hard to block by chasing bad: they commonly contain no attachments, no URLs, and no keywords beyond those used in ordinary business conversations. According to the FBI, BEC accounted for $5.3 billion in exposed losses over the last two years.7

The practical lack of artifacts also explains the recent growth of spear phishing emails. While these emails do contain artifacts, the attackers carefully avoid reusing them, effectively disabling traditional methods based on chasing bad, because the notion of blacklisting relies on blocking recurring badness. A spear phishing attack is a targeted phishing attack that, by not reusing URLs and by being tailored to the intended victims, evades detection by traditional security technologies – while also becoming more credible to the recipients.

To understand how to automatically detect malicious emails with no artifacts, it is necessary to first understand identity deception.


A Common Theme: Identity Deception

A common thread throughout this attack-defense-evasion evolution has been the impersonation of a trusted entity. Targeted email attacks use identity deception to manipulate victims into opening attachments, clicking on links, responding with sensitive information, or wiring money.

The first form of identity deception was email spoofing, which became prevalent in the mid-2000s and was commonly used by phishers posing as financial institutions. As authentication standards such as DMARC were introduced and large numbers of security-conscious organizations started adopting them, criminals were forced to look for new methods of identity deception.

They responded by shifting to attacks using deceptive display names that closely resemble the trusted brand they wish to impersonate. These attacks circumvent traditional anti-spoofing methods by sending emails from legitimate accounts and using deceptive display names to masquerade as trusted senders. Today, deceptive display names are the most common type of identity deception, accounting for 94% of all BEC attacks. With these attacks, it is typically not a financial institution that is impersonated, but a trusted contact, such as a colleague or the CEO of the company. The criminals ask for help with a funds transfer, or for sensitive information, and, surprisingly, are often successful. Unfortunately, DMARC, the industry standard designed to protect brands from direct domain spoofs, does nothing to protect against deceptive display names.
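The reason DMARC is blind to this technique can be sketched in a few lines: a DMARC-style alignment check compares the domain of the From address against the domain that passed SPF or DKIM, and the display name never enters the comparison. This is a simplified illustration, not a full DMARC implementation, and the headers and domains are hypothetical:

```python
from email.utils import parseaddr

def dmarc_aligned(from_header: str, authenticated_domain: str) -> bool:
    """Simplified strict-alignment check: compare only the From address domain
    with the domain that passed SPF/DKIM; the display name is ignored."""
    _display_name, address = parseaddr(from_header)
    from_domain = address.rsplit("@", 1)[-1].lower()
    return from_domain == authenticated_domain.lower()

# A criminal registers a free webmail account and sets a deceptive display name.
spoofy = '"Pat Smith, CEO" <pat.smith.ceo.2017@freemail.example>'

# The mail really was sent by freemail.example, so alignment passes --
# even though the display name impersonates the CEO.
print(dmarc_aligned(spoofy, "freemail.example"))  # True
```

Because the criminal genuinely controls the freemail account, authentication succeeds; the deception lives entirely in the display name the recipient sees.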

In the early days of identity deception, many people derided the victims of these attacks as stupid, ridiculing them for having fallen for the fraudulent emails. Others avoided the blame game and instead emphasized user awareness training – which is just another (but kinder) way of placing the security responsibility on the end user. As the sophistication of attacks has increased, it has become evident that it isn’t reasonable for end users to be responsible for their own security – or for that of their employers. One study showed user awareness training had no significant impact on the click-through rate for phishing emails.8 This has become increasingly clear as attackers have found more ways to exploit the confusion that results from urgent fraudulent requests arriving while their intended victims are juggling many other legitimate tasks in a given day.


Turning the Tables on Cyber Criminals

We have described why chasing the bad has been both a common security strategy and an increasingly irrelevant one. The chasing-the-bad paradigm has been least successful when used to address targeted email attacks – as proven by the explosive growth of this type of threat. This is because chasing bad “lops off the long tail” of attacks – but for targeted attacks, there is no long tail.

Consistency of Good

Fortunately, the industry continues to evolve. Instead of perpetually playing catch-up with the threat actors, which is what the chasing-bad model implicitly prescribes, a new paradigm based on modeling what is good has arisen. This paradigm change recognizes two important facts:

Fact 1: When it comes to email identity, the universe of good is finite, whereas the universe of bad is unlimited. Practically speaking, what signifies benevolent users is generally predictable, while what signifies malicious email senders constantly changes.

Fact 2: Good does not react to security countermeasures, but bad does. Criminals are used to moving from domain to domain, jettisoning blocked accounts, reformulating pitches, and recompiling malware, and every popular patch gives rise to a modified attack strategy.

This new paradigm is perhaps best explained with a few examples, the first of which addresses the predominant threat vector associated with Business Email Compromise (BEC) – namely, deceptive display names.

Deceptive Display Names

It is well understood that end users typically look first at the display name of an email (i.e., “who sent it”), then at the content of the email, and then take action. Their understanding of the email content, therefore, is guided by their impression of the sender’s identity. Very few people scrutinize the email address of the sender. This is likely because, in all legitimate cases, the display name conveys the same information as the address – just in a format that is more readily accessible to the user. Also, some mail readers never show the email address, only the display name – examples include Apple Mail with the “smart addresses” configuration and most mobile mail readers.

This is not lost on criminals. According to Agari research, in the context of BEC, 94% of all the attacks use deceptive display names – with the remainder instead relying on email spoofing.

Even email with malicious attachments or malicious embedded URLs relies on deceptive display names, since this increases the probability of success for the attacker. That means that if you can detect emails with deceptive display names, the need for a sandbox or URL analysis significantly diminishes – and so does the risk posed by sophisticated malware that is increasingly sandbox-aware.


Display name deception is very simple, as shown in Figure M1; in most cases, the criminal simply registers a free webmail account and sets the display name to match the name of the person or institution the criminal wants to impersonate.

Figure M1. A hypothetical legitimate email (on the left) and a hypothetical email using display name deception (on the right). Most people do not pay attention to the email address, or assume it must be a personal email account belonging to the sender.

Detecting Deceptive Display Names

The detection of deceptive display names starts with asserting what display names are legitimate for various users. This could be done by an admin manually entering the names (e.g., the names of the C-suite); by uploading a database of names; or by inferring, from email traffic, who interacts with whom. In addition, the security service can use a list of commonly known brands that most users trust – whether they are customers of these brands or not.
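One simple way to represent such a model is a directory that maps trusted display names to the addresses legitimately seen using them, seeded by an admin and/or inferred from observed traffic. The sketch below is illustrative only; the names, addresses, and threshold are hypothetical:

```python
from collections import defaultdict

# Hypothetical model: display name -> set of addresses legitimately seen using it.
trusted_senders: dict[str, set[str]] = defaultdict(set)

# Seeded manually by an admin (e.g., the names of the C-suite) ...
trusted_senders["Pat Smith"].add("pat.smith@company.example")

# ... or inferred from traffic: a (name, address) pairing observed repeatedly in
# benign conversations is promoted into the model.
observed_counts = {("Dana Jones", "dana.jones@partner.example"): 42}
MIN_OBSERVATIONS = 10  # hypothetical promotion threshold
for (name, address), count in observed_counts.items():
    if count >= MIN_OBSERVATIONS:
        trusted_senders[name].add(address)

print(dict(trusted_senders))
```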

Second, the security control automatically scrutinizes each incoming email to determine whether its display name is on the list of legitimate entities, and whether the email was actually sent from such a party. If a display name matches an entry in the list of senders but the email address does not, the email is potentially deceptive.
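A minimal sketch of that check, reusing the hypothetical trusted_senders directory from the previous example:

```python
from email.utils import parseaddr

def classify_display_name(from_header: str,
                          trusted_senders: dict[str, set[str]]) -> str:
    """Return 'trusted', 'suspicious', or 'unknown' for an incoming From header."""
    display_name, address = parseaddr(from_header)
    display_name = display_name.strip()
    if display_name not in trusted_senders:
        return "unknown"        # name not modeled; handled by other controls
    known = {a.lower() for a in trusted_senders[display_name]}
    if address.lower() in known:
        return "trusted"        # display name and address both fit the model
    return "suspicious"         # trusted-looking name, unexpected address

# Hypothetical example:
directory = {"Pat Smith": {"pat.smith@company.example"}}
print(classify_display_name('"Pat Smith" <pat.smith@company.example>', directory))      # trusted
print(classify_display_name('"Pat Smith" <ceo.pat.smith@freemail.example>', directory)) # suspicious
```

In a real deployment this verdict would be combined with many other signals; the point is simply that the comparison is made against a model of good rather than a list of known bad.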

The third step is protecting the user when a potentially deceptive email has been detected. Occasionally it is not appropriate to block the email – after all, it could come from another user who genuinely has the same name as the legitimate entity. In such cases, replacing the display name with a warning, or removing it altogether, are effective controls that prompt the user to proceed cautiously.
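As one concrete (and hypothetical) form this control could take, the display name can be rewritten before delivery so the recipient sees the raw address instead of the impersonated name; the warning text here is purely illustrative:

```python
from email.message import EmailMessage
from email.utils import parseaddr, formataddr

def neutralize_display_name(msg: EmailMessage) -> EmailMessage:
    """Replace a potentially deceptive display name with an explicit warning,
    so the recipient sees the raw address rather than the impersonated name."""
    _name, address = parseaddr(msg["From"])
    msg.replace_header("From", formataddr(("[Unverified sender]", address)))
    return msg

# Hypothetical suspicious message:
msg = EmailMessage()
msg["From"] = '"Pat Smith" <ceo.pat.smith@freemail.example>'
msg["Subject"] = "Urgent wire transfer"
print(neutralize_display_name(msg)["From"])
# e.g. "[Unverified sender]" <ceo.pat.smith@freemail.example>
```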

When to Block?

If the security control can establish with near-certainty that an email is dangerous, then it is always better to block it than to place it in a spam folder or add a warning to it. The reason is simple: research has shown that a large percentage of users don’t heed warnings, and many users periodically search the contents of their spam folders for emails that were mistakenly placed there.9

Consider an email with a spreadsheet attachment, where the spreadsheet has a macro. As a macro is an executable script, it can be harmful. While most macros are safe – and typically, macros from colleagues are safe – some are not.

If this hypothetical email is found to be sent from an authoritative party – and there are no signs that this party has been compromised – then it is almost certainly safe to deliver the email with the attachment. When the recipient opens the attachment, they will typically be notified that it contains a macro.

However, if the email has a deceptive display name – and it has a macro – then the risks of delivery overshadow the benefits. After all, this is an email that is not only likely to mislead the recipient about who sent it, but that also carries a potentially dangerous attachment. Such an email is better blocked.

In reality, this type of filtering is very complex, and there are many more aspects to consider. For example, if it is determined that the sender is an authoritative party, and the attachment not only has a macro but a macro that is known to be harmful, then this has another meaning: the sender has most likely been compromised, and a criminal is using his or her account to send dangerous attachments. If this is detected, the best course of action might be to automatically notify an administrator at the sender’s organization, or to remove the attachment and notify the recipient to alert the sender via telephone.
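The policy discussed in this section can be summarized as a small decision function. This is only a sketch of the reasoning above, with hypothetical inputs: the verdict from the display-name check, whether the attachment carries a macro, and whether that macro matches known-harmful behavior.

```python
def disposition(sender_verdict: str, has_macro: bool, macro_known_bad: bool) -> str:
    """Map the signals discussed above to an action for a single inbound email."""
    if sender_verdict == "trusted":
        if has_macro and macro_known_bad:
            # Trusted identity but known-harmful payload: the account itself is
            # likely compromised -- strip the attachment and alert an admin.
            return "remove_attachment_and_notify_admin"
        return "deliver"               # authoritative sender, ordinary content
    if sender_verdict == "suspicious":
        if has_macro:
            return "block"             # deceptive identity plus risky payload
        return "deliver_with_warning"  # e.g., rewrite the display name
    return "standard_filtering"        # unknown sender: fall back to other controls

print(disposition("suspicious", has_macro=True, macro_known_bad=False))  # block
print(disposition("trusted", has_macro=True, macro_known_bad=True))
# remove_attachment_and_notify_admin
```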


9 Perhaps the most famous example of this problem is the “John Podesta” phishing email, which was detected as a spoofed message and automatically placed in the recipient’s spam folder, but was pulled out of the spam folder by an associate of the account holder.


No matter how complex the filtering mechanism, though, the guiding principle remains the same: model what is good (in this example, who is an authoritative sender), and identify dangerous discrepancies. These discrepancies correspond to emails that are not determined to be good but that share important characteristics with good emails, and therefore will likely be interpreted as good by the recipients – in other words, deceptive emails.

To make this possible, we use Artificial Intelligence methods to model what is good. By constantly observing traffic and identifying goodness from repeated, benevolent patterns, the system creates an understanding of what truly is good. It then identifies discrepancies from good: messages that resemble the good traffic but do not fit the model. These are the high-risk, potentially deceptive cases.
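Conceptually, this amounts to accumulating evidence of good over time and scoring how far a new message deviates from that history while still resembling a trusted identity. The toy sketch below illustrates the idea only; the scoring, thresholds, names, and addresses are invented and are not Agari’s actual model:

```python
from collections import Counter

class GoodnessModel:
    """Toy model of 'good': count how often each (display name, address) pairing
    appears in observed benign traffic, then flag lookalike discrepancies."""

    def __init__(self) -> None:
        self.pair_counts: Counter = Counter()  # (name, address) -> observations
        self.name_counts: Counter = Counter()  # name -> observations

    def observe(self, display_name: str, address: str) -> None:
        self.pair_counts[(display_name, address)] += 1
        self.name_counts[display_name] += 1

    def risk(self, display_name: str, address: str) -> float:
        """0.0 = fits the model of good; 1.0 = trusted-looking name, unseen address."""
        seen_name = self.name_counts[display_name]
        if seen_name == 0:
            return 0.5  # no history at all: neither trusted nor clearly deceptive
        seen_pair = self.pair_counts[(display_name, address)]
        return 1.0 - (seen_pair / seen_name)

# Hypothetical history of benign traffic:
model = GoodnessModel()
for _ in range(50):
    model.observe("Pat Smith", "pat.smith@company.example")

print(model.risk("Pat Smith", "pat.smith@company.example"))  # 0.0 -> fits 'good'
print(model.risk("Pat Smith", "ceo.pat@freemail.example"))   # 1.0 -> high-risk lookalike
```

An attacker who has never been seen before scores as high-risk precisely because the model has never observed that lookalike pairing; no prior knowledge of the attack is required.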

The greater benefits and efficacy of modeling good instead of chasing bad are easy to understand. The attacker can use a never-before-seen email address, can send the message from an IP range that has never been associated with fraud, can avoid reusing known-bad URLs, and can use a crypter to obfuscate any malware he is attaching – and still be detected. The reason is simple: if the attacker doesn’t masquerade as a trustworthy identity, why would the recipient agree to the requests in the email? Detecting deceptive display names hits criminals where it hurts them the most – it detects the deception attempt on which their entire opportunity rests. At the same time, it takes away their most powerful tool for evading detection: it no longer matters how much they obfuscate the contents.

Conclusions

The security industry has doggedly repeated its search for bad in the hope that criminals will eventually run out of tricks. It is now time to change that paradigm and focus on what the indicators of good, authentic communication look like. Forcing threat actors to reveal who they are, and detecting when they are masquerading as somebody they are not, is the only way we can identify new, never-before-seen attacks – which are constantly and rapidly changing.

Agari applies Artificial Intelligence to model good, trusted communications and then detect deceptive discrepancies. In doing so, we neutralize the two most powerful weapons attackers have: identity deception and evasion/obfuscation. Good is predictable and consistent, while bad constantly changes, and by modeling good, it doesn’t matter whether the attackers change their email addresses, IP addresses, URLs, or attachments to evade detection. At the same time, zero-day attacks become less of a concern. Simply put, there is nowhere for the attackers to go. Agari’s approach by definition doesn’t rely on having previously seen an attack in order to catch the next instance.


About Agari

Agari, a leading cybersecurity company, is trusted by leading Fortune 1000 companies to protect their enterprise, partners, and customers from advanced email phishing attacks. The Agari Email Trust Platform is the industry’s only solution that ‘understands’ the true sender of emails, leveraging the company’s proprietary, global email telemetry network and patent-pending, predictive Agari Trust Analytics to identify and stop phishing attacks. The platform powers Agari Enterprise Protect, which helps organizations protect themselves from advanced spear phishing attacks, and Agari Customer Protect, which protects consumers from email attacks that spoof enterprise brands. Agari, a recipient of the JPMorgan Chase Hall of Innovation Award and recognized as a Gartner Cool Vendor in Security, is backed by Alloy Ventures, Battery Ventures, First Round Capital, Greylock Partners, Norwest Venture Partners, and Scale Venture Partners. Learn more at http://www.agari.com and follow us on Twitter @AgariInc.