Verizon 2015 DBIR VM portion




2015 DATA BREACH INVESTIGATIONS REPORT 15

Of all the risk factors in the InfoSec domain, vulnerabilities are probably the most discussed, tracked, and assessed over the last 20 years. But how well do we really understand them? Their link to security incidents is clear enough after the fact, but what can we do before the breach to improve vulnerability management programs? These are the questions on our minds as we enter this section, and Risk I/O was kind enough to join us in the search for answers.

Risk I/O started aggregating vulnerability exploit data from its threat feed partners in late 2013. The data set spans 200 million+ successful exploitations across 500+ common vulnerabilities and exposures (CVEs)11 from over 20,000 enterprises in more than 150 countries. Risk I/O does this by correlating SIEM logs, analyzing them for exploit signatures, and pairing those with vulnerability scans of the same environments to create an aggregated picture of exploited vulnerabilities over time. We focused on mining the patterns in the successful exploits to see if we could figure out ways to prioritize remediation and patching efforts for known vulnerabilities.
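The correlation step described above amounts to joining exploit signatures observed in SIEM logs against open findings from vulnerability scans of the same assets. A minimal sketch of that join; the record shapes, field names, and CVE/asset values below are hypothetical illustrations, not Risk I/O's actual schema:

```python
from collections import Counter

def confirmed_exploits(exploit_events, scan_findings):
    """Pair exploit signatures seen in SIEM logs with vulnerability scan
    findings for the same asset and CVE, counting only confirmed matches."""
    open_vulns = {(f["asset"], f["cve"]) for f in scan_findings}
    hits = Counter()
    for ev in exploit_events:
        if (ev["asset"], ev["cve"]) in open_vulns:
            hits[ev["cve"]] += 1
    return hits

# Hypothetical SIEM exploit-signature events and scan findings.
events = [
    {"asset": "web01", "cve": "CVE-2014-3566"},
    {"asset": "web01", "cve": "CVE-1999-0517"},   # not open per the scan: dropped
    {"asset": "db01",  "cve": "CVE-2014-3566"},
]
scans = [
    {"asset": "web01", "cve": "CVE-2014-3566"},
    {"asset": "db01",  "cve": "CVE-2014-3566"},
]
print(confirmed_exploits(events, scans))  # Counter({'CVE-2014-3566': 2})
```

The intersection against scan findings is what turns raw exploit signatures into "successful exploitations of vulnerabilities we know were present."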

‘SPLOITIN TO THE OLDIES

In the inaugural DBIR (vintage 2008), we made the following observation: For the overwhelming majority of attacks exploiting known vulnerabilities, the patch had been available for months prior to the breach [and 71% >1 year]. This strongly suggests that a patch deployment strategy focusing on coverage and consistency is far more effective at preventing data breaches than “fire drills” attempting to patch particular systems as soon as patches are released.

We decided to see if the recent and broader exploit data set still backed up that statement. We found that 99.9% of the exploited vulnerabilities had been compromised more than a year after the associated CVE was published. Our next step was to focus in on the CVEs and look at the age of CVEs exploited in 2014. Figure 10 arranges these CVEs according to their publication date and gives a count of CVEs for each year. Apparently, hackers really do still party like it’s 1999. The tally of really old CVEs suggests that any vulnerability management program should include broad coverage of the “oldies but goodies.” Just because a CVE gets old doesn’t mean it goes out of style with the exploit crowd. And that means that hanging on to that vintage patch collection makes a lot of sense.
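The age analysis behind the 99.9% figure and the Figure 10 tally boils down to two small computations: the share of exploitations landing more than a year after CVE publication, and a count of exploited CVEs grouped by publication year. A sketch with invented records (real publish dates would come from the CVE dictionary):

```python
from collections import Counter
from datetime import date

# Hypothetical (cve_id, publish_date, first_exploit_date) records.
records = [
    ("CVE-1999-0517", date(1999, 8, 1),   date(2014, 6, 2)),
    ("CVE-2002-0012", date(2002, 1, 31),  date(2014, 3, 9)),
    ("CVE-2014-3566", date(2014, 10, 14), date(2014, 10, 30)),
]

# Share of exploitations striking more than a year after CVE publication.
old = sum(1 for _, pub, hit in records if (hit - pub).days > 365)
share = old / len(records)

# Figure-10-style tally: exploited CVEs grouped by publication year.
by_year = Counter(pub.year for _, pub, _ in records)

print(f"{share:.1%}", dict(by_year))
```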

11 Common Vulnerabilities and Exposures (CVE) is “a dictionary of publicly known information security vulnerabilities and exposures.”—http://cve.mitre.org

VULNERABILITIES

Do We Need Those Stinking Patches?

99.9% OF THE EXPLOITED VULNERABILITIES WERE COMPROMISED MORE THAN A YEAR AFTER THE CVE WAS PUBLISHED.

Figure 10. Count of exploited CVEs in 2014 by CVE publish date (x-axis: year CVE was published, ’99–’14; y-axis: number of published CVEs exploited).


NOT ALL CVES ARE CREATED EQUAL.

If we look at the frequency of exploitation in Figure 11, we see a much different picture than what’s shown by the raw vulnerability count of Figure 10. Ten CVEs account for almost 97% of the exploits observed in 2014. While that’s a pretty amazing statistic, don’t be lulled into thinking you’ve found an easy way out of the vulnerability remediation rodeo. Prioritization will definitely help from a risk-cutting perspective, but beyond the top 10 are 7 million other exploited vulnerabilities that may need to be ridden down. And therein, of course, lies the challenge: once the “mega-vulns” are roped in (assuming you could identify them ahead of time), how do you approach addressing the rest of the horde in an orderly, comprehensive, and continuous manner over time?
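The “ten CVEs account for almost 97%” observation is a cumulative-share calculation over per-CVE exploit counts. A sketch with made-up counts, not the report's underlying data:

```python
from collections import Counter

def top_n_share(exploit_counts, n=10):
    """Fraction of total exploit volume covered by the n most-hit CVEs."""
    total = sum(exploit_counts.values())
    top = sum(count for _, count in Counter(exploit_counts).most_common(n))
    return top / total

# Invented per-CVE exploit counts for illustration.
counts = {
    "CVE-1999-0517": 9000,
    "CVE-2002-0012": 500,
    "CVE-2014-3566": 400,
    "CVE-2001-0540": 100,
}
print(top_n_share(counts, n=2))  # 0.95
```

Ranking by observed exploit volume rather than raw CVE count is what surfaces the “mega-vulns” in the first place.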

FROM PUB TO PWN

If Figure 11—along with our statement above from 2008—advocates the turtle method of vulnerability management (slow and steady wins the race), then Figure 12 prefers the hare’s approach. And in this version of the parable, it might just be the hare that’s teaching us the lesson. Half of the CVEs exploited in 2014 were exploited within two weeks of publication. What’s more, the actual time lines in this particular data set are likely underestimated due to the inherent lag between initial attack and detection readiness (generation, deployment, and correlation of exploits/signatures).

These results undeniably create a sense of urgency to address publicly announced critical vulnerabilities in a timely (and comprehensive) manner. They do, however, raise the question: What constitutes a “critical vulnerability,” and how do we make that determination?
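The curve in Figure 12 is an empirical cumulative distribution of publish-to-exploit delays, bucketed by week. A sketch with hypothetical delays:

```python
def weeks_to_exploit_ecdf(delays_in_days, max_week=48):
    """Cumulative proportion of CVEs first exploited on or before each
    week after publication (the shape plotted in Figure 12)."""
    n = len(delays_in_days)
    return [sum(1 for d in delays_in_days if d // 7 <= week) / n
            for week in range(max_week + 1)]

# Hypothetical days from CVE publication to first observed exploit.
delays = [3, 10, 13, 40, 200]
ecdf = weeks_to_exploit_ecdf(delays)
print(ecdf[2])  # fraction exploited within the first two weeks
```

Because detection lags the initial attack, the true delays are, as noted above, probably shorter than what any such computation on observed data reports.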

WHAT’S IN A SCORE, THAT WHICH WE ALL COMPOSE?

The industry standard for rating the criticality of vulnerabilities is CVSS,12 which incorporates factors related to exploitability and impact into an overall base score. Figure 13 (next page) displays the CVSS scores for three different groupings of CVEs: all CVEs analyzed (top), all CVEs exploited in 2014 (middle), and CVEs exploited within one month of publication (bottom). The idea is to determine which CVSS factors (if any) pop out and thus might serve as a type of early warning system for vulnerabilities that need quick remediation due to high likelihood of exploitation.

12 The Common Vulnerability Scoring System (CVSS) is designed to provide an open and standardized method for rating IT vulnerabilities.
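For reference, the CVSS version in use at the time of this report (v2) computes the base score as a fixed formula over the exploitability and impact metrics shown in Figure 13; the coefficients below come from the CVSS v2 specification:

```python
# CVSS v2 metric weights (per the v2 specification).
AV = {"L": 0.395, "A": 0.646, "N": 1.0}      # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}       # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}      # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}     # C/I/A impact

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# Network-accessible, low complexity, no auth, complete C-I-A loss:
print(cvss2_base("N", "L", "N", "C", "C", "C"))  # 10.0
# Same exploitability with only partial C-I-A impact:
print(cvss2_base("N", "L", "N", "P", "P", "P"))  # 7.5
```

The two example vectors illustrate the pattern discussed on the next page: exploitability factors are near-identical for most CVEs, so the impact metrics are what separate a 7.5 from a 10.0.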

Figure 11. Cumulative percentage of exploited vulnerabilities by top 10 CVEs (y-axis: percent of exploited CVEs, 0–100%). The top 10: CVE-1999-0517, CVE-2001-0540, CVE-2002-0012, CVE-2002-0013, CVE-2014-3566, CVE-2012-0152, CVE-2001-0680, CVE-2002-1054, CVE-2002-1931, and CVE-2002-1932.

About half of the CVEs exploited in 2014 went from publish to pwn in less than a month.

Figure 12. Cumulative percentage of exploited vulnerabilities by week(s) from CVE publish dates (x-axis: week exploit occurred after CVE publish date, 0–48; y-axis: proportion of CVEs exploited, 0–100%).


None of the exploitability factors appear much different across the groups; it seems that just about all CVEs have a network access vector and require no authentication, so those won’t be good predictors. The impact factors get interesting; the proportion of CVEs with a “complete” rating for C-I-A13 rises rather dramatically as we move from all CVEs to quickly exploited CVEs. The base score is really just a composite of the other two factors, but it’s still worth noting that most of those exploited within a month post a score of nine or ten. We performed some statistical significance tests and found some extremely low p-values, signifying that those differences are meaningful rather than random variation. Even so, we agree with Risk I/O’s finding that a CVE being added to Metasploit is probably the single most reliable predictor of exploitation in the wild.14
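The report doesn’t name the significance tests used; one standard choice for comparing the share of “complete” impact ratings between two groups of CVEs is a two-proportion z-test, sketched below. The counts are invented for illustration, not taken from the report:

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for a difference in proportions, e.g. the share of
    CVEs rated 'complete' for confidentiality in two groups of CVEs."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Hypothetical counts: CVEs with 'complete' confidentiality impact among
# all CVEs vs. among quickly exploited CVEs (numbers invented).
z, p = two_proportion_z_test(20000, 67567, 20, 24)
print(f"z={z:.2f}, p={p:.3g}")
```

With group sizes this lopsided (tens of thousands vs. a couple dozen), even a visually dramatic difference deserves a test like this before calling it meaningful; the “extremely low p-values” above are what license that call.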

Outside the CVSS score, there is one other attribute of a “critical” vulnerability to bring up, and this is a purely subjective observation. If a vulnerability gets a cool name in the media, it probably falls into this “critical vulnerability” label.15 As an example, in 2014, Heartbleed, POODLE, Schannel, and Sandworm were all observed being exploited within a month of CVE publication date.

In closing, we want to restate that the lesson here isn’t “Which of these should I patch?” Figure 13 demonstrates the need for all those stinking patches on all your stinking systems. The real decision is whether a given vulnerability should be patched more quickly than your normal cycle or if it can just be pushed with the rest. We hope this section provides some support for that decision, as well as some encouragement for more data sharing and more analysis.

13 As all good CISSPs know, that’s Confidentiality, Integrity, and Availability.

14 www.risk.io/resources/fix-what-matters-presentation

15 As this section was penned, the “Freak” vulnerability in SSL/TLS was disclosed. http://freakattack.com

Figure 13. CVSS attributes across classes of CVEs. The panels compare three groups: all CVEs (n=67,567), just exploited (n=792), and critical (exploited within one month of publication; n=24). Each group is shown across the exploitability factors (Access Vector: Local/Adjacent/Network; Access Complexity: Low/Medium/High; Authentication: None/Single/Multiple), the impact factors (Confidentiality, Integrity, Availability: Complete/Partial/None), and the CVSS base score (1–10), with the y-axis showing the number of CVEs.