fault tolerance and adversarial risk analysis · however, the process of risk analysis is often...

99
1 FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS A FRAMEWORK By Frank Radeck 7/23/2012 Davenport University CAPS795, MSIA

Upload: others

Post on 22-Jul-2020

0 views

Category:

Documents


0 download

TRANSCRIPT

Page 1: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

1

FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS

A FRAMEWORK

By Frank Radeck

7/23/2012

Davenport University

CAPS795, MSIA

Page 2: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

2

ABSTRACT

Risk analysis can provide crucial insight into an organization’s specific security needs.

Unfortunately, traditional risk analysis methods are flawed. The resulting data is then used to

create mitigation strategies that are costly and potentially ineffective. Alternative approaches,

such as those based on game theory, have been crafted to address the weaknesses of

traditional risk analysis, but they suffer from other limiting factors. A new model must be

created that is realistic, practical, and applicable. This model will incorporate elements of

traditional risk analysis and game theoretic approaches. Such elements include adversary

profiling, fault tolerance, collaborative layering of defenses, cost analysis, and attack

classification.

Page 3: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

3

TABLE OF CONTENTS

INTRODUCTION ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

RISK ANALYSIS .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

THESIS STATEMENT ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

ALTERNATIVES TO TRADITIONAL RISK ANALYSIS .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

APPLICABLE CONCEPTS.... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

PROPOSAL ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

SCENARIO ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

ISO/IEC 27001:2005 ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS .. . . . . . . . . . . . . . . . . . . . . . . . . . 37

COMPARING METHODOLOGIES ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

SUMMARY ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

FUTURE WORK ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

REFERENCES ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

APPENDIX A – Business Impact Analysis Questionnaires .. . . . . . . . . . . . . . . . . . . . . . . . . 72

APPENDIX B – Exposure Tables by Attack Classification ... . . . . . . . . . . . . . . . . . . . . . . . . 75

APPENDIX C – Proposed Analysis Tables .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86

APPENDIX D – Impact Tables .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

APPENDIX E – ISO Risk Analysis Spreadsheets .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

APPENDIX F – Assets .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

Page 4: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

4

INTRODUCTION OVERVIEW OF SECURITY

The foundation of information security programs lies in risk analysis. Properly

identifying, prioritizing, and mitigating risk is crucial to the existence of any organization which

relies on electronic information. Particularly, when the electronic information is that which

requires confidentiality, integrity, and availability, commonly referred to as the CIA triad, risk

analysis plays an integral role. However, the process of risk analysis is often misguided, if not

completely overlooked. There are many reasons for this, including the lack of managerial

support, lack of understanding of appropriate methodologies, and lack of strategic vision.

Problem is, there is no universally accepted risk analysis methodology in the information

security landscape. Due to this, many responsible parties are left confused as to where to begin.

Additionally, the problems of managerial support and strategic vision are similar in that they

are fundamentally based on the misunderstanding of what security is and what it is not.

CONFIDENTIALITY, INTEGRITY, AND AVAILABILITY

The concept of security has taken on a different nature since the beginning of the

modern computing era. In order to understand computer security, one must understand the

aforementioned CIA model. This notion can be applied to many different points of interest,

including, but not limited to: systems, data, information, and other electronic resources.

Confidentiality refers to the privacy or obfuscation of said points of interest. Regarding an

information resource, confidentiality could be applied via encryption, for example. The goal of

confidentiality is to only allow access to the resource to those that require access to it. Integrity

refers to the accuracy or validity of an item. Regarding an information resource, integrity could

Page 5: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

5

be verified with file checksums. The goal of integrity is to ensure a resource is what it claims to

be or what it is believed to be. Finally, availability refers to the usability and timely accessibility

of an item. Regarding an information resource, availability implies that, while secure, data is

ready to be used and in a reasonable manner. The goal of availability is simply to provide the

means of access to those that require it. Choosing a security solution should always be based

on its ability to address these three concepts.

THE VALUE OF SECURITY

Implementing confidentiality, integrity, and availability is an art form. Like any segment

of business, there is a limit on usable resources. For any given vulnerability or addressable and

known weakness, there will be a very attractive and very expensive manner in which to

mitigate the adverse effects of exploitation. However, this is often not the solution due to cost.

Security is a balance. A common depiction of this balance is the inherent imbalance of over-

securing a resource. For example, regarding an item that costs $1 to replace, implementing a

mitigation strategy that costs $500 to protect this resource would be foolish. Disregarding all

other potential factors, a more attractive option given the choice of that strategy or nothing at

all would be to do nothing at all, and simply wait until something happens. However, as the

price declines, usually via the introduction of additional strategic options, the resource’s

protection becomes increasingly reasonable.

Unfortunately, in order to truly sell the idea of security it is necessary to display the

value. This can be difficult because security is typically viewed as having no tangible value.

Securing a resource has no value and the secure state of a resource also has no value.

Page 6: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

6

Inherently, security appears to be little in the way of cost effective in that it is only a cost. One

must display the intangible value of security, usually by exemplifying the cost of a scenario that

is likely to occur without the warranted security measures. One way this is commonly achieved

if by crafting an Annualized Loss Expectancy value based on historical data as well as the

experiences of organizations in an overall similar situation. The general idea is that by

implementing security measures, the Annualized Loss Expectancy value is decreased, taking

into consideration, of course, the cost of the measures. These values are ultimately

assumptions, as are the effects of securing a resource. However, security is no longer just about

feeling better concerning one’s chances of successfully defending an attack, and more about

defending one’s self from regulatory infractions. These factors combine to help illustrate to

decision makers the importance of an implementation.

EXTERNAL FACTORS IN SECURITY DECISIONS

The cost of security is rising. Not only are more complex attack vectors creating a

market for more complex mitigation measures, but regulatory requirements placed on

organizations housing sensitive information are becoming increasingly stringent. This burden

can be hard to deal with, particularly as the size of the organization scales downward. A very

small organization will fall under the same guidance as a much larger organization when it

comes to most regulations simply because it houses the same types of data. Obviously, a small

organization typically has proportionately less resources than a larger organization, and this

fact hits home when it comes to security and reasonable measures. Fortunately, regulatory

requirements only outline what an organization should achieve, and not how to achieve it. As

Page 7: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

7

the size of an organization decreases, it puts the onus on the creativity of responsible parties to

craft solutions that are effective and efficient, in practice and in cost.

SECURITY ARCHITECTURE

There are seemingly endless options of security measures to choose from. When a

weakness is found, it can almost assuredly be strengthened by a product or technique. As

discussed earlier, security is at the mercy of an organization’s resources. When a security

budget is introduced, it must not be devoured haphazardly by solutions assumed effective.

Spending too much on one solution leaves less for other solutions, which is a flawed approach.

Solutions should collaborate, at the very least financially and ideally in practice. This is how to

get the most out of a security budget, and this is very important for smaller scale organizations

or organizations will smaller scale budgets. In order to achieve this principle, a high level

understanding of an organization’s unique risk environment is required. This is where risk

analysis comes into play.

RISK ANALYSIS OVERVIEW OF METHODOLOGIES AND TERMINOLOGY

Risk analysis has been around for a long time and has been applied to many industries in

different ways. Traditionally, there are two flavors of risk analysis: qualitative and quantitative.

Qualitative risk analysis refers to that which is based in classifications. For example, a risk may

be classified as High Severity, or Low Severity. Using High and Low are a way of classifying

measurements. Quantitative risk analysis refers to that which is based in numbers. However,

these numbers cannot be classifications, they must be real world values derived from some

Page 8: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

8

manner of applicable data. For example, measuring severity based on a 1-10 scale is not

quantitative, although it is utilizing numbers. However, measuring severity based on annualized

loss expectancy of a given risk is quantitative. There are advantages and disadvantages to both

methods, often linked. Typically, qualitative risk analysis would be used as a quick way of

analyzing a scenario, or as a way of analyzing in the absence of usable historical data. The

disadvantage of this is that the results are not satisfactorily accurate, and often not entirely

applicable. Having results as accurate as possible is not only inherently attractive but also

establishes the foundation by which to build an effective mitigation strategy, particularly in

terms of a budget. A quantitative risk analysis is a more drawn out method, but will usually

provide the desired results. Because of the lengthy process and research required for this

method, it can be costly and as such not always appropriate in all situations.

METRICS

There are many methods for carrying out either of the two aforementioned styles of risk

analysis. Traditionally, whatever method is chosen is based on probabilities in some way.

Probabilistic risk analysis is used particularly in information security scenarios and any other

complex situation with highly technical stages. This model, which will henceforth be referred to

as the “traditional” model for risks analysis, utilizes two variables as its basis. The first variable

is severity, which may also be known as magnitude. Severity refers to the adverse impact of a

risk event’s occurrence and is typically an estimation, classified or quantified. The second

variable is the likelihood or probability of the risk event occurring and creating an adverse

impact. These variables can be based on historical data of events that occurred to similar

Page 9: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

9

organizations, or they can be blind guesses. The more research that is done to define these

values, the more accurate the results of an analysis process will be.

Furthermore, the traditional model is often expanded when applied to computer

security. The introduction of additional metrics such as exposure, capability, and motivation

seek to enhance results. Exposure refers to the landscape of the risk, and is typically defined as

electronic or physical. This variable divides the results into groups of different exposures, and

can also give insight into a landscape that is particularly vulnerable in a given scenario.

Capability refers to the actual feasibility of an attack based on an organization’s situation as

well as the perceived difficulty of the exploitation of a given vulnerability. Finally, motivation

refers to the concept of defining the reasons why an exploitation or attack vector may take

place. Motivation may be unclear or invalid for certain threats. These variables combine to

create an overall image of the risk, and allow for the prioritization of risk which can then

ultimately be used as a basis for mitigation strategies.

RISK IDENTIFICATION

In order to analyze risks, they must first be identified. Traditionally, this process involves

simply guessing which risks exist and which do not. Intrinsically, this process neglects those risks

that exist but are not identified for one reason or another. Risk identification can be very

haphazard. However, basing the discovery on historical data or similar organizational findings

may aid the situation.

Fortunately, for computer security this process becomes significantly easier. Since the

majority of high priority risks in largely technical scenarios are due to vulnerabilities in software,

Page 10: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

10

the conducting of a vulnerability assessment is often enough to satisfy the requirements of a

risk identification process. Vulnerability assessments on computer systems are primarily

automated endeavors, the results of which are essentially laundry lists of risks in the form of

exploitable weaknesses. Using that list, it is then possible to go about analyzing the previously

discussed metrics.

FLAWS IN TRADITIONAL RISK ANALYSIS

M.V. Ramana published an article concerning the effectiveness of probabilistic risk

assessments, with a particular focus on the problems facing current risk assessment

methodologies. In this article, the idea of complex systems is elaborated on as applied to risk.

Specifically, risks in complex systems cannot be accurately measured with human imaginations.

This is in reference to the severity metric discussed previously. Furthermore, the likelihood

variable is also flawed because it relies largely in historical data which is either incomplete or so

inconsistent in reporting that it is invalid. The resulting metrics and findings of these

assessments are then used to guide high level decision making, which is then exposed as

unsound when disaster actually strikes. Ramana also explores the reality that failure modes, or

risk events, can often chain react and cause additional risk events to occur, unexpectedly. In

order to account for these possibilities, a different strategy must be applied (Ramana, M.V.,

2011).

In a related work presented to the Massachusetts Institute of Technology in 2004, the

effectiveness of probabilistic risk assessments was questioned in theory. The writers concluded

that said risk assessments are flawed because they fail to take into account “indirect, non-

Page 11: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

11

linear, and feedback relationships” that occur during incidents in complex systems, such as

those found in information technology. Essentially, statically identifying threat vectors via

human imagination is not a realistic practice because key elements will be missing. Additionally,

identifying threat vectors via computer-aided discovery, such as with vulnerability assessments,

may also be missing key elements because those assessments are based on known items. Risk

analysts must account for known unknowns and unknown unknowns (Ramana, M.V., 2011).

In a paper by N. Bambos on information security risk management, the effectiveness of

risk assessments as a whole is explored. Bambos explains that because risk assessments are

essentially snapshots in time of an organization’s risk environment, that the results are really

only applicable for a short period of time after the assessment. While it is true organizations

perform such assessments at regular intervals, what Bambos is truly getting at is that risk

management needs to take a more dynamic role (Bambos, N., 2010).

Bambos also discusses the concept of risk sources. These are defined using different

scales based on who is involved. For example, an engineer may be concerned with risks that

must be remediated in minutes, and is focused on absolute security. However, a manager may

be more concerned with risks that must be remediated in months, and is more focused on

limiting exposure. It is argued that risk management is increasingly ineffective as the scale of

the risk source shrinks, that is, as it becomes more important to react aggressively. While it is

important that risk assessments capture risk sources of all scales, it is also important to realize

the speed at which remediation is required, and treat those risk sources accordingly. Bambos

proposes tools for dynamically managing small scale risks that require aggressive action. While

Page 12: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

12

intriguing, these tools would work under the assumption of known attack vectors. However, the

principle of dynamically addressing risk sources can be applied differently, not just for the

purposes of speed, but for the purposes of fault tolerance (Bambos, N., 2010).

THESIS STATEMENT FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS

Traditional, probabilistic risk analysis is flawed. Many alternative approaches have

attempted to address these flaws. One such approach involves applying game theory to

information warfare in a two-player, attacker versus defender style game. However, the

typically applied models fail to account for unknowns. Additionally, although game theoretic

approaches have been proven to defeat traditional analysis methods in terms of realism, the

inclusion of key assumptions is a limiting factor.

What is required for realistic, practical, and applicable risk analysis in today’s

environment is a model that accounts for unknowns, is based on controllable factors, and

considers the existence of threats based on adversarial information. The proposed model will

theoretically outperform traditional risk analysis given an identical scenario. Supporting work

has been completed on elements of the proposed model, and applicable concepts will be

utilized.

ALTERNATIVE TO TRADITIONAL RISK ANALYSIS A GAME THEORETIC APPROACH

Game theory provides a different way of viewing computer security, particularly in

regards to risk analysis. Game theory is the study of situations involving two or more conflicting

Page 13: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

13

or sometimes cooperating entities. This study can be applied to a variety of situations, so long

as they can meet the inherent requirements of a game. In order for a situation to be modeled

as a game, it must include two or more players. Additionally, the players must have a set of

playable strategies. In other words, the players must have a set of different decisions to make

that may lead them to a goal. The goal of the game, which may be different for each player, is

known as the payoff, and is another required element. For a traditional normal form game, the

assumption is these elements are all known to every player. However, that assumption is simply

not the case for complex games. Game theory is an intriguing decision strategy because it takes

into account what others are doing or what they are going to do. It also takes into account what

others know, and also who they are and why they are. Simply put, utilizing game theory for risk

analysis, as well as risk identification, adds a dimension that can greatly affect results.

The goal of applying game theory to given scenario is to identify the solution concepts, if

any. That is, what are the moves that players can make that effectively lead them to their

payoff. One common solution concept is known as Nash equilibrium. This is a situation where

all players have performed in such a way that no deviation from their current stance is

desirable. Knowing the existence of a Nash equilibrium, or any solution concept for that matter,

can greatly affect the strategy employed by a player.

ATTACKERS AND DEFENDERS

Modeling computer security in a warfare style game between attackers and defenders

has been done with encouraging outcomes. In order to understand this particular game, one

must understand its players. There are a number of common assumptions about the defenders

Page 14: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

14

and attackers in this game. It is assumed that the defenders are knowledgeable and interested

in security. It is also assumed that attackers are omnipotent, and pursue targeted attacks. The

reality is much different. That is, defenders do not know everything, and attackers are only

economically rational.

The players will attempt to understand one another. The defenders will seek to learn

who their adversaries are, and estimate their adversaries’ resources. These resources may

include economic, technological, and behavioral. By understanding the extent of these

resources the defender may develop a better understanding of the probability and potential

magnitude of an attack by the adversary. Additionally, by understanding these resources the

defender may be able to discern the potential moves of the adversary, particularly, specific

attack vectors.

GAME MODELS

There are several different forms of games that have been applied to information

warfare. A Perfect Information Game is a game where involved players know the set of possible

strategies of other players as well as every player’s payoffs. Conversely, a game in which one

player is unaware of a particular piece of information, such as the possible moves of another

player, is called an Imperfect Information Game. A Static/Strategic Game is a one-move game in

which involved players all choose their strategy simultaneously and are unaware of the choices

of other players. A Dynamic/Extensive Game expands on the previous style by allowing

additional moves, performed simultaneously and again without the knowledge of other players’

moves. Finally, a Stochastic Game is a game in which the current state of the system dictates

Page 15: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

15

the flow of the game. For example, a payoff of the game may be reached by the player, which

then sends the game into a new state, which may or may not change elements of the game.

Information warfare scenarios can be modeled using any one of these types of games. The

effectiveness of these models is debatable.

FLAWS IN APPLIED GAME THEORETIC APPROACHES

In a survey of game theory as applied to network security by S. Roy and collaborators of

the University of Memphis, it was noted that the majority of games modeled are static, games

with perfect information, or games with complete information. However, this is simply not the

case in reality, where dynamic games with imperfect and incomplete information are prevalent.

Information warfare games must be modeled with reality in mind, in order to be truly effective

(Roy, S., 2010).

As an example, K. Lye and J. M. Wing of Carnegie Mellon University presented a

Stochastic model for a network security game. In this game, all potential moves are identified in

the form of a list of possible attacks, a list of possible defenses (or responses to attacks), and a

defined list of network states. Assuming these elements are all known, this model, when

applied, would be very intriguing due to the existence of Nash equilibrium. However, in reality

there are no defined lists of possible moves. While risk analysts may be able to capture a large

amount of these moves via imagination and applicable historical data, any amount of unknown

information changes the game’s type and the model is no longer valid. If the game’s type is

ignored, the results of the analysis will be flawed (Lye, K. & Wing, J., 2002).

Page 16: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

16

While the foundation by which results are achieved may be unrealistic, the effectiveness

of the potential outcomes is still intriguing. In a recent work by E. A. Lemay, a model for

inputting adversary information and behavior was created and applied. What the model proved

was that knowledge of adversaries can be truly helpful during analysis. Knowing the adversary

may reveal general attack patterns or generically classified attacks. The idea of classifying

attacks rather than defining a particular vector is interesting because it may account for

unknowns. However, the actual model employed in the work continues by breaking down

attacks into steps, which is then used for specific analysis. Breaking down attacks into specific

steps assumes a high level technical knowledge of attacks, which is not always the case for

defenders (Lemay, E., 2011).

APPLICABLE CONCEPTS PLAYER AWARENESS

An interesting aspect of game theory and network security game models is player

awareness and its varying degrees. In a work by J. Halpern on the validity of Nash equilibria in

network security games, this concept of player awareness is explored. Assuming the style of

game is correctly identified, the actual knowledge of players must be understood. This

knowledge is measured in terms of awareness. According to Halpern’s work, players lacking

awareness behave abnormally and apparently irrational. These players do not respond to

potential incentives, probably because they do not know they exist, or because they are

irrational in addition to lacking awareness. These players may also be at the mercy of faulty

tools. The reason this is important is because a player’s knowledge and rationality greatly affect

Page 17: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

17

solution concepts in a game. For instance, a Nash equilibrium relies heavily on rationality. If a

player deviates from a Nash equilibrium it can be assumed they were unaware that it was a

Nash equilibrium or they are irrational. Players lacking awareness make apparently random

decisions, and are unpredictable in nature (Halpern, J., 2011).

Halpern introduces the idea that network security games should be modeled in what is

known as an augmented game, or a game with varying degrees of awareness. Additionally, the

game changes slightly as the level of awareness increases, primarily due to the game being

repeated, and players learning from historical outcomes. However, this game is still based on

an unreasonable assumption, that is, that the specific game being played between two

particular players will be repeated. An attacker is likely to strike once, and move on. However,

the idea of awareness is very applicable. Sometimes players are aware that they are unaware,

which is always a good state of play. For instance, a chess player may make a mental move and

assume an outcome based on their perceived value of a location on the board. This mental

move is a precautionary step because the player knows they are unaware of what the other

player will do, but must consider that player’s options (Halpern, J., 2011).

Expanding on awareness is G. Stocco and G. Cybenko’s work on exploiting adversaries.

Particularly, the work continues the observation that awareness is dynamic as a game is played.

Depending on historical data and length of the game, a player’s personal awareness may

change dramatically. This awareness includes awareness of their own moves as well as

awareness of their opponent’s moves. In extreme situations, this awareness may include

awareness that there is a particular opponent, or perhaps that there is a game being played at

Page 18: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

18

all. An example of changing awareness can be seen in a card game. If a player has never played

the game before, they lack overall awareness about what they can do, what their opponents

can do, what the payoffs are, and why. However, after playing the game, the player will pick up

on these elements through simple observation. It is not entirely unreasonable to assume that

players in a network security game will have increased awareness. The question is whether or

not increasing awareness is a goal of a particular player. It could be argued that a defender

would be attracted to awareness as it would help greatly in strategizing. Of course, awareness

will theoretically help either player fundamentally. On the other hand, it could be argued that

an attacker may or may not use the awareness at all. Based on the idea that an attacker is not

launching specific targeted attacks, awareness may not be of value. Reason being, the addition

of awareness will not change the attacker’s strategy (Stocco, G. & Cybenko, G., 2011).

ADVERSARIES AND INFLUENCE IN SECURITY GAMES

Information warfare is not unlike traditional warfare. Particularly in scenarios where two

entities hold immense power and accessibility to targets, information warfare can appear

similar to nuclear warfare in that it would seem both parties would prefer deterrence to action.

However, the adversaries in information warfare are very different from adversaries in

traditional warfare.

In a paper by N. Christin on network security games, adversaries were explored. These

adversaries, as referenced previously, are largely motivated by financial gain, rather than the

destruction of their targets. Highly technical, targeted attacks are rare. Hacktivist style attacks,

or those motivated by some political bias, are also rare. The modern threats facing

Page 19: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

19

organizations are those that seek to financially gain from the attack. Additionally, attackers prey

on behavioral biases. This fact is especially obvious when considering phishing attacks, which

generally prey on fear. The incentives for these types of attacks are high and the penalties are

disproportionately low, which is why they are common (Christin, N., 2011).

Interestingly, Christin also introduces the possibility that modeling attackers may not

actually lead to effective strategies. However, by knowing an adversary’s motivation, the

defender already knows more about the game than the attacker, which is inherently

advantageous. A defender must assume an attacker will do what they do not want them to do.

Finally, Christin introduces the idea that the game is commonly attackers versus end users, due

to the nature of the attacks. Therefore, attacks on organizations are likely due to end user error

or lack of awareness. Clearly, understanding the adversary can shed light on at least portions of

the game, which can be used to guide decision making (Christin, N., 2011).

By understanding who an adversary is, it could be perceived possible to influence their

actions and strategic decisions in a game. Related work by A. Clark and R. Poovedran discussed

how to maximize influence over adversaries in competitive environments. Specifically, how a

defender can influence the actions of an attacker was explored. The ability to influence in a

game is based on the assumption that the game is perfect as defined previously, which is not

the case in reality. Influencing adversary actions is also based on the assumption that the

adversary is rational and will react to the incentives. These assumptions are simply not reliable,

particularly in real world scenarios. Furthermore, it is in question whether or not attackers are

concerned with incentives at all. Because attackers are largely influenced by a specific

Page 20: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

20

motivation, an incentive will theoretically fail to attract or detract. Additionally, because a

significant amount of attacks are automated in nature, and only respond to inputs, the

incentive would have to be conceived based on previous knowledge of the attack vector, which

is unrealistic (Clark, A. & Poovedran, R., 2011).

An example of attempted influence in a proposed network security game is contained in

B. Tuffin’s work on the interplay between security providers and attackers. Tuffin proposes the

idea that a defender may use vendor decisions to influence attackers. For example, if a network

is primarily inhabited with Microsoft based products versus Apple products, there may be a

different reaction by an adversary. It is argued that in this specific example, an attacker would

prefer the network inhabited by Microsoft products, because the vulnerabilities are more

widely known and Microsoft products are simply more widely attacked (Tuffin, B., 2010).

This concept is interesting but it is debatable whether or not the attacker’s actions are

actually influenced as much as they are simply deterred from playing the game at all.

Furthermore, it is an unrealistic expectation to design a network’s operating systems using

attackers as a driver. Unfortunately, other examples of vendor influence are not as compelling.

Regardless of vendor, and at a larger scale, regardless of attempted influence, if an attacker is

motivated and has a means of attack, they will attack, or so it must be assumed. Basing one’s

security on the idea that an attacker will do what another player wants is nothing more than

wishful thinking.

Page 21: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

21

ADVERSARY PROFILING

Although it may be unrealistic to influence adversary actions, it is still important to

understand who they are. This is achieved through a process known as adversary profiling. In

work by T. Parker and Colleagues for a Blackhat USA Conference presentation, the theoretical

and practical aspects of adversary profiling were explored. Theoretically, the advantage of

adversary profiling is the potential for increased understanding of attacker behavior as well as

an increased ability to anticipate adverse actions. In practice, the advantage of adversary

profiling is an improved understanding of how to respond to adverse actions as well as an

improved understanding of attacker capabilities. The authors go as far as to suggest role playing

adversarial scenarios via a simulation. The authors suggest that attackers and their tools are

typically more advanced than defenders, so defenders need to work on coming up to speed.

While an intriguing idea, role playing and simulations will not be an attractive option to many

organizations due to time and resource involvement. However, any information from

completed simulations or historical data on adversaries can be useful (Parker, T., 2003).

In order to perform risk analysis, threats must be considered. Adversaries are threats,

and therefore should be analyzed thoroughly. Parker and his colleagues introduced several

adversary profiles, as follows, and in order of least potential damage to greatest:

Unstructured Hackers

Structured Hackers

Organized Crime / Industrial Espionage

Malicious or Non-Malicious Insiders (i.e. disgruntled employees, users, aids to

outsiders)

Page 22: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

22

Unfunded or Funded Hacktivists / Terrorist Groups

State Sponsored Groups

By determining the existence of these profiles and their individual prevalence in an

organization’s risk landscape, relevant analysis can be achieved. All adversaries will fall into one

of the above groups, and it should be determined which group is of particular interest, although

all should be considered seriously (Parker, T., 2003).

ATTACK CLASSIFICATION

In addition to the adversary profiles described in the previous section, capabilities of

particular adversaries should be considered. In work by M. Sachs on adversary profiling, the

concept of threat classes is introduced. Again, the idea is to create a working list that captures

every possible type of attack. This allows for the accommodation of unknowns. The discussed

list of attack classifications are as follows, in no particular order:

Direct Penetration; workstations, servers, infrastructure components

Indirect Penetration; workstations, servers, infrastructure components

Penetration Tools

Misused Insider Privileges

Directed Malicious Code

Indirect Malicious Code

Denial of Service and Distributed Denial of Service

Interception or Sniffing of Communications

Spoofing or Masquerading

Modification of Information

Diversions

Page 23: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

23

These classifications capture all possible attack vectors. The combination of these

classifications as well as adversary profiles allows for risk analysis that takes into account

unknowns. This allows for the adoption of a fault tolerant strategy (Sachs, M., 2003).

COST AS A METRIC

Clearly, basing analysis on large assumptions is not ideal. In order to effectively weight

options, controllable factors should be leveraged. One such factor is cost. In a work by D. Banks

on game theory based risk analysis as applied to counterterrorism, the concept of utilizing a

cost matrix as a driver in mitigation strategy decision making was explored. Particularly, the

work introduced the idea that the efficacy of a solution should not entirely be based on ability

to address a weakness. Utilizing cost as a metric is an effective way to judge a solution.

However, the work was based on a game model that assumed perfect information. Specifically,

the costs identified in the example pertained to known attack vectors and their related

mitigation strategy. The strategies were then compared. While it cannot be assumed that the

attack vectors are known, the idea of leveraging cost as a metric in analysis should be explored

further in a complex game with imperfect information (Banks, D., 2006).

FAULT TOLERANCE

In work by D. L. Kewley and J. Lowry on the effects of defense in depth on adversary

behavior, the problem of uncooperatively layering defenses is explored. It is commonly

believed that defense in depth, in other words layering defensive strategies, will increase the

level of security in a system. However, not only is this usually not the case, it may actually harm

Page 24: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

24

the security of the system. For example, introducing additional security solutions automatically

introduces a new vector to be attacked. A major weakness of security implementations is

complexity (Kewley, D.L. & Lowry, J., 2001).

In a study done by the authors, it was realized that layering defenses in an

uncooperative manner, that is, in a way that each defensive layer is unaware of the other

adjacent layers, is ineffective. This was proven by measuring the amount of work done by the

adversary. In short, the amount of work was not significantly increased when a layered strategy

was employed (Kewley, D.L. & Lowry, J., 2001).

The work also explores the idea of defense in breadth. This concept is proven to be far

more effective in practice by the resulting amount of work performed by adversaries in the

experiment. The idea of defense in breadth implies fault tolerance by the ability to defend

multiple attack vectors. However, defense in breadth is not enough by itself. Layered strategies

should still be employed but in a cooperative manner. This puts a burden on the designers of

the security implementation to research methods by which to achieve successful interactions,

but the resulting amount of increased work by adversaries would be evident. This strategy

would also be more cost effective than simple layering because less overall solutions would be

required. Finally, the work explains that all classes of attacks must be addressed, or else the

attacker will simply circumvent applied strategies by targeting the vulnerable area. The idea is

to address all of these classes not individually, but collectively through collaborative layering

(Kewley, D.L. & Lowry, J., 2001).

Page 25: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

25

PROPOSAL THE PROBLEM

Security programs based on traditional, probabilistic risk analysis are flawed. Reliance

on human imagination, as well as inconsistent historical data, leads to deceptive results. These

results are then the foundation of mitigation strategies that can be very costly and ineffective.

Particularly, when chosen strategies are specific to a given risk, the amount of solutions

required to mitigate all identified risks will be large. Introducing great numbers of solutions not

only automatically introduces the potential for additional weaknesses in the system, but it

creates a more complex system, which is the bane of security. Furthermore, basing analysis on

specifically identified risks fails to take into account unknowns. Simply put, traditional risk

analysis is an incomplete and potentially wasteful endeavor.

Game theoretic approaches to risk analysis in network security have attempted to

address these weaknesses by crafting a more realistic scenario. While modeling information

warfare as a variety of games has yielded interesting results, these results are based on the

assumption that multiple elements are known. For example, some of these models assume that

a defender knows all the possible attack vectors of an adversary. This is unrealistic. Additionally,

some models are based on the assumption that one entity can influence another, which is also

unrealistic. However, there are some elements of game theoretic approaches that may be

useful. In reality, information warfare is an imperfect information game.

Page 26: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

26

THE SOLUTION

What is required is a truly realistic and practical model for risk analysis. The model must

be grounded in what can be controlled. Basing decisions on concepts that cannot be controlled,

particularly on concepts that must be assumed based on nothing more than human intuition, is

a waste of effort and resources. A model must be devised that takes into account how to

appropriately identify risks, as well as how to analyze them based on controllable factors. The

model must account for unknowns. Elements of traditional probabilistic risk analysis, as well as

game theoretic approaches, should be applied where necessary. Applicable work on adversary

profiling, fault tolerance, collaborative layering of defenses, cost analysis, and attack

classification, as discussed, will be utilized to influence the creation the model.

METHODOLOGY

The proposed method will be compared to a traditional risk analysis process, and the

key differences will be explored. This exploration will include tangible, measurable differences

between the two models. While it is hypothesized that the proposed model will outperform the

traditional model, any weaknesses will be identified and addressed. Criteria for comparison will

include the following:

Cost of identified mitigation strategies

Ability to address a wide range of attack classifications based on a given security

budget

Ease of use

Applicability to a given scenario

Realism

Page 27: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

27

Risks identified

Each of these criteria will be analyzed in terms of each model as applied to a specific,

pre-defined environment.

In order for a valid comparison to be achieved, each model must be applied to identical,

realistic scenarios. Three different sized organizations will be profiled. In reality, there are more

than three sizes of organization, however, using three sizes will capture the necessary data.

These sizes are simply qualifications of small, medium, and large. The factors leading to a

particular qualification include employee populations, cash flow, the amount of data used, and

network scale. For this research methodology to be sound, the same style of organization will

be used for all sizes. The chosen organization’s industry will be in the financial sector, due to its

specific regulatory requirements and considerations based on the sensitivity of housed

information.

The style of research to be employed for this project will be a combination of primary

and secondary. As stated, portions of the creation of the proposed model will be elements

applied from previous work on the topic. However, additional elements will be theorized and

explored in the main experimentation phase, which will be the primary research.

SCENARIO A CASE STUDY

XYZ Corporation, a financial institution with generally 175-200 employees, has been

around for just over a decade, growing year after year. The current state of their security is in

flux, as they begin to position themselves for further growth. The ad hoc attitude toward

Page 28: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

28

security is identified as an area of weakness early on, and management has decided to begin

using risk assessments as a more organized way to address potential threats.

THE USERS

Approximately 50% of all employees are mobile users, and as such they have been

equipped with company laptops and smartphones. The laptops are all deployed using the same

image, and are all the same model. The users decide what phones they use, and will be synced

with their corporate email. Laptop users use a VPN to access internal network resources on the

road.

The rest of the users utilize virtual machines, and they connect to them using thin client

terminals. The virtual machines are deployed from a single image, and are destroyed and

recreated nightly.

THE INFRASTRUCTURE

XYZ Corporation has two main datacenters, approximately 100 miles apart. Connecting

to these datacenters are several satellite offices. The satellite locations reference their nearest

datacenter for any non-local resources. A large amount of replication occurs between the two

datacenters, facilitating the existing disaster recovery plan which simply calls for failing critical

machines over to the living datacenter.

The server environment is approximately 80% virtualized. The only physical servers are

local satellite file servers, domain controllers, and virtual hosts. All other servers are virtualized,

which includes services such as email, application databases, and content filtering.

Page 29: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

29

ASSESSMENTS

Since this is XYZ Corporation’s first time employing a formal risk assessment

methodology, they have chosen two options to compare and contrast. The first is a

methodology that aligns with the ISO 27001 standard. The second is methodology proposed in

this document.

ISO/IEC 27001:2005 OVERVIEW

Published in 2005 by the International Organization for Standardization (ISO) and

International Electrotechnical Commission (IEC), this standard introduces the importance of an

Information Security Management System, or an ISMS for short. Such a system, as defined by

the standard, is one that brings information security under one managerial umbrella. The

information to be protected includes digital and physical assets, the latter of which is

sometimes overlooked in information security. Organizations may be formally audited against

this standard and receive a certification displaying proficiency.

An actual audit against this standard will rarely include a look at implemented technical

controls, the assumption being that they are in place and determined effective by responsible

parties. This allows organizations the freedom to implement safeguards which fit their

environment at a given time. However, a risk assessment is suggested by the standard which

may identify areas of particular weakness or interest, previously unidentified or otherwise

overlooked entirely. Concerning risk assessments as outlined by ISO 27001, please reference

Figure 1.

Page 30: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

30

Figure 1. ISO Based Risk Analysis Framework

ASSET IDENTIFICATION

The first step of an ISO 27001 compliant risk assessment involves identifying all assets in

the organization. Assets can be classified in five (5) categories, with slight overlap. These

categories are as follows:

Hardware Assets

Information Assets

Software Assets

Personnel Assets

Service Assets

Hardware assets are those assets that are related to computer equipment or any other

equipment that may impact other assets. XYZ Corporation has identified a list of hardware

assets, which may be seen in Figure 2. In this scenario, some virtual machines are identified on

the hardware list. This may seem ironic at first, and could be argued that they would fit better

Page 31: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

31

into Software Assets, but these machines really should be treated the same as the hardware

they emulate. Additionally, because the laptops are deployed from a single image, and because

the end user virtual machines are deployed from the single image, these are classified as one

logical asset when it comes to addressing weaknesses.

Hardware Assets

Laptops

Smartphones

Networking Equipment

Terminals

Domain Controllers

File Servers

Web Servers

SQL Database Server

Application Servers

Email Servers

Storage Systems Figure 2. Identified Hardware Assets

Information assets may be electronic or physical in nature. These assets are varying

collections of data, presented formally in some way. XYZ Corporation has chosen to group their

specific information assets into defined categories based on a combination of department and

sensitivity. The identified list of information assets may be seen in Figure 3.

Information Assets

Physical Files

User Profile Data

Credit Data

Mortgage Data

Personalized Forms

Other Digital Files Figure 3. Identified Information Assets

Software assets generally consist of applications running on servers and end user

machines. It is not uncommon for these software assets to be critical for day-to-day business

operations. The list of identified software assets for XYZ Corporation may be seen in Figure 4.

Page 32: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

32

Software Assets

Loan Processing Software

Credit Analysis Software

Teller Software

Online Banking

Anti Virus

Web Content Filter

Email Gateway

Email

Virtualization Software

Patch Management

Active Directory

Profile Management

VPN

Mirroring and Replication

Figure 4. Identified Software Assets

Personnel assets may include employees, stakeholders, shareholders, or any other

group of people or individual who is of some importance to the organization and its normal

operation. For XYZ Corporation, all personnel are equally important, and they have chosen to

exclude them from analysis, although asset ownership is still considered.

Finally, Service assets are those assets that related to external services supplied by

vendors that are necessary for normal business operations and continuity within the

organization. The list of identified service assets for XYZ Corporation may be seen in Figure 5.

Service Assets

Web Site Hosting

Certificates and Trust

Event Monitoring

Internet

Backend Banking Databases

DNS Figure 5. Identified Service Assets

Page 33: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

33

THREATS AND IMPACT

As discussed previously, these two factors play a significant role in a traditional risk

assessment. Of particular interest is the impact metric due to the role it plays in measuring risk.

This is no different in an ISO 27001 aligned methodology.

By identifying what threats exist to given assets, it is possible to assess the potential

impact of a successful exploitation. Using the lists of assets obtained and discussed in the

previous section of this document, corresponding threats may be hypothesized. Please

reference Figures E.1 – E.4 in Appendix E for a completed table of assets, identified threats, and

their associated impact metrics. A sample is provided in Figure 6 for hardware assets.

No. Asset Name Asset Owner(s) Threat Vulnerability Impact

1 Laptops Mobile Users Virus/Malware Outdated Browser Med

2 Virus/Malware Outdated Antivirus Med

3 Unauthorized Access Left Unattended High

4 Stolen Left Unattended High

5 Data Leakage Unauthorized Duplication High

6 Traffic Sniffing Uncontrolled Network High

7 Smartphones Mobile Users Virus/Malware Downloaded App High

8 Unauthorized Access Left Unattended High

9 Stolen Left Unattended High

10 Data Leakage Unauthorized Duplication High

11 Networking Equipment Network Administrators

Unauthorized Access

Direct Port Access to Internal LAN Med

12 Denial of Service Outdated Firmware High

13 Terminals End Users, Network Administrators

Denial of Service

Lack of Connectivity to Virtual Desktop High

Figure 6. ISO Based Risk Analysis, Threats

While impacts may be measured arbitrarily, it is also possible to do it more scientifically.

Some ISO compliance aids suggest the use of questionnaires and process known as Business

Impact Analysis. The specifics of this form of analysis are beyond the scope of this document,

but please see Figures A.1 – A.3 in Appendix A for an example of completed questionnaires for

Page 34: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

34

the Laptops asset as defined by XYZ Corporation. The questionnaires are based on each of the

CIA fundamentals and may be used as a foundation for determining final impact

measurements.

VULNERABILITIES AND EXISTING CONTROLS

For the following sections, please see Figures E.1 – E.4 in Appendix E for a completed table consisting of data as it

pertains to XYZ Corporation. See Figure 7 for a sample.

After identifying the threats to given assets, it is necessary to expose the weaknesses by

which a successful exploitation may occur. These vulnerabilities may be vague or specific. After

this is completed, it is possible to map out existing controls that are supposedly closing the gap.

PROBABILITY

The controversial Probability metric is a mainstay in the ISO compliant methodology.

After identifying all of the other ISO related metrics identified in this section, it is necessary to

hypothesize the likelihood by which an exploitation may occur, considering the existing

controls.

INHERENT RISK, ADJUSTED RISK, AND MITIGATION

An ISO compliant methodology utilizes two different Risk composites. The first is

inherent risk which refers to risk before existing controls are applied. The second is adjusted risk

which refers to the remaining risk after existing controls are applied. The adjusted risk is what is

ultimately leveraged as a foundation for developing a mitigation strategy. For XYZ Corporation,

it was decided that residual risks with a rating of High required attention in the mitigation

recommendations. Any residual risks with a rating of Medium or Low were determined

acceptable residual risks. Using this data, it is possible to develop a mitigation strategy by

Page 35: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

35

researching applicable safeguards for the identified risks requiring attention. Specific guidance

for safeguard analysis is not given. Typically, security engineers would select solutions that

address specific threats, without regard for other identified threats or those threats that are

unknown.

No. Asset Name Asset Owner(s) Threat Vulnerability Impact

1 Laptops Mobile Users Virus/Malware Outdated Browser Med

2 Virus/Malware Outdated Antivirus Med

3 Unauthorized Access Left Unattended High

4 Stolen Left Unattended High

5 Data Leakage Unauthorized Duplication High

6 Traffic Sniffing Uncontrolled Network High

7 Smartphones Mobile Users Virus/Malware Downloaded App High

8 Unauthorized Access Left Unattended High

9 Stolen Left Unattended High Figure 7a. ISO Based Risk Analysis, Complete

No. Asset Name Probability Inherent Risk Existing Controls Adjusted

Risk

1 Laptops Low Low Antivirus Software

2 Med Med

Intrusion Prevention System Med

3 Med Med Screen Lockout after 10 Minutes Med

4 Low Low Encryption Software

5 Med High

Endpoint Security and Email Gateway Med

6 Med Med VPN Med

7 Smartphones Med High None High

8 Low Low Screen Lockout after 30 Seconds

9 Med Med Screen Lockout and Remote Wipe Med

Figure 7b. ISO Based Risk Analysis, Complete

Page 36: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

36

FLAWS AND OTHER OBSERVATIONS

As touched upon previously, the ISO compliant methodology falls into the trap of

leveraging arbitrarily hypothesized measurements such as probability and impact, and

deductively, risk as a whole. Additionally, isolating specific vulnerabilities, which was avoided in

the XYZ Corporation assessment, is a pitfall because it is possible that in a few hours the

identified mitigation is no longer effective. Being too specific in identifying threats and

vulnerabilities shortens the lifespan of the assessment in terms of applicability. The results

cannot be trusted over time, and may also be invalid as a foundation for additional assessments

in the future.

Additionally, threat identification is largely arbitrary. Theorizing hundreds of possible

threats for a given asset, in addition to being unrealistic and inefficient, would still leave out

something. Unknown threats should be accounted for. Otherwise, this style of assessment

really becomes more of a threat analysis with risk tied in. Simply put, subjective and overly

specific threat identification does not truly paint the risk landscape of an organization.

Finally, the ISO standard does not give insight into the safeguard selection or analysis

process. Arguably the most important part of risk management is the mitigation strategy.

Typically, after the threats are identified they are addressed one by one, with little or no

overlap. This mentality often leads to an overabundance of solutions which generally results in

overspending what has been allocated in the budget plan.

Page 37: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

37

FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS OVERVIEW

Alongside the following discussion, please reference Figure 8 for a visual overview of the framework.

The following sections discuss the proposed methodology in detail. Alongside the

definitions and explanations of concepts and metrics, the example case of XYZ Corporation is

studied for illustrative purposes.

Figure 8. Proposed Risk Analysis Framework

The proposed methodology is broken down into several high level stages. A brief

overview is as follows.

1. Identify Assets – Complete an asset inventory including hardware, software,

information, and services.

Page 38: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

38

2. Identify Adversaries – Use a pre-defined table of adversaries and their motivations

for alignment with identified assets. By doing this, determine a final list of potential

attackers based on the organization, its specific situation, and environment.

3. Identify Threats – A pre-defined table of attack classifications are used as the

foundation for further analysis. The classifications included in this table will grow as

threats change.

4. Determine Impacts – Utilize the pre-defined tables of attack classifications and their

associated impact measurements as dictated by each specific adversary identified.

Correlate each attack classification’s impact by averaging those of the identified

adversaries.

5. Measure Exposure – Analyze every identified asset against each attack classification

and determine the amount of potential targets.

6. Determine Inherent Risk – Correlate the Exposure of each attack classification with

its associated Impact.

7. Define and Eliminate Acceptable Risk – Determine what level of inherent risk is

acceptable to the organization, and filter those risks from further analysis.

8. Identify Potential Solutions – Utilize expertise in the field as well as best practices to

determine safeguards for the identified risks.

9. Analyze Potential Solutions – Create a Fault Tolerance Matrix to document each

potential solution’s effectiveness against each identified risk. Additionally,

determine the cost of these solutions. Prioritize potential solutions for

Page 39: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

39

implementation by correlating effectiveness, efficiency, and cost in the form of the

Safeguard Score.

10. Create Mitigation Strategy – Using the measured Safeguard Scores to first prioritize,

select the solutions that the budget allows.

11. Determine Residual Risk – Measure the Capability of attack classifications after the

theoretical implementation of the mitigation strategy, including existing controls.

Correlate the Capability of each attack classification with the Inherent Risk to

determine the Adjusted Risk, or Residual Risk.

12. Re-assess – After a period of time, it may be necessary to perform this assessment

again. Additionally, if any Residual Risks are not acceptable, an immediate re-

assessment may be required.

The following sections discuss the proposed methodology in detail. Alongside the

definitions and explanations of concepts and metrics, the example case of XYZ Corporation is

studied for illustrative purposes.

ADVERSARIES AND ATTACK CLASSIFICATIONS

The foundation of this methodology involves utilizing a pre-defined table of adversaries

and applicable attack classifications. Adversaries, as discussed previously, are individuals or

groups of individuals that can carry out an exploitation against an organization. Attack

classifications are essentially generalized groupings of potential threats that adversaries may

employ to exploit an organization’s weaknesses. By utilizing these two concepts, the results of

the assessment will have the benefit of longevity, and apply by default to unknown threats that

fall underneath the umbrella of a defined classification and adversary profile. Assuming

Page 40: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

40

anything less than that each class of adversary is capable of each class of attack would be

flawed. However, certain attacks are just too unlikely to be considered for particular

adversaries. For example, a disgruntled employee with little or no computer penetration

experience is unlikely to pursue certain attack classes. Furthermore, not every adversary may

be applicable to a given scenario. For example, a state sponsored group of hackers is unlikely to

target a local flower shop’s unencrypted WLAN.

Responsible parties must assess their current situation and environment and decide

which adversaries would target them. This is achieved by aligning identified assets with the

motivations of potential adversaries. Simply put, many of the defined adversary classes will not

attack an organization unless that organization possesses an asset of interest. Being targeted

includes individually targeted attacks as well as being targeted as part of a larger ranging attack

that spans multiple organizations. The identified tables of adversaries and attack classes as

discussed may be seen in Figure 9 and Figure 10. This data may be used as a template for any

organization, including XYZ Corporation.

Adversary

Unstructured Hackers

Structured Hackers

Organized Crime / Industrial Espionage

Malicious or Non-Malicious Insiders

Unfunded or Funded Hacktivists / Terrorist Groups

State Sponsored Groups Figure 9. Adversaries

The defined adversaries have been profiled with general motivations and explanations

which will also be leveraged.

Page 41: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

41

Unstructured Hackers – This group contains adversaries of various skill levels but

little in true targeted motivation. Typically, the only motivation for members of

this group is financial in nature. If their attack hits enough obstacles, they are

likely to concede and move on to the next target.

Structured Hackers – This group contains adversaries of a higher skill level, and

higher motivation, than unstructured hackers. Motivations for this group may be

financial. However, this group may also contain actual gangs of hackers that may

be out to damage organizations to establish elite status amongst their peers. It is

for this reason that their attacks are likely to continue even in the face of

adversity.

Organized Crime / Industrial Espionage – This group contains professional

hackers with proven skill. Their motivation is the motivation of their employer.

This may be financial in nature, but is often rooted in damaging a competing

organization in some way.

Malicious or Non-Malicious Insiders – This group typically contains employees of

a targeted organization. Otherwise, the group may contain other individuals

affiliated with the organization in some way with a notable level of privilege or

access. Depending on the organization, these individuals may have very low to

very high knowledge of offensive behavior. Motivations vary. While some attacks

may be financial in nature, others may be carried out to damage the

organization. In the case of the non-malicious insider, or accidental attacker,

there may be no motivation at all.

Page 42: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

42

Unfunded or Funded Hacktivists / Terrorist Groups – This group contains

attackers with at least a competent skill level. Their motivations are strong and

align with their beliefs. Targeted organizations are those on the other side of

these beliefs. The attacks are carried out with the intent to damage the

organization.

State Sponsored Groups – This group contains professional attackers of a very

high breed. Their motivations are those of their employer. These attacks seek to

damage the targeted organization. Typically, targets include enemy states.

Frequently, these attacks may be of a reconnaissance style and may go

undetected for a great deal of time.

Attack Classification

Direct Penetration

Indirect Penetration

Penetration Tools

Misused Insider Privileges

Directed Malicious Code

Indirect Malicious Code

Denial of Service

Interception

Spoofing

Modification

Diversions Figure 10. Attack Classifications

Direct Penetration – Attacks in this group include those that involve physically

attacking a workstation, server, or infrastructure component. An example of this

could include approaching an unlocked workstation and gaining access to

internal network resources.

Page 43: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

43

Indirect Penetration – Attacks in this group involve using an additional entity to

carry out a direct penetration. Examples of additional entities include malicious

insiders, or keyloggers. The idea is that an one entity is aiding another entity in

an attack.

Penetration Tools – This container includes all attacks that are automated in

nature by proven tools. For example, the popular Linux distribution Backtrack

includes a wealth of penetration tools, and is often the source of network based

attacks. Attacks in this class seek to allow the adversary access to privileged

network resources.

Misused Insider Privileges – Attacks in this group involve malicious use of high

privileges. Typically, these attacks are carried out by insiders who have privileges

to misuse. Otherwise, the credentials must be obtained by an additional attack.

Directed Malicious Code – Attacks in this group include targeted deployments of

rootkits, viruses, worms, etc.

Indirect Malicious Code – Attacks in this group include random, untargeted, and

often mass deployments of rootkits, viruses, worms, etc.

Denial of Service – Attacks in this class seek to disrupt the normal operation of an

organization or segment of the organization. Quite often these attacks are

Distributed and are powerful enough to crippled even the most robust networks

and services.

Page 44: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

44

Interception – Attacks in this class are of a reconnaissance nature. Sniffing

network traffic can yield information that may eventually be harmful to the

organization.

Spoofing – Attacks in this class seek to masquerade malicious entities as

markedly non-malicious entities. This way, additional attacks may go unnoticed.

Furthermore, spoofing may be used as a method of redirection. For example,

DNS spoofing can be used to redirect a user’s browser from Google.com to an

attack site.

Modification – Attacks in this class seek to disrupt the integrity of information or

data in an organization. These attacks can target elements such as log files,

configuration files, reports, etc.

Diversions – Attacks in this class seek to disrupt an organization’s ability to detect

an attack for a period of time. For example, an attacker may launch a particularly

noisy attack for several hours, and while that attack is occurring carries out their

true endeavor amidst all the noise. “Noise” refers to logging traffic and other

alerting traffic. During a noisy attack, the logs will fill very fast and administrators

sometimes have the tendency to overlook specific entries due to the volume of

blatantly malicious entries.

ASSET IDENTIFICATION

As with traditional risk assessment methodologies, this framework involves collecting a

list of identified assets. These assets will include the same classifications as those discussed in

the ISO compliant methodology, and as follows.

Page 45: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

45

Hardware Assets

Information Assets

Software Assets

Personnel Assets

Service Assets

However, rather than determining specific threats to a given asset, the list will be used

to align with potential adversary motivations. Please see Figure 11 for identified assets as listed

by XYZ Corporation.

Hardware Assets

Information Assets

Software Assets

Service Assets

Laptops

Physical Files

Loan Processing Software

Web Site Hosting

Smartphones

User Profile Data

Credit Analysis Software

Certificates and Trust

Networking Equipment

Credit Data

Teller Software

Event Monitoring

Terminals

Mortgage Data

Online Banking

Internet

Domain Controllers

Personalized Forms

Anti Virus

Backend Banking Databases

File Servers

Other Digital Files

Web Content Filter

DNS

Web Servers

Email Gateway SQL Database

Server

Email Application Servers

Virtualization Software

Email Servers

Patch Management Storage Systems

Active Directory

Profile Management

VPN

Mirroring and Repllication

Figure 11. Identified Assets

ADVERSARY IDENTIFICATION

When a list of assets is identified, it is possible to begin assessing the alignment with

potential adversary motivations. By examining the previously discussed table of adversary

profiles, an organization’s assets should be correlated of in terms of what an adversary may

Page 46: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

46

want to gain or damage. By doing this, XYZ Corporation has identified the list of adversaries in

Figure 12.

Adversary

Unstructured Hackers

Structured Hackers

Malicious or Non-Malicious Insiders Figure 12. Identified Adversaries

It is important to be realistic in this identification process. It is possible to make

arguments that any group could potentially attack any given organization. However, it is

necessary to consider the likelihood of such an attack considering the current situation of the

organization as well as the risk environment facing similar organizations. In the case of XYZ

Corporation, a small financial institution, the chance of a State Sponsored Group to target their

resources is negligible.

DETERMING IMPACT

Each class of adversary has an inherent level of potential impact, and that potential

applies to whichever attack classification or combination of attack classifications they choose.

This impact metric plays into calculating overall Risk. In this methodology, impact can be

defined in much the same way as magnitude and severity were in the ISO compliant

methodology. The impact is not thought of in terms of a “what’s the worst that could happen”

mentality. Thinking this way would result in every impact being High. Rather, these impacts are

based on the motivation of the adversaries, and what they are likely to do to an organization.

Additionally, the ratings are analyzed considering the other adversaries and their impacts.

Because these impacts play into eventual mitigation strategies, it is important to differentiate

Page 47: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

47

the abilities of adversaries in order to set a foundation for prioritization. Each adversary’s

inherent impact is measured against every attack classification to create an actual impact

metric for a given threat. Please see Figure D.1 in Appendix D for a complete indexing of these

measurements. Figure 13 provides an example concerning the State Sponsored Group

adversary class. Explanations for these ratings are as follows.

Unstructured Hackers – Attackers in this group carry out relatively simple and

painless attacks against simple targets and will give up when obstructed.

Structured Hackers – Attackers in this group carry out somewhat complicated yet

detectable attacks against their random targets.

Organized Crime / Industrial Espionage – Attackers in this group seek to damage

an organization with their highly targeted attacks.

Malicious or Non-Malicious Insiders – Attackers in this group seek to damage an

organization with their wildly varying degrees of attack intensity in very targeted

attacks.

Unfunded or Funded Hacktivists / Terrorist Groups – Attackers in this group seek

to heavily damage opposing organizations with their skilled yet detectable

attacks.

State Sponsored Groups – Attackers in this group are typically playing a role in a

larger struggle, and seek to perform highly skilled and undetectable intrusions

into their target’s infrastructure.

Page 48: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

48

State Sponsored Groups

Attack Classification Impact

Direct Penetration Medium

Indirect Penetration High

Penetration Tools High

Misused Insider Privileges Medium

Directed Malicious Code High

Indirect Malicious Code Low

Denial of Service Medium

Interception High

Spoofing High

Modification High

Diversions High

Figure 13. Impact of State Sponsored Groups by Attack Classification

The impact metric in these tables qualifies potential impact considering the typical

methodologies employed by the given adversary class, as well as their ability to perform such

an attack. In order to correlate the impacts of multiple identified adversaries, an average should

be taken. Based on the adversaries that XYZ Corporation has identified, the table in Figure 14

will be used as the basis for further analysis.

Unstructured Hackers

Structured Hackers

Malicious or Non - Malicious Insiders

Attack Classification Impact Impact Impact

Composite Impact

Direct Penetration Low Low High Medium

Indirect Penetration Medium High Low Medium

Penetration Tools Medium Medium Low Medium

Misused Insider Privileges Low Low High Medium

Directed Malicious Code Low High Medium Medium

Indirect Malicious Code Medium Medium Low Medium

Denial of Service Low Medium Medium Medium

Interception Low Low Medium Low

Spoofing Low Medium Low Low

Modification Low Low High Medium

Diversions Low Medium Low Low

Figure 14. Identified Adversaries and Attack Classifications

Page 49: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

49

THE RELATIONSHIP BETWEEN ADVERSARIES AND ATTACK CLASSES

The inherent impacts of the identified adversaries need to be correlated to the list of

potential attack classifications in order to create the overall threat environment. The average

impact of the adversaries should be measured. In the case of XYZ Corporation, this is Medium.

In some cases, the average would be High. From this point forward, every attack class must be

thought of in terms of this average impact. Any attack class that affects XYZ Corporation should

be considered with a Medium impact.

Inherently, any attack classification has the ability to be of Low, Medium, or High

impact. However, this is largely based on the ability and motivations of the adversary.

Additionally, simply considering every classification in terms of a High impact, while sometimes

legitimately the case, will result in more costly mitigation strategies, and potentially a

completely different set of safeguards. By remaining realistic in the adversary identification

process, the true potential impact of attack classifications will be evident.

MEASURING EXPOSURE

In this methodology, exposure is defined as a measurement of the amount of potential

targets of an attack classification, regardless of vulnerability. This measurement is grouped into

three (3) specifications, as outlined below:

Low – Less than 30% of all assets are potential targets

Medium – Between 30% and 70% of all assets are potential targets

High – Over 70% of all assets are potential targets

Page 50: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

50

These measurements create clear boundaries for attack classifications and the extent by

which a successful exploitation my deliver the previously identified impact. By disregarding

specific vulnerabilities, this measurement will remain effective over periods of time.

Vulnerability based measurements only remain effective as long as there is not a new

vulnerability that is not yet taken into consideration, which is often only a day or two, if not

less. XYZ Corporation has hypothesized their exposures to each attack classification, which may

be viewed in Figures B.1 – B.11 in Appendix B. A sample may be seen in Figure 15 for the Direct

Penetration attack class.

Direct Penetration

Hardware Assets

Information Assets

Software Assets

Service Assets

Laptops

Physical Files

Loan Processing Software

Web Site Hosting

Smartphones

User Profile Data

Credit Analysis Software

Certificates and Trust

Networking Equipment

Credit Data

Teller Software

Event Monitoring

Terminals

Mortgage Data

Online Banking

Internet

Domain Controllers

Personalized Forms

Anti Virus

Backend Banking Databases

File Servers

Other Digital Files

Web Content Filter

DNS

Web Servers

Email Gateway SQL Database

Server

Email

13 / 37

Application Servers

Virtualization Software

35.14%

Email Servers

Patch Management

Medium

Storage Systems

Active Directory

Profile Management

VPN

Mirroring and Repllication

Figure 15. Exposure of Direct Penetration Attack Class

The items in bold are those identified as potential targets for a Direct Penetration

attack. During this identification, it is important to remain realistic. It could be argued that any

asset could be a potential target for any attack, but that is irrational. Some attack classes are

Page 51: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

51

very unlikely to be used against certain assets, if it is even possible at all. A summary of XYZ

Coporation’s exposure measurements may be viewed in Figure 16.

Attack Classification Exposure

Direct Penetration Medium

Indirect Penetration High

Penetration Tools High

Misused Insider Privileges Medium

Directed Malicious Code Medium

Indirect Malicious Code Medium

Denial of Service High

Interception Medium

Spoofing Low

Modification Medium

Diversions Low

Figure 16. Exposure Measurements by Attack Classification

DETERMINING INHERENT RISK

In this methodology, risk is a composite measurement of the previously identified

impact of a given attack classification, and the exposure. A third metric is introduced during the

Safeguard Analysis phase. However, inherent risk only takes into consideration these two

metrics, and no existing controls. The relationship between Risk (inherent), Impact, and

Exposure may be seen in Figure 17. As the impact and exposure of a threat increase, the

inherent risk also increases. The weight of this measurement is placed on Impact.

Page 52: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

52

Figure 17. The Relationship Between Inherent Risk, Impact, and Exposure

By correlating their identified Impact with the Exposures measured previously, XYZ

Corporation has generated the table in Figure 18.

Attack Classification Exposure Composite

Impact Inherent

Risk

Direct Penetration Medium Medium Medium

Indirect Penetration High Medium High

Penetration Tools High Medium High

Misused Insider Privileges Medium Medium Medium

Directed Malicious Code Medium Medium Medium

Indirect Malicious Code Medium Medium Medium

Denial of Service High Medium High

Interception Medium Low Low

Spoofing Low Low Low

Modification Medium Medium Medium

Diversions Low Low Low

Figure 18. Inherent Risk

Page 53: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

53

ELIMINATE ACCEPTABLE RISK

Depending on the goals of the organization, a choice may be made to eliminate Medium

inherent Risks from further analysis. XYZ Corporation has decided to focus only on High Risks.

All inherent risks measuring Medium or Low have been determined acceptable. It should be

noted that inherent risks are measured without the consideration of existing safeguards. For

some organizations, it may be possible to focus on larger numbers of risks here, but it depends

largely on the budget. The idea is to not spread oneself too thin. Due to this, XYZ Corporation

will focus its efforts on Indirect Penetration, Penetration Tools, and Denial of Service attacks.

SAFEGUARD ANALYSIS

In order to consider pre-existing and potential controls, it is necessary to analyze their

effectiveness. In this methodology, safeguards are analyzed utilizing two concepts; cost and

fault tolerance.

The cost of potential safeguards is almost always overlooked during analysis. Perhaps at

the end of analysis, or during the mitigation planning stage, it is possible that cost is taken

seriously into account. However, cost should be far more engrained within an effective

methodology since it plays such a large decision making role. Simply put, the security budget

needs to be considered when assessing the efficacy and efficiency of a safeguard. This may

apply to current and future safeguards.

In order to utilize this concept in analysis, it must be measurable in some way. First, a

total allocated budget must be identified. Each safeguard’s cost must be measured in terms of

its percentage of the total budget. For example, with a budget of one hundred thousand

Page 54: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

54

dollars, a safeguard system that costs fifty thousand dollars accounts for fifty-percent of the

total budget. XYZ Corporation has been allocated five thousand dollars for the current fiscal

year. Potential safeguards must take this budget into consideration.

The other half of safeguard analysis is the fault tolerance metric. This metric relies

heavily on matrix-based analysis. Each attack class is outlined alongside each identified

safeguard. The perceived effectiveness of these safeguards in terms of each attack class is

recorded and a composite score for each safeguard is documented. This composite score is the

fault tolerance metric. The measurements for each safeguard’s effectiveness for a specific

attack class include the following.

(0) – Not applicable – Controls of this category are either highly ineffective

against the given attack class or else they are simply not applicable in any way.

(1-3) – Not effective – Controls of this category are not effective against the

given attack class, and should not be applied in most situations.

(4-6) – Applicable – Controls in this category may be used against the given

attack class in some situations, but should not be relied upon.

(7-8) – Effective – Controls in this category can be used against the given attack

class in most situations, and can be reasonably relied upon.

(9-10) – Highly Effective – Controls in this category should be used against the

given attack class in all situations, and can be completely relied upon.

A specific safeguard, when applied to one attack class, may appear Highly Effective.

However, when stacked up against another attack class, it may only be Applicable. In order to

Page 55: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

55

get the most of a security budget, especially a small budget, it is crucial to consider a

safeguard’s ability to defend against multiple attack classes. This is the principle of fault

tolerance. The determination of these effectiveness scores is based highly on the ability and

experience of the security engineer involved in the project. The process of analyzing a

safeguard’s effectiveness specifically is beyond the scope of this document.

One half of the Fault Tolerance Matrix is the collection of identified attack

classifications. The other half of the matrix is the collection of identified safeguards. By

analyzing these two elements, it is possible to measure the fault tolerance metric. Determining

potential safeguard solutions is left to the discretion of security engineers. These individuals

should look at the identified risks and determine courses of action based on industry best

practices and personal experience. Specifics of safeguard solutions and the decision making

process behind creating a list of potential safeguards are also beyond the scope of this

document. XYZ Corporation has decided on five (5) potential solutions, henceforth referred to

as S1 – S5 (Safeguard 1 – Safeguard 5). Figure 19 illustrates a completed matrix for XYZ

Corporation based on the safeguards they have identified.

Page 56: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

56

Figure 19. Fault Tolerance Matrix

EXISTING CONTROLS

In this methodology, existing controls are utilized differently than in traditional risk

analysis. In an ISO compliant methodology, existing controls are used as an additional filtering

mechanism to limit the total risks in prioritization. However, this is a flawed concept.

Eliminating a risk based on an existing solution makes the assumption that there are no better

solutions for that risk. “Better” typically refers to a more cost effective solution, or a more

technologically effective solution. In this methodology, existing controls are taken into

consideration for use with the capability metric. These controls must be analyzed alongside

potential controls in order to truly determine a strategy’s fault tolerance ability. However, these

Page 57: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

57

existing controls are left out of the cost analysis portion, due to their pre-existing ownership by

the organization. In this example, Safeguard 5 is an existing control.

MEASURING SAFEGUARD SCORES

In this methodology, a composite safeguard score is utilized to prioritize safeguards in

mitigation strategies. This score is based on the cost metric and the fault tolerance metric. The

equation below is used to calculate the safeguard score.

x = cost in percentage of total budget

y = fault tolerance metric

z = composite safeguard score

For 0 < x < 100 and 0 < y < 1

(100 – x) y = z

As the percentage of the budget decreases, and the fault tolerant ability increases, the

safeguard score increases. Utilizing this algorithm, it is possible to rate safeguards based on

their cost and ability to defend against multiple attack classes. This is increasingly important as

budgets decrease.

Based on their completed Fault Tolerance Matrix, XYZ Corporation has determined their

Fault Tolerance metric measurements. These measurements can be seen in Figure 20.

Safeguard Fault Tolerance Metric

S1 0.57

S2 0.43

S3 0.50

S4 0.60

S5 0.57

Figure 20. Fault Tolerance Metrics

Page 58: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

58

Additionally, the security engineers at XYZ Corporation have received quotes for their

potential solutions. Again, S5 is an existing control, and is not included in the cost analysis. For

the sake of illustration, these numbers have been placed in the table in Figure 21, as well as

their percentage of the total allocated budget of $5,000.

Safeguard Cost % of Total

Budget

S1 1350.00 27.00%

S2 2500.00 50.00%

S3 2800.00 56.00%

S4 3900.00 78.00%

Figure 21. Cost Analysis

Utilizing the formula outlined previously; (100 – x)y = z, it is possible to correlate the

cost of the safeguards to their fault tolerance metric. The resulting safeguard score is the heart

of safeguard analysis in this methodology. XYZ Corporation has carried out this calculation and

the resulting table can be seen in Figure 22.

Safeguard Cost % of Total

Budget Fault Tolerance

Metric Safeguard Score

S1 1350.00 27.00% 0.57 41.61

S2 2500.00 50.00% 0.43 21.50

S3 2800.00 56.00% 0.50 22.00

S4 3900.00 78.00% 0.60 13.20

Figure 22. Safeguard Scores

With the information available at this point in the analysis, it is possible for the

organization to make a decision for their mitigation strategy. Ultimately, this stage is controlled

by the budget. The idea is to implement the solutions, prioritized by safeguard score, as

allowed by the budget. The easiest way to do this is to simply arrange the same table of

information by safeguard score in a descending fashion. The resulting table for XYZ Corporation

may be seen in Figure 23.

Page 59: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

59

Safeguard Cost % of Total

Budget Fault Tolerance

Metric Safeguard Score

S1 1350.00 27.00% 0.57 41.61

S3 2800.00 56.00% 0.50 22.00

S2 2500.00 50.00% 0.43 21.50

S4 3900.00 78.00% 0.60 13.20

Figure 23. Safeguard Analysis, Sorted by Safeguard Score

In the case of XYZ Corporation, it is only possible to implement solutions S1 and S3 at

this time. Adding S2 brings the strategy over-budget. However, utilizing the concepts outlined

by this methodology, XYZ Corporation should be confident they are doing the right thing. By

choosing the highest safeguard scores, they are choosing the most dynamic, efficient, and

effective solutions.

MEASURING CAPABILITY AND ADJUSTED RISK

In this methodology, capability is defined as the feasibility of an attack class, considering

the existing or proposed control, or set of controls. The capability measurement is based on the

highest individual effectiveness measurement aligned with a particular attack classification. The

classifications of the capability metric are defined below:

Low – Greatest effectiveness value of eight (8) or above.

Medium – Greatest effectiveness value between four (4) and seven (7).

High – Greatest effectiveness value below three (3)

A capability rating of low means the safeguard is highly effective in thwarting an attack

class. A medium rating implies competent safeguarding from an attack class. A high capability

means that an attack class is very feasible. This analysis is done utilizing the same matrix as

discussed previously. However, we are simply isolating the greatest effectiveness value for each

Page 60: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

60

attack class. To do this, XYZ Corporation has again taken a look at their completed Fault

Tolerance Matrix. However, this time they focused on the highest individual effectiveness

ratings of each of their chosen safeguards, as well as their existing safeguard. The resulting

diagram is illustrated in Figure 24.

Figure 24. Individual Effectiveness of Identified Safeguards

The safeguards in grey have been eliminated from the strategy due to the budget. Using

the measurements defined for the Capability metric, XYZ Corporation has determined their

capabilities for the identified attack classifications based on their safeguard analysis. This

information is documented in the table contained in Figure 25.

Page 61: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

61

Attack Classification Greatest Individual

Effectiveness Capability

Indirect Penetration 8 Low

Penetration Tools 7 Medium

Denial of Service 9 Low

Figure 25. Measuring Capability

Utilizing the capability measurement, it is possible to calculate the adjusted risk.

Essentially, adjusted risk is a composite of capability and the inherent risk. This relationship may

be seen in Figure 26. As the capability of a threat increases, and the inherent risk increases, the

adjusted risk increases.

Figure 26. The Relationship Between Adjusted Risk, Inherent Risk, and Capability

In this methodology, in order to affect a risk rating it is necessary to influence capability.

The calculated capability measurements for identified attack classes considering existing and

potential controls, and the resulting adjusted risk scores may be seen in Figure 27.

Page 62: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

62

Attack Classification

Greatest Individual Effectiveness Capability Inherent Risk Adjusted Risk

Indirect Penetration 8 Low High Medium

Penetration Tools 7 Medium High Medium

Denial of Service 9 Low High Medium

Figure 27. Adjusted Risk

MAINTENANCE

The adjusted risks now fall into the Medium category, which was identified as an

acceptable level of risk by XYZ Corporation. It is possible that a specific attack classification may

not have an acceptable level of residual risk. However, based on the allocated budget and the

safeguards identified by the security engineers, it is the best that can be done at that time. At

this point, if an attack classification has an adjusted risk of High, XYZ Corporation would need to

either research additional safeguards that fit their budget and needs, or look into getting more

resources allocated to the project.

FLAWS AND OTHER OBSERVATIONS

It may be pointed out that because safeguards are analyzed against all attack

classifications, that those safeguards that are particularly effective against specific attacks may

be overlooked. This may be true in some extreme cases. The safeguard score is fair and

impartial to safeguards exhibiting non-outlier behavior. “Non-outlier behavior” means

effectiveness measurements that are not extremely low or extremely high for a single

safeguard. The concept of this methodology is to be cost effective, and implementing a solution

that is effective against one attack classification and nothing else goes against that concept. It

could be argued that the most effective strategy is to simply identify solutions that are highly

effective against one attack classification, for each and every attack classification. That may be

Page 63: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

63

an option for some, but definitely not all. That is a very cost ineffective strategy, and only plays

into defense in breadth, rather than both defense in breadth and defense in depth principles.

It could be argued that because in some specific instances we are addressing every

attack class, then it is unnecessary to identify the adversaries. However, without adversaries it

is not possible to align motivations to our assets in order to hypothesize impact. The reasons by

which an attack takes place largely dictate the potential damage done. If someone sets out to

damage an organization, attacks coming from that source will inherently have a higher

potential impact.

On the topic of the Capability metric, it may be argued that rather than using the

greatest individual effectiveness of safeguards applied to a given attack classification, that the

composite of all those safeguards should be considered. However, even the most ideal

safeguards for a scenario sometimes exhibit very weak effectiveness against certain attack

classifications. In these cases, utilizing a composite would be unrealistically weighted down, and

the resulting Adjusted Risk measurement would be very aggressive in its severity.

When analyzing safeguards, it may be said that the attack classifications should be

prioritized before choosing solutions, as well as before analyzing. Simply put, the effectiveness

of a strategy has nothing to do with the severity of a risk. Tailoring safeguard decisions based

on one prioritized risk undermines the core concepts of this methodology. A mindset such as

that would converge on the previously discussed strategy of simply implementing all

individually highly effective strategies. Considering threats singularly does not accurately depict

an organizations threat environment.

Page 64: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

64

COMPARING METHODOLOGIES ANALYSIS

By carrying out a sample risk assessment on the same organization utilizing two unique

methodologies, the contrast becomes apparent. The foundations of the two methodologies

used in this document are very different.

The ISO compliant method bases its analysis on threat identification. This identification

process is arbitrary. Threats are chosen, ranging from broad classifications to very specific

attacks. The proposed model is based on adversary identification. The ISO compliant model

weighs heavily on probabilities for its simulation. The probabilities are largely random and are

left to the discretion of the parties responsible for the analysis. The proposed model uses

adversary profiles to realistically illustrate a threat’s impact to the organization based on the

ability and tendencies of the attacker.

Both methods require the identification of assets. This task will always be important as it

is the assets that are affected by risk. Understanding what an organization has that is of value is

crucial to understanding risk. However, this information is used differently in each model. The

ISO model uses assets as a basis for threats. Essentially, threats are identified based on what

could potentially happen to an asset. In the proposed model, assets are used as the basis for

adversary identification. By studying potential adversaries and their motivations, it is possible

to align their goals with an organization’s assets to determine the basis for an attack.

The most important difference between the two models outlined in this document is

the way threats are perceived. The ISO model regards only those threats to the assets

identified. The problem with this logic is that it does not consider other potential threats. With

Page 65: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

65

the speed at with the computer threat landscape shifts, it is important to account for unknowns

in the present as well as the development of new threats in the future. The ISO compliant

model identifies a finite amount of threats, and accounts for them. However, if something

occurs outside of that group of threats, it is theoretically unaccounted for. If risk management

were as easy as selecting specific threats and protecting against them, no one would get

exploited. However, this is not the case.

The model proposed in this document utilizes the concept of attack classifications to

account for an infinite number of attacks. This concept is reinforced by the current ability of

security solutions to protect against a variety of attacks based on signatures as well as behavior.

Assessing vulnerabilities during a point in time and patching the holes is no longer effective.

Zero-day attacks are growing more prevalent and it is becoming very important for safeguards

to have the ability to protect against unknowns. Attack classifications are essentially groupings

of more specific attacks. For example, the Denial of Service attack class includes a vast number

of particular Denial of Service attacks, exploits, and techniques. As this group grows, the

definition remains the same. Protecting against Denial of Service techniques in a way other

than simply blocking the ports used by a specific tool is intelligent and suggested.

Finally, safeguard analysis differs between the two models used in this document

because one of the models does not promote it. The ISO standard offers no guidance on

selecting or analyzing safeguards. The proposed model also offers no guidance for selecting

safeguards, however it does offer a pair of concepts for analyzing them in terms of cost and

fault tolerance. It is understandable to not include guidance for selecting safeguards as that is

Page 66: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

66

ultimately up to the organization. On the other hand, with no guidance on safeguard analysis

security engineers are left completely to their discretion on what solutions to implement. When

the budget is tight, this can be a particularly trying process.

Typical risk assessments involve prioritizing the risks and addressing them one at a time

as the budget allows. This strategy is can be costly in multiple ways. First, the number of

solutions that are identified will grow fast as it is 1:1 with identified threats. Not only will a

higher number of solutions inherently require more money to implement, but it will also

require more man hours to manage.

The proposed model suggests finding solutions that have the ability to safeguard all of

the risks in some way. Solutions with this ability excel in the safeguard scoring system, and are

the ideal choices with a small budget. Additionally, employing a strategy like this leads to a

number of total solutions that is less than the number of threats, saving on initial cost and

managerial time. Furthermore, a strategy that focuses on selecting safeguards with the ability

to address multiple threats creates overlap, enforcing a defense in depth strategy. The range of

threats that these safeguards protect against also enforces principles of defense in breadth.

Because the ISO model includes nothing in terms of safeguard analysis, these benefits go

unclaimed. Logically, the safeguards identified at the end of an ISO compliant mitigation

strategy have little or no interplay, as they are often too focused.

The two models discussed in this document are similar in that they both largely employ

qualitative analysis. The benefit of qualitative analysis is that it is faster and more efficient.

Because risk analysis is not a perfect process, it is arguable that spending a significant amount

Page 67: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

67

of time pouring resources into a more quantitative strategy is a waste. At the end of the day,

risk management is about the safeguards that are selected and implemented. That is where the

money should be going. Additionally, it is just as hard to trust the numbers used in qualitative

analysis, even based on historical data, as it is to trust qualifications.

The ISO compliant method, or traditional probabilistic risk analysis, makes sense and is

easy to use. The problem is, the foundation upon which the analysis is built is flawed and

arbitrary. The model leads in the right direction, but the results cannot be trusted, especially as

time passes. The methodology proposed in this document is based on current players in the

network security game. Adversary profiles can paint a picture of who an organization is facing.

By understanding this information, it is possible to attain more applicable results. These results

then translate to more applicable and lasting solutions.

SUMMARY Traditional risk analysis methodologies are flawed. Basing large purchasing decisions on

arbitrarily identified threats, as well as probabilities that are nothing more than blind guesses,

have left many organizations reconsidering their risk management strategy.

There are many alternatives to probabilistic methodologies. The most intriguing of

which are those founded in game theory. Analyzing risks in the form of a game between two

entities presents a realistic view of computer security. However, many methodologies in this

area fail because they assume too much, and are only applicable to particularly designed

scenarios.

Page 68: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

68

The literature in this field shows much promise. By borrowing ideas and concepts from

several well known minds on the subject, a new methodology has been proposed. This

methodology seeks to answer the flaws presented by traditional probabilistic methodologies as

well as those based on game theory. Adversarial profiles, cost analysis, and fault tolerance are

utilized to create a realistic view of risk. The core concept of this proposed method is to help

forge a cost effective and efficient implementation that limits the capability of threat classes

across a wide range of potential targets. The fault tolerant approach outlined in this document

provides defense in depth principles while the wide range of protection provides defense in

breadth.

FUTURE WORK The proposed methodology in this document is based on the existence of adversarial

data and attack classification data. This data, much like virus signatures, is only effective so long

as it is current. The adversary profiles and attack classes were created with the future in mind,

and should encompass growing threats. However, should significant changes occur in the

overall computer security landscape, these sets of data may need to be updated. Most notably,

the “impact tables” discussed are based heavily on an understanding of the adversaries at a

given time. The impact tables defined in this document are based on the abilities and

motivations of attackers at the time of writing. These impact tables may be inaccurate over

time.

Ideally, a readily available database of this information should be made available online.

This way, the most current data can be accessed in one spot. Preferably, this repository would

Page 69: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

69

also be home to security professionals around the world who will participate in open discussion

about the accuracy of the data. Accurate perceptions of adversaries and attack classes form a

solid foundation for this methodology.

Page 70: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

70

REFERENCES

Bambos, N. (2010). A risk management view to information security. Berlin: GameSec'10 Proceedings of

the First international conference on Decision and Game Theory for Security.

Banks, D. (2006). Game theory and risk analysis for counterterrorism. N.p.: Rutgers University.

Christin, N. (2011). Network security games: combining game theory, behavioral economics, and

network measurements. N.p.: Carnegie Mellon University. Retrieved April 23, 2012, from

http://www.andrew.cmu.edu/user/nicolasc/publications/Christin-GameSec11.pdf

Clark, A., & Poovendran, R. (2011). Maximizing influence in competitive environments: a game-theoretic

approach. N.p.: GameSec'11 Proceedings of the Second international conference on Decision

and Game Theory for Security.

Halpern, J. (2011). Beyond Nash equilibrium: solution concepts for the 21st century. New York, NY:

Cornell University.

Kewley, D., & Lowry, J. (2001). Observations on the effects of defense in depth on adversary behavior in

cyber warfare. West Point, NY: Proceedings of the IEEE SMC Information Assurance Workshop.

Lemay, E. (2011). Model-based security metrics using adversary view security evaluation (ADVISE). QEST

'11 Proceedings of the 2011 Eighth International Conference on Quantitative Evaluation of

SysTems

Lye, K., & Wing, J. (2002). Game strategies in network security. Pittsburgh, RI: Carnegie Mellon

University. Retrieved April 22, 2012, from

http://www.cs.cmu.edu/~wing/publications/LyeWing02.pdf

Maille, P., Reichl, P., & Tuffin, B. (2010). Interplay between security providers, consumers, and attackers:

a weighted congestion game approach. N.p.: University of Bretagne.

Parker, T. (2003). Cyber adversary characterization. N.p.: Blackhat USA Conference. Retrieved April 24,

2012, from http://www.blackhat.com/presentations/bh-usa-03/bh-us-03-parker.pdf

Page 71: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

71

Ramana, M. V. (2011). Beyond our imagination: Fukushima and the problem of assessing risk. N.p.:

Bulletin of the Atomic Scientists. Retrieved April 25, 2012, from http://thebulletin.org/web-

edition/features/beyond-our-imagination-fukushima-and-the-problem-of-assessing-risk

Roy, S., Ellis, C., Shiva, S., Dasgupta, D., Shandilya, V., & Wu, Q. (2010). A survey of game theory as

applied to network security. Memphis, TN: University of Memphis. Retrieved April 25, 2012,

from ais.cs.memphis.edu/files/papers/Survey.pdf

Sachs, M. (2003). The cyber threat to the United States. N.p.: Blackhat USA Conference. Retrieved April

24, 2012, from http://www.blackhat.com/presentations/bh-usa-03/bh-us-03-parker.pdf

Stocco, G., & Cybenko, G. (2011). Exploiting adversary's risk profiles in imperfect information security

games. GameSec'11 Proceedings of the Second international conference on Decision and Game

Theory for Security.

Page 72: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

72

APPENDIX A – Business Impact Analysis Questionnaires

Figure A.1 - CONFIDENTIALITY QUESTIONAIRE

Asset: Laptops

Confidentiality: Organizational impact when information in the system is compromised

Question Impact Explanation

Privacy sensitivity of data

None

Standard registry (e.g., memberships

)

Sensitive data (e.g. financial, medical

or criminal)

Laptops sometimes house personal data such as social security

numbers, addresses, name, phone numbers, etc.

Financial loss as a result of information disclosure

< 2,5k 2,5k -- 50k > 50k Fines.

Possible fraud as a result of information disclosure

< 2,5k 2,5k -- 50k > 50k

Reputation loss as a result of information disclosure

No negativ

e publicity

Local negative publicity

National negative publicity

Liability issues as a result of information disclosure

None Limited High Lawsuits.

To what extent can information disclosure lead to injuries

None Serious Injuries

Loss of Life

Result Low Med High

*Template Source: www.cs.ru.nl/~klaus/secorg/slides/02_IS_IMPL_20v0.51.pdf

Page 73: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

73

Figure A.2 - INTEGRITY QUESTIONAIRE

Asset: Laptops

Integrity: Organizational impact when information in the system is incorrect

Question Impact Explanation

Financial loss as a result of unintentional changes in information

< 2,5k 2,5k --

50k > 50k Fines.

Financial loss as a result of intentional changes in information (fraud)

< 2,5k 2,5k --

50k > 50k

Reputation loss as a result of incorrect information

No negativ

e publicity

Local negativ

e publicity

National negativ

e publicity

Liability issues as a result of incorrect information

None Limited High Lawsuits.

Possible wrong management decisions as a result of incorrect information

None Limited High

Safety dangers as a result of incorrect information

None Serious injuries

Loss of life

Result Low Med High

*Template Source: www.cs.ru.nl/~klaus/secorg/slides/02_IS_IMPL_20v0.51.pdf

Page 74: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

74

Figure A.3 - AVAILABILITY QUESTIONAIRE

Asset: Laptops

Availability: Organizational impact when information in the system is unavailable

Question Impact Explanation

Acceptable downtime before substantial financial loss (> 50k) occurs

> 1 day

< 1 day < 4

hours

How long is manual processing as an alternative feasible

> 1 day

< 1 day < 4

hours Alternative processing is available.

After what downtime are important management decisions no longer possible

> 1 day

< 1 day < 4

hours N/A

After what downtime is the reputation of the organization in danger

> 1 day

< 1 day < 4

hours

After what downtime are external requirements no longer met

> 1 day

< 1 day < 4

hours

How many employees can not work when the system is unavailable

1% 10% 50% Individual Laptop.

To what extent can unavailability lead to injuries

None Serious Injuries

Loss of life

Result Low Med High Acceptable Downtime

> 1 day

*Template Source: www.cs.ru.nl/~klaus/secorg/slides/02_IS_IMPL_20v0.51.pdf

Page 75: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

75

APPENDIX B – Exposure Tables by Attack Classification

Figure B.1 - DIRECT PENETRATION

Direct Penetration

Hardware Assets

Information Assets

Software Assets

Service Assets

Laptops

Physical Files

Loan Processing Software

Web Site Hosting

Smartphones

User Profile Data

Credit Analysis Software

Certificates and Trust

Networking Equipment

Credit Data

Teller Software

Event Monitoring

Terminals

Mortgage Data

Online Banking

Internet

Domain Controllers

Personalized Forms

Anti Virus

Backend Banking Databases

File Servers

Other Digital Files

Web Content Filter

DNS

Web Servers

Email Gateway SQL Database

Server

Email

13 / 37

Application Servers

Virtualization Software

35.14%

Email Servers

Patch Management

Medium

Storage Systems

Active Directory

Profile Management

VPN

Mirroring and Repllication

Page 76: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

76

Figure B.2 - INDIRECT PENETRATION

Indirect Penetration

Hardware Assets

Information Assets

Software Assets

Service Assets

Laptops

Physical Files

Loan Processing Software Web Site Hosting

Smartphones

User Profile Data

Credit Analysis Software

Certificates and Trust

Networking Equipment Credit Data

Teller Software

Event Monitoring

Terminals

Mortgage Data

Online Banking

Internet

Domain Controllers

Personalized Forms

Anti Virus

Backend Banking Databases

File Servers

Other Digital Files

Web Content Filter

DNS

Web Servers

Email Gateway SQL Database

Server

Email

27 / 37

Application Servers

Virtualization Software

72.97%

Email Servers

Patch Management

High

Storage Systems

Active Directory

Profile Management

VPN

Mirroring and Repllication

Page 77: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

77

Figure B.3 - PENETRATION TOOLS

Penetration Tools

Hardware Assets

Information Assets

Software Assets

Service Assets

Laptops

Physical Files

Loan Processing Software Web Site Hosting

Smartphones

User Profile Data

Credit Analysis Software

Certificates and Trust

Networking Equipment Credit Data

Teller Software

Event Monitoring

Terminals

Mortgage Data

Online Banking

Internet

Domain Controllers

Personalized Forms

Anti Virus

Backend Banking Databases

File Servers

Other Digital Files

Web Content Filter

DNS

Web Servers

Email Gateway SQL Database

Server

Email

27 / 37

Application Servers

Virtualization Software

72.97%

Email Servers

Patch Management

High

Storage Systems

Active Directory

Profile Management

VPN

Mirroring and Repllication

Page 78: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

78

Figure B.4 - MISUSED INSIDER PRIVILEGES

Misused Insider Privileges

Hardware Assets

Information Assets

Software Assets

Service Assets

Laptops

Physical Files

Loan Processing Software Web Site Hosting

Smartphones

User Profile Data

Credit Analysis Software

Certificates and Trust

Networking Equipment

Credit Data

Teller Software

Event Monitoring

Terminals

Mortgage Data

Online Banking

Internet

Domain Controllers

Personalized Forms

Anti Virus

Backend Banking Databases

File Servers

Other Digital Files

Web Content Filter

DNS

Web Servers

Email Gateway SQL Database

Server

Email

16 / 37

Application Servers

Virtualization Software

43.24%

Email Servers

Patch Management

Medium

Storage Systems

Active Directory

Profile Management

VPN

Mirroring and Repllication

Page 79: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

79

Figure B.5 - DIRECTED MALICIOUS CODE

Directed Malicious Code

Hardware Assets

Information Assets

Software Assets

Service Assets

Laptops

Physical Files

Loan Processing Software

Web Site Hosting

Smartphones

User Profile Data

Credit Analysis Software

Certificates and Trust

Networking Equipment

Credit Data

Teller Software

Event Monitoring

Terminals

Mortgage Data

Online Banking

Internet

Domain Controllers

Personalized Forms

Anti Virus

Backend Banking Databases

File Servers

Other Digital Files

Web Content Filter

DNS

Web Servers

Email Gateway SQL Database

Server

Email

17 / 37

Application Servers

Virtualization Software

45.95%

Email Servers

Patch Management

Medium

Storage Systems

Active Directory

Profile Management

VPN

Mirroring and Repllication

Page 80: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

80

Figure B.6 - INDIRECT MALICIOUS CODE

Indirect Malicious Code

Hardware Assets

Information Assets

Software Assets

Service Assets

Laptops

Physical Files

Loan Processing Software

Web Site Hosting

Smartphones

User Profile Data

Credit Analysis Software

Certificates and Trust

Networking Equipment

Credit Data

Teller Software

Event Monitoring

Terminals

Mortgage Data

Online Banking

Internet

Domain Controllers

Personalized Forms

Anti Virus

Backend Banking Databases

File Servers

Other Digital Files

Web Content Filter

DNS

Web Servers

Email Gateway SQL Database

Server

Email

17 / 37

Application Servers

Virtualization Software

45.95%

Email Servers

Patch Management

Medium

Storage Systems

Active Directory

Profile Management

VPN

Mirroring and Repllication

Page 81: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

81

Figure B.7 - DENIAL OF SERVICE

Denial of Service Hardware Assets

Information Assets

Software Assets

Service Assets

Laptops

Physical Files

Loan Processing Software Web Site Hosting

Smartphones

User Profile Data

Credit Analysis Software

Certificates and Trust

Networking Equipment Credit Data

Teller Software

Event Monitoring

Terminals

Mortgage Data

Online Banking

Internet

Domain Controllers

Personalized Forms

Anti Virus

Backend Banking Databases

File Servers

Other Digital Files

Web Content Filter

DNS

Web Servers

Email Gateway SQL Database

Server

Email

28 / 37

Application Servers

Virtualization Software

75.68%

Email Servers

Patch Management

High

Storage Systems

Active Directory

Profile Management

VPN

Mirroring and Repllication

Page 82: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

82

Figure B.8 - INTERCEPTION

Interception

Hardware Assets

Information Assets

Software Assets

Service Assets

Laptops

Physical Files

Loan Processing Software Web Site Hosting

Smartphones

User Profile Data

Credit Analysis Software

Certificates and Trust

Networking Equipment Credit Data

Teller Software

Event Monitoring

Terminals

Mortgage Data

Online Banking

Internet

Domain Controllers

Personalized Forms

Anti Virus

Backend Banking Databases

File Servers

Other Digital Files

Web Content Filter

DNS

Web Servers

Email Gateway SQL Database

Server

Email

17 / 37

Application Servers

Virtualization Software

45.95%

Email Servers

Patch Management

Medium

Storage Systems

Active Directory

Profile Management

VPN

Mirroring and Repllication

Page 83: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

83

Figure B.9 - SPOOFING

Spoofing

Hardware Assets

Information Assets

Software Assets

Service Assets

Laptops

Physical Files

Loan Processing Software

Web Site Hosting

Smartphones

User Profile Data

Credit Analysis Software

Certificates and Trust

Networking Equipment

Credit Data

Teller Software

Event Monitoring

Terminals

Mortgage Data

Online Banking

Internet

Domain Controllers

Personalized Forms

Anti Virus

Backend Banking Databases

File Servers

Other Digital Files

Web Content Filter

DNS

Web Servers

Email Gateway SQL Database

Server

Email

11 / 37.

Application Servers

Virtualization Software

29.73%

Email Servers

Patch Management

Low

Storage Systems

Active Directory

Profile Management

VPN

Mirroring and Repllication

Page 84: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

84

Figure B.10 - MODIFICATION

Modification

Hardware Assets

Information Assets

Software Assets

Service Assets

Laptops

Physical Files

Loan Processing Software Web Site Hosting

Smartphones

User Profile Data

Credit Analysis Software

Certificates and Trust

Networking Equipment

Credit Data

Teller Software

Event Monitoring

Terminals

Mortgage Data

Online Banking

Internet

Domain Controllers

Personalized Forms

Anti Virus

Backend Banking Databases

File Servers

Other Digital Files

Web Content Filter

DNS

Web Servers

Email Gateway SQL Database

Server

Email

16 / 37

Application Servers

Virtualization Software

43.24%

Email Servers

Patch Management

Medium

Storage Systems

Active Directory

Profile Management

VPN

Mirroring and Repllication

Page 85: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

85

Figure B.11 - DIVERSIONS

Diversions

Hardware Assets

Information Assets

Software Assets

Service Assets

Laptops

Physical Files

Loan Processing Software

Web Site Hosting

Smartphones

User Profile Data

Credit Analysis Software

Certificates and Trust

Networking Equipment Credit Data

Teller Software

Event Monitoring

Terminals

Mortgage Data

Online Banking

Internet

Domain Controllers

Personalized Forms

Anti Virus

Backend Banking Databases

File Servers

Other Digital Files

Web Content Filter

DNS

Web Servers

Email Gateway SQL Database

Server

Email

11 / 37.

Application Servers

Virtualization Software

29.73%

Email Servers

Patch Management

Low

Storage Systems

Active Directory

Profile Management

VPN

Mirroring and Repllication

Page 86: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

86

APPENDIX C – Proposed Analysis Tables

Figure C.1 - COMPLETED EXPOSURE TABLE

Attack Classification Exposure

Direct Penetration Medium

Indirect Penetration High

Penetration Tools High

Misused Insider Privileges Medium

Directed Malicious Code Medium

Indirect Malicious Code Medium

Denial of Service High

Interception Medium

Spoofing Low

Modification Medium

Diversions Low

Figure C.2 - COMPLETED COMPOSITE IMPACT TABLE

Unstructured Hackers

Structured Hackers

Malicious or Non - Malicious Insiders

Attack Classification Impact Impact Impact

Composite Impact

Direct Penetration Low Low High Medium

Indirect Penetration Medium High Low Medium

Penetration Tools Medium Medium Low Medium

Misused Insider Privileges Low Low High Medium

Directed Malicious Code Low High Medium Medium

Indirect Malicious Code Medium Medium Low Medium

Denial of Service Low Medium Medium Medium

Interception Low Low Medium Low

Spoofing Low Medium Low Low

Modification Low Low High Medium

Diversions Low Medium Low Low

Page 87: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

87

Figure C.3 - COMPLETED INHERENT RISK TABLE

Attack Classification Exposure Composite

Impact Inherent

Risk

Direct Penetration Medium Medium Medium

Indirect Penetration High Medium High

Penetration Tools High Medium High

Misused Insider Privileges Medium Medium Medium

Directed Malicious Code Medium Medium Medium

Indirect Malicious Code Medium Medium Medium

Denial of Service High Medium High

Interception Medium Low Low

Spoofing Low Low Low

Modification Medium Medium Medium

Diversions Low Low Low

Page 88: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

88

APPENDIX D – Impact Tables

Figure D.1 IMPACT TABLES BY ADVERSARY

Structured Hackers

Attack Classification Impact

Direct Penetration Low

Indirect Penetration High

Penetration Tools Medium

Misused Insider Privileges Low

Directed Malicious Code High

Indirect Malicious Code Medium

Denial of Service Medium

Interception Low

Spoofing Medium

Modification Low

Diversions Medium

Malicious or Non - Malicious Insiders

Attack Classification Impact

Direct Penetration High

Indirect Penetration Low

Penetration Tools Low

Misused Insider Privileges High

Directed Malicious Code Medium

Indirect Malicious Code Low

Denial of Service Medium

Interception Medium

Spoofing Low

Modification High

Diversions Low

Page 89: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

89

Funded or Unfunded Hacktivists / Terrorist Groups

Attack Classification Impact

Direct Penetration Medium

Indirect Penetration High

Penetration Tools High

Misused Insider Privileges Medium

Directed Malicious Code High

Indirect Malicious Code Low

Denial of Service High

Interception Medium

Spoofing Low

Modification Low

Diversions Medium

Crime / Industrial Espionage

Attack Classification Impact

Direct Penetration Medium

Indirect Penetration High

Penetration Tools High

Misused Insider Privileges High

Directed Malicious Code High

Indirect Malicious Code Low

Denial of Service Medium

Interception High

Spoofing High

Modification High

Diversions Medium

State Sponsored Groups

Attack Classification Impact

Direct Penetration Medium

Indirect Penetration High

Penetration Tools High

Misused Insider Privileges Medium

Directed Malicious Code High

Indirect Malicious Code Low

Denial of Service Medium

Interception High

Spoofing High

Modification High

Diversions High

Page 90: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

90

APPENDIX E - ISO Risk Analysis Spreadsheets

Figure E.1a - HARDWARE ASSETS AND ANALYSIS

No. Asset Name Asset Owner(s) Threat Vulnerability Impact

1 Laptops Mobile Users Virus/Malware Outdated Browser Med

2 Virus/Malware Outdated Antivirus Med

3 Unauthorized Access Left Unattended High

4 Stolen Left Unattended High

5 Data Leakage Unauthorized Duplication High

6 Traffic Sniffing Uncontrolled Network High

7 Smartphones Mobile Users Virus/Malware Downloaded App High

8 Unauthorized Access Left Unattended High

9 Stolen Left Unattended High

10 Data Leakage Unauthorized Duplication High

11 Networking Equipment

Network Administrators

Unauthorized Access

Direct Port Access to Internal LAN Med

12 Denial of Service Outdated Firmware High

13 Terminals End Users, Network Administrators

Denial of Service

Lack of Connectivity to Virtual Desktop High

14 Domain Controllers System Administrators

Unauthorized Access Left Unattended High

15 Virus/Malware Outdated Antivirus High

16 File Servers System Administrators Denial of Service Network Outtage Med

17 Virus/Malware Outdated Antivirus High

18 Unauthorized Access Left Unattended High

19 Unauthorized Access

Misconfigured Access Controls High

20 Data Leakage Unauthorized Duplication High

21 Web Servers

System Administrators, End Users

Denial of Service Network Outtage Med

22 Virus/Malware Outdated Antivirus Med

23 Unauthorized Access

Misconfigured Access Controls High

24 Data Leakage Unauthorized Duplication High

25

SQL Database Server System Administrators

Denial of Service Network Outtage Med

26 Virus/Malware Outdated Antivirus High

27 Data Leakage Intrusion High

28 Application Servers

Application Administrators

Denial of Service Network Outtage Med

29 Virus/Malware Outdated Antivirus High

Page 91: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

91

30 Unauthorized Access

Misconfigured Access Controls High

31 Data Leakage Intrusion High

32 Data Leakage Unauthorized Duplication High

33 Email Servers System Administrators

Denial of Service Network Outtage Med

34 Virus/Malware Outdated Antivirus Med

35 Data Leakage Intrusion High

36 Data Leakage Unauthorized Duplication High

37 Unauthorized Access

Misconfigured Access Controls High

38 Storage Systems System Administrators

Denial of Service Network Outtage High

39 Unauthorized Access

Misconfigured Access Controls High

40 Data Leakage Intrusion Med

Page 92: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

92

Figure E.1b - HARDWARE ASSETS AND ANALYSIS

No. Asset Name Probabilit

y Inherent Risk Existing Controls

Adjusted Risk

1 Laptops Low Low Antivirus Software

2 Med Med Intrusion Prevention System Med

3 Med Med Screen Lockout after 10 Minutes Med

4 Low Low Encryption Software

5 Med High Endpoint Security and Email Gateway Med

6 Med Med VPN Med

7 Smartphones Med High None High

8 Low Low Screen Lockout after 30 Seconds

9 Med Med Screen Lockout and Remote Wipe Med

10 Med High Email Gateway High

11 Networking Equipment Low Low Port Monitoring and VLANs

12 Low Low Firmware Updates, Redundant Equipment

13 Terminals Med Med Redundant Pools, Desktops, and Connection Servers Med

14 Domain Controllers Low Low Access Controls, Screen Lockout

15 Low Low Intrusion Prevention System, Host Intrustion Prevention

16 File Servers Med Low Redundant Mirrored File Servers

17 Med Low Intrusion Prevention System, Host Intrustion Prevention

18 Low Low Access Controls, Screen Lockout

19 Med High Access Controls, Training, Monitoring Med

20 Med High Endpoint Protection, Email Gateway Med

21 Web Servers Med Med Redundant Mirrored Web Servers Med

22 Med Low Intrusion Prevention System, Host Intrustion Prevention

23 Med High Access Controls, Training, Monitoring Med

24 Med High Endpoint Protection, Email Gateway Med

25 SQL Database Server Med Med Redundant Mirrored SQL Servers Med

26 Med Low Intrusion Prevention System, Host Intrustion Prevention

27 Low High Intrusion Prevention System, Host Intrustion Prevention Med

28 Application Servers Med Med Backup Servers Med

29 Med Low Intrusion Prevention System, Host Intrustion Prevention

30 Med High Access Controls, Training, Monitoring Med

31 Low High Intrusion Prevention System, Host Intrustion Prevention Med

32 Med High Endpoint Protection, Email Gateway Med

33 Email Servers Med Med Redundant Mirrored Email Servers Med

34 Med Low Intrusion Prevention System, Host Intrustion Prevention

Page 93: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

93

35 Low High Intrusion Prevention System, Host Intrustion Prevention Med

36 Med High Endpoint Security and Email Gateway Med

37 Med High Access Controls, Training, Monitoring Med

38 Storage Systems Med High Redundant Sites, Servers Med

39 Med High Access Controls, Training, Monitoring Med

40 Low Med Intrusion Prevention System, Host Intrustion Prevention Med

Page 94: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

94

Figure E.2a -INFORMATION ASSETS AND ANALYSIS

No. Asset Name Asset Owner(s) Threat Vulnerability Impact

1 Physical Files Document Administrators

Data Leakage

Unauthorized Duplication High

2 Unauthorized Access Doors Left Open High

3 Data Leakage Unauthorized Access High

4 User Profile Data

System Administrators, End Users

Data Leakage

Unauthorized Duplication High

5 Data Destruction Deletion High

6 Unauthorized Access

Misconfigured Access Controls High

7 Credit Data Document Administrators, System Administrators

Data Leakage

Unauthorized Duplication High

8 Data Destruction Deletion High

9 Unauthorized Access

Misconfigured Access Controls High

10 Mortgage Data

Document Administrators, System Administrators

Data Leakage

Unauthorized Duplication High

11 Data Destruction Deletion High

12 Unauthorized Access

Misconfigured Access Controls High

13 Personalized Forms

Document Administrators, System Administrators

Data Leakage

Unauthorized Duplication High

14 Data Destruction Deletion High

15 Unauthorized Access

Misconfigured Access Controls High

16 Other Digital Files

Document Administrators, System Administrators

Data Leakage

Unauthorized Duplication High

17 Data Destruction Deletion High

18 Unauthorized Access

Misconfigured Access Controls High

Page 95: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

95

Figure E.2b -INFORMATION ASSETS AND ANALYSIS

No. Asset Name Probability Inherent

Risk Existing Controls Adjusted

Risk

1 Physical Files Med High Cameras High

2 Med High Cameras, Monitoring Low

3 Low Low Cameras, Door Security via Keycard

4 User Profile Data Med High Endpoint Security, Email Gateway Med

5 Med Low Backups, VSS

6 Med High Access Controls, Training, Monitoring Med

7 Credit Data Med High Endpoint Security, Email Gateway Med

8 Med Low Backups, VSS

9 Med High Access Controls, Training, Monitoring Med

10 Mortgage Data Med High Endpoint Security, Email Gateway Med

11 Med Low Backups, VSS

12 Med High Access Controls, Training, Monitoring Med

13 Personalized Forms Med High

Endpoint Security, Email Gateway Med

14 Med Low Backups, VSS

15 Med High Access Controls, Training, Monitoring Med

16 Other Digital Files Med High Endpoint Security, Email Gateway Med

17 Med Low Backups, VSS

18 Med High Access Controls, Training, Monitoring Med

Page 96: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

96

Figure E.3a - SOFTWARE ASSETS AND ANALYSIS

No. Asset Name Asset Owner(s) Threat Vulnerability Impact

1 Loan Software

End Users, Application Administrators

Unauthorized Access

Misconfigured Access Controls High

2 Unauthorized Access

Session Left Unattended High

3 Credit Software

End Users, Application Administrators

Unauthorized Access

Misconfigured Access Controls High

4 Unauthorized Access

Session Left Unattended High

5 Teller Software

End Users, Application Administrators

Unauthorized Access

Misconfigured Access Controls High

6 Unauthorized Access

Session Left Unattended High

7 Online Banking

End Users, Application Administrators

Unauthorized Access

Misconfigured Access Controls High

8 Denial of Service Outtage High

9 AntiVirus System Administrators Denial of Service Network Outtage Med

10 Web Content Filter System Administrators

Denial of Service Network Outtage Med

11 Virus/Malware Misconfigured Filters High

12 Email Gateway System Administrators

Denial of Service Network Outtage Med

13 Junk Email/Malicious Misconfigured Filters High

14 Email System Administrators Denial of Service Network Outtage Med

15 Virus/Malware Junk Email/Malicious High

16 Unauthorized Emailing Client Malware Med

17 Virtualization Software System Administrators

Unauthorized Access

Misconfigured Access Controls High

18 Patch Management System Administrators

Denial of Service Network Outtage Med

19 Active Directory

End Users, Application Administrators

Unauthorized Access

Misconfigured Access Controls High

20 Profile Management System Administrators

Unauthorized Access

Misconfigured Access Controls High

21 Denial of Service Deletion Med

22 VPN Network Administrators Unauthorized Access

Misconfigured Access Controls High

23 Denial of Service VPN Endpoint Down Med

24 Mirroring Software System Administrators

Denial of Service Network Outtage Med

Page 97: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

97

Figure E.3b - SOFTWARE ASSETS AND ANALYSIS

No. Asset Name Probability Inherent

Risk Existing Controls Adjusted

Risk

1 Loan Software Med High Access Controls, Training, Monitoring Low

2 Low Med Cameras, Training, Session Lockout Med

3 Credit Software Med High Access Controls, Training, Monitoring Low

4 Low Med Cameras, Training, Session Lockout Med

5 Teller Software Med High Access Controls, Training, Monitoring Low

6 Low Med Cameras, Training, Session Lockout Med

7 Online Banking Med High Access Controls, Training, Monitoring Low

8 Med High Redundant Servers, Communications Med

9 AntiVirus Low Low Redundant Server

10 Web Content Filter Low Low Redundant Server

11 Med Med Antivirus Software Med

12 Email Gateway Low Low Redundant Server

13 Med Med Antivirus Software Med

14 Email Low Low Redundant Server

15 Med Med Antivirus Software, Email Gateway Med

16 Med Med Antivirus Software, Email Gateway Med

17 Virtualization Software Low Low

Access Controls, Training, Monitoring

18 Patch Management Low Low Redundant Server

19 Active Directory Med High Access Controls, Training, Monitoring Med

20 Profile Management Med High

Access Controls, Training, Monitoring Med

21 Med Low Backups

22 VPN Med High Access Controls, Training, Monitoring Low

23 Low Low Redundant VPN Peers

24 Mirroring Software Low High None High

Page 98: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

98

Figure E.4a - SERVICE ASSETS AND ANALYSIS

No. Asset Name Asset Owner(s) Threat Vulnerability Impact

1 Web Site Hosting

Application Administrators, Service Providers

Denial of Service Service Outtage High

2 Certificate Trust System Administrators, Service Providers

Denial of Service Service Outtage High

3 Event Monitoring

System Administrators, Service Providers

Denial of Service Service Outtage Low

4 Internet Network Administrators, Service Providers

Denial of Service Service Outtage High

5

Backend Banking Databases

Application Administrators, Service Providers

Denial of Service Service Outtage High

6 DNS Network Administrators, Service Providers

Denial of Service Service Outtage High

Figure E.4b - SERVICE ASSETS AND ANALYSIS

No. Asset Name Probabilit

y Inherent

Risk Existing Controls Adjusted Risk

1 Web Site Hosting Low High Redundant Servers Med

2 Certificate Trust Low Med None Med

3 Event Monitoring Med Low None

4 Internet Med High Redundant Connections, Dynamic Routing Med

5 Backend Banking Databases Low High

Redundant Databases, Connections, Backups Med

6 DNS Low Med None Med

Page 99: FAULT TOLERANCE AND ADVERSARIAL RISK ANALYSIS · However, the process of risk analysis is often misguided, if not completely overlooked. There are many reasons for this, including

99

APPENDIX F – Assets

Figure F.1 - ASSET INVENTORY

Hardware Assets

Information Assets

Software Assets

Service Assets

Laptops

Physical Files

Loan Processing Software

Web Site Hosting

Smartphones

User Profile Data

Credit Analysis Software

Certificates and Trust

Networking Equipment

Credit Data

Teller Software

Event Monitoring

Terminals

Mortgage Data

Online Banking

Internet

Domain Controllers

Personalized Forms

Anti Virus

Backend Banking Databases

File Servers

Other Digital Files

Web Content Filter

DNS

Web Servers

Email Gateway SQL Database

Server

Email Application Servers

Virtualization Software

Email Servers

Patch Management Storage Systems

Active Directory

Profile Management

VPN

Mirroring and Repllication