Implementing IT Security Controls

Thomas Jones: IST 725, Dr. Scott Bernard, Spring 2013

IT security controls exist to protect information system resources against unauthorized attempts to access them. Viewed simply, this establishes a logical dichotomy between protecting the inside from the outside, not terribly different from locking the doors of our homes at night. This inside/outside approach has matured greatly, and continues to do so in today's information systems environment. Traditionally, most of the observed research has produced technical measures in the form of controls and best practices, which act as templates to "secure" information systems from those not authorized to access them. As a natural result, many guides primarily outline technical controls that prevent external access to internal information systems.

The landscape of information technology (IT) security controls has widened significantly over the past few decades, especially since the adoption of the public internet and the proliferation of internet service providers. It is further fueled today by the rise of connectedness through mobile means, whether smartphones, tablets, or publicly available Wi-Fi reachable nearly any time and anywhere.

This shift has transitioned the philosophical approach from IT security to information security, information being the actual asset protected through IT security controls. With this understanding, we must further recognize that information has value and that, in competitive markets within and between industries, unauthorized attempts to access information systems are no longer just external configuration issues. They are also internal behavioral issues, which drive not just the technical implementations traditionally spawned by vendor configuration anomalies, but also organizational structure, policies, vigilance, and training.

Few organizations within an industry are exactly the same. Each corporation pursues a niche in its industry to produce revenue according to a profit model. Thus the technologies implemented within any corporation's enterprise can and do differ, and so do the IT security controls on the information systems that make each corporation unique in its industry. This is one of the main underlying premises of the Enterprise Architecture Cube (EA3) model.

The EA3 Cube model is a framework that considers and combines strategy, business, and technology. It does so across five layers, or domains, of the architecture: Goals and Initiatives, Products and Services, Data and Information, Systems and Applications, and Networks and Infrastructure, each layer depending on the one that precedes it. For example, a corporation has an overall strategy for how it fits into a given market; this defines its goals and initiatives, which dictate its products and services, which in turn shape how data and information are used, leading to the systems and applications suited for enterprise use, which finally define the requirements for the underlying networks and infrastructure the enterprise needs to operate successfully. This approach is taken across each line of business a corporation has, depending of course on its portfolio diversification. However, the model addresses what is used, not necessarily how to secure what is used. That is the purpose of the Enterprise Information Security Architecture (EISA).
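
To make the layered dependency concrete, the following minimal Python sketch walks the five EA3 layers top-down. The layer names come from the EA3 Cube model described above; the function and the "derived from" strings are purely illustrative assumptions.

# Illustrative sketch of the EA3 Cube's five layers and their top-down dependency.
# Layer names are from the EA3 model; everything else is hypothetical.

EA3_LAYERS = [
    "Goals and Initiatives",        # strategic drivers
    "Products and Services",        # lines of business
    "Data and Information",         # what the business uses and produces
    "Systems and Applications",     # what processes the data
    "Networks and Infrastructure",  # what everything runs on
]

def derive_requirements(strategy: str) -> dict[str, str]:
    """Walk the layers top-down: each layer's requirement is framed by the layer above it."""
    requirements = {}
    driver = strategy
    for layer in EA3_LAYERS:
        requirements[layer] = f"{layer} requirements derived from: {driver}"
        driver = layer
    return requirements

if __name__ == "__main__":
    for requirement in derive_requirements("corporate market strategy").values():
        print("-", requirement)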

The EISA is a collection of models that aligns well with the five layers of the EA3 architecture. Its five layers, respective to the EA3 model, are IS Governance, Operations and Personnel Security, Dataflow and Application Development Security, Systems Security, and Infrastructure and Physical Security. Where the EA3 model works in a business context, the EISA model works in an information security context: information system governance (business drivers) dictates operations and personnel security (products and services), which feeds into dataflow and application development security (data and information), which defines parameters for systems security (systems and applications), which in turn defines requirements for infrastructure and physical security (networks and infrastructure). This provides a comprehensive and contextual model for enterprise information security that, when combined with the EA3 approach, is contextually relevant to the corporation's purpose.
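
The alignment between the two architectures can be captured as a simple lookup table. The sketch below pairs each EA3 layer with its EISA counterpart exactly as described above; everything beyond the layer names is illustrative.

# Mapping of EA3 layers (business/technology context) to EISA layers (security context),
# as described in the text. Purely a lookup structure for illustration.

EA3_TO_EISA = {
    "Goals and Initiatives":       "IS Governance",
    "Products and Services":       "Operations and Personnel Security",
    "Data and Information":        "Dataflow and Application Development Security",
    "Systems and Applications":    "Systems Security",
    "Networks and Infrastructure": "Infrastructure and Physical Security",
}

def security_layer_for(ea3_layer: str) -> str:
    """Return the EISA layer that secures a given EA3 layer."""
    return EA3_TO_EISA[ea3_layer]

print(security_layer_for("Data and Information"))
# -> Dataflow and Application Development Security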

So what happens when we apply this model, converting it from a model into actual controls? This is the threshold where knowing translates to doing, where horsepower is converted to torque, so to speak. A significant amount of public and private sector research has produced technical controls that, when applied to network and information system appliances, harden the security posture of the device, making it less susceptible to circumvention. However, an abundance of these controls are primarily intended to protect internal resources from outside, or external, unauthorized access. This, after all, has generally been the traditional threat.

Examples of these external IT security controls, which often take the form of explicit technical controls, can be found in the National Institute of Standards and Technology (NIST) Special Publication 800-53A, a guide to assessing the security controls in federal information systems. Other documents include Federal Information Processing Standard (FIPS) 200, which defines minimum security requirements for federal information and information systems, and the Defense Information Systems Agency's (DISA) Security Technical Implementation Guides (STIGs) for various hardware products and platforms. DISA STIG documents also consider the role those products play in the topology of the information system or network, adding a contextual approach to technical controls. DISA STIGs can be found on the publicly available Information Assurance Support Environment website, a repository of information assurance material operated by DISA, and FIPS and NIST documents are publicly available as well.
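
To give a flavor of what a hardening-guide item looks like once it is turned into an automated check, the sketch below audits a few common SSH daemon settings against an expected baseline. The settings shown are widely used hardening options, but the baseline itself is hypothetical and is not drawn from any particular STIG or NIST control.

# Hypothetical example of turning a hardening-guide item into an automated check.
# The baseline below is illustrative, not an actual STIG or NIST 800-53A control.

EXPECTED = {
    "PermitRootLogin": "no",          # disallow direct root login
    "PasswordAuthentication": "no",   # require key-based authentication
    "X11Forwarding": "no",            # disable X11 forwarding
}

def audit_sshd_config(text: str) -> list[str]:
    """Return findings for settings that differ from the expected baseline."""
    actual = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and whitespace
        if line:
            key, _, value = line.partition(" ")
            actual[key] = value.strip()
    return [
        f"{key}: expected '{want}', found '{actual.get(key, '<unset>')}'"
        for key, want in EXPECTED.items()
        if actual.get(key) != want
    ]

sample = "PermitRootLogin yes\nPasswordAuthentication no\n"
for finding in audit_sshd_config(sample):
    print(finding)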

The US National Institute of Standards and Technology classifies information security controls into three categories:

Technical controls which traditionally include products and processes (such as firewalls, antivirus software, intrusion detection, and encryption techniques) that focus mainly on protecting an organization’s ICTs and the information flowing across and stored in them.

Operational controls which include enforcement mechanisms and methods of correcting operational deficiencies that various threats could exploit; physical access controls, backup capabilities, and protection from environmental hazards are examples of operational controls.

Management controls, such as usage policies, employee training, and business continuity planning, target information security’s nontechnical areas (Baker & Wallace, 2007).
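
One simple way to make this three-way classification operational is to tag each implemented control with its NIST category and see how the portfolio is distributed. The control names below are hypothetical examples, not entries from any specific catalog.

# Illustrative tagging of example controls with NIST's three categories.
# Control names are hypothetical, not drawn from a specific control catalog.

from collections import Counter
from enum import Enum

class ControlCategory(Enum):
    TECHNICAL = "technical"      # products and processes (firewalls, AV, IDS, encryption)
    OPERATIONAL = "operational"  # enforcement and correction mechanisms (physical access, backups)
    MANAGEMENT = "management"    # policy, training, business continuity planning

controls = {
    "perimeter firewall":           ControlCategory.TECHNICAL,
    "full-disk encryption":         ControlCategory.TECHNICAL,
    "offsite backup rotation":      ControlCategory.OPERATIONAL,
    "badge-controlled server room": ControlCategory.OPERATIONAL,
    "acceptable-use policy":        ControlCategory.MANAGEMENT,
    "annual security training":     ControlCategory.MANAGEMENT,
}

# Count how much of the portfolio sits in each category. The Baker & Wallace survey
# discussed below found management controls were the least implemented of the three.
print(Counter(category.value for category in controls.values()))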

While these categorical controls are most often applied to protect against external threats, they can also be applied to mitigate internal threats. But there is a problem with the underlying assumption that internal and external threats are somehow equal; more specifically, that the information available to external adversaries, which drives technical controls, is no more significant, detailed, or advanced than that available to internal adversaries. This is inconsistent with the reality of the situation. In 2007, 59% of survey respondents perceived that they had experienced insider abuse of network resources, and about one in four respondents perceived that over 40% of their total financial losses from cyber attack were due to insider activities (Pfleeger, Predd, Hunker, & Bulford, 2010).

This calls into question the quality of the information security program, and it leads us to ask what other factors, if any, hinder effective implementation of IT security controls.

Wade Baker and Linda Wallace of Virginia Tech pursued this issue. They recognize that, as sophisticated as these technologies have become, technical approaches alone cannot solve security problems, for the simple reason that information security is not merely a technical problem; it is also a social and organizational problem. They note that the high implementation and maintenance costs of security controls are increasing pressure on managers to distinguish between controls their organizations need and those that are less critical. Moreover, identifying the optimal level at which to implement individual controls is a delicate balance of risk reduction and cost efficiency. They further assert that, because they are unsure of the best controls for their situations, managers often deploy as many as possible, often without regard to their quality or effectiveness. In some instances, controls intended to correct security deficiencies actually add deficiencies to the system (Baker & Wallace, 2007).

Other findings from their survey note that, across all of the organizations involved, management controls had substantially lower implementation ratings than controls in the technical and operational categories. Baker and Wallace go on to assert that organizations must realize that a large proportion of information security problems extend far beyond technology, and must learn to appreciate the role that less technical controls, such as policy development, play in minimizing the impact of security breaches on mission-critical operations (Baker & Wallace, 2007).

This suggests a disparity in decision making between what categorical controls are available and which of those controls are actually implemented. Considered carefully, this disparity leads to a larger question about the behavior behind managing information technology security controls and about the point at which we decide to act. A 2008 behavioral study of insider-threat risks does just that: its purpose is to use a system dynamics model, of the kind often used in cognitive psychology, to apply a behavioral theory to insider threat risk dynamics.

Evidence from a joint U.S. Secret Service and CERT Coordination Center (CERT/CC) study of actual insider computer-based crimes indicates that managers, at times, make decisions intended to enhance organizational performance and productivity with the unintended consequence of magnifying the organization's exposure to, and likelihood of, insider attacks. Additionally, according to the Dynamic Trigger Hypothesis, an organizational focus on external threats can lead to complacency, allowing an insider to gain confidence by exploiting known weaknesses in organizational defenses. An effective defense against insider attacks encompasses technology as well as an understanding of behavior. Because best-practices guidance to date has focused almost exclusively on implementing technological controls, this study focuses on the generally neglected portion of the defense equation (Martinez-Moyano, Rich, Conrad, Andersen, & Stewart, 2008).

The study distinguishes between selection (what to implement) and detection (detecting criminal behavior), both of which are categorical decision problems. It further asserts that detection problems often involve low base rates and a high level of uncertainty. The base rate refers to the proportion of individuals in a group likely to engage in unethical behavior during a specific time frame: the candidates for insider attacks. The uncertainty involves the lack of knowledge available to defenders in organizations. Given the variance in information, threats can be so small as to be difficult to distinguish from normal activity, leaving information workers and security officers to make judgments when they evaluate potential threats. When these two groups of personnel decide to act is based on what is called a decision threshold, likened to a mammographer's decision to recommend a biopsy for breast cancer. Decision thresholds for information workers and security officers will differ based on endogenous factors: the motivations of attackers, the staff's individual circumstances, and the fact that malicious insider activity may not be distinguishable from normal activity (Martinez-Moyano et al., 2008).

As a result, Signal Detection Theory (SDT) was applied within a system dynamics model (SDM) to observe behavioral insider threat risks. SDT separates the accuracy of judgment from the decision threshold, which is typically measured by the likelihood ratio at the threshold value, and many descriptive studies have used SDT to measure the accuracy of judgments made by decision makers (Martinez-Moyano et al., 2008).
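
For readers unfamiliar with SDT, the sketch below shows the standard equal-variance Gaussian formulation: sensitivity (d') measures how well a judge separates real threats from benign activity, while the criterion captures where the decision threshold sits. The hit and false-alarm rates are invented for illustration and are not taken from the study.

# Minimal equal-variance Gaussian Signal Detection Theory sketch.
# Hit rate and false-alarm rate values are invented for illustration.

from statistics import NormalDist

std_normal = NormalDist()

def sdt_measures(hit_rate: float, false_alarm_rate: float) -> tuple[float, float]:
    """Return (d_prime, criterion_c): sensitivity and decision-threshold placement."""
    z_hit = std_normal.inv_cdf(hit_rate)
    z_fa = std_normal.inv_cdf(false_alarm_rate)
    d_prime = z_hit - z_fa             # separation between signal and noise distributions
    criterion = -0.5 * (z_hit + z_fa)  # c > 0 means a conservative (high) threshold
    return d_prime, criterion

# Example: a security officer who flags 80% of true incidents but also 10% of benign events.
d_prime, criterion = sdt_measures(hit_rate=0.80, false_alarm_rate=0.10)
print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")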

The researchers note that the SDM captures the effect of information flows over time on an organization's ability to detect and respond to threats. The main purpose of system dynamics is to discover the structure that conditions the observed behavior of systems over time; system dynamicists pose dynamic hypotheses that endogenously describe the observed behavior of systems, linking hypothesized structures with observed behaviors (Martinez-Moyano et al., 2008).
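
As a rough illustration of the stock-and-flow structure a system dynamics model is built from (far simpler than, and not taken from, the published model), consider a single stock of organizational detection capability that erodes through complacency and is reinforced when incidents are detected:

# Toy system-dynamics loop: one stock integrated with a fixed time step (Euler method).
# The structure and parameter values are hypothetical and only illustrate the
# stock-and-flow idea, not the Martinez-Moyano et al. model.

def simulate(steps: int = 36, dt: float = 1.0) -> list[float]:
    capability = 0.8            # stock: organizational detection capability (0..1)
    erosion_rate = 0.05         # complacency slowly drains the stock
    reinvestment_rate = 0.30    # detected incidents drive reinvestment
    incident_pressure = 0.10    # constant probing pressure on the organization
    history = []
    for _ in range(steps):
        detected = incident_pressure * capability   # more capability -> more detections
        inflow = reinvestment_rate * detected        # detections trigger reinvestment
        outflow = erosion_rate * capability          # complacency erodes capability
        capability = min(1.0, max(0.0, capability + dt * (inflow - outflow)))
        history.append(capability)
    return history

trajectory = simulate()
print(f"detection capability after {len(trajectory)} monthly steps: {trajectory[-1]:.2f}")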

How does this relate to the behavior of insider threats as a factor in effective information technology security controls? Insider threats in organizations persist despite compelling evidence of their cost and the proliferation of recommendations from security experts, many of which come in the form of new technology and best practices. This research approach illuminates the behavioral aspects of the insider threat. According to data analyzed by CERT/CC, organizations face three primary types of insider threats: long-term fraud, sabotage, and espionage (information theft); this specific research focuses on long-term fraud (Martinez-Moyano et al., 2008).

To summarize, information workers and security officers use their knowledge to decide which anomalies to investigate, with unknown positive or negative outcomes (right or wrong). In general, judgments of the likelihood of threats are compared to decision thresholds, and defensive actions are launched, or not, creating specific outcomes. The outcomes, in turn, become beliefs that, when combined with organizational incentives, influence future positions of these decision thresholds (Martinez-Moyano et al., 2008). In short, it is an ecosystem in which "experience" is developed, contextually specific to the decision of when to apply information security controls. The study describes this more technically as a comparison of judgment calculations against decision thresholds, and the level at which this triggers further inspection in the form of control audits and similar actions.

The roles of information workers and security officers play an important part in the effectiveness of security controls. The two groups differ in the incentives attached to their functions within the company, and they also differ in how their attention and memory (combined with, and related to, experience) form beliefs, which in turn shape how they apply information technology security controls in their roles (Martinez-Moyano et al., 2008).

For security officers to change their beliefs, a rather lengthy process must be followed. It is not that security officers are inflexible; the model was parameterized with the idea that security officers rely heavily on their long-term knowledge of security issues and are less forgetful of consequences than the individuals responsible for day-to-day operations. Information workers can change beliefs most easily because they are closest to the technology and operations, and because they serve themselves in their roles (Martinez-Moyano et al., 2008).
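
One simple way to express that asymmetry, offered here as a hypothetical sketch rather than the study's actual parameterization, is exponential smoothing with different memory weights: a small learning rate for security officers and a large one for information workers.

# Hypothetical sketch: belief updating via exponential smoothing with different
# memory weights. Not the parameterization used by Martinez-Moyano et al. (2008).

def update_belief(belief: float, observed_threat_level: float, learning_rate: float) -> float:
    """Blend the prior belief with the newest observation."""
    return (1 - learning_rate) * belief + learning_rate * observed_threat_level

officer_belief = worker_belief = 0.2      # both start believing threat likelihood is low
observations = [0.9, 0.1, 0.1, 0.1]       # one alarming event followed by quiet periods

for obs in observations:
    officer_belief = update_belief(officer_belief, obs, learning_rate=0.1)  # long memory
    worker_belief = update_belief(worker_belief, obs, learning_rate=0.6)    # short memory

print(f"security officer belief: {officer_belief:.2f}")   # moves slowly away from its prior
print(f"information worker belief: {worker_belief:.2f}")  # spikes after the event, then decays fast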

In the presence, or as a result, of attacker activity, management is likely to respond by increasing the training, skills, and abilities of its staff: the alignment training scenario run. In this run, as opposed to the base-case run, where information workers set their decision thresholds solely on the basis of their outcome-related intelligence (their own perceptions and experience), the internal attackers never reached the point at which their assessment of the likelihood of attack success surpassed their attack threshold, and therefore they never went into attack mode. The precursors generated to test the control system never produced enough confidence to initiate the launch of attacks (Martinez-Moyano et al., 2008). This attests to an organizational policy and management practice of constant training and awareness of contemporary security tools and controls. The result was an 89.06% efficiency in thwarting insider attacks, 42.5% better than the base-case run.

Another approach was to train the defenders to become better judges instead of merely making them more informed: the consistency training run. This resulted in 65.38% efficiency, 4.62% better than the base-case run. Both the consistency training approach and the alignment training approach produced better financial performance, especially through reduced costs incurred from security audits triggered by security incidents. In fact, the only scenario that actually decreased the inside attackers' inferred probability of attack was the alignment training scenario, not the consistency scenario.

The two research examples above lead to a rather contemporary answer to the insider threat: a framework proposed in 2010 to help mitigate the insider threat problem. This framework is built around completeness, uniqueness, and comparativeness as foundations for addressing insider threats. Four reference points help us understand insiders and the actions posing risks: the organization, the individual, the information technology system, and the environment. The result is an even more contextual response than the earlier research efforts from 2007 and 2008.

This research considers previous definitions of insider threats and adapts them into a broader one: an insider threat is an insider's action that puts an organization's data, processes, or resources at risk in a disruptive or unwelcome way (Pfleeger et al., 2010).

The organization plays a role in this framework because it must first identify who an insider is. It must then clarify the policies that regulate its security control posture. The framework differentiates between de jure policy (the official organizational policy) and de facto policy (the set of security practices the organization, or groups or individuals within it, actually follows) (Pfleeger et al., 2010).

The system is inherently involved because its job is to implement organizational IT security policy and controls. As a result, the system can be both the facilitator and the victim of insider threats.

As for the individual's role in the framework, inappropriate insider behavior is not new: approximately one-third to three-quarters of all employees have engaged in some type of fraud, vandalism, or sabotage in the workplace. The factors shaping inappropriate workplace behavior have been studied extensively, and social science research on workplace deviance provides useful insight into the characteristics of inappropriate insider behavior. However, even organizational behavior researchers do not have a consistent terminology for workplace deviance; the same general domain is described as antisocial behavior, employee vice, organizational misbehavior, workplace aggression, organizational retaliation behavior, noncompliant behavior, and organization-motivated aggression (Pfleeger et al., 2010).

Nor do malicious insiders share a common profile, a finding consistent, surprisingly, with studies of terrorists. Previous work by the Software Engineering Institute's CERT addresses motivations including financial gain and revenge; however, individuals may have multiple reasons for engaging in or permitting misuse. Studies of espionage and white-collar crime have failed to exhibit a correlation between personal attributes and malicious intent to do harm. Therefore, attempts to identify or classify insider threats by screening for specific personal characteristics paint an imperfect picture. Because the set of malicious insiders is small and diverse, no single personal characteristic or set of characteristics can act as a reliable predictor of future misbehavior; the characteristics of bad actors are certainly shared by good actors too. Consider as well that even malicious insiders can be tricked, and, as the research asserts, intent is important because it helps shape the response space (Pfleeger et al., 2010). This is an issue not clearly addressed in many, if any, popular security models or frameworks.

Lastly, the environment must be considered because legal frameworks and regulatory systems vary across states and countries. The environment also defines what is morally or ethically permissible, or even socially acceptable, across cultures. This allows the framework to consider the context of location as it pertains to effective information technology security controls with respect to insider threats.

So how do we respond to "insiders behaving badly," as the researchers put it? They assert that once the nature of the insider action has been identified, additional questions must be asked in relation to the four reference points (dimensions). These questions are:

Between the Organization and the Individual, the framework asks whether the action violates de jure or de facto policy. It then asks whether either of these policies is deficient, followed by questions about the intent and motive behind the insider's action.

Between the Organization and the System, the framework asks whether the policies are implemented correctly on the system. Incorrectly implemented policies can result in actions that are not all malicious, an important distinction to make.

Between the Individual and the System, the framework asks what role the system played in the insider's action. This helps determine the system's complicit role, if any, in the insider threat attack.

Between the Environment and the Individual, the framework asks whether the insider's actions were legal and whether they were ethical. This shapes the response space by determining whether the company must report the incident to public authorities or can resolve the issue internally.

Between the Environment and the Organization, the framework asks whether the policies are legal and whether they are ethical. The organization functions within the physical environment (country or state) in which it resides; in the case of multi-jurisdictional organizations, whether across different states or nations, some approved company policies may not be legal in all locations.
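
For illustration, the pairwise questions above can be collected into a simple checklist keyed by the pair of reference points involved. The rendering below is a hypothetical sketch, not an implementation proposed by the authors.

# Hypothetical rendering of the Pfleeger et al. (2010) pairwise questions as a
# checklist keyed by the two reference points involved. Illustrative only.

FRAMEWORK_QUESTIONS = {
    ("organization", "individual"): [
        "Does the action violate de jure (official) policy?",
        "Does the action violate de facto (practiced) policy?",
        "Is either policy deficient?",
        "What were the insider's intent and motive?",
    ],
    ("organization", "system"): [
        "Are the policies implemented correctly on the system?",
    ],
    ("individual", "system"): [
        "What role did the system play in the insider's action?",
    ],
    ("environment", "individual"): [
        "Was the action legal?",
        "Was the action ethical?",
    ],
    ("environment", "organization"): [
        "Are the policies legal in every jurisdiction where they apply?",
        "Are the policies ethical?",
    ],
}

def questions_for(point_a: str, point_b: str) -> list[str]:
    """Look up the questions for a pair of reference points, in either order."""
    return FRAMEWORK_QUESTIONS.get((point_a, point_b)) or FRAMEWORK_QUESTIONS.get((point_b, point_a), [])

for question in questions_for("individual", "organization"):
    print("-", question)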

However, the authors of this research also recognize that there are several things the framework does not do:

1. The classification of insider actions is fundamentally organization-specific. There is no canonical organization with canonical security policies and system implementation to which a given organization can be compared.

2. Some factors, such as whether the insider is part of a group, are omitted.

3. The descriptions of individual motivation and intent exclude the individual's perspective; for example, the framework does not address whether an insider's action was motivated by financial gain or by revenge, nor does it include the events leading up to the final unwelcome action.

4. The nature of the insider threat continues to evolve as IT (and our understanding of it) evolves. Moreover, the roles of system and organization may change with the ubiquity of IT. Even if a technological border can be defined, physical borders and the ability to circumvent them through nontechnical means make it increasingly difficult to distinguish between insider and outsider (Pfleeger et al., 2010).

To conclude, the EISA is an information technology security control model that, when combined with the EA3 model, identifies security controls respective to an organization's particular purpose in an industry. While NIST, FIPS, and DISA produce technical control lists that point information technology security staff in certain directions, those lists do not adequately address the quality of the controls, nor does there appear to be much managerial focus on the quality, rather than the quantity, of the security posture, especially in times of fiscal restraint.

To address the quality of security controls, one must also understand the underlying behavior of those who enforce and exploit them. Applying cognitive-behavioral models such as Signal Detection Theory within a system dynamics model to understand and counteract insider threat attacks contextualizes a security framework in the same manner that a business strategy and its drivers contextualize the EA3 model.

Further combining this cognitive-behavioral approach with a four-dimensional insider threat framework extends the approach in the same manner that the EA3 model does the EISA model.

It is my belief that the combination of a behavioral approach with such an insider threat framework could provide a sufficient, and even successful, model to address and combat organizational insider threats. At the least, it invites further consideration and research into this premise.

Baker, W. H., & Wallace, L. (2007). Is Information Security Under Control?: Investigating Quality in Information Security Management. IEEE Security and Privacy, 5(1), 36–44.

Martinez-Moyano, I. J., Rich, E., Conrad, S., Andersen, D. F., & Stewart, T. R. (2008). A behavioral theory of insider-threat risks: A system dynamics approach. ACM Transactions on Modeling and Computer Simulation (TOMACS), 18(2).

Pfleeger, S. L., Predd, J. B., Hunker, J., & Bulford, C. (2010). Insiders Behaving Badly: Addressing Bad Actors and Their Actions. IEEE Transactions on Information Forensics and Security, 5(1), 169–179.