
About the National Science and Technology Council

The National Science and Technology Council (NSTC) was established by Executive Order on November 23, 1993. This Cabinet-level Council is the principal means for the President to coordinate science and technology across the diverse parts of the Federal research and development enterprise. Chaired by the President, the NSTC membership consists of the Vice President, the Science Advisor to the President, Cabinet secretaries, agency heads with significant science and technology responsibilities, and other White House officials.

An important objective of the NSTC is the establishment of clear national goals for Federal science and technology investments in areas ranging from information technologies and health research to improving transportation systems and strengthening fundamental research. The Council prepares research and development strategies that are coordinated across Federal agencies to form investment packages aimed at accomplishing multiple national goals.

About this Report

This report was developed by the Interagency Working Group (IWG) on Cyber Security and Information Assurance (CSIA), an organization under the NSTC. The CSIA IWG reports jointly to the Subcommittee on Infrastructure of the NSTC’s Committee on National and Homeland Security, and the Subcommittee on Networking and Information Technology Research and Development (NITRD) of the NSTC’s Committee on Technology.

The report is published by the National Coordination Office for Networking and Information Technology Research and Development (NCO/NITRD). The NCO/NITRD supports overall planning, budget, and assessment activities for the multiagency NITRD Program under the auspices of the NSTC’s NITRD Subcommittee.

To Request Additional Copies

To request additional copies of the Federal Plan for Cyber Security and Information Assurance Research and Development or other NITRD Program publications, please contact: NCO/NITRD, Suite II-405, 4201 Wilson Boulevard, Arlington, Virginia 22230; (703) 292-4873; fax: (703) 292-9097; e-mail: [email protected]. Electronic versions of this report and other NITRD documents are also available on the NITRD Web site: http://www.nitrd.gov.

Copyright Information

This is a work of the U.S. Government and is in the public domain. It may be freely distributed, copied, and translated; acknowledgement of publication by the National Coordination Office for Networking and Information Technology Research and Development is requested. Any translation should include a disclaimer that the accuracy of the translation is the responsibility of the translator and not the NCO/NITRD. It is requested that a copy of any translation be sent to the NCO/NITRD at the above address.


NATIONAL SCIENCE AND TECHNOLOGY COUNCIL

FEDERAL PLAN FOR CYBER SECURITY AND INFORMATION ASSURANCE RESEARCH AND DEVELOPMENT

A Report by the Interagency Working Group on Cyber Security and Information Assurance

Subcommittee on Infrastructure and

Subcommittee on Networking and Information Technology Research and Development

April 2006


EXECUTIVE OFFICE OF THE PRESIDENT
OFFICE OF SCIENCE AND TECHNOLOGY POLICY

WASHINGTON, D.C. 20502

Dear colleague:

The Nation’s information technology (IT) infrastructure – the seamless fabric of interconnected computing and storage systems, mobile devices, software, wired and wireless networks, and related technologies – has become indispensable to public- and private-sector activities throughout our society and around the globe. Pervasive, cost-effective communication enables a vast, constant flow of information that has transformed work environments and processes in government, business and industry, and advanced research, health care, and many other fields.

This IT infrastructure also supports other critical U.S. infrastructures, such as those that supply our food, water, energy, financial transactions, and transportation, as well as public health, emergency response, and other vital services. The interconnectivity that makes seamless delivery of essential information and services possible, however, also exposes many previously isolated critical infrastructures to the risk of cyber attacks mounted through the IT infrastructure by hostile adversaries. The exposure of critical infrastructure to cyber-based attacks is expected to increase, as convergence of network and device technologies accelerates, and as systems increasingly connect to the Internet to provide added functionality or greater efficiency.

Safeguarding the Nation’s IT infrastructure and critical infrastructure sectors for the future is a matter of national and homeland security. Developed by the Cyber Security and Information Assurance Interagency Working Group under the auspices of the National Science and Technology Council, this Federal Plan for Cyber Security and Information Assurance Research and Development presents a coordinated interagency framework for addressing critical gaps in current cyber security and information assurance capabilities and technologies. The Plan focuses on interagency research and development (R&D) priorities and is intended to complement agency-specific prioritization and R&D planning efforts in cyber security and information assurance. The Plan also describes the key Federal role in supporting R&D to strengthen the overall security of the IT infrastructure through development of fundamentally more secure next-generation technologies.

I commend this Plan as an important step in addressing the Administration’s national and cyber security priorities, and I look forward to working with Federal agencies and the private sector to develop the research roadmap called for in the Plan.

Sincerely,

John H. Marburger, III
Director


TABLE OF CONTENTS

EXECUTIVE SUMMARY

PART I: Federal Plan for Cyber Security and Information Assurance R&D

OVERVIEW

    Technology Trends
    The Federal Role
    The Federal Plan in Summary

        Plan Background

VULNERABILITIES, THREATS, AND RISK

    Types of Threats and Threat Agents
    Attackers’ Asymmetric Advantages

        Malicious Hackers
        Organized Crime
        Terrorists
        Nation States

    Threat and Vulnerability Trends
        Insiders
        Outsourcing
        Supply Chain Attacks
        Industrial Espionage
        State-Sponsored Espionage
        Other Trends

    Immediate Concerns
        Industrial Process Control Systems
        Banking and Finance Sector

ANALYSIS AND PLAN FRAMEWORK

    Recent Calls for Cyber Security and Information Assurance R&D
        OSTP/OMB Memorandum on FY 2007 Administration R&D Budget Priorities
        PITAC Cyber Security Report
        The National Strategy to Secure Cyberspace
        Cyber Security Research and Development Act
        INFOSEC Research Council (IRC) Hard Problem List

    Strategic Federal Objectives
    Development of Baseline Information

        Cyber Security and Information Assurance R&D Categories and Technical Topics
        Prioritization of Technical Topics


        Investment Analysis
        R&D Technical Topic Perspectives

    R&D Technical and Funding Priorities
        Commentary on Analysis of Priorities

    Table 1: Top Technical and Funding Priorities

    Cyber Security and Information Assurance R&D Priorities: Comparison with PITAC and IRC

FINDINGS AND RECOMMENDATIONS

CONCLUSIONS

PART II: Technical Perspectives on Cyber Security and Information Assurance R&D

1. Functional Cyber Security and Information Assurance

    1.1 Authentication, Authorization, and Trust Management
    1.2 Access Control and Privilege Management
    1.3 Attack Protection, Prevention, and Preemption
    1.4 Large-Scale Cyber Situational Awareness
    1.5 Automated Attack Detection, Warning, and Response
    1.6 Insider Threat Detection and Mitigation
    1.7 Detection of Hidden Information and Covert Information Flows
    1.8 Recovery and Reconstitution
    1.9 Forensics, Traceback, and Attribution

2. Securing the Infrastructure

    2.1 Secure Domain Name System
    2.2 Secure Routing Protocols
    2.3 IPv6, IPsec, and Other Internet Protocols
    2.4 Secure Process Control Systems

3. Domain-Specific Security

    3.1 Wireless Security
    3.2 Secure Radio Frequency Identification
    3.3 Security of Converged Networks and Heterogeneous Traffic
    3.4 Next-Generation Priority Services

4. Cyber Security and Information Assurance Characterization and Assessment

    4.1 Software Quality Assessment and Fault Characterization
    4.2 Detection of Vulnerabilities and Malicious Code
    4.3 Standards
    4.4 Metrics
    4.5 Software Testing and Assessment Tools


    4.6 Risk-Based Decision Making
    4.7 Critical Infrastructure Dependencies and Interdependencies

5. Foundations for Cyber Security and Information Assurance

    5.1 Hardware and Firmware Security
    5.2 Secure Operating Systems
    5.3 Security-Centric Programming Languages
    5.4 Security Technology and Policy Management Methods and Policy Specification Languages
    5.5 Information Provenance
    5.6 Information Integrity
    5.7 Cryptography
    5.8 Multi-Level Security
    5.9 Secure Software Engineering
    5.10 Fault-Tolerant and Resilient Systems
    5.11 Integrated, Enterprise-Wide Security Monitoring and Management
    5.12 Analytical Techniques for Security Across the IT Systems Engineering Life Cycle

6. Enabling Technologies for Cyber Security and Information Assurance R&D

    6.1 Cyber Security and Information Assurance R&D Testbeds
    6.2 IT System Modeling, Simulation, and Visualization
    6.3 Internet Modeling, Simulation, and Visualization
    6.4 Network Mapping
    6.5 Red Teaming

7. Advanced and Next-Generation Systems and Architectures

    7.1 Trusted Computing Base Architectures
    7.2 Inherently Secure, High-Assurance, and Provably Secure Systems and Architectures
    7.3 Composable and Scalable Secure Systems
    7.4 Autonomic Systems
    7.5 Architectures for Next-Generation Internet Infrastructure
    7.6 Quantum Cryptography

8. Social Dimensions of Cyber Security and Information Assurance

    8.1 Trust in the Internet
    8.2 Privacy

APPENDICES

    Appendix A: CSIA IWG Agency Roles and Responsibilities
    Appendix B: The Networking and Information Technology Research and Development Program
    Appendix C: Acronyms

ACKNOWLEDGEMENTS


EXECUTIVE SUMMARY

Powerful personal computers, high-bandwidth and wireless networking technologies, and the widespread use of the Internet have transformed stand-alone computing systems and predominantly closed networks into the virtually seamless fabric of today’s information technology (IT) infrastructure. This infrastructure provides for the processing, transmission, and storage of vast amounts of vital information used in virtually every facet of society, and it enables Federal agencies to routinely interact with each other as well as with industry, private citizens, state and local governments, and the governments of other nations. As the IT infrastructure has broadened to global scale, the volume of electronic information exchanged through what is popularly known as “cyberspace” has grown dramatically and new applications and services proliferate.

The IT infrastructure supports critical U.S. infrastructures such as power grids, emergency communications systems, financial systems, and air-traffic-control networks. While the vast majority of these critical infrastructures (including their IT components) are owned and operated by the private sector, ensuring their operational stability and security is vital to U.S. national, homeland, and economic security interests.

Cyber threats are asymmetric, surreptitious, and constantly evolving – a single individual or a small group anywhere in the world can inexpensively and secretly attempt to penetrate systems containing vital information or mount damaging attacks on critical infrastructures. Attack tools and resources are readily available on the Internet and new vulnerabilities are continually discovered and exploited. Moreover, the pervasive interconnectivity of the IT infrastructure makes cyber attack an increasingly attractive prospect for adversaries that include terrorists as well as malicious hackers and criminals.

The Federal Role

In this environment of heightened risk, the Federal government has an essential role to play in cyber security and information assurance (CSIA) research and development (R&D). As in other science, technology, and engineering fields of critical importance to the Nation, Federal leadership should energize a broad collaboration with private-sector partners and stakeholders in academia and the national and industry laboratories where the bulk of Federal research is carried out. Such a partnership can chart a national R&D agenda for strengthening the security of the Nation’s IT infrastructure.

This Federal Plan for Cyber Security and Information Assurance Research and Development takes the first step toward developing that agenda. The Plan also responds to recent calls for improved Federal cyber security and information assurance R&D, as outlined in the following documents: the OSTP/OMB Memorandum on Administration FY 2007 R&D Budget Priorities; Cyber Security: A Crisis of Prioritization, the 2005 report of the President’s Information Technology Advisory Committee (PITAC); the 2003 National Strategy to Secure Cyberspace; and the 2002 Cyber Security Research and Development Act (P.L. 107-305).

Developed by the Cyber Security and Information Assurance Interagency Working Group (CSIA IWG), an organization under the National Science and Technology Council (NSTC), the Plan provides baseline information and a technical framework for coordinated multi-agency R&D in cyber security and information assurance. Other areas – including policy making (e.g., legislation, regulation, funding, intellectual property, Internet governance), economic issues, IT workforce education and training, and operational IT security approaches and best practices – also have substantial roles to play in improving cyber security and information assurance. However, these subjects are outside the scope of the Plan, which addresses only the role of Federal R&D.

Likewise, the Plan is not a budget document and thus does not include current or proposed agency spending levels for cyber security and information assurance R&D. Agencies determine their individual budget priorities according to their mission needs and requirements.

Strategic Federal R&D Objectives

The following strategic Federal objectives for cyber security and information assurance R&D are derived from a review of current legislative and regulatory policy requirements, analyses of cyber security threats and infrastructure vulnerabilities, and agency mission requirements:

1. Support research, development, testing, and evaluation of cyber security and information assurance technologies aimed at preventing, protecting against, detecting, responding to, and recovering from cyber attacks that may have large-scale consequences.

2. Address cyber security and information assurance R&D needs that are unique to critical infrastructures.

3. Develop and accelerate the deployment of new communication protocols that better assure the security of information transmitted over networks.

4. Support the establishment of experimental environments such as testbeds that allow government, academic, and industry researchers to conduct a broad range of cyber security and information assurance development and assessment activities.

5. Provide a foundation for the long-term goal of economically informed, risk-based cyber security and information assurance decision making.

6. Provide novel and next-generation secure IT concepts and architectures through long-term research.

7. Facilitate technology transition and diffusion of Federally funded R&D results into commercial products and services and private-sector use.

Plan’s Baseline Information

To provide a starting point and framework for coordinated interagency R&D to improve the stability and security of the IT infrastructure, the CSIA IWG agencies developed the following baseline information about ongoing Federal R&D activities in cyber security and information assurance:

R&D categories and technical topics – a list of cyber security and information assurance R&D technical topics grouped in broad categories. While not intended to be definitive, the list provides a structure for a survey and analysis of agency technical and funding priorities.

R&D technical and funding priorities – a set of interagency priorities derived by aggregating agency information. These interagency priorities differ from individual agency priorities, which vary based on agency missions and R&D requirements.

Investment analysis – a comparison of interagency technical and investment priorities to identify topics that are interagency technical priorities and in which there might be investment opportunities. (Table 1, pages 18-19.)

R&D technical topic perspectives – commentaries on the status of R&D in each topic. Prepared and reviewed by agency representatives with expertise in specific topics, these commentaries describe the topic and its importance, the state of the art, and gaps in current capabilities. (Part II, beginning on page 31.)

Together, the technical and funding priorities, investment analysis, and technical commentaries set the stage for the development of a cyber security and information assurance R&D roadmap and other coordinated activities proposed in the Plan’s recommendations.


Findings and Recommendations

Strategic interagency R&D is needed to strengthen the cyber security and information assurance of the Nation’s IT infrastructure. Planning and conducting such R&D will require concerted Federal activities on several fronts as well as collaboration with the private sector. The specifics of the strategy proposed in this Plan are articulated in a set of findings and recommendations. Presented in greater detail in the report, these findings and recommendations are summarized as follows:

1. Target Federal R&D investments to strategic cyber security and information assurance needs

Federal cyber security and information assurance R&D managers should reassess the Nation’s strategic and longer-term cyber security and information assurance needs to ensure that Federal R&D addresses those needs and complements areas in which the private sector is productively engaged.

2. Focus on threats with the greatest potential impact

Federal agencies should focus cyber security and information assurance R&D investments on high-impact threats as well as on investigation of innovative approaches to increasing the overall security and information assurance of IT systems.

3. Make cyber security and information assurance R&D both an individual agency and an interagency budget priority

Agencies should consider cyber security and information assurance R&D policy guidance as they address their mission-related R&D requirements. To achieve the greatest possible benefit from investments throughout the Federal government, cyber security and information assurance R&D should have high priority for individual agencies as well as for coordinated interagency efforts.

4. Support sustained interagency coordination and collaboration on cyber security and information assurance R&D

Sustained coordination and collaboration among agencies will be required to accomplish the goals identified in this Plan. Agencies should participate in interagency R&D coordination and collaboration on an ongoing basis.

5. Build security in from the beginning

The Federal cyber security and information assurance R&D portfolio should support fundamental R&D exploring inherently more secure next-generation technologies that will replace today’s patching of the current insecure infrastructure.

6. Assess security implications of emerging information technologies

The Federal government should assess the security implications and the potential impact of R&D results in new information technologies as they emerge in such fields as optical computing, quantum computing, and pervasively embedded computing.

7. Develop a roadmap for Federal cyber security and information assurance R&D

Agencies should use this Plan’s technical priorities and investment analyses to work with the private sector to develop a roadmap of cyber security and information assurance R&D priorities. This effort should emphasize coordinated agency activities that address technical and investment gaps and should accelerate development of strategic capabilities.

8. Develop and apply new metrics to assess cyber security and information assurance

As part of roadmapping, Federal agencies should develop and implement a multi-agency plan to support the R&D for a new generation of methods and technologies for cost-effectively measuring IT component, network, and system security. These methods should evolve with time.


9. Institute more effective coordination with the private sector

The Federal government should review private-sector cyber security and information assurance practices and countermeasures to help identify capability gaps in existing technologies, and should engage the private sector in efforts to better understand each other’s views on cyber security and information assurance R&D needs, priorities, and investments. Federal agencies supporting cyber security and information assurance R&D should improve communication and coordination with operators of both Federal and private-sector critical infrastructures with shared interests. Information exchange and outreach activities that accelerate technology transition should be integral parts of Federal cyber security and information assurance R&D activities.

10. Strengthen R&D partnerships, including those with international partners

The Federal government should foster a broad partnership of government, the IT industry, researchers, and private-sector users to develop, test, and deploy a more secure next-generation Internet. The Federal government should initiate this partnership by holding a national workshop to solicit views and guidance on cyber security and information assurance R&D needs from stakeholders outside of the Federal research community. In addition, impediments to collaborative international R&D should be identified and addressed in order to facilitate joint activities that support the common interests of the United States and international partners.


Part I: Federal Plan for Cyber Security and Information Assurance R&D


OVERVIEW

In less than two decades, advances in information and communications technologies have revolutionized government, scientific, educational, and commercial infrastructures. Powerful personal computers, high-bandwidth and wireless networking technologies, and the widespread use of the Internet have transformed stand-alone systems and predominantly closed networks into a virtually seamless fabric of interconnectivity. The types of devices that can connect to this vast information technology (IT) infrastructure have multiplied to include not only fixed wired devices but mobile wireless ones. A growing percentage of access is through always-on connections, and users and organizations are increasingly interconnected across physical and logical networks, organizational boundaries, and national borders. As the fabric of connectivity has broadened, the volume of electronic information exchanged through what is popularly known as “cyberspace” has grown dramatically and expanded beyond traditional traffic to include multimedia data, process control signals, and other forms of data. New applications and services that use IT infrastructure capabilities are constantly emerging.

The IT infrastructure has become an integral part of the critical infrastructures of the United States. The IT infrastructure’s interconnected computers, servers, storage devices, routers, switches, and wireline, wireless, and hybrid links increasingly support the functioning of such critical U.S. capabilities as power grids, emergency communications systems, financial systems, and air-traffic-control networks. While the vast majority of the critical infrastructures (including the IT components of those infrastructures) are owned and operated by the private sector, ensuring their operational stability and security is vital to U.S. national, homeland, and economic security interests.

In addition to its underlying role in critical U.S. infrastructures, the IT infrastructure enables large-scale processes throughout the economy, facilitating complex interactions among systems of systems across global networks. Their split-second interactions propel innovation in industrial design and manufacturing, e-commerce, communications, and many other economic sectors. The IT infrastructure provides for the processing, transmission, and storage of vast amounts of vital information used in every domain of society, and it enables Federal agencies to rapidly interact with each other as well as with industry, private citizens, state and local governments, and the governments of other nations.

Technology Trends

The risks associated with current and anticipated vulnerabilities of, threats to, and attacks against the IT infrastructure provide the rationale for this report. Fast-shifting trends in both technologies and threats make it likely that the security issues of the IT infrastructure will only intensify over the next decade. Key areas for concern include:

❖ The increasing complexity of IT systems and networks, which will present mounting security challenges for both the developers and consumers

❖ The evolving nature of the telecommunications infrastructure, as the traditional phone system and IT networks converge into a more unified architecture

❖ The expanding wireless connectivity to individual computers and networks, which increases their exposure to attack. In hybrid or all-wireless network environments, the traditional defensive approach of “securing the perimeter” is not effective because it is increasingly difficult to determine the physical and logical boundaries of networks.

❖ The increasing interconnectivity and accessibility of (and consequently, risk to) computer-based systems that are critical to the U.S. economy, including supply chain management systems, financial sector networks, and distributed control systems for factories and utilities

❖ The breadth and increasingly global nature of the IT supply chain, which will increase opportunities for subversion by adversaries, both foreign and domestic

The Federal Role

The IT infrastructure’s significance to the Nation has gained visibility in the aftermath of the September 11, 2001 terrorist attacks, large-scale cyber attacks, and rapid growth in identity theft. These events have made it increasingly clear that the security of the IT infrastructure is no longer simply a problem for the private sector and private citizens but also one of strategic interest to the Federal government.

Although computer and software companies are now making investments in security-related research and product development, their work is directed primarily at short-term efforts driven by market demands to address immediate security problems. The Federal government has a different but equally important role to play in cyber security and information assurance R&D. In its February 2005 report on cyber security R&D, Cyber Security: A Crisis of Prioritization, the President’s Information Technology Advisory Committee (PITAC) stated that one of the Federal government’s responsibilities is to invest in long-term, fundamental research “that will fill the pipeline with new concepts, technologies, infrastructure prototypes, and trained personnel” needed to spur on next-generation security solutions that industry can turn into widely available products.

The Federal R&D portfolio historically has represented the broad public interest in long-term science, technology, and engineering advances. Such advances in support of Federal agency missions sustain U.S. scientific preeminence and generate the discoveries and innovations necessary to fuel economic development and a rising standard of living. The Federal portfolio also includes many investment areas deemed critical for national defense and national and homeland security in the near- and mid-term time frames. Cyber security and information assurance R&D efforts are contributing to both major purposes and all time horizons of Federal R&D investment. Moreover, without such investment, aspects of the Nation’s industrial and service sectors will be unable to move toward benefiting from the IT infrastructure, curbing their growth and compromising economic competitiveness.

Federal leadership catalyzes activities in scientific realms of strategic importance to the Nation. In cyber security and information assurance R&D, such leadership can energize a broad collaboration with private-sector partners and stakeholders to generate fundamental technological advances in the security of the Nation’s IT infrastructure. First, in support of national and economic security, the Federal government can identify the most dangerous classes of cyber security and information assurance threats to the Nation, the most critical IT infrastructure vulnerabilities, and the most difficult cyber security and information assurance R&D problems. Second, the Government can use these findings to develop and implement a coordinated Federal R&D effort focused on the key research needs that can only be addressed with Federal leadership. While these needs will evolve over time, this Federal Plan for Cyber Security and Information Assurance Research and Development provides a starting point for such an effort.


The Federal Plan in Summary

In this Plan, the terms cyber security and information assurance refer to measures for protecting computer systems, networks, and information from disruption or unauthorized access, use, disclosure, modification, or destruction. The purpose of cyber security and information assurance is to provide for:

Integrity – protection against unauthorized modification or destruction of systems, networks, and information, and system and information authentication

Confidentiality – protection against unauthorized access to and disclosure of information

Availability – assurance of timely and reliable access to and use of systems, networks, and information
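As an illustration of the first of these properties (not drawn from the Plan itself), the minimal sketch below shows one standard way an integrity and authentication check is implemented in practice: a keyed hash (HMAC) computed over a message reveals any unauthorized modification. The key and messages are hypothetical.

```python
import hmac
import hashlib

def tag(message: bytes, key: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes, key: bytes) -> bool:
    """Recompute the tag and compare in constant time; any modification fails."""
    return hmac.compare_digest(tag(message, key), received_tag)

key = b"shared-secret-key"                 # hypothetical pre-shared key
msg = b"transfer $100 to account 42"       # hypothetical message
t = tag(msg, key)
assert verify(msg, t, key)                 # untampered message: check passes
assert not verify(msg + b"0", t, key)      # modified message: tampering detected
```

A keyed hash addresses integrity and authentication only; confidentiality and availability require separate mechanisms (e.g., encryption and redundancy, respectively).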

The Plan comprises the following sections:

❖ Types of vulnerabilities, threats, and risk
❖ Analysis of recent calls for Federal R&D
❖ Strategic Federal objectives
❖ Technical topics in cyber security and information assurance R&D
❖ Current technical and investment priorities of Federal agencies in cyber security and information assurance R&D
❖ Results of technical and funding gaps analysis
❖ Findings and recommendations
❖ R&D technical topic perspectives, including assessments of the state of the art and key technical challenges
❖ CSIA IWG agencies’ roles and responsibilities

The Plan recommends that cyber security and information assurance be accorded high priority at all levels of the Government and be integral to the design, implementation, and use of all components of the IT infrastructure. The work begun in this document of identifying and prioritizing Federal cyber security and information assurance R&D efforts must be an ongoing process. Continuation of ongoing interagency coordination is needed to focus Federal R&D activities on the most significant threats to critical infrastructures and Federal agency missions and to maximize the gains from these investments. In particular, the Plan points to the need for coordinated Federal R&D to solve the hard technical problems that are barriers to fundamental advances in next-generation cyber security and information assurance technologies; such R&D is typically multidisciplinary, long-term, and high-risk.

Other areas – including policy making (e.g., legislation, regulation, funding, intellectual property, Internet governance), economic issues, IT workforce education and training, and operational IT security approaches and best practices – are also germane and have substantial roles to play in improving cyber security and information assurance. However, these subjects are outside the scope of the Plan, which addresses only the role of Federal R&D.

Likewise, the Plan is not a budget document and thus does not include current or proposed agency spending levels for cyber security and information assurance R&D. Agencies determine their individual budget priorities according to their mission needs and requirements.

Plan Background

In December 2003, the National Science and Technology Council (NSTC) chartered a new Interagency Working Group (IWG) on Critical Information Infrastructure Protection (CIIP), reporting to the Subcommittee on Infrastructure of the NSTC’s Committee on Homeland and National Security and its Committee on Technology. Co-Chaired by the Department of Homeland Security (DHS) and the White House Office of Science and Technology Policy (OSTP), the CIIP IWG included participants from more than a dozen Federal departments and agencies.


In August 2005, the group was rechartered to report jointly to the NSTC Subcommittee on Networking and Information Technology Research and Development (NITRD) as well as to the Subcommittee on Infrastructure, in order to improve the integration of CSIA R&D efforts with other NITRD program component areas and coordination activities (see Appendix B). In conjunction with the rechartering, the group was renamed the Cyber Security and Information Assurance (CSIA) IWG to better characterize the scope of the IWG’s activities and to reflect the fact that cyber security and information assurance are essential to critical information infrastructure protection but also have a broader impact.

The IWG assumed the responsibility for gathering information about agencies’ cyber security and information assurance R&D programmatic activities and challenges, and for developing an interagency Federal plan for cyber security and information assurance R&D. This document, which represents a collaborative effort of the CSIA IWG agencies, sets forth a baseline framework for coordinated, multi-agency activities that continue to develop and implement the Federal Plan.

The framework is derived from a CSIA IWG analysis that identified and prioritized cyber security and information assurance R&D needs across Federal agencies. The framework also includes extensive documentation of the current state of the art and major technical challenges across a spectrum of R&D areas of importance in the development of cyber security and information assurance technologies.

The Federal Plan for Cyber Security and Information Assurance Research and Development also serves as a foundational document for the National Critical Infrastructure Protection Research and Development Plan (NCIP R&D Plan), which is required by Homeland Security Presidential Directive (HSPD) 7. Developed by the NSTC’s Subcommittee on Infrastructure, this latter plan focuses on R&D needs in support of protecting the Nation’s critical infrastructures. The CSIA Plan focuses on R&D to help meet IT needs outlined in the NCIP Plan, supporting CSIA elements of key NCIP strategic goals, including a national common operating picture, a secure national communication network, and a resilient, self-healing, self-diagnosing infrastructure.

The CSIA IWG has begun to implement the coordination of agency R&D activities related to this Plan. The coordinated activities and CSIA budget are now reported in the annual Supplement to the President's Budget for the NITRD Program, beginning with the FY 2007 Supplement released in February 2006.


VULNERABILITIES, THREATS, AND RISK

A vulnerability is a flaw or weakness in the design or implementation of hardware, software, networks, or computer-based systems, including security procedures and controls associated with the systems. Vulnerabilities can be intentionally or unintentionally exploited to adversely affect an organization’s operations (including missions, functions, and public confidence), assets, or personnel.

A threat is any circumstance or event with the potential to intentionally or unintentionally exploit one or more vulnerabilities in a system, resulting in a loss of confidentiality, integrity, or availability. Threats are implemented by threat agents. Examples of threat agents are malicious hackers, organized crime, insiders (including system administrators and developers), terrorists, and nation states.

Risk is a combination of the likelihood that a particular vulnerability in an organization’s systems will be either intentionally or unintentionally exploited by a particular threat agent and the magnitude of the potential harm to the organization’s operations, assets, or personnel that could result from the loss of confidentiality, integrity, or availability.
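One common way to operationalize a definition of this form is a qualitative risk matrix that scores likelihood and impact on ordinal scales and combines them. The sketch below is a minimal illustration of that idea; the scales, the product rule, and the banding thresholds are assumptions made for the example, not values taken from the Plan.

```python
# Hypothetical ordinal scales: 1 (low) through 5 (high)
def risk_score(likelihood: int, impact: int) -> int:
    """Combine likelihood of exploitation with magnitude of potential harm.
    A simple product is one common qualitative convention."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

def risk_level(score: int) -> str:
    """Map a 1-25 score onto coarse bands (thresholds are illustrative)."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Example: an easily exploited vulnerability (likelihood 4) in a system
# where the resulting harm would be severe (impact 5) scores as high risk.
print(risk_level(risk_score(4, 5)))  # -> "high"
```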

In the current climate of elevated risk created by the vulnerabilities of and threats to the Nation’s IT infrastructure, cyber security is not just a paperwork drill. Adversaries are capable of launching harmful attacks on U.S. systems, networks, and information assets. Such attacks could damage both the IT infrastructure and other critical infrastructures.

Cyber security has largely failed to gain wide adoption in many consumer products for a variety of reasons, including a lack of appreciation for consequences of insecurity, the difficulty of developing secure products, performance and cost penalties, user inconvenience, logistical problems for organizations in implementing and consistently maintaining security practices, and the difficulty of assessing the value of security improvements. But consumer and enterprise concerns have been heightened by increasingly sophisticated hacker attacks and identity thefts, warnings of “cyber terrorism,” and the pervasiveness of IT uses.

Consequently, many in the computer industry have come to recognize that the industry’s continued ability to gain consumer confidence in new, more capable applications will depend on improved software development and systems engineering practices and the adoption of strengthened security models. Thus, industry leaders, trade and professional associations, and advocacy groups support a robust Federal role in the long-term fundamental R&D needed to provide the foundations for next-generation security technologies.

Types of Threats and Threat Agents

Because organizations and agencies now rely so heavily on networked IT systems, day-to-day operations are significantly hindered when systems are out of service or performance is degraded. Today, many vulnerabilities are easy to exploit, and individuals and organizations worldwide can access systems and networks connected to the Internet across geographic and national boundaries. Current technology also makes it easy to hide or disguise the origin and identity of the individuals or organizations that exploit these vulnerabilities.


In addition, cyber security vulnerabilities are volatile; even as existing vulnerabilities are patched, new ones are discovered. Even when vulnerabilities are discovered and patched by security professionals prior to an attack, hackers are increasingly reverse-engineering patches in order to discover the vulnerabilities and develop attacks that exploit them. Hostile actors are deriving attacks from new patches with increasing speed, often launching attacks before these patches are widely tested and deployed to secure vulnerable systems. The result of these trends is a vicious cycle in which there is a constant need for new countermeasures.

While the Internet receives the most attention in press coverage of cyber incidents, from a national security perspective the playing field for potential cyber attack operations is much broader. Sensitive information tends to be isolated from the Internet, but the various gateways that exist to facilitate the transfer of information from the outside into a closed network provide many openings for possible attack.

Moreover, though substantial progress has been made in raising levels of awareness about cyber security across industry and government, securing critical infrastructures remains a significant national challenge. Many critical industries, previously isolated from Internet security problems because they used older mainframe computing systems and leased telephone lines in dedicated networks, are reaching the time when this legacy infrastructure is being retired. They are adopting modern networks using personal computers, workstations, and servers with mainstream operating systems, interconnected through local-area networks, and connected to the Internet. In addition, the telecommunications industry itself is going through a systemic transformation caused by deregulation, economic change, and technological evolution, which may also leave these networks more vulnerable to attack.

Attackers’ Asymmetric Advantages

A number of factors in the current security environment provide would-be attackers with significant advantages over those trying to protect the large-scale networks and interconnected IT systems on which society increasingly depends. An attacker needs to find only one vulnerability; the defender must try to eliminate all vulnerabilities. Powerful attack tools, including automated tools for malicious actions, are now freely available for downloading over the Internet to anyone who wants them, and little skill is required to use them. The resources – including training and equipment – needed to launch potentially harmful attacks are not only readily available but relatively inexpensive compared to the costs of securing systems, networks, and information and responding to attacks.

As a result, some classes of attacks can be initiated with little sophistication. Although these attacks are not generally significant threats to systems that are kept patched and well secured, they are effective against the many unpatched and poorly secured systems connected to the Internet, and contribute to a background level of ongoing malicious network activity. The automated tools that can be used by people with relatively little skill or knowledge continue to multiply, and are gradually increasing in capability in step with improvements in cyber security and information assurance technologies. Attackers also have the ability to exploit vulnerable third-party machines to launch their attacks.

Classes of attacks that require much greater expertise pose significantly greater threats. But while the sophistication required to mount such attacks limits them to a smaller set of adversaries, the capabilities of these high-threat adversaries also continue to advance.

These trends offer a wide range of individuals and entities – from malicious hackers to nation states – the opportunity to support or directly engage in cyber attacks. The following profiles suggest some of the possible threat agents and their motivations.

Malicious Hackers

The earliest computer hackers often were individuals with sophisticated computer skills who simply enjoyed exploring programming and stretching their computer’s capabilities. Hackers of this type still exist. Others, however, use their skills to write damaging code that propagates over the Internet or to break into private networks for malicious or criminal purposes. While many malicious hacker attacks rank as nuisances rather than being harmful, other hackers have moved into more damaging hostile or criminal activities, producing increasingly sophisticated malicious technologies and tools that proliferate across the Internet. Some of these hackers are taking advantage of their skills to earn money through information theft, identity theft, fraud, denial-of-service (DoS) attacks, and extortion. The impact of hackers may expand even further if nation states and others, such as terrorist or organized criminal groups, hire the talent or exploit the hacker-developed technologies.

Organized Crime

Organized crime is increasingly using the Internet to exploit any online opportunity to make money through the avenues mentioned above (information and identity theft, fraud, extortion) as well as illegal gambling, pornography, and other methods. Moreover, organized criminal elements are generally more structured and can draw on more extensive funding resources than loosely knit hacker communities, enabling them to hire expert hacker talent or bribe insiders to gain access to more sensitive systems.

Terrorists

Hacking could be used by terrorist groups to harvest information for planning physical or cyber attacks. Audit logs from Web sites, infrastructure owners, and national laboratories have recorded extensive, systematic information gathering originating from countries that serve as home bases for terrorist groups. Terrorist groups also are using the Internet for covert communications, and sympathetic hacker groups have launched various “e-jihads,” consisting primarily of Web page defacements and DoS attacks.

Terrorist groups are known to have included not only engineers, computer scientists, and business people with backgrounds in computers, networks, and computer-based systems but also people with access to hardware and software producers. Terrorist groups have even sold computer products, which could in principle include malicious software. One known terrorist group is notable because it assembles and sells computer systems. Although law enforcement has not uncovered information pointing to subversion of software products, the potential for such activity exists. The evidence indicates that terrorist groups now have or can acquire the necessary expertise for identifying targets and conducting cyber attacks with serious or catastrophic consequences.

Nation States

Within their home territories, many nations have some offensive cyber capabilities derived from such defensive technologies as forensics, network protections, and software implants (code added to system or application software for a purpose such as monitoring usage or collecting data about users) as well as from their regulatory control over, and ability to gain physical access to, local telecommunications and Internet systems. Relatively few nation states, however, have the technical and operational capabilities (including resources, logistical support, expertise, and willingness to take risks) to orchestrate the full range of adversarial cyber operations through a combination of such means as recruiting insiders, setting up front companies, establishing signals collection systems, implanting damaging hardware or software in communications networks, and subverting telecommunications switches, cryptographic defenses, and supply chains.


Threat and Vulnerability Trends

In addition to the exploitation of Internet vulnerabilities, adversaries seeking to gather sensitive information, commit crimes, or attack critical U.S. infrastructures can employ other means, such as:

Insiders

The key to malicious or hostile activities in cyberspace is access to networked systems and information. Facilitating this access through the use of insiders can greatly reduce the technological sophistication necessary to mount an attack, because authenticated and authorized insiders may be able to circumvent barriers to external access, or may have legitimate access rights and privileges that would be denied to unauthorized users. So while obtaining network access via hacking provides one potential path for malicious activity, insider (physical or logical) access to the network reduces, and can in some cases eliminate, the difficulties associated with hacking through network defenses. With the right insider, an offensive operation may involve simply copying information to a portable medium that can be carried from the premises. A single well-placed, knowledgeable insider can also exploit IT systems to disrupt local infrastructure.

Outsourcing

The IT outsourcing trend – affecting activities ranging from computer help desks and data processing to R&D – can increase the exposure of an organization's systems and information to subversion. Outsourcing of services, to either foreign or domestic suppliers, increases risk by reducing control over access to systems and information. For example, apparently legitimate paths into an organization's networks and access to network resources can be established that can be exploited for illegitimate purposes. In this environment, effective information assurance technologies are imperative.

Supply Chain Attacks

Potential attacks through subversion of hardware or software supply chains can be viewed as another type of insider threat. Access through a hardware supply chain may require development and manufacture of a subverted version of a microelectronic component and a complicated operation to insert the device into the targeted computer, possibly through use of insiders in the supply chain. A software supply chain attack might involve, for example, a subversion embedded in lower-level system software not likely to be evaluated during testing. Another approach is to subvert the master copy of software used for broad distribution, which hackers recently attempted to do with a mainstream operating system. Even if software is tested, subversions may be difficult to detect, since they would typically be revealed only under circumstances difficult for a defender to discover.
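
A common countermeasure to distribution-channel tampering is for distributors to publish a cryptographic digest or digital signature of each release, which recipients verify before installation. The minimal sketch below illustrates such a digest check; it is not drawn from the Plan, and the file name and (truncated) digest are hypothetical:

    import hashlib

    # Hypothetical published digest; in practice it should be obtained over a
    # channel separate from the download itself (e.g., a signed announcement).
    EXPECTED_SHA256 = "9f86d081..."  # truncated, for illustration only

    def sha256_of(path: str) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    if sha256_of("os-update.img") != EXPECTED_SHA256:
        raise SystemExit("POSSIBLE TAMPERING: digest mismatch")

Note that such a check detects tampering only downstream of the point where the reference digest is computed; a subversion of the master copy itself, as described above, would still produce a matching digest, which is one reason these attacks are so difficult to detect.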

Industrial Espionage

Technically savvy companies have the potential to capitalize on inadequate IT system security to engage in cyber espionage against the U.S. government and domestic corporations, primarily to collect science and technology information that could provide economic benefits. Some of these companies have considerable technical expertise and signals intelligence capabilities and have a strong presence in U.S. IT product markets – including microchips, telecommunications systems, and encryption products. One consequence of the current espionage climate is that travelers with laptops and other electronic devices risk having information stolen in such locations as airports and hotels.

State-Sponsored Espionage

Gaining access to well-protected information or systems in closed networks remains a resource-intensive effort involving traditional espionage tradecraft. Such operations do not require the simultaneous access to large numbers of systems needed for a strategic attack and thus are within reach of a much larger array of foreign adversaries.


Foreign governments for decades have successfully recruited agents in the U.S. government with access to computer systems and cryptographic information. Foreign agents have also established technology companies in this country and served as subcontractors on U.S. defense contracts to obtain access to technology. Some governments now have the operational and technical expertise for more aggressive and sophisticated cyber espionage. U.S. counterintelligence efforts have uncovered an increasing number of such activities by foreign intelligence services, including past and ongoing espionage operations directed against critical U.S. military and other government systems.

Other Trends

Many malicious code attacks are "blended threats" that exploit multiple vulnerabilities or propagate via multiple means. Among these new classes of threats are adaptive or mutating threats, which are able to change their characteristics and appearance in order to avoid detection. Attacks can exploit operating systems, other software applications, software running on hardware components (e.g., routers and firewalls), or, more infrequently, the hardware components themselves. Cryptographic attacks to undermine encryption-based security processes might attempt to exploit one or more of these avenues of attack.

The trends discussed in this document are supported by the report Cybersecurity for the Homeland, issued in December 2004 by the Subcommittee on Cybersecurity, Science, and Research & Development of the U.S. House of Representatives Select Committee on Homeland Security. The report concluded that:

❖ Hacking crews and individuals are increasingly working together around the globe in virtual, anonymous networks of specialists in different types and parts of attacks, such as propagation speed, denial of service, password logging, and data theft.

❖ An increasing number of adversaries are developing new options for exerting leverage over the U.S. through cyberspace, creating damage as well as conducting espionage. Cyberspace provides clear avenues and the prospect of anonymity.

❖ Foreign governments, hackers, and industrial spies are constantly attempting to obtain information and access through clandestine entry into computer networks and systems. This is not just "surfing" the open Internet for information voluntarily placed in the public domain, but intruding into closed and protected systems to steal secrets and proprietary information.

❖ Because many cyber attacks are not discovered or, if discovered, are not reported, hostile actors in cyberspace act with the knowledge that they are highly unlikely to be caught, let alone prosecuted and imprisoned. Attackers discovered in other countries cannot easily be brought to justice under U.S. laws, and their conduct may not even be illegal in the jurisdiction in which they are operating.

The report made the point that these trends are exacerbated because the network and system redundancy, diversity, and excess capacity that traditionally contributed to IT infrastructure resilience are decreasing with time, in part due to economic pressures. Federal agency personnel concerned with cyber security and information assurance view this factor as a key contributor to increased cyber vulnerability.

Immediate Concerns

IT infrastructure components are also potential targets of physical attacks, such as destruction by explosives and disruption caused by high-energy radio frequency or electromagnetic pulses. Given the spectrum of potential adversaries and their goals, a list of immediate concerns for the U.S. IT infrastructure begins with physical attacks against key data centers and communications nodes, particularly by terrorists.

However, immediate concerns also include the use of cyberspace for covert communications, particularly by terrorists but also by foreign intelligence services; espionage against sensitive but poorly defended data in government and industry systems; subversion by insiders, including vendors and contractors; criminal activity, primarily involving fraud and theft of financial or identity information, by hackers and organized crime groups; attacks on the Internet infrastructure, particularly on the routers and domain name servers critical to its operation; and coordinated physical and cyber attacks, where the emergency response is hindered by unreliable or unavailable network communications.

The two sections that follow highlight some of these concerns in two specific domains: process control systems in critical infrastructures and the IT infrastructure of the banking and finance sector.

Industrial Process Control Systems

Computerized industrial process control systems (PCSs) are integrated hardware and software systems specifically engineered to monitor, evaluate, and regulate complex, large-scale processes. They often are embedded and hybrid systems, since computers are integral parts of such systems. Examples include the Supervisory Control and Data Acquisition (SCADA) systems that manage the electric power grid and the PCSs that control the timing and volume of processes in the chemical industry. PCS technologies also control the distributed sensor and actuator elements of pipeline systems for gas, oil, and water distribution. They manage supply chains and associated transportation systems, and they increasingly control building security, fire protection, environmental systems, lighting, and communications. Automated manufacturing processes often depend on PCS networks to improve quality control and enable response to crises as well as to reduce costs.

Because attacks interrupting or damaging key PCSs could have rippling impacts across the economy, these systems may increasingly be viewed by adversaries as attractive targets that can be exploited to weaken or incapacitate U.S. industry and infrastructure. Critical infrastructure sectors debate whether or not an exclusively electronic attack on control technologies could indeed have significant impact, given the industries' backup power systems and investment in "fail safe" or otherwise resilient designs for physical systems. But trends in the application of IT in these sectors point to increasing rather than decreasing levels of vulnerability and exposure in their infrastructures.

In the past, many PCS technologies used proprietary designs. Today, in the interest of reducing cost and improving maintainability, these systems mainly rely on standardized equipment and technologies, including general-purpose computers, mainstream operating systems, and standard Internet protocols, which are more vulnerable to attack. Many organizations view increasing use of the Internet as well as wireless and Web-based control systems as not only cost-effective but inevitable developments. Furthermore, cost-reduction measures are resulting in growing linking of networks that support control systems with internal and external corporate networks that support ordinary business operations, further increasing the exposure of control systems to external attacks.

For example, wireless control systems reduce cabling and installation costs. These systems typically use short-range wireless technologies, but signals still may be susceptible to attack from outside a building's perimeter if transmission patterns are not designed carefully. Engineers from a cyber security firm recently used standard wireless software to access networks at an electric power substation. Within 15 minutes, they were able to map out the entire operational control network of the substation, without having left their car.

The trends outlined above suggest that assaults on computerized control systems will be increasingly within reach of a wide array of attackers. The main uncertainty is the extent to which systems are already at risk due to a combination of direct or indirect Internet connectivity and security vulnerabilities such as inadequately secured wireless access, unpatched software, or insufficient authentication or access control policies and mechanisms.

Banking and Finance Sector

In speeches after the September 11, 2001 attacks, Osama bin Laden identified the U.S. economy as a key target for terrorism. Foreign military strategists also have identified the U.S. economy as a logical target in strategic warfare. With networked computer systems now playing a central role in the financial sector, such systems are provocative lures for adversaries of all kinds. Indeed, terrorists have cased financial institutions in New York City, Newark, and Washington, D.C.

However, because many financial records reside in electronic form inside computer databases, cyber security and information assurance is a core value and a vital element of the financial industry's business model. Few industries have invested as much in technology, policies, and procedures to protect their networks, systems, and data. Indeed, because of its high assurance requirements, the banking and finance sector has put additional security measures in place and hardened systems beyond traditional levels of computer security.

Today's routine cyber threats to financial systems involve identity theft and consumer-level fraud, most often as a result of phishing attacks, keylogging, spyware, Trojan horses, or the theft of sensitive information from third parties. This type of theft is so common that it has been absorbed into the industry's risk model, with the costs shared by all consumers. Criminals have been known to conduct tests to ascertain whether fraud-detection software is active and, if not, to take advantage of the downtime to transfer money using stolen account information. Quickly noticing when one bank set a particular monetary threshold for fraud investigation, criminals made a large number of transactions below the threshold.
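
The threshold-probing behavior described above illustrates why static fraud rules are brittle: any fixed cutoff invites structuring, i.e., many transactions kept just below the limit. The following is a minimal sketch of a supplementary structuring check; the threshold, window logic, and field names are illustrative assumptions, not any institution's actual rules:

    from collections import defaultdict

    REVIEW_THRESHOLD = 10_000.00  # illustrative investigation threshold, in dollars
    NEAR_FRACTION = 0.9           # "just below" means at least 90% of the threshold
    MAX_NEAR_MISSES = 3           # more than this many near-misses is suspicious

    def flag_accounts(transactions: list[tuple[str, float]]) -> set[str]:
        """Flag accounts that cross the threshold outright or repeatedly
        submit amounts just below it within the batch being scored."""
        flagged: set[str] = set()
        near_misses: defaultdict[str, int] = defaultdict(int)
        for account, amount in transactions:
            if amount >= REVIEW_THRESHOLD:
                flagged.add(account)
            elif amount >= NEAR_FRACTION * REVIEW_THRESHOLD:
                near_misses[account] += 1
                if near_misses[account] > MAX_NEAR_MISSES:
                    flagged.add(account)
        return flagged

A rule like this only moves the boundary, of course; production fraud models combine many such signals with behavioral baselines precisely because determined attackers probe for any single fixed rule.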

Computer systems used within banks or for bank-to-bank transactions offer a more lucrative target, but the computer security and accounting measures used with these systems are significantly tighter. The most serious cyber incidents tend to involve insiders. For example, a group with reported mob connections used insiders to try to launder hundreds of millions of euros belonging to the European Union that were diverted from the Bank of Sicily. More recently, investigators foiled an attempt to steal more than $400 million from a London branch of a Japanese bank through illicit electronic transfers, reportedly enabled by stolen passwords and access information obtained by insiders who used keystroke logging devices.

ANALYSIS AND PLAN FRAMEWORK

Recent Calls for Cyber Security and Information Assurance R&D

In addition to the historic Federal role in supporting long-term R&D, significant drivers for Federal cyber security and information assurance R&D arise from current national circumstances and Federal priorities. These drivers are identified in a number of Federal documents.

OSTP/OMB Memorandum on FY 2007 Administration R&D Budget Priorities

In a July 2005 memorandum entitled "Administration R&D Budget Priorities for FY 2007," the Directors of the Office of Management and Budget (OMB) and the Office of Science and Technology Policy (OSTP) identified cyber security R&D as an FY 2007 budget priority that should receive special focus at the interagency level. Cyber security and information assurance R&D falls squarely at the intersection of homeland security R&D and networking and information technology R&D, which are both highlighted as broad interagency R&D priorities.

The budget guidance memo cites cyber security R&D as one of three priority areas in the $3-billion Federal Networking and Information Technology Research and Development (NITRD) Program, along with high-end computing and advanced networking, "due to their potential for broad impact." (See Appendix B.) The memo states: "Reflecting the importance of cyber security, agencies should continue to work through the NSTC to generate a detailed gap analysis of R&D funding in this area." While not called out explicitly under Homeland Security R&D, cyber security and information assurance are also technological requirements of many priority homeland security capabilities cited in the memorandum.

PITAC Cyber Security Report

In Cyber Security: A Crisis of Prioritization, a February 2005 PITAC report to the President, the independent Presidential advisory panel warns that the Nation's IT infrastructure is highly vulnerable to attacks that could damage not only the economy but national defense and national security systems as well. Noting that "market forces direct private-sector investment away from research and toward the application of existing technologies to develop marketable products," the report calls on the Federal government to fundamentally improve its approach to cyber security R&D by increasing investments in unclassified cyber security R&D; intensifying its efforts to expand the size of today's small cyber security research community; improving technology transfer to the private sector; and increasing the focus and efficiency of Federal R&D through better coordination and oversight.

The report lists 10 areas as R&D priorities, based on a PITAC analysis of more than 30 documents and reports on cyber security R&D. The report concludes that the Nation will not be able to secure its IT infrastructure without significant advances in the following areas:

❖ Authentication technologies
❖ Secure fundamental protocols
❖ Secure software engineering and software assurance
❖ Holistic system security
❖ Monitoring and detection
❖ Mitigation and recovery methodologies
❖ Cyber forensics
❖ Modeling and testbeds for new technologies
❖ Metrics, benchmarks, and best practices
❖ Non-technology issues that can compromise cyber security

The National Strategy to Secure Cyberspace

The February 2003 National Strategy to Secure Cyberspace calls for Federal R&D leadership in certain circumstances, such as to address an increasing number of vulnerabilities and to provide continuity of government. In the latter situation, the document states, the role of the Federal government is to ensure the safety of its cyber infrastructure and those assets required for essential missions and services. Cyber security R&D areas that support this goal, according to the report, include: forensics and attack attribution; protection of systems, networks, and information critical to national security; indications and warnings; and protection against organized attacks capable of inflicting debilitating damage to the economy.

Cyber Security Research and Development Act

Specific research activities aimed at securing cyberspace are identified in the Cyber Security Research and Development Act of 2002 (P.L. 107-305). The law calls for significantly increased Federal investment in computer and network security R&D to improve vulnerability assessment and technology and systems solutions; expand and improve the pool of information security professionals, including researchers, in the U.S. workforce; and better coordinate information sharing and collaboration among industry, government, and academic research projects.

The Act also calls for basic research on innovative approaches to the structure of computer and network hardware and software that are aimed at enhancing computer security. Cited research areas include: authentication and cryptography; computer forensics and intrusion detection; reliability of computer and network applications, middleware, operating systems, and communications infrastructure; and privacy and confidentiality.

INFOSEC Research Council (IRC) Hard Problem List

In 1999, the IRC, a group of Federal research managers representing agencies involved in information security (INFOSEC) R&D related to their missions, issued a draft list of the most difficult INFOSEC research challenges, or "hard problems," they then faced. In November 2005, the IRC released an updated Hard Problem List.

The new hard problem list and the recommendations of the PITAC cyber security report are compared with the technical topics identified in this document in "Cyber Security and Information Assurance R&D Priorities: Comparison with PITAC and IRC" below.

Strategic Federal Objectives

This Federal Plan for Cyber Security and Information Assurance Research and Development responds to the imperatives in the calls for Federal action. The Plan provides a cross-agency assessment of current Federal R&D activities and priorities and a set of strategic objectives for Federal cyber security and information assurance R&D to serve as baseline information for improved agency activities and multi-agency coordination. The following strategic objectives are derived from a review of policy and legislative drivers and analyses of cyber security threats and infrastructure vulnerabilities as well as Federal agency mission requirements:

1. Support research, development, testing, and evaluation of cyber security and information assurance technologies aimed at preventing, protecting against, detecting, responding to, and recovering from cyber attacks that may have large-scale consequences.

2. Address cyber security and information assurance R&D needs that are unique to critical infrastructures.


3. Develop and accelerate the deployment of new communication protocols that better assure the security of information transmitted over networks.

4. Support the establishment of experimental environments such as testbeds that allow government, academic, and industry researchers to conduct a broad range of cyber security and information assurance development and assessment activities.

5. Provide a foundation for the long-term goal of economically informed, risk-based cyber security and information assurance decision making.

6. Provide novel and next-generation secure IT concepts and architectures through long-term research.

7. Facilitate technology transition and diffusion of Federally funded R&D results into commercial products and services and private-sector use.

Development of Baseline Information

The Plan's baseline information was developed through several interrelated activities undertaken by the CSIA IWG agencies. The following sections describe these activities.

Cyber Security and Information Assurance R&D Categories and Technical Topics

The first step in establishing the baseline information was developing a list of cyber security and information assurance technical topics and an associated categorization (see Table 1). There was general agreement among agencies that there is no unique or "correct" classification of technical topics in this domain. Legitimate arguments could be made for changes in the topic names, the classification under broad categories, or even the categories themselves. Thus, the list and classification of technical topics should be viewed as a convenient launching point for an analysis of agencies' cyber security and information assurance R&D priorities, rather than a firm statement about how technical topics should be organized.

Prioritization of Technical Topics

In the next step of the baseline development process, the technical topics were ranked in priority order. Agencies were asked to identify their R&D priorities irrespective of their past, current, or planned funding investments – i.e., based solely on gaps between the existing state of the art and anticipated requirements or desired capabilities, and the level of importance of those gaps. In assessing their priorities, the agencies applied criteria developed informally through CSIA IWG discussion that included such indicators as the relevance of the work to agency missions as well as broader government needs, requirements, and risk; current, emerging, and anticipated threats and levels of risk; and higher-level requirements driven or informed by policy or legislation. The degree to which private-sector R&D was engaged in these topics was also taken into account in the criteria, reducing such topics' level of priority. The aim of this criterion was to avoid driving government investments into topics in which the state of the art is advancing effectively without Federal funding.

The priority rankings were then aggregated across all of the agencies, resulting in a set of interagency technical priorities. These interagency technical priorities represent some degree of consensus because they were deemed important across a significant number of agencies. However, the interagency technical priorities should not be interpreted as determining the highest priorities for all agencies. Some interagency technical priorities may be of limited interest to some agencies, while in other cases mission priorities for a given agency may differ from those identified as interagency technical priorities. Any agency may have some mission-related technical topics that are of particularly high priority even if they are not priorities for multiple agencies.
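
The Plan does not specify the aggregation method used, but the step can be illustrated with a simple positional (Borda-style) scoring scheme; the agency names and topics below are placeholders, not actual submissions:

    from collections import defaultdict

    # Each agency lists its topics in priority order, highest first.
    agency_rankings = {
        "Agency A": ["wireless security", "access control", "forensics"],
        "Agency B": ["access control", "wireless security", "secure process control"],
        "Agency C": ["secure process control", "access control", "wireless security"],
    }

    def aggregate(rankings: dict[str, list[str]]) -> list[str]:
        """A topic ranked k-th (0-based) in a list of n earns n - k points;
        topics are then ordered by their total score across agencies."""
        scores: defaultdict[str, int] = defaultdict(int)
        for ordered in rankings.values():
            n = len(ordered)
            for k, topic in enumerate(ordered):
                scores[topic] += n - k
        return sorted(scores, key=lambda t: scores[t], reverse=True)

    print(aggregate(agency_rankings))
    # ['access control', 'wireless security', 'secure process control', 'forensics']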


Investment Analysis

In the third step of the baseline development process, an investment analysis was conducted using information about programmatic investments gathered from the CSIA IWG agencies. The investment information was categorized according to the taxonomy of technical topics. This produced a set of investment priorities, which could then be compared to the interagency technical priorities to identify topics in which there might be investment gaps relative to these priorities. Differences between investment priorities and interagency technical priorities are not unexpected and should not be viewed as problem indicators, since individual agencies may be investing in mission-driven priorities that are not considered to be interagency technical priorities. The objective of this comparison was not to identify and characterize such agency-specific priorities as unnecessary. Rather, the goal was to identify topics that are interagency technical priorities and in which there might be underinvestment.
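
In effect, this comparison reduces to a set difference between the two priority lists. A minimal sketch, using a few of the topics named in the commentary later in this section:

    technical_priorities = {
        "large-scale cyber situational awareness",
        "secure process control systems",
        "wireless security",
    }
    funding_priorities = {
        "wireless security",
        "automated attack detection, warning, and response",
    }

    # Interagency technical priorities with no corresponding funding emphasis
    # are candidate investment gaps meriting closer interagency review.
    potential_gaps = technical_priorities - funding_priorities
    print(sorted(potential_gaps))
    # ['large-scale cyber situational awareness', 'secure process control systems']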

It should be noted that the agency funding information gathered in this process was pre-decisional and of varying granularity; it was collected only to indicate Federal agency spending emphases in cyber security and information assurance R&D. Thus, the baseline derived from this information should be viewed as useful in the aggregate but not as a comprehensive source of detailed investment data.

R&D Technical Topic Perspectives

In the fourth and final step, agency representatives with expertise in specific technical topics of cyber security and information assurance R&D provided perspectives on the status of R&D in each topic, characterizing the topic's technical importance, the current state of the art, and gaps in current capabilities that will require R&D advances to close. The technical perspectives are provided in Part II of this report.

R&D Technical and Funding Priorities

Table 1 shows the top interagency technical and funding priorities that were identified by the prioritization process, under the associated broader categories. The table is intended to highlight areas where funding emphasis is needed, but this does not mean that funding is not needed in other technical topics as well. Nor do the top technical priorities identified represent all possible cyber security and information assurance R&D topics that are important to the Federal government. The list was developed to identify the 10 topics deemed most pressing within a larger group of priorities, though more than 10 are listed due to ties in the rankings.

Commentary on Analysis of Priorities

The cyber security and information assurance R&D prioritization activity generated findings that will be useful in agency and multi-agency discussions and coordinated planning activities to implement this Plan. Not surprisingly, the analysis shows alignment between R&D priorities and spending on near-term efforts focused on improving the security of the existing Federal IT infrastructure in the face of existing threats and seeking ways to add security features for new capabilities.

The technical topics that were both identified as interagency technical priorities and ranked as investment priorities are authentication, authorization, and trust management; access control and privilege management; attack protection, prevention, and preemption; wireless security; and software testing and assessment tools. All CSIA IWG agencies reported multiple programs in attack protection, prevention, and preemption, and many are supporting work in access and authentication technologies and wireless security. Several agencies have programs in software testing and assessment tools. A closely related topic – automated attack detection, warning, and response – was among the top-funded priorities although it was not rated as a top technical priority.

The following topics are ranked as interagency technical priorities but are not among the top funding priorities: large-scale cyber situational awareness; secure process control systems; security of converged networks and heterogeneous traffic; detection of vulnerabilities and malicious code; IT system modeling, simulation, and visualization; inherently secure, high-assurance, and provably secure systems and architectures; composable and scalable secure systems; architectures for next-generation Internet infrastructure; and privacy issues.

Several reasons may explain why a topic identified as an interagency technical priority is not a funding priority. Agencies may generally agree that a topic is important but perceive that such work is not within their funding scope. They may view work in certain topics as within the purview of other agencies, or as more appropriately addressed by the private sector than by government investments. Alternatively, a priority and funding disparity could reflect a time lag in funding response to a topic that only recently emerged as an interagency technical priority. In the Federal budget cycle, agency budgets for a fiscal year are the result of technical and budget planning two years earlier, so it takes time for a new interagency technical priority to appear as an agency funding priority. Or a disparity could simply indicate a lack of broad recognition of a given technical topic's importance.

Thus, the fact that a topic is a top technical priority but not a top funding priority does not make the root cause of the incongruity evident. Understanding the issues associated with such disparities, as well as identifying steps to remedy them, are tasks most effectively managed through close interagency coordination. Further examination of these cases by the CSIA IWG is warranted as part of its activities to implement this Plan.

None of the topics in the Foundations for Cyber Security and Information Assurance category, which includes many topics focused on achieving fundamental advances in the engineering of more secure IT systems, ranked as top technical priorities. While some agencies support such long-term research, the analysis shows that many agencies currently are emphasizing technical topics associated with current threats and vulnerabilities. However, that emphasis does not explain why none of the Foundations topics rose to the level of a top technical priority. These topics are generally important because of their role in supporting the development of other technologies.

Additional analysis is needed to ascertain whether these topics are simply not as important despite their foundational implications, or whether they are more valuable than these results suggest but are unrecognized as priorities. As the coordinated, interagency roadmapping process moves ahead, agencies will need to evaluate such baseline findings in light of this Plan's objectives and recommendations. The Plan's first recommendation, for example, calls for a Federal focus on strategic and longer-term R&D needs, including technological foundations for next-generation IT infrastructure.

A related observation based on the analysis is that agencies in general are supporting a large number of discrete cyber security and information assurance R&D activities, but these efforts are broadly distributed across technical topics and in many cases are limited in scale and scope. Improved coordination and interagency information sharing are needed to begin to leverage agency investments and the associated expertise embodied in these activities. Coordination and planning enable agencies to forge interagency goals that maximize each agency's R&D contributions to results that no single agency could attain on its own.


TABLE 1

Top Technical and Funding Priorities: Federal Cyber Security and Information Assurance R&D

CSIA R&D Areas: Categories and Technical Topics (top priorities are marked Technical and/or Funding)

1. Functional Cyber Security and Information Assurance
1.1 Authentication, authorization, and trust management (Technical, Funding)
1.2 Access control and privilege management (Technical, Funding)
1.3 Attack protection, prevention, and preemption (Technical, Funding)
1.4 Large-scale cyber situational awareness (Technical)
1.5 Automated attack detection, warning, and response (Funding)
1.6 Insider threat detection and mitigation
1.7 Detection of hidden information and covert information flows
1.8 Recovery and reconstitution
1.9 Forensics, traceback, and attribution

2. Securing the Infrastructure
2.1 Secure Domain Name System
2.2 Secure routing protocols
2.3 IPv6, IPsec, and other Internet protocols
2.4 Secure process control systems (Technical)

3. Domain-Specific Security
3.1 Wireless security (Technical, Funding)
3.2 Secure radio frequency identification
3.3 Security of converged networks and heterogeneous traffic (Technical)
3.4 Next-generation priority services

4. Cyber Security and Information Assurance Characterization and Assessment
4.1 Software quality assessment and fault characterization
4.2 Detection of vulnerabilities and malicious code (Technical)
4.3 Standards
4.4 Metrics
4.5 Software testing and assessment tools (Technical, Funding)
4.6 Risk-based decision making
4.7 Critical infrastructure dependencies and interdependencies

Top Technical and Funding Priorities (continued)

5. Foundations for Cyber Security and Information Assurance
5.1 Hardware and firmware security
5.2 Secure operating systems
5.3 Security-centric programming languages
5.4 Security technology and policy management methods and policy specification languages
5.5 Information provenance
5.6 Information integrity
5.7 Cryptography
5.8 Multi-level security
5.9 Secure software engineering (Funding)
5.10 Fault-tolerant and resilient systems
5.11 Integrated, enterprise-wide security monitoring and management
5.12 Analytical techniques for security across the IT systems engineering life cycle (Funding)

6. Enabling Technologies for Cyber Security and Information Assurance R&D
6.1 Cyber security and information assurance R&D testbeds (Funding)
6.2 IT system modeling, simulation, and visualization (Technical)
6.3 Internet modeling, simulation, and visualization
6.4 Network mapping
6.5 Red teaming

7. Advanced and Next-Generation Systems and Architectures
7.1 Trusted computing base architectures
7.2 Inherently secure, high-assurance, and provably secure systems and architectures (Technical)
7.3 Composable and scalable secure systems (Technical)
7.4 Autonomic systems
7.5 Architectures for next-generation Internet infrastructure (Technical)
7.6 Quantum cryptography

8. Social Dimensions of Cyber Security and Information Assurance
8.1 Trust in the Internet
8.2 Privacy (Technical)


Cyber Security and Information Assurance R&D Priorities: Comparison with PITAC and IRC

Because of their direct relevance to Federal interagency cyber security and information assurance R&D priorities, the research topics identified as priorities in the PITAC cyber security report and the IRC Hard Problem List provide useful points of comparison with the technical and funding priorities presented in this Plan (Table 1).

The 2005 IRC list includes the following eight hard problems:

❖ Global-scale identity management
❖ Insider threat
❖ Availability of time-critical systems
❖ Building scalable secure systems
❖ Situational understanding and attack attribution
❖ Information provenance
❖ Security with privacy
❖ Enterprise-level security metrics

Although they represent differing levels of granularity and categorization, the IRC list and the PITAC research priorities (cited earlier) are substantially aligned with the interagency technical priorities in this Plan. Specifically:

❖ The PITAC priority of authentication technologies is both a top technical and a top funding priority for the CSIA IWG agencies. This priority also maps to the IRC hard problem of global-scale identity management.

❖ The PITAC priority of secure software engineering and software assurance maps directly to the technical topic of secure software engineering, identified as a top funding priority by CSIA IWG agencies. Other CSIA technical and funding priorities such as software testing and assessment tools and detection of vulnerabilities and malicious code also contribute to software assurance. This area corresponds closely to the IRC hard problem of building scalable secure systems, which includes elements of software engineering such as design, construction, verification, and validation.

❖ The PITAC priority of holistic system security is broad in scope and does not map directly to a single CSIA topic area, but it is relevant to the CSIA topic of analytical techniques for security across the IT systems engineering life cycle, which is a funding priority for CSIA IWG agencies. This PITAC priority can also be linked to other top technical CSIA R&D priorities such as inherently secure, high-assurance, and provably secure systems and architectures, and composable and scalable secure systems. These priorities also map to the IRC's building scalable secure systems hard problem.

❖ PITAC's monitoring and detection priority maps to two CSIA R&D priorities: large-scale cyber situational awareness (a top technical priority) and automated attack detection, warning, and response (a top funding priority). This corresponds to the IRC's hard problem of situational understanding and attack attribution.

❖ The PITAC priority of modeling and testbeds for new technologies maps to multiple CSIA R&D priorities: cyber security and information assurance R&D testbeds (a top funding priority) and IT system modeling, simulation, and visualization (a top technical priority).

❖ The PITAC priority of metrics, benchmarks, and best practices maps directly to the IRC's hard problem of enterprise-level security metrics. Although this area was not ranked as a top CSIA R&D priority via the information-gathering and analysis process that led to the technical and funding priorities identified in Table 1, the CSIA IWG recognizes the importance of and need for metrics. A recommendation in this Plan calls for the development and use of metrics to improve cyber security and information assurance.


❖ Although privacy was not called out as a single technical area among the PITAC priorities, it was mentioned as a subtopic within three of them (authentication technologies, holistic system security, and non-technology issues that can compromise cyber security). In contrast, the IRC did focus specifically on privacy, having identified security with privacy as one of its hard problems. Similarly, privacy was identified as one of the CSIA IWG's top technical priorities.

❖ Other PITAC research priorities and IRC hard problems not identified by the CSIA IWG as interagency R&D priorities are clearly mission-related priorities that are receiving emphasis within individual agencies. For example, the DHS focus on infrastructure protection is represented in a program aimed at securing fundamental Internet communication protocols, including the Domain Name System and routing protocols – squarely within the scope of the PITAC priority of secure fundamental protocols. Both DoD and DHS are funding work in recovery and reconstitution, which corresponds to the PITAC research priority of mitigation and recovery methodologies. DoD, DHS, and intelligence community work in forensics, traceback, and attribution corresponds to the PITAC priority of cyber forensics.

❖ Of the 10 research priorities identified in the PITAC report, only non-technology issues that can compromise cyber security were not considered top interagency priorities by the CSIA IWG. CSIA IWG representatives agreed that these non-technology issues are important, but did not view them as rising to the level of other topics in the interagency technical and funding rankings.

❖ The two areas identified as hard problems by the IRC that were not viewed as interagency R&D priorities by the CSIA IWG are priorities within certain agencies. DoD and the intelligence community are both funding R&D in insider threat detection, which addresses the IRC hard problem of insider threat. DoD and the intelligence community also have an interest in the IRC hard problem area of information provenance, because of its direct relevance to management of classified information.

It might be expected that the interagency R&D priorities identified in this Federal Plan would align closely with the IRC list, as both lists have emerged from the Federal R&D community and are derived from the perspectives of Federal agencies about R&D challenges associated with carrying out their missions. The priorities identified by PITAC, however, were developed by a non-government group of subject-matter experts and therefore represent the perspectives of a different community. Thus, the degree of correspondence and alignment among the results of R&D prioritization activities conducted independently by the CSIA IWG, the PITAC, and the IRC is particularly noteworthy.

FINDINGS AND RECOMMENDATIONS

The technology trends outlined in this report make clear that the U.S. faces a long-term engagement with a new type of challenge to its security and economic stability. Cyber threats are asymmetrical, surreptitious, global, and constantly evolving. Moreover, the pervasive interconnectivity of the IT infrastructure on which all sectors of society now rely makes cyber attacks an increasingly attractive prospect for adversaries that include terrorists as well as malicious hackers and criminals.

This Plan outlines a Federal R&D strategy for strengthening the security and assurance of the IT infrastructure. The specifics of this strategy are articulated through the following findings and recommendations:

1. Target Federal R&D investments to strategic cyber security and information assurance needs

Finding: The private-sector marketplace for cyber security and information assurance technologies is thriving, but new products and advances are focused mainly on areas for which large and profitable customer bases currently exist – principally preventing, protecting, defending against, and responding to today's cyber threats.

Recommendation: Federal cyber security and information assurance R&D managers should reassess the Nation's strategic and longer-term cyber security and information assurance needs to ensure that Federal R&D focuses on those needs and complements areas in which the private sector is productively engaged. In general, agencies should ensure that resources are available to support work in the top technical priorities, address technical and funding gaps among the priority areas as well as in the broader collection of technical topics, and help develop the technological foundations for next-generation IT infrastructure, as described in this Plan.

2. Focus on threats with the greatest potential impact

Finding: Today's most prevalent cyber threats are not the most significant threats to the Nation's critical and IT infrastructures or to the economy, nor will they necessarily remain the most prevalent threats in the future. However, the constant hacker attacks – often closer to nuisances than true threats – that consume IT managers' daily attention and security budgets pervasively skew R&D efforts toward defenses against routine low-level attacks. Because of the lower relative probability of severe, highest-impact attacks, such strategic threats are not being adequately addressed at the research, development, or deployment levels.

Recommendation: Although cyber security and information assurance technologies developed by the private sector will undoubtedly evolve along with threats, this evolution can be significantly accelerated by laying sound technological foundations through R&D efforts. Federal agencies should focus cyber security and information assurance R&D investments on high-impact threats as well as on investigation of innovative approaches to increasing the overall security of IT systems.

3. Make cyber security and information assurance R&D both an individual agency and an interagency budget priority

Finding: As budgets become constrained, it is important to focus on recognized priorities in order to maximize the impact of existing funding resources. Such a focus is particularly valuable in R&D in information technologies, where overall advances require gains in many scientific disciplines and component technologies.

Recommendation: Agencies should consider cyber security and information assurance R&D policy guidance (e.g., the joint memorandum from OMB and OSTP, discussed above, that identifies cyber security as an interagency R&D priority) as they address their mission-related R&D. Agencies should also be aware of the interagency cyber security and information assurance R&D priorities identified in this report, and should give appropriate weight to these areas in budget formulation and technical program planning.

Recommendation: To achieve the greatest possible benefit from investments throughout the Federal government, cyber security and information assurance R&D should have high priority for individual agencies as well as for coordinated interagency efforts.

4. Support sustained interagency coordination and collaboration on cyber security and information assurance R&D

Finding: Cooperative interagency activities through the CSIA IWG enabled the development of this Plan. Sustained coordination and collaboration among agencies will be required to accomplish the goals identified in the Plan. Ongoing coordination can expand communication about shared cyber security and information assurance issues across disparate agencies, ensure that there is minimal duplication of R&D efforts across agencies, and help leverage agencies' expertise and strengths to achieve common goals. Collaborative activities such as testbeds and demonstrations can maximize the gains from R&D efforts. For example, several agencies can develop cooperative R&D plans to address complementary parts of the research agenda, and joint funding may make it possible to address common needs for which no single agency has sufficient resources.

Recommendation: Agencies should designate representatives to participate in development of the interagency R&D roadmap proposed in Recommendation 7 and in other interagency cyber security and information assurance R&D activities. Agencies should participate in interagency R&D coordination and collaboration on an ongoing basis. Agency leadership at high levels also has an important role to play and should formally and/or informally support cooperative activities that involve multiple agencies. Such cooperation is particularly desirable in the cyber security and information assurance domain, where the goal is improved security procedures, tools, and techniques that can have broad impact.

5. Build security in from the beginning

Finding: Many of today's IT infrastructure vulnerabilities are the result of bugs and flaws in IT systems' software and hardware. In addition, much of the current infrastructure was not built with security as a core requirement. It was initially developed in a trusted community where today's threats did not apply. Now it is being used in ways that were not originally envisioned, but that require a greater level of trust than can be provided in the absence of security. The current standard approach to security relies on patching vulnerabilities and deploying a large assortment of security countermeasures aimed at known types of attacks. While this approach functions with varying degrees of effectiveness as a reactive mitigation strategy, it is not an effective long-term path to a fundamentally more secure infrastructure.

Recommendation: The Federal cyber security and information assurance R&D portfolio should support fundamental R&D exploring inherently more secure next-generation technologies that will replace today's patching of the current insecure IT infrastructure.


6. Assess security implications of emerging information technologies

Finding: Both new information technologies and emerging research areas can be expected to introduce novel security issues and vulnerabilities in the IT infrastructure. Moreover, it is likely that as new capabilities are added to the existing IT infrastructure, the difficulty of fixing some vulnerabilities will be exacerbated.

Recommendation: The Federal government should assess the security implications and the potential impact of research results in new information technologies as they emerge, including in such fields as optical computing, quantum computing, and pervasively embedded computing. Given the pace of technological change in the IT domain, this analytical capability should be an integral component of Federal cyber security and information assurance R&D planning and coordination activities.

7. Develop a roadmap for Federal cyber security and information assurance R&D

Finding: While scientific advances can occur unexpectedly and serendipitously, progress in areas of strategic importance must be accelerated through concerted attention and planned and coordinated efforts. Accelerating development of new cyber security and information assurance technologies for the Nation's IT infrastructure will require agreement among Federal R&D agencies on interagency technical priorities and coordinated activities to address them. This Plan provides baseline information about current Federal cyber security and information assurance R&D to serve as the starting point for the necessary multi-agency coordination.

Recommendation: Federal agencies – working together and in collaboration with the private sector – should use this Plan's technical priorities and investment analyses to develop a roadmap of cyber security and information assurance R&D priorities. This effort should emphasize coordinated agency activities that address technology and investment gaps and should accelerate development of strategic capabilities. Agencies should adopt the collaborative roadmapping process in an ongoing way as a means to strengthen Federal research in cyber security and information assurance, to intensify the R&D focus on high-priority areas, and to leverage agency investments more effectively in support of strategic goals.

8. Develop and apply new metrics to assess cyber security and information assurance

Finding: It is widely acknowledged in the IT industry and the national research community that a major research challenge is posed by the lack of effective methods, technologies, and tools to assess and evaluate the level of component, system, and network security. The baseline analysis of Federal investments found that, while the technical topic of software testing and assessment tools is both funded and ranked as a top R&D priority, the topic of metrics is not in either the top funding or top priority rankings.

Recommendation: As part of roadmapping, Federal agencies should develop and implement a multi-agency plan to support R&D for a new generation of methods and technologies for cost-effectively measuring IT component, system, and network security. As more exacting cyber security and information assurance metrics, assessment tools, and best practices are developed through R&D, these should be adopted by agencies and applied in evaluating the security of Federal systems, and should evolve with time.

9. Institute more effective coordination with the private sector

Finding: Much of the Nation's IT infrastructure and interconnected critical infrastructures is owned and operated by the private sector. Furthermore, both the private and public (i.e., government) sectors rely broadly on mainstream commercial-off-the-shelf technologies to build out and secure their respective parts of the IT infrastructure, making the effective transition of technologies from R&D into widely available products a key issue. Addressing these needs will require ongoing communication and coordination between the public and private sectors to maximize the gains from each sector's activities.

Recommendation: The Federal government should review private-sector cyber security and information assurance practices and countermeasures to help identify capability gaps in existing technologies, and should engage the private sector in efforts to better understand private-sector views on cyber security and information assurance R&D needs and priorities. Improved awareness in the Federal government and the private sector of the other's views on R&D needs, priorities, and investments will enable both research communities to develop and pursue complementary R&D efforts that meet strategic national needs, while at the same time making the best use of limited funding resources for cyber security and information assurance R&D.

Recommendation: Federal agencies supporting cyber security and information assurance R&D should improve communication and coordination with operators of both Federal and private-sector critical infrastructures with shared interests.

Recommendation: Federal coordination efforts should encompass development of information exchanges and outreach activities that accelerate technology transition as an integral part of Federal cyber security and information assurance R&D activities. Because widespread use of effective security technologies is in the national interest, obstacles to adoption and deployment of the results of R&D activities should be addressed.

10. Strengthen R&D partnerships, including those with international partners

Finding: From its origins nearly 40 years ago in Federally funded R&D programs to meet Federal needs, the Internet has grown into a remarkable global infrastructure used by nearly a billion people. As this Plan emphasizes, however, the Internet also is interconnected with some of the Nation's most sensitive physical and IT infrastructures.

Recommendation: Given the scale, complexity, and diversity of this multifaceted fabric of connectivity, the Federal government should foster a broad partnership of government, the IT industry, researchers, and private-sector users to develop, test, and deploy a more secure next-generation IT infrastructure. The Federal government should initiate this partnership by holding a national workshop to solicit views and guidance on cyber security and information assurance R&D needs from stakeholders outside of the Federal research community. In addition, impediments to collaborative international R&D should be identified and addressed in order to facilitate joint activities that support the common interests of the United States and international partners.

CONCLUSIONS

The IT infrastructure of the United States today is essential to the functioning of government, private enterprise, and civil society, including its critical systems for water, energy, transportation, and public safety. Federal leadership is both warranted and needed to encourage development of long-term goals and technical strategies for improving the overall security of this vital national interest.

The need for Federal leadership is underscored by the cyber security conditions described in this report. In summary:

❖ The increasing availability of techniques to attack the economy through the IT infrastructure provides an asymmetric, low-cost advantage to adversaries of all kinds around the globe.

❖ Ubiquitous vulnerabilities in today's IT infrastructure and a rapidly evolving spectrum of threats tie up available resources in a recurring cycle of defensive patching that does not strengthen the infrastructure as a whole.

❖ The threats and vulnerabilities of tomorrow may be substantially different from today's.

❖ The Nation's critical physical infrastructures are connected to and rely upon the IT infrastructure, and thus are also liable to suffer impacts from cyber attacks.

❖ Integration of emerging technologies into the ITinfrastructure increases the breadth of and accessto vulnerabilities.

❖ The degree of interconnectivity with the ITinfrastructure and among critical infrastructureswill continue to rise in the years ahead.

❖ The Federal government cannot unilaterallydeploy countermeasures across the existing ITinfrastructure, nor can it unilaterally develop anddeploy a more secure infrastructure. Effectivesolutions will come only through a combinationof R&D breakthroughs and cooperation amongall stakeholders.

This Plan outlines a Federal role in cyber security and information assurance R&D that:

❖ Ensures that cyber security and information assurance R&D is a strategic Federal priority

❖ Recognizes that R&D in defensive measures for the current IT infrastructure, while a short-term necessity, is not a substitute for strengthening the infrastructure in more fundamental ways

❖ Supports longer-term R&D to develop the next-generation technologies needed to build in, rather than bolt on, security throughout IT infrastructure architectures

❖ Sustains interagency collaboration to maximize the gains from Federally funded R&D

❖ Fosters partnerships between the Federal government and private-sector stakeholders in IT infrastructure security.

The Nation needs next-generation IT infrastructure R&D to produce the breakthroughs from which new cyber security paradigms will take shape. With this Federal Plan in place, the next step is to undertake the multi-agency efforts it recommends.



Part II: Technical Perspectives on Cyber Security and Information Assurance R&D

Part II provides technical perspectives on the cyber security and information assurance R&D topics identified in Part I. The R&D topics are grouped into eight broad categories. Each technical perspective, prepared and reviewed by agency officials with expertise in the topic, describes the topic and its importance, the state of the art, and gaps in current capabilities.

1. FUNCTIONAL CYBER SECURITY AND INFORMATION ASSURANCE

The R&D topics in this category address technologies and capabilities that minimize the impact of compromises or potential compromises of data, networks, and systems, or that enable them to prevent, detect, resist, or respond to attacks. Topics in this category are:

❖ Authentication, authorization, and trust management
❖ Access control and privilege management
❖ Attack protection, prevention, and preemption
❖ Large-scale cyber situational awareness
❖ Automated attack detection, warning, and response
❖ Insider threat detection and mitigation
❖ Detection of hidden information and covert information flows
❖ Recovery and reconstitution
❖ Forensics, traceback, and attribution

1.1 Authentication, Authorization, and Trust Management

Definition

Authentication is the process of verifying the identity or authority of a network or system user (which can be a human user or a computer-based process or device) through a secure means such as digital signatures, passwords, tokens, or biometric features. Authorization, which takes place after authentication, refers to the privileges granted to an authenticated user who has requested access to services or resources. (Section 1.2 discusses access control in greater detail.) Authentication and authorization are interdependent; authorization to use a network or system resource frequently includes establishing the identity of the user requesting access (e.g., identity-based authentication) or verifying that a trusted third party has certified that the user is entitled to the access requested (e.g., credential-based authentication). Privilege is a security attribute shared by users whose identities have been authenticated. Cross-domain credentialing allows distinct systems, connected across a network, to provide access based on the secure identification procedure performed by one of the other networked systems. Trust management consists of making assessments of sets of credentials to determine whether they constitute adequate evidence for authorization.



Importance

Authentication is fundamental to all information security because it connects the actions performed on a computer to an identified user who can be held accountable for those actions. The expanding means available for accessing networks make security breaches and uncontrolled user access a growing concern. As enterprise IT systems continue to grow in complexity and number of users, authorization technologies that enable authenticated users to be assigned varying levels of system access privileges will play an increasingly critical role in security management.

State of the Art

Authentication of a user is based on one or more of three factors: a physical attribute (e.g., fingerprint or biometric data), an artifact (e.g., an automatic teller machine [ATM] card or cryptographic token), and/or a data key (e.g., a password). Each has advantages and disadvantages. The best-known and most common authenticators are conventional static passwords. Compromised static passwords, however, are a common vulnerability because users are careless about keeping their passwords secret, password security policies (such as mandatory format rules and periodic changes) are difficult to enforce, and malicious attackers have technological and social tools for discovering and accessing passwords. The use of multi-factor authentication methods may increase assurance. For example, an ATM might require both an ATM card and a password or personal identification number to provide a higher level of assurance than is provided by either factor alone.
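The multi-factor principle can be made concrete in a few lines. The sketch below, a minimal illustration rather than a vetted implementation, pairs a salted password hash with a time-based one-time code in the style of the HOTP/TOTP constructions (RFC 4226/6238); all names and parameters are illustrative.

```python
import hashlib, hmac, os, struct, time

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 slows offline guessing if the password database is stolen.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def totp(secret: bytes, interval: int = 30) -> str:
    # Time-based one-time code: HMAC over the current time-step counter,
    # then the dynamic truncation step defined by RFC 4226.
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"

def authenticate(password: str, code: str, salt: bytes,
                 stored_hash: bytes, token_secret: bytes) -> bool:
    # Both factors must verify; compare_digest resists timing side channels.
    return (hmac.compare_digest(hash_password(password, salt), stored_hash)
            and hmac.compare_digest(totp(token_secret), code))

salt, secret = os.urandom(16), os.urandom(20)
stored = hash_password("correct horse", salt)
print(authenticate("correct horse", totp(secret), salt, stored, secret))  # True
```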

Biometric technologies authenticate people by measuring physical characteristics that can serve as identifiers – for example, fingerprints, voice, retinal patterns, or even handwriting. But biometric data raise privacy issues that may in some instances limit their use. Moreover, while biometric authentication can provide stronger assurance of identity than static passwords alone, biometrics are also susceptible to compromise. For example, recent experiments with artificial fingers have shown that fingerprint recognition devices can be fooled.

Capability Gaps

The current technologies described above all have limitations that frustrate the efforts of system security managers to increase overall security levels for networks, systems, and information. Next-generation concepts that both streamline and harden authentication, authorization, and trust management technologies and tools are needed to help mitigate vulnerabilities associated with changing network dynamics and increased security threats. Specific R&D needs include:

Device authentication: Device authentication requires equipping devices with characteristics that can be reliably recognized. For devices and associated processes that generate requests, authentication using cryptographic protocols may be required. Some of these protocols have been developed, but there has been little experience with deploying them and building systems that make good use of them.
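As a rough illustration of such a cryptographic protocol, the sketch below implements symmetric challenge-response device authentication, assuming a shared key provisioned to the device in advance; the key handling shown is deliberately simplified and the names are hypothetical.

```python
import hashlib, hmac, os

# Hypothetical shared key provisioned to the device before deployment.
DEVICE_KEY = os.urandom(32)

def issue_challenge() -> bytes:
    # A fresh random nonce prevents replay of an earlier response.
    return os.urandom(16)

def device_respond(key: bytes, challenge: bytes) -> bytes:
    # The device proves key possession by keying an HMAC over the nonce.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify_device(key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
assert verify_device(DEVICE_KEY, challenge, device_respond(DEVICE_KEY, challenge))
```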

Scalable authentication: Federated identities are a capability that enables organizations to share trusted identities across the boundaries of their networks – with business partners, autonomous units, and remote offices. These technologies offer the prospect of the scalable authentication needed for scalable trust management. However, there are continuing challenges in defining common authentication identities and, more important, in the forms of authorization that the inter-domain authentication will support. This problem has been partially addressed in some of the most common application areas, such as the use of credit cards for electronic commerce on the Internet. However, scalable authentication, or global-scale identity management, remains a challenge (e.g., see the Hard Problem List, INFOSEC Research Council, November 2005, for elaboration).


1.2 Access Control and Privilege Management

Definition

Access control and privilege management begin with the administrative and mechanical process of defining, enabling, and limiting the operations that users can perform on specific system resources. The permission or limitation of operations is based on the business rules or access policies of the organization.

Access control policies are enforced through a mechanism consisting of a fixed system of functions and a collection of access control data reflecting the configuration of the mechanism. Together, these map a user’s access request to the decision of whether to grant or deny access. The access control data include a set of permissions, each indicating a user’s authorization to perform an operation (e.g., access, read, write) on an object or resource. Permissions are not individually specified. They are organized in terms of, and mapped through administrative operations or a predefined set of rules onto, a set of user, subject (process), and resource attributes associated with a specific type or class of policy.

For example, under an access control management approach called Role-Based Access Control (RBAC), permissions are defined in terms of roles that are assigned to users and privileges that are assigned to roles. Other approaches include label-based access control mechanisms, which are defined in terms of labels applied to users, processes, and objects, and discretionary access control mechanisms, which are defined in terms of user identifiers, user groups, and access control lists.

Importance

Although access control is often specified in terms of limitations or protections, the ability of an organization to enforce access control policy is what ultimately enables the sharing of greater volumes of data and resources with a larger and more diverse user community.

State of the Art

Various security mechanisms now exist for enforcing secure access within host operating systems and across heterogeneous bodies of data. In an attempt to streamline the management of access control, RBAC models and, more recently, an RBAC standard have been developed. RBAC offers administrative efficiency and the capability to intuitively administer and enforce a wide range of access control policies.

In RBAC, permissions are associated with roles, and roles are assigned to users in order to grant users the permissions corresponding to those roles. The implementation of this basic concept greatly simplifies access control management. Roles are centrally created for the various job functions in an organization, and users are assigned roles based on criteria such as their positions and job responsibilities. Users can be easily reassigned roles. Roles can be granted new permissions as new applications and systems are incorporated, and permissions can be revoked from roles as needed. For example, if a user moves to a new function within the organization, the user can be assigned to the new role and removed from the old one, with associated privileges updated automatically. In the absence of RBAC, the user’s old privileges would have to be individually identified and revoked, and new privileges would have to be granted.
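The core RBAC mappings can be sketched directly. The roles, resources, and user names below are hypothetical, and a production system would add sessions, role hierarchies, and constraints; the sketch only shows why reassignment touches one mapping rather than many individual permissions.

```python
# Minimal RBAC sketch: permissions attach to roles, roles attach to users.
role_permissions = {
    "payroll_clerk": {("payroll_db", "read"), ("payroll_db", "write")},
    "auditor": {("payroll_db", "read"), ("audit_log", "read")},
}
user_roles = {"alice": {"payroll_clerk"}}

def is_authorized(user: str, resource: str, operation: str) -> bool:
    return any((resource, operation) in role_permissions.get(role, set())
               for role in user_roles.get(user, set()))

# Reassignment updates one mapping; no per-user permission edits are needed.
user_roles["alice"] = {"auditor"}
assert not is_authorized("alice", "payroll_db", "write")
assert is_authorized("alice", "audit_log", "read")
```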

Although RBAC represents a clear improvement over simple table lookup models of the access control matrix (data structures such as access control lists), the RBAC model does not solve all access control and privilege management problems. Discovering and defining roles and mapping roles to enterprise resources and applications, commonly referred to as role engineering, are costly and difficult. Although the development of best practices and tools to ease the transition to RBAC would be helpful, these capabilities provide only an interim solution to the research objectives described below. Ultimately, access control should be redefined and re-engineered from the ground up to reflect the increasing scale and complexity of networks and systems of systems. The goal should be a redefinition that preserves access control advancements while providing a generalized context to accommodate well-known and ad hoc access control policies, is easy to deploy and manage, and is safe in its configuration.


Capability Gaps

To move toward the next generation in access control and privilege management technologies, advances in three separate but related R&D areas are needed: 1) scalable access control data management methods and tools; 2) flexible access control mechanisms capable of enforcing a wide variety of access control policies; and 3) methods and techniques for defining safe and secure access control configurations.

Scalable access control data management: Many organizations have hundreds or even thousands of systems, hundreds to hundreds of thousands of users, and thousands to millions of resources that must be protected. Managing access control data across these systems, users, and resources is a monumental task and perhaps the most expensive and error-prone of all security disciplines.

Identity-based access control models work well for small workgroups. But as the number of groups and users and the number and variety of resources they need to access grow to an enterprise- and cross-enterprise scale, access control information stored in applications, databases, and file systems grows so large that managing and controlling access changes can overwhelm even the most knowledgeable administrators. Visualizing and reasoning about a virtual ocean of access control data become impossible. For example, many enterprises are unable to make even the simplest queries, such as what system accounts exist for a given user. Consequently, organizations have resorted to implementing poor administrative practices such as account sharing and cloning of permissions, resulting in permissions becoming over-distributed and difficult to manage.

Flexible access control mechanisms: One size does not fit all access control policies. Access control mechanisms are as diverse as the types of business practices and applications that need to enforce them. An access control mechanism that meets the policy requirements within one market domain may be inappropriate in another.

Effective access control mechanisms provide a context for policy configuration, embodiment, and enforcement. Policy configuration refers to the administrative operations of creating and managing access control data. Embodiment refers to the storage of access control data that reflect the policy. Enforcement applies access control data so that users and their processes adhere to the access control policy. Since the mid 1970s, security researchers have sought to develop access control models as abstractions of access control systems. When implemented, the models provide a generalized context that supports a wide collection of policies, while adhering to an agreed-upon set of security principles such as least privilege (restricting a user to the minimum privileges needed to complete authorized tasks) and separation of duty (assigning roles and privileges such that no single user can perform multiple sensitive tasks). Revocation (removing privileges previously granted to principals) is also a key feature of these models.

The process for users to specify rich policies remains challenging. This is partly a user-interface problem and partly a problem of designing an intuitive model through which security configuration options can be conveyed to users. Some progress has been made in designing flexible mechanisms, though challenges remain (e.g., implementing least privilege or revocation on a wide-scale basis is difficult). These mechanisms have not yet been widely deployed.

Safety: In the context of access control, safety is the assurance that an access control configuration will not result in the leakage of a privilege to an unauthorized user. Safety is fundamental to ensuring that the most basic access control policies can be enforced. Unfortunately, there is a tension between the need for safety and the desire for flexibility. The safety of an access control configuration cannot be specified using a general access control model. Consequently, safety is achieved either through the use of limited access control models or the verification of safety via constraints. Currently, almost all safety-critical systems use limited access control models because constraint expression languages are too complex for easy administrative use. However, researchers have determined that most constraints belong to one of a few basic types (e.g., static, dynamic, or historical). Therefore, a key research goal is to develop ways to formulate constraints that allow the safety of access control configurations to be ensured, while having these constraints be flexible enough to support practical applications.
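As one small illustration of a static constraint of the kind discussed above, the sketch below checks a separation-of-duty rule at role-assignment time; the role names and pairing are hypothetical assumptions, not drawn from any particular system.

```python
# Static separation-of-duty: no user may hold both roles in any pair below.
CONFLICTING_ROLE_PAIRS = [("payment_initiator", "payment_approver")]

def assignment_is_safe(current_roles: set, new_role: str) -> bool:
    # Evaluate the constraint against the roles the user would hold.
    proposed = current_roles | {new_role}
    return not any(a in proposed and b in proposed
                   for a, b in CONFLICTING_ROLE_PAIRS)

assert assignment_is_safe({"payment_initiator"}, "auditor")
assert not assignment_is_safe({"payment_initiator"}, "payment_approver")
```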

1.3 Attack Protection, Prevention, and Preemption

Definition

An attack is an attempt to gain unauthorized access to a network’s or a system’s services, resources, or information, or to compromise a network’s or a system’s integrity, availability, or confidentiality. Network or system owners can adopt practices and technologies that improve resistance to attacks or that prevent attacks from disrupting communications or operations, or from compromising or corrupting information.

Importance

Attack protection, prevention, and preemption are essential functional cyber security capabilities. Their goal is to provide an enterprise-wide capability to intercept a malicious attack, thereby preventing disruption, compromise, or misappropriation of networks, systems, or information. Robust attack protection, prevention, and preemption capabilities help mitigate threats and reduce the ability of adversaries to exploit vulnerabilities.

There are two different attack protection, prevention, and preemption strategies. The proactive strategy shields healthy network or system components or services to prevent them from becoming contaminated, corrupted, or compromised. The reactive strategy temporarily isolates compromised network or system components or services to prevent them from contaminating, corrupting, or compromising healthy assets. To be effective, both the proactive and the reactive security capabilities need to be deployed at all levels of enterprise systems.

In addition, attack protection, prevention, and preemption capabilities should be governed by a flexible, adaptable concept of operations. Not all attacks have the same scope or operational impact. Accordingly, the configuration and operation of the attack protection, prevention, and preemption capability should change in accordance with attack severity and intent (i.e., the approach must be adaptable to the nature of the attack and the assets being attacked).

State of the Art

A variety of laws, regulations, and/or institutional policies require agencies and other organizations to be able to respond to security incidents, prevent disruption to normal operations, and isolate compromised networks and systems. Many current commercial offerings are primarily limited to reactive intrusion-detection tools using signature- and rule-based algorithmic techniques, which use preset identification rules to distinguish authorized from unauthorized access. These tools are labor-intensive to use, require constant updating, and provide only limited protection. Even though updates are released much more quickly today than in the past, the result is an arduous configuration control and patch management task.

For example, major vendors are constantly issuing updates and patches to operating systems or applications to fix security holes. In some instances, these updates and patches reopen existing vulnerabilities or create new ones while fixing the targeted problem. Many organizations, such as those operating safety-critical infrastructure systems, have policies that require all upgrades and patches to be thoroughly tested before being deployed to operational systems. But hackers now are also able to reverse-engineer patches to discover the vulnerabilities and rapidly launch attacks that exploit them before the patches can be widely deployed. This becomes a recurring cycle as new upgrades and patches are released more frequently and the reverse engineering methods used by hackers improve.
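The signature- and rule-based matching described above can be reduced to a toy illustration. The sketch below matches payloads against a fixed signature set; the signatures are invented for the example, real IDS rule languages are far richer, and the key point is that traffic exploiting an unpublished vulnerability matches nothing.

```python
import re

# Toy signature set; the patterns are illustrative only.
SIGNATURES = {
    "sql_injection_probe": re.compile(rb"union\s+select", re.IGNORECASE),
    "directory_traversal": re.compile(rb"\.\./\.\./"),
}

def match_signatures(payload: bytes) -> list:
    # Return the names of every signature the payload triggers.
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(payload)]

print(match_signatures(b"GET /?q=1 UNION SELECT password FROM users"))
# A novel attack with no published signature matches nothing -- the core
# limitation of purely signature-based tools.
print(match_signatures(b"GET /?q=previously-unseen-exploit"))
```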

Capability Gaps

Amid these security conditions, reactive capabilities and manual responses are inadequate. Automated responses that operate in milliseconds and emphasize preemption and prevention are needed, along with next-generation systems that are fundamentally more robust and resilient. Furthermore, organizations need to abandon the view that any single product can secure their IT infrastructure. Rather, the focus should be on developing an integrated set of tools and techniques that provide a comprehensive, layered, enterprise-wide attack protection, prevention, and preemption solution.

Proactive behavior-based systems may offer the best option for developing the next generation of attack protection, prevention, and preemption capabilities. These systems will not depend on signatures or rules to identify attacks. Proactive behavior-based tools identify precursor events early in the attack timeline. These systems, when the technologies mature, will provide the capability to identify and preempt unknown and novel attacks. Some research has been done in this area, and early attempts at behavior-based responses are starting to emerge in commercial products. This work should be continued with the goal of making robust products available, and it should be expanded to include the capabilities highlighted below.

Protection is needed at all layers of a protocol stack, such as the seven-layer International Organization for Standardization (ISO)/Open Systems Interconnection (OSI) Technical Reference Model (TRM) (see box below). Current attack preemption R&D primarily addresses Layer 3 (network layer) attacks generated by outsiders. Additional protection, prevention, and preemption features and functionality that are needed include a host-based intrusion prevention capability that is independent of the platform, operating system, and applications.

Related research is needed to increase and verify the robustness and resilience of networks, systems, and components to withstand attacks, especially unknown or novel attacks. Work is also needed to improve the ability of networks, systems, and components to dynamically reconfigure themselves in order to preempt or minimize the damage from an attack.


ISO/OSI Technical Reference Model Layers

Layer 1 – Physical
This layer conveys the bit stream – electrical impulse, light or radio signal – through the network at the electrical and mechanical level. It provides the hardware means of sending and receiving data on a carrier, including defining cables, cards, and other physical aspects.

Layer 2 – Data Link
At this layer, data packets are encoded and decoded into bits. It furnishes transmission protocol knowledge and management and handles errors in the physical layer, flow control, and frame synchronization. The data link layer is divided into two sublayers: the Media Access Control layer and the Logical Link Control layer.

Layer 3 – Network
This layer provides switching and routing technologies, creating logical paths, known as virtual circuits, for transmitting data from node to node. Routing and forwarding are functions of this layer, as well as addressing, internetworking, error handling, congestion control, and packet sequencing.

Layer 4 – Transport
This layer provides transparent transfer of data between end systems, or hosts, and is responsible for end-to-end error recovery and flow control. It ensures complete data transfer.

Layer 5 – Session
This layer establishes, manages, and terminates connections between applications. It sets up, coordinates, and terminates conversations, exchanges, and dialogues between the applications at each end. It deals with session and connection coordination.

Layer 6 – Presentation
This layer provides independence from differences in data representation (e.g., encryption) by translating from application to network format, and vice versa. It works to transform data into the form that the application layer can accept, and it formats and encrypts data to be sent across a network, providing freedom from compatibility problems. It is sometimes called the syntax layer.

Layer 7 – Application
This layer supports application and end-user processes. Communication partners are identified, quality of service is identified, user authentication and privacy are considered, and any constraints on data syntax are identified. Everything at this layer is application-specific. This layer provides application services for file transfers, e-mail, and other network software services. Tiered application architectures are part of this layer.

Source: Cisco Systems, Inc.


1.4 Large-Scale Cyber Situational Awareness

Definition

Cyber situational awareness can be defined as the capability that helps security analysts and decision makers:

❖ Visualize and understand the current state of the IT infrastructure, as well as the defensive posture of the IT environment

❖ Identify what infrastructure components are important to complete key functions

❖ Understand the possible actions an adversary could undertake to damage critical IT infrastructure components

❖ Determine where to look for key indicators of malicious activity

Cyber situational awareness involves the normalization, deconfliction, and correlation of disparate sensor data, and the ability to analyze data and display the results of these analyses. Situational awareness (SA) is an integral part of an information assurance (IA) common operational picture. Such a picture provides a graphical, statistical, and analytical view of the status of computer networks and the defensive posture.
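A minimal sketch of the normalization-and-correlation step follows, assuming two hypothetical sensor feeds with incompatible field names; real SA systems add deconfliction, time alignment, and far richer schemas.

```python
from collections import Counter

# Two hypothetical sensor feeds that name the same field differently.
ids_alerts = [{"src": "10.0.0.5", "sig": "port-scan"}]
firewall_logs = [{"source_ip": "10.0.0.5", "action": "deny"},
                 {"source_ip": "10.0.0.9", "action": "deny"}]

def normalize(event: dict, source_field: str, indicator: str) -> dict:
    # Map each feed onto one common schema before correlation.
    return {"host": event[source_field], "indicator": indicator}

events = ([normalize(e, "src", e["sig"]) for e in ids_alerts] +
          [normalize(e, "source_ip", e["action"]) for e in firewall_logs])

# Correlation: hosts reported by more than one sensor merit analyst attention.
by_host = Counter(e["host"] for e in events)
print([host for host, count in by_host.items() if count > 1])  # ['10.0.0.5']
```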

Importance

Situational awareness is the key to effective computer network defense. A robust situational awareness capability is necessitated by the highly interconnected nature of information systems and computer networks, the degree to which they share risk, and the coordination and synchronization requirements of response efforts.

Analysts and decision makers must have tools enabling timely assessment and understanding of the status of the networks and systems that make up the IT infrastructure. This situational understanding must be presented at multiple levels of resolution: 1) a top-level, global indication of system health; 2) exploration of various unfolding threat scenarios against various components of the system; and 3) more local-level details of recognizable or previously unseen anomalous activities.

State of the Art

Most current SA technologies perform limited correlation and fusion of primarily low-level network and node-based sensor data. These technologies do not provide a picture of the broader state of health of larger network systems. Status information tends to be localized and provides limited insight into the impact on business and mission processes or interactions among the various components of the larger IT infrastructure. As a result, attempts at corrective action are difficult to coordinate to assure that response actions do not propagate undesirable effects onto critical elements of the infrastructure.

Current visualization schemes focus on presenting large amounts of sensor data in formats that leave human analysts to perform much of the necessary fusion manually. They do not adequately enable effective visualization and understanding of potentially unfolding malicious or anomalous activity in the broader context of the IT infrastructure.

Capability Gaps

Despite their strategic and operational significance, current SA capabilities are immature. Security analysts must analyze large volumes of data from sensors and network management systems. In the absence of improved capabilities, emerging technologies will present more information than can reasonably be analyzed with existing capabilities. A particularly difficult problem is finding trends and patterns in attacks or probes (i.e., electronic attempts to circumvent network and system protections or to identify weak points in an information system) that may be occurring. New technologies are needed to help security analysts deconflict, correlate, and understand large volumes of data to support informed decision making.

Research is also needed to determine how best to design the human-computer interface to portray many aspects of SA to fit analysts’ cognitive processes. In particular, visualizations need to go beyond current efforts that are focused on understanding large volumes of low-level sensor data. Methods are needed to model and present to decision makers multiple, possibly competing, scenarios and hypotheses of unfolding potential attacks, in some cases with sufficient warning to preempt these attacks if possible, or at least minimize damage and support rapid, effective response and restoration.

Generation of situational awareness and understanding must be based on fusion of a broad range of cyber sensor data and traditional information sources, including open source data. The large amounts of such multi-source data must be filtered, transformed, fused, and correlated to provide insightful and actionable information to analysts and decision makers. External and internal contextual information about the system also is needed to enable understanding of observed abnormalities, whether malicious or otherwise. Current capabilities must be expanded to capture the broader context in which malicious or anomalous activities may be occurring.

A key proving ground for a robust SA capability is in sustaining the overall functionality of the Nation’s IT infrastructure. The IT infrastructure can be expected to increasingly support various ongoing and planned Federal and private-sector activities around the globe. These network-centric processes will depend on assured availability, integrity, and confidentiality of the infrastructure’s networks, systems, and information. A sophisticated SA capability will be needed to determine where and when the required operational reliability has been, or may be, adversely affected.

Systems that provide SA capability must themselves be designed to resist subversion or manipulation. Such protection is essential to keep adversaries from using an SA system to directly or indirectly trigger inappropriate, and perhaps harmful, responses to detected anomalous activities, or to hamper recovery activities following an attack.

1.5 Automated Attack Detection, Warning, and Response

Definition

Automated attack detection, warning, and response capabilities enable systems and networks to recognize that they are under attack, respond defensively, and alert human operators. Today’s static signature- and rule-based technologies can detect certain types of network disturbances and can respond by alerting human operators. But these technologies generally cannot recognize novel forms of attack, and they have limited abilities to automatically act to defend the system and make repairs to keep it functioning. Automated attack detection requires next-generation tools based not only on predefined signatures but also on dynamic learning techniques. These techniques must be integrated, and sensors distributed at the host and network layers, in order to provide coverage of both outsider and insider threats. Automated responses should include not only warnings but defensive actions that occur within the propagation time of an attack in order to mitigate it.
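One simple form of dynamic learning is a statistical baseline. The sketch below, illustrative only, learns a traffic-rate baseline and flags large deviations; production systems would use far more sophisticated models and features, and the numbers here are arbitrary assumptions.

```python
import statistics

class RateAnomalyDetector:
    """Toy detector: learn a traffic baseline, flag large deviations."""

    def __init__(self, threshold_sigmas: float = 3.0, warmup: int = 30):
        self.history = []
        self.threshold = threshold_sigmas
        self.warmup = warmup

    def observe(self, requests_per_second: float) -> bool:
        if len(self.history) >= self.warmup:
            mean = statistics.fmean(self.history)
            stdev = statistics.stdev(self.history) or 1.0
            if abs(requests_per_second - mean) > self.threshold * stdev:
                return True  # anomalous: do not fold into the baseline
        self.history.append(requests_per_second)
        return False

detector = RateAnomalyDetector()
for rate in [100, 102, 98, 101] * 10:   # benign warm-up traffic
    detector.observe(rate)
print(detector.observe(5000))            # True: worm-like traffic spike
```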

Importance

The effects of a wide-scale cyber attack could be devastating, especially if coupled with a physical attack. Static intrusion detection and prevention mechanisms that reside at network boundaries may not always be capable of stopping malicious code and worm attacks that can gain a foothold from within the network and spread rapidly. Organizations should adopt security strategies that deploy mechanisms at all levels of a network or system. But next-generation tools that can deal dynamically with ever-more sophisticated attack technologies are also needed. The need for new automated attack recognition and warning technologies spans threats from scripted attacks through ad hoc hacking to Trojan horses, viruses, self-replicating code, and blended threats.

Today, the spread of a new Internet worm results in several tiers of reaction: 1) knowledgeable network operators try to block the worm by configuring switches, routers, and firewalls; 2) an updated signature is created to stop the worm via antivirus and intrusion prevention systems; and 3) a patch is created to fix the underlying vulnerability. Effective response times range from hours to weeks today but will need to be under a second in order to deal with attacks such as flash worms. Only sophisticated automated response techniques can provide such speedy protection.

State of the Art

A large percentage of the network security market still relies on signature- and rule-based systems. The migration from those systems to anomaly-detection and dynamic self-learning systems is just beginning. One new capability involves high-level security event managers, technologies that correlate alerts from multiple sensors and network logs with the goal of providing network alerts with fewer false positives and minimizing the amount of data the network operator must examine. While the commercial sector has been slow to target automated attack response, Federal research is developing the capability to automatically detect and respond to worm-based attacks against networks, provide advanced warning to enterprise networks, study and determine the worm’s propagation and epidemiology, and provide off-line rapid response forensic analysis of malicious code.

Capability Gaps

Security-event management systems, which provide IT operators with a synthesized real-time overview of network and system activity, mark a step forward in situational awareness. But they represent the high end of the current state of the art, and few address the requirement for automated response. Significant R&D will be needed to move automated attack detection and warning technologies beyond rules and signatures. Future generations of these technologies need to be preemptive rather than reactive, but automating system defense behaviors will require advances in machine learning technologies. These advances must be accompanied by new techniques and tools that help network operators understand and interact with the automated decision-making process.

For example, a defensive computer system that can block or quarantine a suspected attack will have to be a highly trusted system, and even then, strong methods must also be put in place to enable operators to quickly reverse any actions taken by a system that they deem inappropriate. The combination of human and machine learning through making correct (true positive and true negative) decisions will reinforce the underlying intelligence. Weighting mechanisms are likely to be required so that the defense system will evaluate multiple factors, including the likelihood and potential impact of attack, and will be more or less likely to isolate systems depending on their criticality and the degree of certainty of an attack.
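Such a weighting mechanism might be sketched as a simple risk score, as below; the thresholds, factors, and response names are purely illustrative assumptions, not a recommended policy.

```python
def choose_response(attack_confidence: float, attack_impact: float,
                    asset_criticality: float) -> str:
    # Hedged sketch: isolate only when the expected harm of the attack
    # outweighs the cost of taking the asset offline.
    risk = attack_confidence * attack_impact
    if risk > 0.8 and asset_criticality < 0.5:
        return "quarantine"       # high risk, expendable asset: act now
    if risk > 0.8:
        return "alert-operator"   # critical asset: keep a human in the loop
    return "log-and-monitor"

print(choose_response(0.95, 0.9, 0.2))   # quarantine
print(choose_response(0.95, 0.9, 0.9))   # alert-operator
```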

Defining and characterizing defensive courses of action is another capability gap. When a system is attacked, there is very little time for humans to react. To minimize the potential damage and stop or quarantine the cyber attack, there is a need for decision support systems that can rapidly provide defensive courses of action and alternatives to network defenders.

1.6 Insider Threat Detection and Mitigation

Definition

An insider threat can be defined as the potential damage to the interests of an organization by a person who is regarded as loyally working for or on behalf of the organization. Within the IT environment of networks, systems, and information, an organization’s interests can be embodied in both implicit and explicit security policies; in this environment, an insider threat can be more narrowly defined as the potential violation of system security policy by an authorized user. Although policy violations can be the result of carelessness or accident, the core concern is deliberate and intended actions such as malicious exploitation, theft, or destruction of data, or the compromise of networks, communications, or other IT resources. Detection involves differentiating suspected malicious behavior from normal as well as unusual yet acceptable behavior. Mitigation of the insider threat involves a combination of deterrence, prevention, and detection.

Importance

The intelligence community (IC) and DoD are environments in which access to classified information is available to appropriately cleared members. One of the most harmful and difficult-to-detect threats to information security is the trusted insider who uses privileges in a malicious manner to disrupt operations, corrupt data, exfiltrate sensitive information, or compromise IT systems. Loss of intelligence operations and/or information will ultimately compromise the Nation’s ability to protect and defend itself against future attacks and to safeguard military and intelligence assets working abroad. In fact, some of the most damaging cyber attacks against the IC have been launched by trusted insiders. Such attacks will become an increasingly serious threat as increased information sharing results in greater access to and distribution of sensitive information. The private sector, where corporations maintain valuable and highly sensitive proprietary information, and where banking institutions manage the flow of and access to electronic funds, shares similar concerns over insider activity.

State of the Art

Techniques to mitigate the insider threat focus on monitoring systems to identify unauthorized access, establish accountability, filter malicious code, and track data pedigree and integrity. While an array of partial measures exists for countering the insider threat, these measures are limited in scope and capabilities. Among the challenges that add to the difficulty of this problem are:

❖ The scale and diversity of the computing infrastructure, in terms of numbers and types of platforms, missions supported, infrastructure architectures and configurations, and worldwide geographic distribution

❖ The size, variety, and fluidity of the workforce in general and, in the case of military missions, the need to interface with allied and ad hoc coalition partners

❖ The variety of highly complex computer security environments that range from unclassified systems to classified networks, and from private-sector systems and networks that support business and electronic commerce to critical infrastructure process control systems

❖ Policy discovery, which is the process by which the kinds of access permitted to insiders are determined. Such policies are difficult to formulate.

The trusted insider operates within this large interconnected world of information systems relatively unchecked and unmonitored, beyond the basic security mechanisms used primarily to detect untrusted outsiders and prevent them from penetrating and exploiting information assets. These factors make the insider threat a complex problem that is beyond the scope of commercially available tools.

Capability Gaps

Both prevention and detection of malicious insider activity can be made more effective through use of models that capture and predict the knowledge, behavior, and intent of insiders. The subjectivity of user behavior makes it difficult to distinguish acceptable, or authorized, behavior from unauthorized behavior regardless of whether the user is considered trusted or untrusted. The analysis becomes even more difficult in the complex environment described above. Thus, a capability that can reliably model and differentiate between normal, unusual but acceptable, and unacceptable user behavior in a complex setting, and that can detect and react to identified malicious insider activity, will provide a solid foundation for successful mitigation of the insider threat.

Accurately and quickly identifying malicious insider activity within large volumes of electronic records (e.g., network access and audit logs) also requires effective behavioral models. In an illustrative scenario, a network of sensors calibrated based on such models would monitor and record user activity throughout the complex enterprise. Analytical tools could then be used to sift through the various data stores, correlating the information and presenting it in a meaningful format. The correlation techniques could use the models to sort through and map the myriad pieces of information into a complete picture of the insider activity, which could be presented via data visualization to interested parties.
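A drastically simplified version of such a behavioral model follows: a per-user baseline of normally accessed resources, with out-of-baseline accesses routed for review rather than automatically judged malicious. All user and resource names are hypothetical.

```python
from collections import defaultdict

# Per-user behavioral baseline: the resources each user normally touches.
baseline = defaultdict(set)

def train(user: str, resource: str) -> None:
    # Build the baseline from historical, presumed-benign activity logs.
    baseline[user].add(resource)

def score_access(user: str, resource: str) -> str:
    if resource in baseline[user]:
        return "normal"
    # Unusual is not automatically malicious; route for analyst review.
    return "review"

for resource in ["hr_db", "payroll_db"]:
    train("alice", resource)
print(score_access("alice", "payroll_db"))        # normal
print(score_access("alice", "weapons_designs"))   # review
```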

For addressing insider threats, more advanced methods of document control and management – the ability to provide an unalterable accounting of document access and dissemination – constitute a major Federal capability gap, given the sensitivity of many types of Federal information and the increasing demands for information sharing. The intelligence and DoD communities, which routinely handle many levels of data and document sensitivity, have particularly acute concerns in this area. Document control and integrity must be applied to the entire document as well as to labeled portions in a manner that is resistant to tampering and circumvention, and must be managed in accordance with data sensitivity and originator controls. The document management capability must ensure that appropriate control requirements are translated into an implementable security policy that is applied to the document and preserved from cradle to grave.

A capability is also needed to prevent document tampering and to maintain the integrity of the information, the associated sensitivity labels, and any dissemination controls contained within the document. One approach to dissemination controls is called labeled paths, in which knowledge of the path of access from point of creation to destination can be used to determine whether all accesses along the path are permitted. An enhanced digital rights management regime should be developed that enforces control and accountability over information in accordance with a specified security policy whenever an individual attempts to read, write, modify, print, copy, distribute, or destroy information or the associated sensitivity labels and dissemination controls. This capability would allow fine-grained customization of a user’s control over all or part of the document in accordance with a specific security policy.

This concept of limiting exposure to insider threats also applies more broadly. Policies that embody the principle of least privilege would make it less likely that insiders would have privileges that enable them to conduct activities they should not be permitted to conduct. However, applying this concept requires technical means for translating the principle of least privilege into practice via configuration and enforcement of access control and other security policies at the implementation level.

1.7 Detection of Hidden Information and Covert Information Flows

Definition

Steganography, derived from the ancient Greek words for “covered writing,” is the art and science of writing hidden messages in such a way that no one apart from the intended recipient knows of the existence of the message. Detection of covert information flows relies on the ability to detect information hidden within a stream of information that is transmitted from one system to another.

Steganographic data are hidden or embedded through the use of mathematical techniques that add information content to digital objects – usually images, video, and audio, but also other digital objects such as executable code. When sophisticated techniques are used, little or no degradation in quality or increase in data size in the resulting object is perceptible. A steganographically embedded message may also be encrypted. There is no universally applicable methodology for detecting steganographic embeddings, and the few general principles that exist tend to be ad hoc. In cyberspace, steganography provides a capability for transmitting information undetected.
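To illustrate the embedding principle at its simplest, the sketch below hides a message in the least significant bits of cover samples, one classic and well-studied embedding technique; the integer list stands in for real image data, and sophisticated modern embedders are far harder to detect than this.

```python
def embed_lsb(pixels: list, message: bytes) -> list:
    # Replace each sample's least-significant bit with one message bit;
    # the change per 8-bit sample is at most 1/255, typically imperceptible.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    stego = pixels[:]
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract_lsb(pixels: list, length: int) -> bytes:
    # Recover `length` bytes by reading the LSBs back, MSB-first per byte.
    bits = [p & 1 for p in pixels[:length * 8]]
    return bytes(sum(b << (7 - i) for i, b in enumerate(bits[k*8:(k+1)*8]))
                 for k in range(length))

cover = list(range(64))            # stand-in for real image samples
stego = embed_lsb(cover, b"hi")
assert extract_lsb(stego, 2) == b"hi"
```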

Steganalysis is the examination of an object to determine whether steganographic content is present, and potentially to characterize or extract such embedded information. Watermarking refers to embedding content that conveys some information about the cover object (e.g., copyright information). In this context, digital data forensics is the use of science and technology to determine information about how a digital object was formed or modified, even in the absence of the original object.

Importance

International interest in R&D for steganographic technologies and their commercialization and application has exploded in recent years. These technologies pose a potential threat to U.S. national security. Because steganography secretly embeds additional, and nearly undetectable, information content in digital products, the potential for covert dissemination of malicious software, mobile code, or information is great. Hundreds of steganographic software tools are readily available either commercially or as freeware and shareware downloads from the Internet. The affordability and widespread availability of these tools make steganography an enabling technology for U.S. adversaries. The threat posed by steganography has been documented in numerous intelligence reports.

State of the Art

R&D advances in steganalytic solutions have led to operational prototypes for evaluation and use by the DoD and intelligence communities. Vulnerabilities in steganographic tools and techniques have been made public. Techniques have been developed that detect certain classes of embedders, identifying the presence of and estimating the amount of embedded content. Blind universal steganalyzers have begun to detect and classify cover images in which content has been embedded using one of several embedding techniques. The effectiveness of steganographic key searches has been demonstrated for a few embedding methods, and limited parallelization of key search capabilities has begun. Once a steganographic key has been found, the embedded content can be extracted from the cover object and exploited. Many of these advances have addressed digital image cover objects.

Covert information flows can be achieved by hiding information within a legitimate flow of information, or even by manipulating attributes (e.g., timing) of an information flow. For example, a security researcher has demonstrated the ability to use DNS requests to create covert channels for information flows. Covert information flows can be extremely difficult to detect even when information flows are being monitored for covert channels. More importantly, many legitimate information flows (including DNS requests) typically are not monitored at all for the embedding of covert information.

Capability Gaps

Targeted steganalysis techniques are useful only for a single embedding algorithm or class of algorithms. A full spectrum of such techniques is required to effectively combat the use of steganography by adversaries. More effective steganalysis is needed to counter the embedding algorithms that continue to be developed and improved, such as matrix, model-based, and wet paper code embedding. Blind universal steganalyzers need to be fully evaluated, shown to be scalable, and deployed. Key search algorithms for realistic key lengths require substantial computational power. To be more widely useful, steganalysis capabilities must be made more efficient through the use of specially programmed high-performance computing platforms. Steganalysis is less advanced for audio, video, documents, and other forms of data than for static digital images.

Advanced methods for detecting covert channels for information flows need to be developed and, when risks warrant a high level of security, these capabilities need to be deployed to monitor a variety of different types of legitimate information flows to identify covert communications. Resources to evaluate, integrate, and deploy the numerous basic research advances are limited and should be enhanced.

1.8 Recovery and Reconstitution

Definition

Recovery and reconstitution refer to the capabilities needed in the wake of a cyber attack to restore the functionality and availability of networks, systems, and data. Recovery and reconstitution methods must be adequate to cope with the consequences of cyber attacks that are carried out quickly, cause extensive damage, and propagate in uncontrolled ways.

Importance

Recovery and reconstitution must be addressed and implemented in all aspects of a system – networks, operating systems, middleware, applications, and data. Capabilities for timely recovery and reconstitution are especially important in mission-critical systems, which must be able to degrade gracefully, meaning that they must be able to survive a cyber attack even if damaged and recover to an operable state that sustains mission-critical functions. Systems must be made self-healing and self-restoring to as great a degree as possible. Self-restoring means that, as portions of a system fail, a new system is dynamically formed by rerouting critical traffic and migrating critical data to undamaged nodes. The recovery and reconstitution aspects of dynamic response depend on accurate, timely detection of cyber attacks. The spread of malicious code across a network needs to be stopped, for example, and damaged nodes need to be recovered while residual malicious code is eradicated. This technical area is closely linked to large-scale cyber situational awareness, which provides the information required to perform recovery and reconstitution.

State of the Art

Current technologies for recovery and reconstitution are limited. The most common recovery and reconstitution techniques are redundant processing, physical backups, and the use of special service providers to implement recovery capabilities for organizations. These procedures focus on system faults, failures, and accidents, not purposeful, malicious cyber attack. Technologies in use today are able to return databases, applications, and data to an operational state after non-malicious faults or failures. Research in self-regenerating systems is investigating technologies to enable systems that have been exploited to restore themselves autonomously, but the techniques are still in their infancy.

Capability Gaps

Recovery techniques tend to be aimed at data recovery rather than at reconstituting large-scale systems or networks. Research is needed to better understand the extent to which today’s Internet would recover from attacks, particularly wide-scale attacks, and how such recovery can be accomplished.

To be effective, recovery and reconstitution must be rapid and must be guided by accurate and timely information. Damage-assessment technologies are needed that can quickly provide network defenders with an accurate snapshot of the overall enterprise: what has been attacked, where the damage is, what type of damage has been incurred (whether the attack is against the confidentiality, the integrity, or the availability of the system), and what parts of the system have been affected. Defenders also need robust decision-support systems that can rapidly present possible defensive courses of action. In some cases, autonomic (self-managing) system responses directed at recovery and reconstitution potentially could maintain a basic level of operation while further analysis of the cyber attack is being conducted.

While there might be enough network redundancy to allow continued communications, an attack might reach deep into a networked system. Techniques are needed to assess whether data are damaged and, if so, the extent and impact of that damage. Damaged data require recovery to an earlier undamaged state followed by reconstitution. Applications may need to be reloaded to ensure that malicious code has been eradicated. Remediation may also be needed to eliminate vulnerabilities that enabled the attack in the first place. Rapid post-attack reconstitution of IT systems requires the ability to create checkpoints that capture the state of a large-scale system and to not only retrieve undamaged data (as mentioned above) but also roll back damaged systems to earlier functional (uncompromised) states. Such rapid reconstitution is an alternative to rebuilding the entire system from scratch and is vital for mission-critical systems.
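The checkpoint-and-rollback idea can be sketched for a toy key-value state, as below; a real system would checkpoint distributed state, verify checkpoint integrity before restoring, and coordinate rollback with eradication of malicious code. All names are illustrative.

```python
import copy, hashlib, json

class CheckpointedStore:
    """Sketch of checkpoint-and-rollback recovery for a key-value state."""

    def __init__(self):
        self.state = {}
        self.checkpoints = []   # list of (digest, snapshot) pairs

    def checkpoint(self) -> str:
        # Capture a deep copy of the state, identified by a content digest.
        snapshot = copy.deepcopy(self.state)
        digest = hashlib.sha256(
            json.dumps(snapshot, sort_keys=True).encode()).hexdigest()
        self.checkpoints.append((digest, snapshot))
        return digest

    def rollback(self, digest: str) -> None:
        # Restore the last known-good state captured before the compromise.
        for d, snapshot in reversed(self.checkpoints):
            if d == digest:
                self.state = copy.deepcopy(snapshot)
                return
        raise KeyError("no such checkpoint")

store = CheckpointedStore()
store.state["config"] = "hardened"
good = store.checkpoint()
store.state["config"] = "tampered-by-attacker"
store.rollback(good)
assert store.state["config"] == "hardened"
```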

1.9 Forensics, Traceback, and Attribution

Definition

Forensics, traceback, and attribution are functions performed in the process of investigating cyber anomalies, violations, and attacks. They help answer such basic investigative questions as: what happened to the computer, system, or network; where an attack originated; how it propagated; and what computer(s) and person(s) were responsible. Cyber forensics can be defined as the application of scientifically proven methods to gather, process, interpret, and use evidence to provide a conclusive description of a cyber attack; this evidence enables operators to restore systems, networks, and information after an attack. Forensic analyses help operators correlate, interpret, understand, and predict adversarial actions and their impact on system, network, and IT infrastructure operations, and provide evidence for a criminal investigation.


The goal of traceback capabilities is to determine the path from a victimized network or system through any intermediate systems and communication pathways, back to the point of attack origination. In some cases, the computers launching an attack may themselves be compromised hosts being controlled remotely from a system one or more levels farther removed from the system under attack. Attribution is the process of determining the identity of the source of a cyber attack. Types of attribution can include both digital identity (computer, user account, IP address, or enabling software) and physical identity (John Doe was the hacker using the computer from which an attack originated). Attribution can also support a new model of authorization using accountability as a basis for deciding which operations or resources to trust.

Importance

The prospect of accountability under law is a key deterrent of common crime. Unfortunately, in cyberspace accountability is nearly nonexistent. Public access points often allow anonymous access to the Internet. Even when some form of identification is used at the initial point of access, users can then proceed to move about with relative anonymity through the use of a variety of tools that include anonymization services or data and communication path obfuscators. Due to the lack of authentication capability associated with Internet communication protocols (or the frequent lack of implemented authentication when the capability exists), it is often possible to spoof (i.e., forge or manipulate) the apparent source of hostile communications, allowing malicious actors to keep their location and identity hidden. Moreover, hackers often hide their tracks by hacking and communicating through numerous compromised machines, making it difficult to determine the host from which certain Internet traffic originates.

This issue is exacerbated by the multinational nature of the Internet, which allows these network hops to be routed through compromised hosts located in countries that may not have strong relationships with U.S. law enforcement and may not cooperate with investigative efforts. In addition, it remains difficult to associate a digital identity with a specific human being. Forensics, traceback, and attribution can help mitigate the shortcomings of the Internet’s design by denying attackers anonymity and safe haven.

State of the Art

Current commercial investment in computer forensic tools is focused on the needs, practices, and procedures of law enforcement. Typically, law enforcement is incremental and conservative in adopting new IT processes and capabilities. Officials require vetted processes, repeatable procedures, and high assurance of the integrity of any collected data. Such law enforcement processes and procedures are not easily adaptable to new technologies. Cyber investigations are often conducted after the fact on systems that have been turned off, so that crucial information still in system memory (such as malware, running processes, and active network sessions) may be lost. Because evidence preservation and analysis can take 90 days or more, digital trails on the Internet revealed by the analysis may already be cold. For the most part, the current generation of investigative tools substantially lags behind the capabilities that the law enforcement community would like to have.

Today, many investigative procedures in computer network defense begin with ad hoc inquiries into computer or network states for the purposes of rapid situational awareness and development of courses of action to contain ongoing cyber attacks. Most analysis is performed by human operators, including timelining and event reconstruction from network- and host-based intrusion detection systems (IDS). IDS reports about possible intrusion activity and sources, however, are not automatically validated. Rather, human investigation is needed to distinguish legitimate events from intrusions. Many “forensic” IDS and enterprise security tools are thus not forensic in the scientific sense but instead support a human cognitive investigative process.

Traceback capability is limited by the ability of attackers to spoof source IP addresses. Some standard network information sources (such as traceroute and DNS registries) can often trace a path back to a host Internet service provider (ISP). Router netflow information (a metering technology for network measurements), when available, can also be useful. Geographic location information may be accurate at the country or state level but may not be practical with satellite-based ISPs.

Dynamic IP address assignment and spoofing make attribution a significant technical challenge. Generally, only with cooperation from the attacker’s ISP might the attacker be identified. Many times, however, the evidence to attribute the attack to an individual remains inconclusive. Open wireless access points, Internet cafes, and similar venues that allow Internet access without positive identification and authentication further exacerbate this problem.

Capability Gaps
The following IT capabilities, needed for forensics, traceback, and attribution for both law enforcement and network defense, do not yet exist and require R&D:

❖ The ability to track individuals or computers as they access the Internet from various ISPs and IP addresses, and particularly over non-cooperative networks, to address cases in which network owners or service providers are unwilling to cooperate or where networks themselves have been compromised

❖ Live investigation and preservation of digital evidence on a target computer, performed remotely over the Internet. This would enable investigators to react to attacks in real time and preserve potential evidence still in memory.

❖ Network forensics, especially in the discovery of, investigation of, and response to stepping-stone attacks. Substantial network information may need to be available for traceback to be reliable.

❖ Techniques for sampling evidence in ways that provide confidence in the results. These techniques would constitute a valuable decision support tool for computer network defense analysts.

❖ The ability to determine analytical confidence factors that can predict the error rate associated with having processed only a portion of available evidence (for example, the finding that some subset of selected evidence yields a particular percent confidence level in the analysis, as illustrated in the sketch below). This would be a valuable decision support tool for computer network defense analysts.
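To illustrate the kind of confidence factor such a tool might report, here is a minimal sketch, assuming a simple random sample of log records and a normal-approximation confidence interval; the record format, matching rule, and sample size are illustrative assumptions, not drawn from this Plan.

```python
import math
import random

# Minimal sketch of an evidence-sampling confidence factor (illustrative
# record format and matching rule). Estimate the fraction of records that
# match an attack indicator from a random sample, with a 95% confidence
# interval based on the normal approximation to the binomial.
def sample_confidence(records, matches, sample_size, z=1.96):
    sample = random.sample(records, sample_size)
    p = sum(1 for r in sample if matches(r)) / sample_size
    half_width = z * math.sqrt(p * (1 - p) / sample_size)
    return p, (max(0.0, p - half_width), min(1.0, p + half_width))

logs = [{"src": f"10.0.0.{i % 256}", "bad": i % 40 == 0} for i in range(100_000)]
p, ci = sample_confidence(logs, lambda r: r["bad"], sample_size=2_000)
print(f"estimated malicious fraction {p:.2%}, 95% CI {ci[0]:.2%}-{ci[1]:.2%}")
```

The same idea generalizes to stratified sampling when evidence is heterogeneous; the point is only that a sampled analysis can be accompanied by a quantified error bound.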


2. SECURING THE INFRASTRUCTURE

The R&D topics in this category are focused on improving the inherent security of the information infrastructure, ranging from the protocols on which the Internet relies to critical infrastructure systems that depend on a secure information infrastructure to operate. Topics in this category are:

❖ Secure Domain Name System

❖ Secure routing protocols

❖ IPv6, IPsec, and other Internet protocols

❖ Secure process control systems

2.1 Secure Domain Name System

Definition
The Domain Name System (DNS) is a globally distributed database that provides two-way mappings between domain or host names (for example, www.whitehouse.gov) and IP addresses (for example, 63.161.169.137). Nearly all Internet communications are initiated with a DNS request to resolve a name to an IP address. Although it is arguably one of the most critical components of the Internet’s architecture, the current DNS is not secure and lacks authentication and data integrity checks. Protocol exchanges in the system (e.g., resolver-to-server and server-to-server) are subject to malicious attacks. An example of such an attack is “zone hijacking,” in which third parties impersonate entire DNS zones and redirect network traffic to their own machines for malicious purposes.

Importance
Because the DNS is at the core of most Internet communications, developing technologies that make it more difficult to undermine the DNS infrastructure could mitigate some existing vulnerabilities. The Domain Name System Security Extensions (DNSSEC) provide origin authentication and data integrity checks for DNS lookups. This is accomplished by adding digital signatures and public keys to the DNS. When an Internet application sends a DNS query for a host name, it can request that DNSSEC security information be returned with the response.

A secure DNS would comprise a signed DNS tree and one or more “trust anchors.” Trust anchors are public keys associated with DNS zones high up in the DNS hierarchy, such as .gov, .mil, .com, or the root “.”, that serve to create chains of trust at lower levels of the hierarchy by digitally signing keys for domains under them: the public key for the domain whitehouse.gov would be signed by the key from .gov. If a host trusts the .gov key, it can verify and trust a newly learned whitehouse.gov key and, consequently, the whitehouse.gov domain information.
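As a concrete, hedged illustration of this chain of trust, the sketch below uses the open-source dnspython library to fetch a zone’s DNSKEY RRset and verify it against its accompanying RRSIG. The zone name and resolver address are illustrative assumptions, the cryptography package must be installed for validation, and a full validator would also walk DS records up to a configured trust anchor rather than trusting the zone’s own key.

```python
import dns.dnssec
import dns.message
import dns.name
import dns.query
import dns.rdatatype

zone = dns.name.from_text("whitehouse.gov.")     # illustrative zone
req = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
resp = dns.query.tcp(req, "8.8.8.8", timeout=5)  # TCP avoids UDP truncation

# The answer section should hold the DNSKEY RRset plus its RRSIG.
dnskey = next(r for r in resp.answer if r.rdtype == dns.rdatatype.DNSKEY)
rrsig = next(r for r in resp.answer if r.rdtype == dns.rdatatype.RRSIG)

try:
    # Raises ValidationFailure if the signature does not verify. Here the
    # zone's own key is treated as trusted, which a real resolver would
    # only do after chaining it to a trust anchor such as a parent key.
    dns.dnssec.validate(dnskey, rrsig, {zone: dnskey})
    print("DNSKEY RRset verifies under the zone's own key")
except dns.dnssec.ValidationFailure as err:
    print(f"validation failed: {err}")
```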

Beyond providing name-to-address resolution, the DNS infrastructure is being expanded to address security and robustness issues. Examples of these uses include adding mail authentication information as part of current anti-spam proposals and adding public keys and digital certificates in support of other secure communication protocols. With this expanding role comes an increased need to assure the authenticity of DNS responses and an increased possibility that the DNS itself will be targeted for attacks. As end systems become more secure, attackers wishing to disrupt Internet-based communications may discover that it is easier to achieve their goal by subverting underlying protocols such as DNS on which the end systems rely. In addition, as the DNS is expanded to serve other security-related requirements (e.g., anti-spam techniques), it may become even more of a target for malicious attacks by those who seek to subvert these efforts.

State of the Art
Under the auspices of the Internet Engineering Task Force (IETF), the latest versions of the DNSSEC specifications are being completed and early implementations of DNSSEC-capable servers are emerging. Many DNSSEC technical challenges are associated with deployment and operational issues. Key organizations – including the Internet Corporation for Assigned Names and Numbers for the root of the DNS tree, large registries for domains such as .com and .net, and some country code registries – are examining the potential for DNSSEC deployment in major sub-trees of the DNS. These initial evaluations of DNSSEC’s viability for large-scale deployment have resulted in the identification of several areas for further R&D.

Capability Gaps
Zone enumeration and privacy: DNS information can be, and is, used for malicious purposes. Spammers use DNS registry information to compile lists of e-mail addresses, and attackers can try to map an enterprise’s network resources by looking at the DNS. Because it is public, DNS information is inherently difficult to protect from misuse, but several implementations can block certain types of potentially malicious or probing lookups. Unfortunately, the initial design of the DNSSEC mechanisms that support authenticated negative responses (i.e., a reply stating that the requested name does not exist) also makes it easy to circumvent these safeguards and to enumerate all records in a given DNS domain. Concerns over this side effect range from the possibility that it makes it even easier to find information that is already generally accessible, to the legal worry that the DNSSEC mechanisms may make privacy protection more difficult. The authenticated-negative-response parts of the DNSSEC specification will require a new round of requirements analysis, design, standardization, and implementation.

“Last-hop” issues: While significant effort has been devoted to developing DNSSEC protocols and procedures for use among DNS servers, how applications and host DNS clients (i.e., resolvers) interface with, control, and respond to a DNSSEC-enabled infrastructure is largely undetermined. The relative lack of involvement and inputs from the host and application development communities represents a challenge to overall adoption, deployment, and use of DNSSEC. To address these “last-hop” issues, industry must draft requirements, specifications, and basic control processes for application interfaces, as well as certain protocol extensions and policy and management mechanisms to define the ways in which applications and hosts interact with DNSSEC.

Administrator tools and guidance: Even when the specifications are completed, work will remain to deploy DNSSEC and foster its adoption. Many network operators do not perform DNS administration tasks full time and may not have sufficient expertise to deploy DNSSEC correctly. Tools and guidance documents are needed to assist network administrators in the new tasks of deploying and maintaining a DNSSEC domain. The issues that must be addressed include: public key management, DNS domain signing, and new and increased registrant-registry communications.

Performance analysis: The DNSSEC extensions will make DNS transactions more complex. DNS servers and clients will have to perform cryptographic operations and judge the validity of responses in addition to performing DNS name lookups. Some network operators argue that DNSSEC will slow DNS to the point of making it impractical to deploy on a large scale. The validity of this claim has not yet been determined. To understand and resolve such concerns, a measurement basis must be designed and implemented for DNS and DNSSEC technologies. This measurement basis would be made up of models of both the contents of the DNS tree and the traffic seen at various levels in the tree, and of test and measurement tools capable of exercising and evaluating specific implementations or partial deployments using such models. Developers and administrators could then use these tools and reference data sets to test various DNS configurations and to gauge the relative performance impact of DNSSEC technologies.
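One small ingredient of such a measurement basis might resemble the following sketch, which times queries with and without requesting DNSSEC data via the dnspython library; the resolver, zone list, and repetition count are illustrative assumptions, and a credible methodology would also control for caching, response size, and network variance.

```python
import time
import dns.message
import dns.query
import dns.rdatatype

RESOLVER = "8.8.8.8"                      # illustrative resolver
ZONES = ["example.com.", "ietf.org."]     # illustrative sample of zones

def timed_query(zone: str, want_dnssec: bool) -> float:
    # Time a single A-record lookup, optionally asking for DNSSEC records.
    req = dns.message.make_query(zone, dns.rdatatype.A, want_dnssec=want_dnssec)
    start = time.perf_counter()
    dns.query.udp(req, RESOLVER, timeout=5)
    return time.perf_counter() - start

for zone in ZONES:
    plain = sum(timed_query(zone, False) for _ in range(10)) / 10
    signed = sum(timed_query(zone, True) for _ in range(10)) / 10
    print(f"{zone} plain {plain*1000:.1f} ms, "
          f"with DNSSEC records {signed*1000:.1f} ms")
```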


2.2 Secure Routing Protocols

Definition
Routing infrastructures – made up of the equipment, protocols, data, and algorithms that compute paths through interconnected network devices – organize disparate collections of connected devices into viable end-to-end paths over which all network data flow. They reside in multiple layers of network architectures – from Layer 2 systems that interconnect local area network switches, to Layer 3 systems that perform IP routing, to content- and context-based systems at Layer 4 and above.

Importance
IP routing (Layer 3) protocols interconnect public and private networks. The IP routing infrastructure comprises tens of thousands of individual routing domains employing numerous protocols and technologies operating as a hierarchical, interdependent, global distributed system. Routing infrastructures are among the least protected components of the overall IT infrastructure.

Many large-scale routing systems are not highly robust because there are inherent trade-offs between responsiveness and stability. To date there have been few focused attacks on routing infrastructures. However, as hosts and applications are hardened in response to common attacks on networks, network attackers may increasingly focus their attention on the underlying routing control systems.

Currently deployed Internet routing protocols are vulnerable to several classes of malicious attack. The conceptually simplest attack is the compromise and control of routers. The routing protocols and the associated router resources are also vulnerable to attack from remote nodes. These attacks focus on the resources of the router’s control plane, the peering relationships between connected routers, and/or the data that the routing protocol exchanges between router peers. Attacks can also focus on the lower-layer resources (e.g., physical links and lower-layer protocols) associated with the routing infrastructure.

Successful attacks on routing protocols can result in loss of connectivity (e.g., black holes, partitions), eavesdropping and theft of data, sub-optimal routing, or routing system disruption. All common currently deployed routing protocols are vulnerable to these attacks. As additional routing system services such as traffic engineering and QoS-sensitive routing are implemented, the potential for disruptions by attacks on the routing infrastructure increases.

State of the Art
IP routing technologies and protocols vary greatly depending upon their application. The most widely used is unicast routing among fixed, non-mobile hosts, which employs a two-level hierarchy of protocols. Interior Gateway Protocols (IGPs) are used within a single administrative or management domain (called an autonomous system) that typically has sparse connectivity but little control of topologies. IGPs typically exploit all possible paths for optimal responsiveness with little concern for policy and trust. Inter-domain protocols, known as Exterior Gateway Protocols (EGPs), route traffic between autonomous systems. EGPs typically enforce policy (i.e., using only policy-feasible paths) and emphasize global stability. The primary EGP deployed today is the Border Gateway Protocol (BGP).

Other routing protocols, such as multicast or broadcast protocols for one-to-many and one-to-all communications, are emerging. While their deployment in the commercial Internet is limited, they play important roles in various private and special-purpose networks. Protocols for wireless, mobile, and ad hoc networks are also rapidly growing in importance and deployment; this class of protocols makes different assumptions about the composition of networks and the trust relationships between components. Ad hoc routing assumes that there are no fixed infrastructure services to rely on, that all routing relationships are ephemeral, and that the composition of the network is constantly changing. Mobile routing is based on the same assumptions, with the addition that the nodes, or entire networks, are constantly moving.

Capability Gaps
Securing the routing infrastructure is a difficult technical problem. Complete, viable security solutions for most routing technologies have not yet been designed and standardized. Difficulties arise from the attributes and assumptions of routing systems themselves. The solutions must address protocols that vary widely in their design and operation. A single protocol typically must address both peer-to-peer and multi-party communications, single-hop and multi-hop messages, and mutable as well as immutable data components.

Adding cryptographic protections to routing protocols also poses difficult technical issues: the topology of a given network is not always known and clock synchronization is difficult; multiple trust relationship graphs exist (e.g., customer to service provider, address administration, intra-domain vs. inter-domain); and routing services must be able to bootstrap themselves and thus typically cannot depend upon other components of the infrastructure for their most basic start-up operations.

Additional security constraints are imposed by dynamic performance requirements and the need to address the trade-off between Internet stability and scalability. Global convergence and stability properties should not be compromised by security mechanisms; however, these properties are poorly understood for the largest routing systems (e.g., global BGP). There are also constraints on the platforms on which routing protocols operate. Specifically, at the core of the Internet there are orders of magnitude of difference in the processing capabilities of the control and data planes, while at the mobile edge there may be constraints on processing power and battery life. In addition, security mechanisms must include viable means for incremental and/or partial deployment and day-to-day operations and management, as well as favorable risk and cost/benefit models.

There are no widely deployed secure routing protocols in use today. The current state of the art in protecting routing infrastructures uses basic techniques (e.g., passwords, TCP authentication, route filters, private addressing) that mitigate only rudimentary vulnerabilities and threats. The R&D community has been pursuing more complete solutions, both at a theoretical level and through specific extensions to commonly used protocols. To date, these proposed extensions have not achieved widespread commercial implementation or deployment, in part because they are perceived as optimizing for security concerns at the cost of not adequately meeting scalability and performance requirements and constraints.

Renewed interest in routing security has begun to develop in the IETF and R&D communities. New proposals for secure variants of BGP are attempting to provide a better balance between security and performance. Security of ad hoc routing protocols continues to be a major practical concern.

To expedite development, adoption, and use of secure routing technologies, several key R&D areas need to be addressed, including:

❖ Risk analysis – understanding the potential risks associated with security vulnerabilities and other forms of focused, large-scale disruptions to the routing systems

❖ Secure protocol architectures – new designs for the decoupling of routing and security functions that address separation of control and data planes and incorporation of programmable technologies in the data plane (e.g., along the lines of DARPA’s active networks efforts)

❖ Flexible and survivable secure routing – flexible designs that address security as one component of overall viability and survivability of the routing infrastructure. The designs should also address environments in which reputation management is a continuum rather than a binary decision and in which security systems selectively and dynamically adapt mechanisms to trade off threat mitigation for performance, scalability, and cost

❖ Efficient security mechanisms for routing – new cryptographic techniques that ensure the authenticity, integrity, and freshness of routing information and that perform more effectively and efficiently than those previously proposed (a toy illustration follows this list)

❖ Secure routing systems – system-level designs that integrate other security technologies (e.g., intrusion and anomaly detection, firewalls) as part of the secure routing system


❖ Secure self-organizing networks – classes of routing technologies that support wireless ad hoc and sensor networks as well as large-scale, peer-to-peer, and grid-like distributed systems. Self-organizing networks may not necessarily assume the existence of any fixed infrastructure, and they pose security challenges associated with secure group formation, membership management, and trust management between dynamic groups.
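As a toy illustration of the “efficient security mechanisms” item above, the following sketch protects a route update with a keyed hash and a timestamp for freshness. The pre-shared key, message layout, and 30-second freshness window are illustrative assumptions; real proposals, such as the secure BGP variants mentioned earlier, must also solve key distribution, path validation, and incremental deployment.

```python
import hashlib
import hmac
import struct
import time

KEY = b"example-shared-key"   # assumption: a pre-shared key between peers
MAX_AGE = 30                  # seconds a received update counts as fresh

def sign_update(update: bytes) -> bytes:
    # Prepend a timestamp, then a MAC over timestamp + update, so a peer
    # can check authenticity, integrity, and freshness in one pass.
    ts = struct.pack("!d", time.time())
    mac = hmac.new(KEY, ts + update, hashlib.sha256).digest()
    return ts + mac + update

def verify_update(msg: bytes) -> bytes | None:
    ts, mac, update = msg[:8], msg[8:40], msg[40:]
    fresh = time.time() - struct.unpack("!d", ts)[0] <= MAX_AGE
    valid = hmac.compare_digest(
        mac, hmac.new(KEY, ts + update, hashlib.sha256).digest())
    return update if fresh and valid else None

signed = sign_update(b"prefix 192.0.2.0/24 next-hop 198.51.100.1")
assert verify_update(signed) is not None
assert verify_update(signed[:-1] + b"X") is None   # tampering is detected
```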

2.3 IPv6, IPsec, and Other Internet Protocols

A number of IT infrastructure-related protocols are being developed under the auspices of the IETF and the Internet Research Task Force. These protocols address not only security but also the growth of networking and the diversification of network uses. The discussions below cover the most significant of these emerging protocols.

Internet Protocol version 6 (IPv6)

Definition and Importance of IPv6
IPv6 was developed to enhance the capability of IPv4 by providing a vastly increased address space, header space to meet security and other requirements, and additional capability enhancements. The additional address space is needed to support expected large increases in the number of networked devices due to Internet growth, sensors and sensornets, and mobile network devices.

State of the Art of IPv6
IPv6 is being implemented in testbeds by several Federal agency networks, including DARPA’s Advanced Technology Demonstration Network (ATDnet), DoD’s Defense Research and Engineering Network (DREN), DOE’s Energy Sciences Network (ESnet), and NASA’s NASA Research and Education Network (NREN) and Integrated Services Network (NISN). In addition, numerous Federal research network exchange points, including Americas Pathway (AMPATH), Next Generation Internet Exchange (NGIX)-East, NGIX-West, StarLight, Pacific Wave, and Manhattan Landing (MANLAN), currently support IPv6. These testbeds and exchanges implement IPv6 in dual-stack mode and also support IPv4. Currently, IPv6 traffic on these networks and exchanges is for testing IPv6 services and capabilities.

Substantial IPv6 operational and applications traffic is not expected on mainstream networks until a significant proportion of nodes and applications are IPv6-capable. IPv4 and IPv6 are expected to coexist within the Internet for some time, through mechanisms that include dual-stack routers, hosts, and other devices, as well as tunneled communications through pockets that are exclusively IPv4 or IPv6. IPv6 will not in itself fully eliminate some of the existing obstacles to end-to-end security protection. For example, because Network Address Translation (NAT) boxes, which hinder the use of Internet Protocol Security (IPsec), may continue to be used under IPv6, new firewall and other security technologies will need to be developed for operation in IPv6 environments.

A dual-protocol Internet presents numerous security pitfalls. Even enterprises that are running solely IPv4 or IPv6 need to be aware of all possible combinations, because nodes within the network can individually enable one or both protocols; unexpected tunneled traffic can travel through a firewall if the firewall rules are not sufficiently robust and comprehensive.

OMB has directed that by June 2008, all Federal agency backbone networks must implement IPv6 (at least in a dual-stack mode) and agency networks must interface with this infrastructure.

IPv6 Capability Gaps
The immaturity of current IPv6 security tools results in high levels of risk for breaches of IPv6 security. Research is needed to provide a full suite of security tools and support to make IPv6 as secure as current implementations of IPv4. Robust DNS security for IPv6 needs to be developed and implemented. Specific research needs include:

❖ Security threat and vulnerability models to assess the security implications of widespread IPv6 implementation


❖ New scalable technologies to provide end-to-end security

❖ Techniques and tools capable of managing the proliferation of addresses and devices facilitated by IPv6

❖ Scalable routing to handle the demands of the IPv6 address space

❖ Packet filtering for IPv6 at speeds comparable to IPv4 (e.g., line-rate access control list processing)

❖ Tools and infrastructure for managing and testing IPv6, including a rigorous conformance and interoperability testing infrastructure. Current industry standards would not support government procurement requirements or regulations.

❖ Business cases, detailed timelines, and scenarios for deployment and use of IPv6

A business case needs to be developed in support of deploying IPv6 functionalities on a scheduled basis. A May 2005 GAO report on IPv6 lists some benefits and risks as a start toward making that case. Such a case should address current functionality of IPv4 (including IPsec functionality) and the availability of tools to support IPv6.

Internet Protocol Security (IPsec)

Definition and Importance of IPsec
IPsec is a suite of protocols standardized through the IETF to provide security at the network layer (Layer 3). IPsec can provide confidentiality, integrity protection, peer authentication, traffic analysis protection, and replay protection. Its companion protocol, Internet Key Exchange (IKE), negotiates and manages the details of the IPsec protections and the secret keys used to provide these protections.

State of the Art of IPsec
IPsec and IKE are most frequently used to create virtual private networks and to protect mobile users who need to access protected business networks and resources from outside the protected network. Many firewalls and routers incorporate IPsec/IKE functionality, and many operating systems have built-in IPsec/IKE clients. The protocols are currently being updated to version 3 for IPsec and version 2 for IKE. IPsec is a mandatory component of IPv6. Although it is optional for IPv4, it has been added to many operating systems and to many gateways and routers. Numerous add-on IPsec clients are also available.

Capability Gaps of IPsec
Security of the emerging IPsec/IKE standards: The current version of IKE (IKEv1) has undergone formal protocol analysis, and the current versions of IPsec and IKE have been subjected to considerable security and functional analysis. Because even minor changes to security protocols can introduce security holes due to unexpected feature interactions or other unforeseen problems, new versions of these protocols should also be rigorously tested prior to implementation and deployment.

Use of certificates and smartcards within IPsec and IKE: Public key certificates are the recommended mechanism for peer authentication within IPsec/IKE. However, they have been a source of numerous problems, including lack of interoperability among disparate domains, failure of IKE negotiations as a result of message fragmentation (due to the size of certificates that are sent as part of IKE messages), and time-outs related to the certificate revocation list checking process. For IPsec/IKE to be applied in a widespread, scalable, and secure manner, certificate problems must be addressed. Further testing and research are needed to ensure that PKI can be used with IPsec in a scalable, secure, and interoperable manner.

Host Identity Protocol (HIP)

Definition and Importance of HIP
IP addresses perform two functions: unique endpoint identifier and routing locator. This functional overloading causes problems in such diverse areas as route aggregation, host multi-homing, and network renumbering. During the development of IPv6, attempts were made to split the IP address into two parts, each performing one of these functions, but no satisfactory solution was found. The Host Identity Protocol (HIP) is another attempt to separate these functions. It introduces a new Host Identity (HI) name space, based on public keys, into the TCP/IP stack. This cryptographic identity can also be used to provide authenticated, secure communications.

Capability Gaps
Currently, the HIP base protocol works well with any pair of cooperating end hosts. However, to be more useful and more widely deployable, HIP needs support from the existing infrastructure, including the DNS, and a new piece of infrastructure, called the HIP rendezvous server, which facilitates the use of HIP for mobile hosts. HIP is considered sufficiently promising that an Internet Research Task Force research group has been chartered to explore its global ramifications within the Internet architecture.

Network Address Translation (NAT)

NAT is employed in private networks to keep the hosts’ addresses secret for security and privacy purposes. It is also used in networks that have exhausted their allocation of address space to implement private addresses that may duplicate addresses used elsewhere on the Internet. In this case, a pool of public, globally unique addresses is used for communications with destinations outside the private network. When such messages cross the NAT box, the private address of an outbound communication is converted to a public address, and the public destination address of an inbound communication is converted to the corresponding private address. While effective at addressing these issues, NAT complicates IPsec security: it is compatible with some types of IPsec functionality but incompatible with others, some of which have work-arounds. The existence of NAT must be considered when implementing IPsec or any other type of end-to-end security.
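The rewriting step described above can be pictured with a minimal table-driven sketch. The addresses and single-field “packets” are illustrative assumptions; a real NAT also rewrites ports, tracks connection state, and recomputes checksums, and that header manipulation is precisely what conflicts with IPsec integrity protections such as AH.

```python
# Minimal sketch of NAT address rewriting (illustrative addresses only).
PUBLIC_POOL = ["203.0.113.10", "203.0.113.11"]
nat_table: dict[str, str] = {}      # private address -> public address

def outbound(packet: dict) -> dict:
    private = packet["src"]
    if private not in nat_table:
        nat_table[private] = PUBLIC_POOL[len(nat_table)]  # allocate from pool
    return {**packet, "src": nat_table[private]}

def inbound(packet: dict) -> dict:
    reverse = {pub: priv for priv, pub in nat_table.items()}
    return {**packet, "dst": reverse[packet["dst"]]}

pkt = outbound({"src": "10.0.0.5", "dst": "198.51.100.7"})
print(pkt)                                    # src rewritten to public address
print(inbound({"src": "198.51.100.7", "dst": pkt["src"]}))
```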

Mobile Internet Protocol (MIPv4, MIPv6)

MIP allows transparent routing of IP data to mobile nodes on the Internet and includes separate specifications for IPv4 and IPv6. Each mobile node is identified by its home address, regardless of its current point of attachment to the Internet. While away from its home network, a mobile node is also associated with a “care-of” address that provides information about its current attachment point. The protocol provides for registering the care-of address with a home agent. The home agent sends data destined for the mobile node through a tunnel to the care-of address, and the data are delivered to the mobile node at the end of the tunnel. MIPv4 is already widely deployed, for example in cdma2000 networks.

For MIP to function correctly, several types of security protection are essential: home agents and mobile nodes must perform mutual authentication; replay protection is necessary to ensure the freshness of update messages; and data may be encrypted to provide confidentiality. MIPv6 mandates IPsec for the protection of binding update messages, which direct the home agent to forward data to the care-of address, between mobile nodes and home agents.
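A toy sketch may help fix the roles of home agent, home address, and care-of address. The class, names, and addresses here are illustrative assumptions: the real protocol carries registrations in authenticated binding update messages (IPsec-protected in MIPv6) and tunnels traffic with IP-in-IP encapsulation, not Python method calls.

```python
# Toy sketch of Mobile IP forwarding (illustrative names and addresses).
class HomeAgent:
    def __init__(self) -> None:
        self.bindings: dict[str, str] = {}   # home address -> care-of address

    def register(self, home_addr: str, care_of: str) -> None:
        # In the real protocol this step requires mutual authentication
        # and replay protection to keep attackers from redirecting traffic.
        self.bindings[home_addr] = care_of

    def deliver(self, dst_home_addr: str, payload: str) -> str:
        care_of = self.bindings[dst_home_addr]
        # Encapsulate and tunnel to the mobile node's current location.
        return f"tunnel to {care_of}: [{dst_home_addr}] {payload}"

agent = HomeAgent()
agent.register("192.0.2.99", "198.51.100.23")   # node moved to a visited net
print(agent.deliver("192.0.2.99", "hello"))
```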

Multicast Communications Protocols

Multicast communications carry traffic from a single source host to multiple destination hosts. They are used for applications as diverse as video broadcasts, teleconferencing, distance learning, multi-player video games, and news, stock market, and weather updates. While a multicast message is to be delivered to multiple destinations, only one copy of the message is transmitted along a given network segment on its path to these destinations. The processing and traffic levels are therefore less than if each recipient’s message were transmitted individually.

Multicast traffic can require security protection, whose nature and strength vary based on the multicast group’s purpose, characteristics, and membership. Numerous secure multicast protocols have been proposed. Some are applicable to any multicast group but have restricted computational feasibility and scalability; others are optimized for the characteristics of a particular group. Some have been tested under wide-scale deployment; others are still experimental or theoretical. A single secure multicast protocol that is computationally feasible and scalable for all groups, all senders, and all receivers remains a research goal. Additional work is needed to determine the requirements, applicability, scalability, and security characteristics of the various approaches.


2.4 Secure Process Control Systems

Definition
Industrial process control systems (PCSs) perform monitoring and control functions in such diverse critical infrastructures as electrical power generation, transmission, and distribution; oil and gas transport; and water pumping, purification, and supply. Some of these systems, such as Supervisory Control and Data Acquisition (SCADA) systems, typically span large geographic areas and rely on a variety of communication systems, compounding the difficulty of making them secure.

Importance
Although some technologies to secure SCADA systems and other PCSs exist, many organizations are not using these technologies to effectively secure their operational systems. Until recently, there was little perceived threat to these systems, in part because their proprietary nature was viewed as making them difficult to attack. The evolution of critical infrastructure system architectures from isolated stand-alone proprietary systems to distributed networked systems – coupled with the deregulation and market competition that have opened access to third parties and increased integration of PCS infrastructure with business networks – has led to increased security exposure of PCSs.

In this environment, new security approaches are needed that take into account the unique operating requirements of some PCSs, such as sensitivity to communication latency, low bandwidth, small sizes of end devices, real-time information flows, divergent message formats based on monitored events, dynamic message routing, reliability, fault tolerance, and survivability. Needed security capabilities include methods, technologies, and tools for deriving security requirements and metrics, performing security analysis, designing security controls, and testing and evaluating the effectiveness of implemented controls. Because industry will bear most of the cost of improved security for PCSs, security solutions need to be economically viable.

State of the Art
Security solutions for PCSs historically have been minimal and ad hoc. Even now, some systems have limited security architectures with little or no adherence to computer security principles such as least privilege or separation of duties. In many environments, security has been added piecemeal rather than designed in from the start, and no widely accepted metrics exist for measuring the security levels of these systems. Although PCSs are often mission-critical, their security has often not kept pace with that of e-commerce systems, in which commercial necessity has driven rapidly improving methods and tools for processing transactions securely.

Capability Gaps
Today, R&D in security for PCSs is fragmented, with researchers scattered across academia, research labs, and industry. A developmental effort will be necessary to increase attention to this topic and integrate research skills spanning a number of technical R&D areas to address the specialized security requirements of PCSs. Capabilities are needed in:

Novel security properties: Research is needed to develop understanding of the security properties and classes of vulnerabilities that may be unique to PCSs and their implications for developing appropriate security policies and enforcement mechanisms.

Security metrics: A prerequisite for improving the security of PCSs is a comprehensive, accepted set of methods for measuring and comparing their security and safety properties. Appropriate metrics for these systems should be developed cooperatively with industry.

Testing and assurance: The benefits from security solutions can be realized only if they are implemented correctly and the resulting system is tested as a whole. System and software testing is expensive, typically consuming half of system development budgets. Improved, cost-effective methods are needed. By taking advantage of the specialized characteristics of PCSs, it should be possible to develop methods and tools that are more cost-effective than generalized software testing approaches.


National testbed and testing program: Development of test methods based on test and evaluation criteria can form the basis for security evaluations of PCSs by commercial laboratories. Mission-critical PCSs will require more thorough testing, such as the FAA’s regimen for certifying aviation software. A widely endorsed infrastructure for validating security evaluations and issuing security assurance certificates for PCSs would be beneficial.

Developing these capabilities will require expertise in diverse areas such as process control, computer hardware logic, network topology analysis, security vulnerability assessment, security metrics, and security testing and evaluation methods. Academic institutions may have researchers in a few of these areas but may not be able to put together teams with the combination of skills needed over sustained multi-year periods. Federal leadership may be needed to foster this R&D, given that high-assurance PCSs have not diffused into broad use in the commercial marketplace.


3. DOMAIN-SPECIFIC SECURITY

The R&D topics in this category focus on specialized security needs associated with particular classes of technologies within specific IT domains. Topics in this category are:

❖ Wireless security (traditional wireless Internet as well as mobile ad hoc networks)

❖ Secure radio frequency identification (RFID)

❖ Security of converged networks and heterogeneous traffic (data, voice, video, etc.)

❖ Next-generation priority services

3.1 Wireless Security

Definition
This area involves measures to provide cyber security and information assurance to users, devices, and networks that use radio frequency (RF) or infrared (IR) physical layers for communication. These measures include wireless network protection, intrusion detection, and analysis of and response to threats. Wireless security technologies typically operate at Layer 1 (physical) and/or Layer 2 (data link) of the OSI model (see box on page 36) but are tightly integrated with higher-layer security mechanisms to contribute to a holistic security solution.

Importance
Security architectures for wired networks rely at least to some degree on physical security to deny would-be intruders access to local networks and data. Within the walls of many organizations, wired network traffic is unencrypted and nodes may not be individually firewalled, because the physical security provided by door locks, cable shielding, guards, fences, and the like is viewed as sufficient. However, RF and IR signals pass through and across many of the physical boundaries of wired networks, rendering physical security ineffective.

Wireless networks enable network topologies not even considered in the wired networking world. For example, mobile ad hoc networks have no network boundary or gateway in the traditional sense. Instead, each node can access and be accessed by many or all other network nodes, as well as by outside networks.

Thus both traditional wireless networks and mobile ad hoc networks have characteristics that render traditional perimeter-based security architectures, such as corporate firewalls and wired intrusion detection sensors, ineffective. The lack of physical security and network boundaries in wireless networks poses challenges to computer and network security.

State of the Art
Current wireless security technologies focus mainly on data confidentiality and frequently do not provide robust availability, integrity, authentication, non-repudiation, and access control. These technologies are more suitable for benign environments in which jamming and interference are not problems, some physical security is present, and exposure of network management and topology information does not pose a high risk. Data security in wireless networks is usually provided by Layer 3 (network) and above techniques, such as virtual private networks and secure tunnels (e.g., secure shell and secure socket layer). While these techniques provide strong encryption for higher-layer data streams, they do not address vulnerabilities at the physical and data-link layers. This allows wireless network attacks such as wireless-specific intercept, DoS, man-in-the-middle, jamming, and spoofing. Wireless networks demand cross-layer situational awareness.

Capability Gaps
Commercial interests have fostered improved data security in wireless networks, but other wireless security capabilities are immature.

Additional protection mechanisms at the physical and data-link layers are needed, including adaptive antennas and coding techniques that are jam-resistant and can respond to threats in real time. Protected Layer 2 management protocols are needed to eliminate spoofing and DoS attacks. Wireless-specific intrusion detection capabilities using RF sensors are needed to supply network monitoring systems with data unique to the wireless network. In addition, wireless protection, detection, and response technologies should be integrated with higher-layer mechanisms across both wired and wireless network domains. Network situational awareness tools need to include these features to provide a comprehensive understanding of the entire network, including existing and anticipated threats.

3.2 Secure Radio Frequency Identification

Definition
Radio frequency identification (RFID) tags, also called smart tags, are poised to replace the bar code as a mechanism for rapid object identification. The most common application of bar code technology is the Universal Product Code (UPC), which has been used for several decades on consumer product labels. An RFID tag consists of a microchip and a metallic antenna used to receive signals from RFID readers and emit responses. Active RFID tags include self-contained power sources and, as a result, generally have greater range, processing power, and data storage. Although passive tags, which are powered solely by the energy in signals from readers, are less capable, they are expected to become far more plentiful due to their lower cost.

Importance
Smart tags can be scanned by RFID readers at a rate of several hundred tags per second, with neither line of sight nor close proximity required. While RFID has numerous advantages over bar code technology, it also raises privacy and security concerns. Privacy issues arise from the ability of RFID to track individuals and inanimate objects, scan personal belongings from a distance, or aggregate and correlate data from multiple tag-reader locations. Types of attacks include eavesdropping and unauthorized scanning, traffic tracking and analysis through predictable tag responses, spoofing, DoS disruption of supply chains, and corporate espionage due to lack of reader access control.

RFID technologies are expected to be increasingly used in applications such as inventory control, logistics and real-time location systems, manufacturing, baggage handling, retailing, supply chain management, and transportation. Potential government applications range from RFID-enabled passports to supply chains for military logistics and commerce.

State of the Art
The small size, power constraints, computational processing constraints, and limited chip count associated with RFID technologies preclude complicated data processing and use of sophisticated cryptographic algorithms. Per-unit cost is a barrier to widespread use, but trends of increasing processing capabilities and decreasing costs are expected over time.

Two types of risks are associated with the security of RFID tags. The first is the possibility of DoS attacks against RFID tag readers that would render them incapable of tracking assets and inventory or reading product prices in point-of-sale applications. Criminals might use such an attack to make readers inoperable in order to hide criminal activity. The second and more serious type of risk involves the basic security functions associated with RFID tags and readers, such as encryption of information and authentication of RFID communication signals. Inadequate RFID security could result in unauthorized eavesdropping on communication signals, unauthorized tracking of assets, or spoofing of readers by intentionally misleading tags. This could lead to unauthorized access to sensitive information about individuals or supply chains, price tampering, counterfeiting, theft, and other illegal activity.

Capability Gaps
Although technology developers are beginning to address RFID security requirements, additional work is necessary. Research is needed on lightweight cryptography in the context of the power and processing resource constraints under which RFID tags operate. Given the small gate count and limited memory and computational capabilities of RFID tags, the cryptographic techniques available to RFID designers and developers are limited. Cryptographic standards and reference implementations of RFID cryptographic algorithms are needed to enable the development of interoperable technologies and a competitive marketplace. Beyond simply encrypting transmitted information, needed capabilities extend to authentication of tags and readers to avoid unauthorized scanning of tags, tracking of individuals or assets, or spoofing.
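As a hedged sketch of one direction discussed in the research literature (a hash-based challenge-response in the spirit of “hash-lock” proposals, not a standard endorsed by this Plan), the following shows how even a modest one-way function on the tag avoids emitting the identifier in the clear and blocks replay of stale responses. SHA-256 stands in here for the far lighter primitives a constrained tag would actually require.

```python
import hashlib
import os

# Hedged sketch of a hash-based RFID challenge-response ("hash-lock"-style,
# from the research literature). The tag shares a secret with the back-end
# database and never emits its identifier in the clear.
def tag_response(tag_secret: bytes, challenge: bytes) -> bytes:
    # The tag hashes the reader's fresh challenge with its secret, so an
    # eavesdropped response cannot be replayed against a new challenge.
    return hashlib.sha256(tag_secret + challenge).digest()

def identify(database: dict[str, bytes], challenge: bytes,
             resp: bytes) -> str | None:
    # The reader's back end searches its known tags for a matching secret.
    for tag_id, secret in database.items():
        if hashlib.sha256(secret + challenge).digest() == resp:
            return tag_id
    return None

db = {"pallet-0042": os.urandom(16)}
nonce = os.urandom(16)                      # fresh per-query challenge
resp = tag_response(db["pallet-0042"], nonce)
assert identify(db, nonce, resp) == "pallet-0042"
```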

Research is needed on end-to-end security for the complete RFID life cycle, since the privacy and security issues raised by RFID technologies are present from the manufacturing stage until the tag is destroyed. These issues are associated not only with the tags themselves but also with the readers and database management systems that store and process RFID information. Industry and government need to work together to develop technologies and policies to securely use RFID while maintaining confidentiality of sensitive data, protecting citizens’ privacy, addressing the needs of law enforcement, and complying with government rules and regulations. Work is also needed to quantify the benefits of RFID technologies in various application domains such as cargo tracking and passport control.

3.3 Security of Converged Networks and Heterogeneous Traffic

Definition
The telecommunications sector is undergoing a transition from traditional voice and voice-band (fax and modem) data communication over a public switched telephone network (PSTN) to a next-generation network (NGN) reflecting the convergence of traditional telecommunications with IP-based communications. As existing and new services become available on the NGN, it will be necessary to provide at a minimum the same level of security as the current PSTN. As the NGN evolves toward a packet-based network, it will be necessary to provide security for multiple broadband, QoS-enabled transport technologies across different service providers, independent of any specific access or transport technology.

Importance
Next-generation network services require security policies that ensure consistent application of security measures across a range of network types and access technologies and across service provider networks. The foundation for research on NGN security should include comprehensive NGN models that provide a structured framework for identifying needed security services, including gaps in current security standards; determining security services that need to be deployed; and assessing the risks and benefits of deploying specific security technologies by systematically evaluating security deployments.

State of the Art
Although the evolution of the PSTN and IP-based communication networks has already begun, the converged networks that are expected to form the next generation of telecommunication networks do not currently exist as they are envisioned. Capabilities available in today’s PSTN that will also be required as part of the NGN security architecture include: access control, authorization, non-repudiation, confidentiality, communications security, data integrity, availability, and privacy. Existing approaches to providing these capabilities in IP-based networks include encryption and virtual private networks. Redundant communication paths are likely to continue to be used in the future, but are not sufficient to meet all of the NGN security requirements.

Capability Gaps
Research is required on these NGN security issues:

❖ Large-scale identity management technologies for use in addressing, rather than for location as in the traditional PSTN

❖ Highly scalable authentication architectures and techniques that make use of multiple authentication factors (e.g., name or ID, password, subscriber identity module [SIM] card containing user authentication information, smart card, physical or software token)

❖ Techniques to enable non-repudiation on a user-to-user basis, unlike existing capabilities that are focused at the network rather than the user level

❖ Technologies that enable data integrity, confidentiality, and availability across the control and media planes of the network and across all security layers

❖ Technologies that enable the above security requirements to be met while at the same time assuring some degree of protection of privacy-sensitive information


NGNs will rely on existing technologies and new approaches for providing services and applications, network management, signaling and control, and transport. In addition to the initial development of new technologies, these new capabilities will need to be transitioned to appropriate standards development communities in order to assure evolution toward globally scalable and interoperable NGN architectures.

In addition to these generic requirements, security requirements associated with important application domains must be addressed in NGNs. Such requirements include those associated with next-generation priority services in support of national security/emergency preparedness (NS/EP) telecommunications (section 3.4, below) and the real-time requirements associated with security of process control systems (section 2.4, above).

3.4 Next-Generation Priority Services

Definition
Telecommunication priority services provide priority access to telecommunications capabilities in support of national security and emergency preparedness (NS/EP). Priority services in traditional wireline and wireless telecommunications enable users of these services to communicate in crises, when the public switched telephone network (PSTN) may experience high network congestion or diminished network capacity. Existing priority service architectures cover call origination, PSTN network access, transport, network egress, and call termination. As the telecommunications industry increases its use of IP-based telephony, and as other IP-based communications (e.g., e-mail, IP-based video conferencing, and other Internet-based information exchange) become increasingly prominent in the NS/EP community, the need arises to support priority service in the IP domain.

Importance
Stakeholders ranging from local first responders to Federal decision makers rely on NS/EP telecommunications to communicate during crises such as emergencies, attacks, and natural or manmade disasters, as well as during subsequent recovery and reconstitution efforts. The purpose of the NS/EP telecommunications infrastructure is to:

❖ Respond to the NS/EP needs of the President and the Federal departments, agencies, and other entities, including telecommunications to support national security leadership and continuity of government

❖ Satisfy priority telecommunications requirements under all circumstances through use of commercial, government, and privately owned telecommunications resources

❖ Incorporate the necessary combination of hardness, redundancy, mobility, connectivity, interoperability, restorability, and security to obtain, to the maximum extent possible, the survivability of NS/EP telecommunications

State of the Art
The Government Emergency Telecommunications Service (GETS), established by the National Communications System (NCS) in 1995, provides a nationwide, ubiquitous voice and voice-band data service that interoperates with and uses the resources of selected government and private facilities, systems, and networks through the application of standards, and provides access to and egress from international service. GETS provides priority access and specialized processing in local and long-distance networks. It is maintained in a constant state of readiness to make maximum use of all available communications resources should outages or congestion occur during a crisis. GETS is survivable under a broad range of circumstances, ranging from local to widespread damage, and provides routing, signaling, and network management enhancements that result in a higher probability of call completion in congested networks. GETS augments and improves the public switched network with capabilities that include enhanced routing schemes and call-by-call priority treatment over the PSTN.

To complement GETS wireline services, in 2002 the NCS deployed Wireless Priority Service (WPS), a subscription-based priority service through commercial wireless service providers that ensures NS/EP communications availability when wireless communications users experience high levels of blocking and congestion. WPS allows authorized personnel to gain access to the next available wireless channel to initiate NS/EP calls.

Capability Gaps
The Internet plays an increasingly important role in communication and exchange of information generally, and consequently in the NS/EP community in particular. Furthermore, the previously independent infrastructures for traditional circuit-switched telecommunications and IP-based communications are in the process of evolving into converged next-generation networks. Because the technological infrastructures for providing existing priority services do not extend into the IP domain, this area of potentially essential telecommunications capability and capacity lacks the ability to prioritize critical communications traffic to support NS/EP missions in times of crisis. Instead, communications over IP-based networks are based on best-effort delivery, an approach that may be problematic in conditions of high congestion or reduced capacity.

To achieve assured delivery of NS/EP voice, video, and data traffic over the Internet, research is needed to determine what types of routing overlay and meta-application models are required to authenticate NS/EP users, prioritize packet traffic, detect network congestion, reallocate and queue resources, and re-route Internet traffic accordingly. Models that should be investigated include out-of-band network flow management and the use of virtual channels to carry management control information.

Numerous approaches using the IP model are possible. However, prioritized delivery of individual packets at lower layers of the OSI model (see box on page 36) does not guarantee that transactions will receive priority processing on end systems and servers. Since any single protocol is likely to be insufficient to guarantee priority, several approaches may need to be combined to form an operational system. Different types of IP-based data (e.g., voice-over-IP, streaming video, and e-mail) may be treated differently due to varying degrees of sensitivity to network characteristics such as latency, jitter, packet loss, throughput, and availability.
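To make the queueing idea concrete, here is a minimal sketch of class-based priority scheduling. The traffic classes and the mapping of authenticated NS/EP users to high-priority classes are illustrative assumptions, not a design from this Plan; a deployable service would need enforcement at every congested hop across provider networks rather than a single in-memory queue.

```python
import heapq
import itertools

# Minimal sketch of priority scheduling for NS/EP traffic (illustrative
# classes; lower number = higher priority).
PRIORITY = {"nsep-voice": 0, "nsep-data": 1, "best-effort": 2}

queue: list[tuple[int, int, str]] = []
counter = itertools.count()   # tie-breaker keeps FIFO order within a class

def enqueue(traffic_class: str, payload: str) -> None:
    heapq.heappush(queue, (PRIORITY[traffic_class], next(counter), payload))

def dequeue() -> str:
    _, _, payload = heapq.heappop(queue)
    return payload

enqueue("best-effort", "web page")
enqueue("nsep-voice", "first responder call setup")
enqueue("nsep-data", "situation report")
print(dequeue())   # -> first responder call setup (highest priority first)
```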

Next-generation priority services should be resilient in the face of large-scale outages of the Internet infrastructure and of Internet support infrastructures such as electric power and telecommunications. They should also be resilient to cyber attacks originating within the Internet, such as DoS attacks and worms. The services should have ubiquitous coverage so that they apply to various physical- and link-layer technologies, locations, applications, and network topologies. Furthermore, they must work within single-provider networks as well as in cross-provider environments. In addition, next-generation priority services will need to satisfy the more generic NS/EP functional requirements discussed above in the context of existing priority services (e.g., availability, reliability, survivability, scalability, affordability).


4. CYBER SECURITY AND INFORMATION ASSURANCE CHARACTERIZATION AND ASSESSMENT

The R&D topics in this category address approaches, methods, technologies, and tools for evaluating, testing, and measuring security and risk in IT infrastructure components and systems, and in the infrastructure as a whole. Topics in this category are:

❖ Software quality assessment and fault characterization

❖ Detection of vulnerabilities and malicious code

❖ Standards

❖ Metrics

❖ Software testing and assessment tools

❖ Risk-based decision making

❖ Critical infrastructure dependencies and interdependencies

4.1 Software Quality Assessment and Fault Characterization

Definition
This area includes methods, technologies, and tools to assess overall software quality and to characterize defects according to their impact on the most significant quality attributes of software, such as security, safety, and reliability. This area also includes efforts to enable external assessment of these characteristics.

Importance
Overall software quality must be assured as a foundation for other security efforts. A key consequence of today’s lack of validated methods and metrics to evaluate software quality is that economic mechanisms (such as risk analysis, insurance, and informed purchasing decisions) that help advance other disciplines either do not exist or have little effect on software development. Software quality assessment would help the R&D community understand what defects lead most directly to security problems and focus R&D on those problems.

State of the Art

The gap between the state of the art and the state of the practice in developing near-defect-free, secure software is large. Methods such as the Software Engineering Institute's Team Software Process can produce 20-fold to 100-fold reductions in the number of defects, yet these methods are not widely used. Research in tools to assure that code has not been modified without authorization is promising. There are tools that can assure that software is free from common security defects, such as the vast majority of buffer overflows. However, these tools also are not always used. The cyber security and information assurance community currently does not have a good understanding of what would motivate purchasers to insist upon such measures and developers to use them.

Capability Gaps

More robust assessment processes and automated quality assurance: Tools are needed that enable software developers to assess the quality and security of the software they are designing throughout the development process. Automated techniques for analyzing software would reduce the time and effort required to assess software quality, thereby enabling developers to evaluate their designs more frequently. These methods should include mechanisms for assuring that code has not been tampered with by a third party (for example, proof-carrying code). COTS evaluation methods and resources should be augmented to take advantage of these capabilities, and software purchasers should have access to assessment methods and results. R&D is needed to improve the effectiveness, accessibility, and adoptability of such mechanisms.
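
As a deliberately simplified illustration of tamper evidence, the sketch below uses a keyed digest to check that an artifact is unchanged since it was approved. A production scheme would use asymmetric signatures, and proof-carrying code goes further by attaching a machine-checkable safety proof; the key and artifact here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared key, distributed out of band; real deployments would
# use public-key signatures rather than a shared secret.
SHARED_KEY = b"example-approval-key"

def sign_artifact(data: bytes) -> str:
    """Compute a keyed digest over the approved artifact."""
    return hmac.new(SHARED_KEY, data, hashlib.sha256).hexdigest()

def verify_artifact(data: bytes, tag: str) -> bool:
    """Return True only if the artifact matches its approval-time digest."""
    return hmac.compare_digest(sign_artifact(data), tag)

artifact = b"\x7fELF...contents of an approved module..."
tag = sign_artifact(artifact)            # recorded at approval time
assert verify_artifact(artifact, tag)    # fails if the code was later modified
```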

Connect defects to attributes of highest concern: Research is needed to help categorize defects and to understand their relative severity. In-depth analysis should be performed to identify the most common vulnerabilities in various contexts (e.g., by operating system, application type, and programming language). The results should be detailed enough that processes and tools can be designed to specifically counter the most common vulnerabilities, and that education efforts can target the most common problems. Some rudimentary high-level statistics are available (e.g., statistics on buffer overflows or race conditions), but these statistics are not detailed enough to shed light on which processes or tools can effectively be used to counter a given vulnerability.
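
The finer-grained breakdown called for here is, at base, a categorization exercise over defect records. A minimal sketch, with records and categories invented purely for illustration:

```python
from collections import Counter

# Illustrative defect records; a real study would draw on large
# vulnerability databases annotated with context fields like these.
records = [
    {"os": "linux", "language": "c", "vuln_class": "buffer-overflow"},
    {"os": "windows", "language": "c++", "vuln_class": "race-condition"},
    {"os": "linux", "language": "c", "vuln_class": "buffer-overflow"},
]

# Tally vulnerability classes per context so the most common pairs stand out.
by_context = Counter((r["language"], r["vuln_class"]) for r in records)
for (language, vuln_class), count in by_context.most_common():
    print(f"{language:4} {vuln_class:16} {count}")
```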

4.2 Detection of Vulnerabilities and Malicious Code

Definition

This research area focuses on methods, technologies, and tools to detect vulnerabilities and malicious code and to provide assurances about software security and other quality attributes. These capabilities are targeted at source code during development or re-factoring as well as at in-use object or binary code.

Importance

Because software vulnerabilities are increasingly being exploited not by "recreational" hackers but by criminals or other adversaries with more malicious intent, there is a need for improved capabilities for detecting, mitigating, or eliminating vulnerabilities before they are exploited. In addition, IT developers need to be able to make assurances about the characteristics of their software. Today, developers do not have adequate feedback to make security-critical decisions, and IT consumers cannot objectively evaluate the security of the software products they are purchasing.

State of the Art

Recent developments in source code analysis address some of the scalability, usability, and sustainability issues that date from the early days of software programming. However, information is lacking about which analyses are most effective, how best to use them, their applicability to legacy code, and how to overcome obstacles to adoption and sustained use. Many other promising targets for code analysis have yet to be explored.

Capability Gaps

Research is needed to establish new methods, technologies, and tools for vulnerability and malicious code detection. Approaches should be evaluated for effectiveness and the most promising ones identified for further R&D. Specific thrusts should include:

Improve source, object, and binary code scanning tools: Software developers benefit from working in an environment in which they have immediate feedback about the characteristics of the software they are developing. Creating such an environment will require improved source, object, and binary code scanning tools (static analysis tools) and automated execution testing tools (dynamic analysis tools) that search for security vulnerabilities. Many such tools and services exist. However, their capabilities need to be expanded to a broader range of problems, and obstacles to their adoption such as high false-positive rates, inadequate scalability, and inapplicability to legacy systems should be overcome. In many cases, these tools and services need to "understand" the design intent behind the code. Promising research that automatically extracts such intent with minimal guidance from the developer should be encouraged. Tools should take advantage of source code where available, but improved tools are also needed for detecting vulnerabilities in object and binary code when the original source code is not available.
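
To make the distinction concrete: a static analysis tool examines code without running it. The toy scanner below flags C library calls long associated with buffer overflows; it also shows why naive pattern matching produces the high false-positive rates noted above, since it cannot see how a flagged call is actually used. Python, illustrative only:

```python
import re

# Crude static check: flag C calls that commonly lead to buffer overflows.
# Real tools parse and analyze the program rather than matching text,
# which is how their false-positive rates are driven far lower than this.
RISKY_CALLS = re.compile(r"\b(gets|strcpy|strcat|sprintf)\s*\(")

def scan_c_source(source: str) -> list[tuple[int, str]]:
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if RISKY_CALLS.search(line):
            findings.append((lineno, line.strip()))
    return findings

sample = "void f(char *s) {\n  char buf[16];\n  strcpy(buf, s);\n}\n"
for lineno, line in scan_c_source(sample):
    print(f"line {lineno}: possible overflow: {line}")
```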

Develop malicious code detectors: Because it is easy to make an intentional injection of malicious code look like a simple mistake, such code is difficult to detect reliably. Promising developments in simulation, code scanning, function extraction, covert channel detection, and backtracking would benefit from further R&D.

Improve the interoperability of analysis tools: Many individual tools are available to aid in code analysis (such as decompilers, debuggers, and slicers), but they often do not work well together. Research is needed to ascertain how analysts use and want to use tools in combination (including identifying information flows and common processes), and to determine how to make tools more interoperable, including defining standard interchange formats to support such flows.
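
No particular interchange format is assumed by this Plan; purely to fix ideas, a finding could be serialized as a small structured record that any cooperating tool can parse. All field names below are invented for this sketch:

```python
import json

# Hypothetical interchange record for a single analysis finding; a shared
# schema of this kind is what would let scanners, debuggers, and triage
# tools pass results among themselves.
finding = {
    "tool": "example-scanner/1.0",
    "target": {"file": "net/parser.c", "line": 212},
    "rule": "buffer-overflow",
    "severity": "high",
    "evidence": "strcpy into fixed 64-byte buffer",
}
print(json.dumps(finding, indent=2))
```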


4.3 Standards

Definition

The goal of cyber security standards is to improve the security of IT systems, networks, and critical infrastructures by increasing the security, integrity, and reliability of commercial products. A cyber security standard defines both functional and assurance requirements within a product or technology area. Well-developed cyber security standards enable consistency among product developers and serve as a reliable metric for purchasing security products. Cyber security standards cover a broad range of granularity, from the mathematical definition of a cryptographic algorithm to the specification of security features in a Web browser, and are typically implementation-independent.

The best results emerge from standards development processes that are consensus-based, in which all stakeholders – users, producers, and researchers – participate. In such approaches, each group of stakeholders contributes unique perspectives to the process: users understand their needs, producers understand the current state of technology, and researchers understand future trends. A standard must address user needs but must also be practical, since cost and technological limitations must be considered in building products to meet the standard. Additionally, a standard's requirements must be verifiable; otherwise, users cannot assess security even when products are tested against the standard.

Importance

The security of IT systems begins with the security of their hardware and software components, and includes both security technologies and non-security hardware and software. Federal agencies, industry, and the public rely on commercial security products to protect information, communications, and IT systems. Adequate product testing against well-established cyber security standards facilitates technical improvements and helps give users confidence that the products meet their security needs. Both Federal agencies and the public benefit from the use of tested and validated products. In the absence of adequate testing, product weaknesses can render systems and critical infrastructures vulnerable to common attacks and other malicious activities.

State of the Art

As cyber security issues grow in strategic importance to the Nation, the need for methods and tools to assess and verify the security properties of computer-based systems and components is becoming more apparent to both developers and consumers. Security standards such as the Common Criteria Protection Profiles are being developed to address government national security systems. Security standards are also being developed and applied to systems throughout the Federal government.

At the same time, the requirement for security standards is new enough that many current IT products do not provide the level of security needed to protect sensitive information, electronic commerce, and critical infrastructures. In general, widely accepted cyber security standards that meet the needs for IT security in unclassified and/or sensitive civil government and commercial environments have not yet been developed. Moreover, the cyber security standards used in the national security arena are seldom built by consensus and often specify requirements beyond existing commercial technological capabilities.

Today, evaluation processes to determine compliance with security standards also are limited. Standards development processes frequently do not consider how this testing and validation will be done. Even if a standard has been well thought out from the standpoint of verifiability, specific test methods are seldom developed upon completion of the standard.

Capability Gaps

Developing cyber security standards is time-consuming, and developing associated test methods can be even more demanding without proper planning. Both require efforts from well-organized stakeholders. Today's standards and test methods community is small. Much of the expertise lies in the commercial sector, with little in the user and research communities. Often, the commercial sector does not initially perceive the return on investment from developing or using cyber security standards. Strategies are needed to encourage earlier buy-in to the process by all stakeholders.

Compliance testing helps assure that a product meets a cyber security standard and also helps isolate and correct security problems before the product enters the marketplace. Expeditious, cost-effective, and detailed technical test methods need to be developed for each component of an IT system. R&D that couples standards development with the creation of associated test methods in ways that reduce the time and cost of product validations can have the most immediate benefit to stakeholders. Testing methods, which detail the tests and documentation necessary to determine compliance with each requirement of a security standard, must be science-based and able to generate consistent, measurable, repeatable, timely, and cost-effective product validation results. Some automated techniques for generating tests exist today, but automating test generation for generalized cyber security standards is beyond current capabilities. The application of formal methods to these problems warrants additional investigation.
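
To give a flavor of automated test generation in its simplest form: when a requirement can be stated as a checkable property, test cases can be generated mechanically and rerun repeatably. The sketch below uses a trivial stand-in system and property; generating tests from a full security standard is, as noted above, far beyond this.

```python
import random

# Stand-in system under test and its requirement: encode/decode must
# round-trip any input. Both functions are placeholders for this sketch.
def encode(data: bytes) -> bytes: return data[::-1]
def decode(data: bytes) -> bytes: return data[::-1]

def generated_tests(n: int = 100):
    # Fixed seed: results are repeatable, as compliance testing demands.
    rng = random.Random(0)
    for _ in range(n):
        yield bytes(rng.randrange(256) for _ in range(rng.randrange(64)))

for case in generated_tests():
    assert decode(encode(case)) == case
print("all generated compliance tests passed")
```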

Looking farther into the future, strategies for identifying emerging technologies that will require standardization need to be developed. Given the pace of technological change, research in the management of technology life cycles, including the effects of new technologies, is essential.

4.4 Metrics

Definition

Metrics can be defined as tools designed to facilitate decision making and improve performance and accountability, such as through the collection, analysis, and reporting of performance data. Operators can use such quantifiable, observable, and measurable data to apply corrective actions and improve performance. Regulatory, financial, and organizational factors drive the requirement to measure IT security performance. A number of laws, rules, and regulations require IT performance measurement in general and IT security assessment in particular. These laws include the Information Technology Management Reform Act (also known as the Clinger-Cohen Act), the Government Performance and Results Act, the Government Paperwork Elimination Act, the Federal Information Security Management Act, and the Health Insurance Portability and Accountability Act. Other drivers are the national and homeland security implications of IT infrastructure vulnerabilities.

Potential security metrics cover a broad range of measurable features, from security audit logs of individual systems to the number of systems within an organization that were tested over the course of a year. Security metrics draw on diverse, multi-dimensional data that are collected in real time and analyzed. Effective security metrics should be used to identify security weaknesses, determine trends to better utilize security resources, and measure the success or failure of implemented security solutions. Ultimately, the metrics should help characterize an organization's overall security posture from risk/threat/vulnerability, budgetary, and regulatory standpoints.
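
Two of the simplest metrics of this kind, computed here over invented inventory and incident data purely for illustration:

```python
from datetime import date

# Illustrative inventory and remediation data; real programs would pull
# these from asset databases and incident-handling records.
systems = [
    {"name": "web-01", "last_tested": date(2005, 11, 3)},
    {"name": "db-01", "last_tested": None},
]
remediation_days = [14, 3, 30, 7]  # days from detection to fix

# Metric 1: share of the inventory tested; Metric 2: mean time to remediate.
tested = sum(1 for s in systems if s["last_tested"] is not None)
coverage = 100.0 * tested / len(systems)
mean_fix = sum(remediation_days) / len(remediation_days)
print(f"systems tested: {coverage:.0f}%  mean days to remediate: {mean_fix:.1f}")
```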

Importance

Although numerous products and best practices have been developed to provide security solutions, determining and measuring their effectiveness is difficult in the absence of validated metrics. Organizations can improve security accountability by deploying IT security metrics. The process of data collection and reporting enables security managers to pinpoint specific technical, operational, or management controls that are not being implemented or are implemented incorrectly.

Ideally, metrics should be available that can measure different aspects of an organization's IT security policies and mechanisms. For example, the results of risk assessments, penetration testing, and security testing and evaluation can be quantified and used as data sources for metrics. Security managers and system owners can use the results of metrics-based analysis to isolate problems, justify budget requests, and target investments to areas in need of improvement, thereby obtaining the most value from available resources. Security metrics assist in determining the effectiveness of implemented security products, processes, procedures, and controls by relating the results of security issues (e.g., cyber security incident data, revenue lost to cyber attacks) to organizational requirements and security investments. Departments and agencies can demonstrate compliance with applicable laws, rules, and regulations by implementing and maintaining security metrics programs.

State of the Art

Today, developing comprehensive security metrics for an organization's networks, systems, and information is hampered by two key issues: 1) the sheer volume of potential sources of security information and the fact that much of the information must be gathered, collated, and analyzed by hand; and 2) the reality that researchers do not yet adequately understand how to quantify, measure, and evaluate cyber security to inform decision making. Possible information sources include incident handling reports, test results, network management logs and records, audit logs, network and system billing records, configuration management and contingency planning information, and training, certification, and accreditation records. However, without metrics and tools for automating the collection of significant data and streamlining their analysis, evaluating security effectiveness is speculative at best, with hackers and attackers possibly having better awareness of an organization's security standing and weaknesses than the organization itself.

Capability Gaps

In the absence of sound methods and valid, persuasive evidence, the private sector's ability to make well-informed, risk-based IT security investments is limited, and overall levels of cyber security in the IT infrastructure remain low.

It is difficult for organizations to justify allocating resources for a security metrics program, particularly in the context of constrained budgets. Rather than expending the time and resources to gather and analyze security data, organizations too often limit their cyber security activities to simply purchasing commercially available security products. Many organizations have not taken even the first step in building a security metrics system, which is to establish a baseline or framework of the key types of data that will go into measuring security effectiveness. Improved identification of key types of metrics information, more intelligent tools, and automation of metrics data collection and analysis are needed. But the security metrics field is relatively new, with a limited number of experts.

4.5 Software Testing and Assessment Tools

Definition

A test is an execution of a software program or system to determine one or more of its characteristics. Software assessment makes that determination through a static examination of the software. Static examination may focus either directly on the software or indirectly on such related aspects as specifications or development records.

Software testing and assessment tools assist in, and often automate, the exacting tasks of testing and assessment. Software testing and assessment usually presuppose a specified plan. Testing and assessment provide evidence that a software implementation satisfies requirements such as functionality, compliance, security, reliability, usability, efficiency, interoperability, and portability. Testing and assessment can – and should – occur at every phase of the software development process, including requirements analysis, design, coding, and acceptance.

Importance

Security vulnerabilities can surface almost anywhere in software that is ubiquitous in the Nation's IT infrastructure. According to a May 2002 NIST report entitled Economic Impacts of Inadequate Infrastructure for Software Testing, the cost to the Nation of inadequate software quality testing is estimated at $59.5 billion annually. In addition, the report states that increased software complexity and decreased average market life expectancy heighten concerns about software quality.

According to the NIST report, only 3.5 percent of errors are found during the software requirements and design phases. This statistic suggests why a release-and-patch approach to software is increasingly untenable. The report concludes: "The path to higher software quality is significantly improved requirements, specifications, and software testing early in the life cycle." It argues that reference implementations, metrics, and suites of standardized testing tools could help address software inadequacies. A 2003 NSF-funded Computing Research Association report, Grand Research Challenges in Information Systems, argues that to "conquer system complexity," R&D advances are needed that make complex systems easier to design.

State of the Art

The following types of tools can be used in software testing and assessment:

❖ Design tools assist in functional design, internal design, and code design. They also analyze requirements and designs for ambiguities, inconsistencies, security vulnerabilities, and omissions.

❖ Development environments assist in writing, compiling, and debugging code.

❖ Inspection tools assist in requirement reviews and code walk-throughs.

❖ Test design and development tools are used to develop test plans and abstract test cases, manage test data, and generate automated tests from specifications.

❖ Execution and evaluation tools develop concrete test cases and test harnesses, set up testing environments, perform selected tests, record test executions, log and analyze failures, and measure testing effectiveness and coverage.

❖ Artifact examination tools scan code and executables for bugs or vulnerabilities and reverse-engineer control flow.

❖ Support tools assist in project management and documentation, and control and track configuration.

State-of-the-art testing and assessment tools are not widely used. Commercial software often is inadequately specified and tested, resulting in products with many bugs. The lack of formal specifications can result in the introduction of vulnerabilities during the design or implementation stages. In poorly structured software, a fault anywhere in thousands or millions of lines of code can result in security vulnerabilities. Faults may in some cases be deliberately added "back doors" rather than inadvertent mistakes. Existing software test and assessment tools could help developers avoid or catch many systematic errors. The most advanced tools generate automated tests from rigorous specifications, help analysts understand code, and monitor execution. These tools, coupled with the best of today's software development methods, can improve software quality.

However, even these tools are often difficult to use, limited in scope and effectiveness, and unable to work with other tools, and they may lack clear demonstrations of effectiveness. While exploratory work suggests that more powerful tools are possible and that existing capabilities can be packaged in easier-to-use formats, developing these improvements is expensive. Advanced tools require large software subsystems as test infrastructures in which to analyze code, make inferences, and track and correlate information. Because of competitive pressures, there are barriers to collaboration among stakeholders who could benefit from cooperation on test methods, testing suites, and development environments. The situation is analogous to a hypothetical state of air travel in which each airline has to design, build, and test its own airplanes, airports, reservation networks, and air traffic control systems.

Capability Gaps

As the NIST report states, improving software quality will require better testing throughout all phases of development. Improved testing throughout software development will in turn require tools that can provide more comprehensive analysis, increased automation, and greater ease of use to produce more thorough testing at a lower cost. Because bugs are significantly more expensive to fix when they are discovered in later phases of the development process, developing higher-quality, lower-cost software will require more testing early in the development process.


The primary capability gap is at the very beginning of the software cycle, in the requirements, specifications, and top-level design phases. Current tools do not provide the precision and functionality to capture specifications in sophisticated modeling languages. Such languages could be used to generate measures of system complexity and completeness, identify design inconsistencies and ambiguities, minimize the likelihood of security vulnerabilities in an artifact that has been built to specification, and generate code and automated tests. The development of computational tools to support the requirements analysis and design phases of system development would be an improvement over what has traditionally been a manual process.

Gaps also exist during the software implementation phase. Avoiding errors and vulnerabilities in the design phase does not eliminate the need for testing in the development phases, because errors may be introduced as design decisions are made and details are added. Testing and assessment at the unit and subsystem level are more effective than waiting until the final system is built.

The lack of software assessment and testing tools for those who work with the final product is another capability gap. A comprehensive analyst's workbench is needed for post-development security analyses, such as an administrator checking the suitability of a COTS package for a given purpose or a contracting officer determining whether to accept a delivered software product. Some functions of the workbench would be to find control flow, remove obfuscations, structurally edit blocks of code, and maintain information about design intent. Such a system could also incorporate and enhance existing tools that scan for known vulnerabilities, test for anomalous behavior, and present functional slices of binary or source code.

Calibrated, validated tests and assessments are needed across all phases of software development. In the absence of such advances, developers are unlikely to use a tool if it is not clear how much assurance is derived from its results. They are unlikely to choose one tool over another if increased security cannot be predicted and is not objectively evident afterward. Finding a security flaw is significant; today, however, little can be said with certainty about code that is tested and in which no flaws are found. Although testing will never guarantee the absence of bugs, better testing capabilities will provide higher confidence in the security of systems than exists today.

4.6 Risk-Based Decision Making

Definition

Risk-based decision making assists managers in making more informed decisions through qualitative and quantitative mechanisms that account for desirable and undesirable outcomes. The development of investment strategies and resource allocation models – funding cyber security or other efforts – relies on information from risk management processes that facilitate identification and evaluation of threats, vulnerabilities, and impacts (economic and otherwise) of attacks relative to costs.

Importance

Today, cyber security is often a secondary issue for managers whose primary considerations are shareholder or stakeholder value, return on investment, and earnings. Research suggests that the impact of a cyber attack can range from the inconsequential, to a brief interruption of regular operations, to an incapacitating blow to the ability to conduct business. However, what is more important about the threat of cyber attack from a risk-management perspective is that the past is not necessarily a good predictor of future events. The threat, vulnerability, and risk space is dynamic. Attackers are constantly developing new approaches, and the increasing interdependence of critical infrastructures and the IT infrastructure heightens the risk of economic consequences from successful future attacks. Because cyber security will become a primary consideration only when it is factored into management's ability to earn profits, R&D is needed to develop sophisticated risk-based models for evaluating total return on investment.


State of the Art

In the business community, risk-based decision making methods have traditionally focused on risks of business interruption, project failure, natural hazard, and financial impact. These risks are so well understood that commercial insurance is available to cover them. However, analyses of business interruption and reconstitution rarely consider cyber attacks, and those that do generally do not consider low-probability, high-impact events. This may be because there are not yet any universally accepted tools for measuring the costs and benefits of expending resources to reduce cyber security risk.

Capability Gaps

This challenge provides an opportunity for researchers to develop accurate models for risk-based decision making in cyber security. An assessment of business risk from possible cyber attacks must identify threats, vulnerabilities, and consequences. The risk, or probable level of loss, can be calculated as a function of threats, vulnerabilities, and consequences. Each of these three risk factors can be reduced by various countermeasures. The risk factors cannot readily be estimated based on the frequency of occurrences in the past, because the most prevalent types of attacks in the past may not be the most common ones in the future, and because some types of potentially devastating attacks have not yet occurred but might in the future.

While it is, in principle, possible to estimate how much a given countermeasure will reduce the corresponding risk, research in this area has been minimal. If the total reduction in risk due to a given countermeasure can be quantified, then the countermeasure can be given an expected value, from which a justifiable price can be derived based on risk. The cost of each countermeasure can be compared with the reduction in probable loss over a relevant time period. Once an adequate assessment of overall risk is made, the development, implementation, and deployment of cyber security measures can be carried out with some confidence that the best investments are being chosen, given the available information.
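
The arithmetic described here is essentially an annualized-loss-expectancy calculation: expected loss as attack frequency times probability of success times loss per incident, with a countermeasure justified when its reduction in expected loss exceeds its cost. A minimal sketch with invented numbers:

```python
# Illustrative inputs only; estimating these reliably is exactly the
# research gap the text identifies.
rate_per_year = 0.8         # expected attack frequency (threat)
p_success = 0.25            # probability an attack succeeds (vulnerability)
loss_per_incident = 2.0e6   # dollars (consequence)

ale_before = rate_per_year * p_success * loss_per_incident

p_success_with_control = 0.05   # after deploying a countermeasure
control_cost = 150_000          # annualized cost of the countermeasure

ale_after = rate_per_year * p_success_with_control * loss_per_incident
net_benefit = (ale_before - ale_after) - control_cost
print(f"countermeasure net benefit: ${net_benefit:,.0f}")  # $170,000 here
```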

R&D in a number of areas is desirable to establish the knowledge base for risk-based decision making in cyber security. Analyses of organizations' risk exposure factors (e.g., industry, location, size, and security countermeasures) and analyses of potential financial losses to organizations from attacks of varying intensity, precision, and recurrence would provide useful baseline information and tools for institutional planning that do not exist today. Such information is often viewed as proprietary or sensitive, and organizations are reluctant to share it. Empirical studies of the deployment of risk-based decision making methods in improving cyber security would also be useful in evaluating the applicability of traditional models and developing new models tailored to organizational requirements. Studies of the insurance industry's posture toward cyber security, including the current accessibility, quality, and scope of policy underwriting related to cyber security, should also be undertaken.

4.7 Critical Infrastructure Dependencies and Interdependencies

Definition

Today, the infrastructures of U.S. sectors deemed critical to the national interest are increasingly dependent on the IT infrastructure. Research in this topic aims to develop a thorough scientific understanding of the ways in which critical infrastructure (CI) sectors depend on the IT infrastructure, and the extent to which the CI sectors are interconnected through components of the IT infrastructure and are interdependent. This will enable analyses of the potential impacts of cyber attacks on the operations of the CI sectors, including cascading consequences resulting from CI interdependencies, and assessments of the effectiveness of possible protective and mitigation measures for CI sectors.

Importance

When the consequences of a cyber attack on CI systems (including process control systems) are significant, consequence analyses of interdependencies of systems within an infrastructure and potential cascading effects across infrastructures can enable decision makers to understand possible outcomes of their decisions and assess trade-offs between alternative actions.

Such understanding is of particular concern to critical infrastructure sectors that rely heavily on IT systems to operate, such as the banking and finance sector. Linking the infrastructure interdependency consequence models now under development with IT system operations and security models provides an opportunity to manage the risks from cyber and physical threats in a holistic way within and across critical infrastructures.

State of the Art

The general principles of control theory and control systems are relatively well understood. In addition, the physical and virtual commodity flows that IT systems control are generally well understood at an industry or company level. However, failures and consequences are often situation-dependent, and neither the interdependencies between the infrastructures nor the relationship between failures in control systems and large-scale consequences are well understood. Understanding these potential impacts requires understanding the function of the IT and control systems, the operation of the infrastructure, and interdependencies with other infrastructures.

Network engineering models are useful for understanding system impacts of disruptions at one or more network nodes. Modeling network performance is a fundamental part of infrastructure analysis that is used for graphical representation of the infrastructure, for estimating system performance under adverse operating conditions, and for verification. Highly aggregated systems-level models are useful for assessing potential dynamic responses and propagating effects across infrastructures, but additional model development is needed.

Capability Gaps

Coupling critical infrastructure consequence analysis to cyber security faces two major challenges: 1) the lack of information about the behavior of control systems in abnormal and malevolent environments; and 2) the lack of modeling, simulation, and decision-support tools that capture the coupling of control and IT system operations to infrastructure operations.

Modeling and simulation: A key challenge in developing better understanding of the dependence of infrastructure operations on IT and control systems is that little real-world data about control system behavior in abnormal and malevolent environments is available. Abnormal events are sufficiently uncommon that data records documenting such events are minimal. Malicious events are uncommon as well. CI modeling and simulation that combine the control systems and the controlled processes can increase understanding of the connections to event consequences. However, commercial CI sector organizations are reluctant to share the knowledge needed to improve model fidelity because they fear that competitors may gain some advantage from that knowledge, or for liability reasons. The alternative is modeling and simulation using available data and validation of the simulations against publicly accessible information about abnormal and malicious events.

Improved modeling and simulation methods will make it easier to predict the behavior of generic networks in various scenarios, such as by performing "what if" analyses that are equivalent to virtual experiments. Integration of such models into larger infrastructure models will contribute to understanding the CI sectors' interdependencies. As this capability matures, coupled network engineering infrastructure and IT infrastructure models could be used to predict impending failures and visualize threatened outages based on loss of critical components.
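
At its simplest, a "what if" interdependency analysis propagates a postulated failure through a dependency graph. The sketch below does this by breadth-first search over an invented toy topology; real models would add timing, capacity, and partial-degradation effects:

```python
from collections import deque

# Edges point from a component to the components that depend on it.
# The topology is illustrative only.
depends_on_me = {
    "substation-A": ["telecom-hub-1"],
    "telecom-hub-1": ["scada-master", "bank-datacenter"],
    "scada-master": ["pipeline-valves"],
}

def cascade(initial_failure: str) -> set[str]:
    """Return every component that fails once initial_failure occurs."""
    failed, queue = {initial_failure}, deque([initial_failure])
    while queue:
        node = queue.popleft()
        for dependent in depends_on_me.get(node, []):
            if dependent not in failed:
                failed.add(dependent)
                queue.append(dependent)
    return failed

print(sorted(cascade("substation-A")))  # a power outage reaches the pipeline
```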

Robust decision making: Robust metrics are needed for optimizing risk-based crisis response when multiple infrastructures are mutually dependent. Additional communication protocols and data aggregation techniques are needed for real-time visualization and forecasting to aid in responding to intrusions or loss of functionality or confidence in highly trusted systems. Modeling and simulation tools, coupled with metrics, visualization, and forecasting tools, can be used in crisis response as well as in analysis and assessment to provide decision makers with a better basis for making prudent, risk-based strategic investments and policy decisions to improve the security of critical infrastructures.


5. FOUNDATIONS FOR CYBER SECURITY AND INFORMATION ASSURANCE

The topics in this category focus on fundamental technological elements that serve as building blocks for developing and engineering a more secure IT infrastructure. Topics in this category are:

❖ Hardware and firmware security
❖ Secure operating systems
❖ Security-centric programming languages
❖ Security technology and policy management methods and policy specification languages
❖ Information provenance
❖ Information integrity
❖ Cryptography
❖ Multi-level security
❖ Secure software engineering
❖ Fault-tolerant and resilient systems
❖ Integrated, enterprise-wide security monitoring and management
❖ Analytical techniques for security across the IT systems engineering life cycle

5.1 Hardware and Firmware Security

Definition

Hardware includes not only computers but also externally connected components – cables, connectors, power supplies, and peripheral devices such as a keyboard, mouse, and printer – that enable a system to execute functions.

Firmware is the low-level software or sequences of instructions that are written onto programmable read-only memory (ROM) in a computer or peripheral device, enabling the device to determine its capabilities, render them functional, and coordinate operations. Some firmware may be part of the operating system (OS) kernel (i.e., the core of a computer OS that provides basic services for all other parts of the OS) and may execute in privileged mode. In some cases, firmware provides an interface to the rest of the OS so that the system can operate the device. In other instances, firmware is executed during the computer's boot process (i.e., when the OS is loaded into the computer's main memory or random access memory) – for example, the Basic Input/Output System (BIOS), which executes before the OS is loaded. Other firmware resides on peripheral devices, allowing the OS to use the devices effectively.

Importance

Hardware or firmware attacks can undermine even the most sophisticated application-level controls or security mechanisms. Malicious firmware that has unrestricted access to system components (e.g., if it is part of the OS kernel) has considerable potential to cause harm, introduce backdoor access (an undocumented way of gaining access to a computer, program, or service), install new software, or modify existing software. If the underlying hardware and firmware cannot be trusted, then the OS and application security mechanisms also cannot be trusted.

State of the Art

Hardware and firmware, including points of interconnection, are subject to attack. One notable example of an attack against firmware is the Chernobyl virus (also referred to as the CIH virus, after the author's initials); first discovered in Taiwan in June 1998, it destroys a system's flash BIOS, resulting in lost data. PC users trying to overclock their processors often distribute reverse-engineered and "improved" motherboard BIOSes. Rootkits and other software attacks can execute code on secondary processors (e.g., graphics processing units) or hide malicious code in flash or electrically erasable, programmable ROMs. Wireless access points can be altered to deliver more power and broadcast range, thereby making eavesdropping easier. Keystrokes can be tracked by small hardware devices such as keyboard dongles that can capture and record every keystroke that is typed. To mitigate such vulnerabilities and reduce risks, several hardware and language-based approaches have been proposed.

Capability Gaps

Trusted computing platforms, and the corresponding OS modifications to leverage them fully, have the potential to improve some key areas of information security, especially the level of trust in platform hardware. Research is needed to understand weaknesses and covert channels open to hardware and firmware attacks. New approaches and rigorous methods for certifying hardware and firmware are particularly needed for an environment in which the IT infrastructure's hardware and firmware are increasingly developed and manufactured offshore. Areas in which R&D advances are needed include:

Hardware support for security: Efforts are under way to protect hardware and firmware and enable secure, trusted computing platforms. Cryptographic accelerators speed the processing of cryptographic algorithms. Smart cards can be used to protect authentication keys and for multi-factor authentication. Although work is ongoing, more R&D is needed on integrating more secure components into a trusted computing platform.

Authentication-based firmware security: The authentication-based approach (sometimes referred to as "secure bootstrap") seeks to ensure firmware integrity by using digital signatures to authenticate the origin of the device and its transmitted data, chain of custody, and physical protection. This approach ensures that the firmware has not been changed since it was approved. It is a means for preserving an existing relationship of trust but cannot establish trust. Authentication alone cannot ensure that untrusted code is safe to run. The authentication-based approach is currently the preferred strategy because the technology is better developed and its implementation is more straightforward than language-based approaches. Increased emphasis on development and deployment is needed.
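
The core mechanism is a chain of integrity checks: each boot stage verifies the next stage against an approved value before transferring control. The sketch below reduces this to bare digest comparison; real chains are anchored in hardware and use signatures rather than stored digests, and the images here are placeholders:

```python
import hashlib

# Digests recorded when each stage was approved (illustrative values).
approved = {
    "bootloader": hashlib.sha256(b"bootloader image v1").hexdigest(),
    "kernel": hashlib.sha256(b"kernel image v1").hexdigest(),
}

def verify_stage(name: str, image: bytes) -> None:
    """Refuse to hand off control to a stage that fails its integrity check."""
    if hashlib.sha256(image).hexdigest() != approved[name]:
        raise RuntimeError(f"{name} failed integrity check; halting boot")

verify_stage("bootloader", b"bootloader image v1")
verify_stage("kernel", b"kernel image v1")
print("chain of trust verified; continuing boot")
```

Note that, as the text cautions, this preserves trust in what was approved; it says nothing about whether the approved code was safe in the first place.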

Language-based firmware security: Language-based security is an approach to addressing security vulnerabilities in a variety of domains, including firmware security as well as others (see section 5.3 on page 72). It leverages programming, program analysis, and program rewriting to enforce security policies. The approach promises efficient enforcement of fine-grained access control policies and depends on a trusted computing base of only modest size. Unfortunately, these techniques are not as well established or advanced as authentication-based approaches, and they are more difficult to implement. However, language-based security offers some promising advantages worth investigating further. The main advantage of the language-based approach is the ability to establish a basis for trust, regardless of the source.

In the language-based approach, each time a firmware module is loaded, it is verified against a standard security policy. The verification step prevents the compiler from being bypassed, spoofed (i.e., forged to make it appear to have come from somewhere or someone other than the actual source), or counterfeited. Confidence in verified device drivers requires trust only in the verifier, not in the compiler and the code it produces. The security policy is designed to rule out the most obvious forms of attack by combining type safety (all behaviors are fully specified in the programming language semantics) and various architectural constraints.

5.2 Secure Operating Systems

Definition

An operating system (OS) manages and protects a computer's hardware and software resources. Other software that is subsequently loaded depends on the OS's core services, such as disk access, memory management, task scheduling, and user interfaces. The portion of OS code that performs these core services is called the kernel. The OS provides a stable, consistent way for applications to request services from the hardware without having to know details about the hardware (e.g., the underlying processor, communications mechanisms, or peripherals). In addition, the OS creates system abstractions (e.g., the abstract data types, privileges, and hierarchical domains used by applications). The OS enforces critical elements of the enterprise security policy, including information confidentiality and integrity and mandatory or discretionary access control mechanisms.

Importance

Combined with hardware and firmware, OSs are the foundation for all computing, forming the core software for both general-purpose and specialized computers, including process control systems, network routers, smart switches, servers, and PCs. An OS is loaded into a computer by a boot program and then manages all the other programs (i.e., applications, services, or application-enabling programs such as middleware) running on the computer. All computers, from small embedded processors to large servers supporting tens of thousands of users, require an OS. Most OSs have been designed and implemented to provide a wide range of features and services, but security has often not been a fundamental requirement. An OS that can be subverted, penetrated, or bypassed cannot provide a sound base for critical applications. Without OS security, even a carefully constructed application is vulnerable to subversion.

State of the Art

Computers that connect to the Internet enter a cyberspace filled with untrusted networks and systems, which allow malicious attacks on unprotected machines. The large and growing number of new attacks that exploit OS vulnerabilities constitutes a continuing threat that was not present or even anticipated two decades ago. A secure OS should provide users with the assurance that implemented policies are enforced, that enforcement mechanisms cannot be bypassed, and that attempts to tamper with the enforcement mechanisms are detected. The current state of the art has not achieved these capabilities.

Capability Gaps

Needed capabilities include tools to facilitate high-assurance OS development; abstraction layers that allow for the portability and upgrading of secure OSs; criteria to assess OS vendors' security claims, including those for distributed and multi-layered trust architectures; and models and tools to support user interfaces and assist administrators in policy configuration and management tasks. OS prototypes that address these issues need to demonstrate protection mechanisms. Ongoing efforts conducted in collaboration with hardware developers should continue to work toward interoperability across platforms to permit the rapid adaptation of hardware functions to specific applications. The following are major areas requiring R&D advances:

Tools and resources: Operating systems today do not adequately serve as secure, high-assurance servers able to separate proprietary, public, classified, and unclassified information and to enforce the applicable separation policies. In addition, modern OSs do not enforce discretionary (i.e., user-controlled) security policies with high assurance. To address these needs, efforts to design secure OSs should build on concepts that have emerged from past and ongoing work aimed at secure and trusted platform development.

The development of a secure, common, low-level, BIOS-like mechanism is one approach that can be used to serve as the base for future trusted OS development. This mechanism may be supported by a combination of hardware, firmware, and software components. The goal is to create well-specified security abstractions and interfaces that would persist across upgrades to the hardware, firmware, and software that an OS supports. Constructs that must be enabled by such a mechanism need to be identified and prototype implementations developed.

Existing specification tools do not provide adequate formal methods for mapping policies to implemented controls and countermeasures and then for evaluating how effectively the policies are implemented. Formal verification tools tailored to secure system development also are needed. These tools must include formal specification and analysis methods.

A weakness of some past generations of secure OSs was an emphasis on maintaining a secure state once the OS had achieved its initial runtime state, while not giving sufficient attention to the start-up processes occurring prior to the runtime state. Future secure OS development efforts should make use of the concept of secure bootstrapping and secure system initialization techniques to ensure that the process of reaching a runtime state is also effectively secured. New mechanisms that support secure distributed communications between secure operating systems can be utilized to replace the reliance of current OSs on low-assurance and untrusted components.

High-assurance OS requirements: Many OSs require configuration tools to input policy- and security-critical information. Research is needed to develop the requirements and assurance metrics for these tools and to implement prototypes. In addition, standardized high-assurance OS requirements and well-defined construction requirements must be established to effectively support the development and evaluation of high-assurance OS security. The high-assurance requirements must distinguish between state-of-the-art and research-level security engineering. In addition, advanced methods should be developed to support and facilitate secure systems documentation and evaluation processes.

Trusted paths: A reliable OS should enable trusted paths (i.e., protected, unspoofable communications channels between trusted parts of the system and the user) for transmitting sensitive information. Current mainstream commercial OSs typically lack trusted paths and are so complex that their security cannot be verified. A trusted session built on trusted paths should be extensible across a distributed enterprise. In addition, trusted paths and sessions need to support business processes in a manner that ensures both system security and overall quality of service.

OSs need to provide for dynamic secure reconfiguration to support new applications without substantial security modifications. To meet this objective, mechanisms are needed to control and enforce privileges before, during, and after the reconfiguration. These mechanisms should incorporate enterprise business models and should be easy to use. Several major OS file systems lack discretionary control mechanisms that allow more than "owner control" of files and directories.

Virtual machines: Support is needed to ensure that virtual machines (multiple instances of OSs operating simultaneously on a single computer) operate securely. Partitioning a machine into virtual machines to support concurrent execution of multiple OSs poses several challenges. For example, varied OSs must be accommodated, and the performance overhead introduced by virtualization must be small. The virtual machine manager should also enforce OS isolation and ensure that necessary inter-OS communications do not compromise security.

5.3 Security-Centric Programming Languages

Definition

Languages are used throughout software development, from low-level assembly languages and machine code, to conventional programming languages used for application development, to high-level modeling and specification. Security requirements are also expressed in languages. Security-centric programming languages address security as part of the language and incorporate features (or the absence of features) to increase the assuredness of code written in the language.

Importance

The rising number of reported cyber security vulnerabilities due to errors in software requirements, design, and implementation necessitates research to develop tools that can better specify security requirements and programming languages that produce inherently more secure code. Such tools and languages could also be used in developing certifiably secure systems.

State of the Art and Capability Gaps

Key R&D challenges in security-centric languages include better support for expressing security attributes in high-level and low-level languages, methods to transform higher-level security requirements and descriptions of secure components into implementations, more accurate and scalable high-level analyses for cyber security, and efficient runtime enforcement of trust management policies. R&D is also needed to seamlessly integrate the results of this work into current software development processes. R&D topics include:


Secure language design: The design of security-centric languages focuses on the explicit specification of security features. The emphasis is on clarity and ease of use. Research is needed to enable modeling and high-level programming languages, such as rule-based languages, to explicitly express security features. Security can often be integrated into a language by extending existing features and attributes or removing others. However, when changing the language is impractical, libraries and development environments could provide enhancements or restrictions that help produce more secure software.

Secure implementation methods: Secure implementation methods should map system descriptions and security components in higher-level languages to system descriptions and security components in lower-level languages. Because lower-level languages support the functions that a system can perform, secure implementation methods must be designed to ensure that security flaws are avoided and that vulnerabilities are not introduced. In addition, automated methods for generating well-defined components of complex secure systems could reduce the cost of developing, validating, and maintaining those components.

Security analysis: Security analyses check whether security requirements are satisfied, support secure implementation methods, and detect instances of known vulnerabilities. An example of the use of analysis to verify that security requirements are satisfied comes from the model-carrying code paradigm: using this principle, mobile code is accompanied by a model of its security-relevant behavior, allowing the recipient to analyze the received code to verify that the model's behavior is consistent with the given security policy. An example of low-level code analysis is checking whether a C or assembly-language program has attempted to illegally access memory. Proof-carrying code and typed assembly languages provide other approaches to such checking. Information-flow analyses that produce fewer false positives and effectively handle common languages are needed, as are more scalable verification techniques.

Analyses that guarantee conformance to security requirements can eliminate or reduce runtime compliance checking. In contrast, analyses that detect instances of known vulnerabilities can ensure only that specific common defects are absent, not that the overall security objectives are met. Examples include program analyzers that detect potential buffer overflows and format-string vulnerabilities (which enable users to initiate potentially harmful code manipulations), and vulnerability scanners that detect common weaknesses in operating systems and network configurations.

Secure execution support: Secure execution support relies on efficient runtime techniques to help achieve security goals. One example is language-based techniques that have led to more flexible and efficient mechanisms for enforcing access control policies. However, as traditional access control is superseded by trust management in enterprise systems, efficient enforcement of trust management policies is required. The distributed nature of the problem (i.e., the policy itself is dispersed) must be addressed, and languages for expressing the policies need to be developed.

Development frameworks: Improved development frameworks for software assurance could dramatically reduce the effort required to specify and implement the process of security analysis. Frameworks are needed for efficiently achieving these results using declarative rules and the transformation of higher-level languages into lower-level code specified in an aspect-oriented style.


5.4 Security Technology and Policy Management Methods and Policy Specification Languages

Definition

An organization is governed by security policies that describe the rights and responsibilities for accessing IT systems and information at various organizational levels, establishing priorities for use of the organization's resources, and releasing information. Policy specification languages are the means by which these policies are expressed and implemented across an organization's systems, networks, and information.

Importance

Security policies derive from various sources, including an organization's business rules, regulations established by Federal regulatory agencies, and public law. Federal agency policies are also governed by Presidential directives as well as government-wide and local policy making. In addition, state and local governments, universities, and companies establish their own policies to manage their activities.

Policy languages enable organizations to specify and promulgate policies and customize them to specific needs. IT system security policies are important to organizations not only to protect resources, including intellectual property, but also to promote business activities. As organizations become more complex, policy languages may become the primary means by which an enterprise interacts with its IT systems.

State of the Art

Cyber security must be managed across an entire organization, including systems and networks connected to the IT infrastructure. Formulating and implementing organization-wide security policies is difficult. The policies may not completely cover all relevant situations, can be mutually incompatible, or may exceed the scope of organizational control. They can also be ambiguous and difficult to evaluate for consistency and compatibility. Moreover, even if policies are expressed unambiguously, identifying and assessing their collective impact and understanding how to implement them can be challenges. Administration of systems and policies, especially those relating to security, becomes even more difficult when an organization's IT system infrastructure is largely outsourced.

Taxonomy of policies: Taxonomies provide principles and practices for the classification of systems. Taxonomies for integrated organization-wide security monitoring and management are slowly emerging. Aspects of a taxonomy may include both the source of a policy (e.g., security policies may be derived from public law) and technologies to implement policies. A generally accepted taxonomy or ontology for describing and organizing organization-wide security concepts would be a significant step forward.

Languages for enterprise security policy: Policy languages for security, and particularly for trust management and rights management, should provide high-level controls for architecting process execution and information sharing. The languages that have been developed to express the controls and constraints on organizational operations do not easily support the specification of detailed controls. These high-level policy languages must enable system administrators and decision makers to select specific controls, assess their potential impact, identify potential adverse interactions, and adjust the controls without undue complication.

Controls over different resources may interact with unanticipated effects. For example, firewall policies at different layers and locations may result in awkward firewall management or even new vulnerabilities. Certification of system security is complicated by the large number of events and potential interactions that must be considered. Currently, no languages have sufficient generality to express organization-wide security policies, but some languages can be used in limited contexts.
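
To make the gap concrete, the following minimal sketch (with hypothetical rule, role, and resource names) shows the kind of declarative rule evaluation and conflict detection that an enterprise policy language would need to support at far greater scale and expressiveness:

    # Sketch only: a miniature declarative policy with a deny-overrides
    # combination rule and a simple pairwise conflict check.
    from itertools import combinations

    RULES = [
        {"role": "engineer", "resource": "source-repo", "effect": "permit"},
        {"role": "contractor", "resource": "source-repo", "effect": "deny"},
        {"role": "engineer", "resource": "source-repo", "effect": "deny"},  # conflicts with rule 1
    ]

    def decide(role, resource):
        """Deny-overrides combination: any matching deny wins; default is deny."""
        effects = {r["effect"] for r in RULES
                   if r["role"] == role and r["resource"] == resource}
        if "deny" in effects:
            return "deny"
        return "permit" if "permit" in effects else "deny"

    def find_conflicts():
        """Report rule pairs that match the same request with opposite effects."""
        return [(a, b) for a, b in combinations(RULES, 2)
                if a["role"] == b["role"] and a["resource"] == b["resource"]
                and a["effect"] != b["effect"]]

    print(decide("engineer", "source-repo"))  # deny, because deny overrides
    print(find_conflicts())                   # surfaces the permit/deny clash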

Legacy systems: Established systems operate based on policies that are implicit in software but not always explicitly known to the user. Legacy systems present the challenge of extracting and reverse-engineering accurate, meaningful policies from code. Some organizations outsource the management of their information systems to control costs, benefit from economies of scale, and gain centralized control over legacy software applications that may have been developed in isolation. However, this business strategy carries risks. Legacy systems may not be fully integrated because of unresolved conflicts among security policies and organizational responsibilities, and thus may present vulnerable targets for attack.

Capability Gaps
Organizations and their operations will be hampered without adequate security and without security policies governing access to IT systems, networks, and information. Each organization must determine what constitutes sufficient security and how to manage an integrated organization-wide IT infrastructure. Unfortunately, no ready-made solutions yet exist. The major gaps in the technical foundations for improved security regimes include the need for a rigorous semantic basis for policy specification languages and the need for assured consistency and acceptability of security policies between organizations.

5.5 Information Provenance

Definition
Information provenance can be defined as the accurate historical record of an information object such as a digital text, an image, or an audio file. Provenance begins with identification of the original form and authorship of an object or its constituent components and continues with identification of each subsequent alteration to the object. Provenance information can include not only what was changed but also who or what produced the change, when the change was made, and other attributes of the object. As reliance on networked information and transactional processes grows, the need for technical means of establishing information provenance becomes increasingly important. The goal of information provenance capabilities is to track the pedigree of a digital object from its origin through all transformations leading to the current state.

Importance
Today, most information is aggregated from many sources and, even when the sources are sound, the aggregation processes can create weak points in information management mechanisms. Many new and emerging applications can store information in different formats for static, stream, and interactive multimedia use. Some products transform data from one format to another, thus making many products interoperable but at the same time compounding security challenges. Separating releasable data from sensitive data becomes more difficult with these aggregation and transformation processes. Furthermore, the vulnerabilities in transformation processes add a new dimension to concerns about reliability. Information provenance techniques are necessary to provide reliable histories of information that is received from, and delivered by, IT systems.

Provenance of information can help a user determine whether to trust it and how to interpret it. Information provenance techniques are also needed to control information sharing. Partners (e.g., allies, collaborators, or corporations engaged in a joint project) generally want to share information, but only to a limited extent. For example, there may be a constraint that information of certain types can be shared only among certain partners. Enforcing such constraints is complicated by alternate data formats and by transformations that combine data and deliver derived results. For example, the classification level of information derived from otherwise unclassified sources may prevent its public release. In addition, as diverse datasets are combined, accurate information may be interspersed with inaccurate information. To verify the provenance of the information, data about its source and derivation (or aggregation) must be propagated with the information itself.

State of the Art
Information provenance combines key concepts from operating system security, access control, and authentication. R&D advances have enabled application of some information provenance techniques in such areas as security software and management of large-scale collections of digital materials, often referred to as digital libraries. But next-generation versions of related technologies, such as metadata processing, taxonomies and ontologies, and digital rights management, need to be integrated into more sophisticated information provenance capabilities.

Capability Gaps
Information provenance methods, technologies, and tools are needed to help people and systems take better advantage of digital information by understanding its history and credibility. Taxonomies and ontologies for provenance metadata that go beyond those for security classifications and procedures for controlling secure documents are needed. Fully developed and integrated information provenance standards and digital rights management capabilities are needed to realize the benefit of information provenance for the IT infrastructure, and particularly for cyber security and information assurance. R&D areas include:

Metadata: Information provenance depends on tracking and maintaining data about information, known as metadata. Metadata might include the source of the information, transformations producing the derived information, and even procedures to extract content. Although it is not practical to maintain all details about information provenance, metadata can provide a conservative subset. Metadata can be maintained at varying degrees of granularity, depending on the sensitivity and criticality of the information.
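
A minimal sketch of this idea, with hypothetical field names: each transformation appends a provenance record (actor, operation, timestamp, content digest) to the object it modifies, so the object carries a conservative subset of its history.

    # Sketch only: coarse-grained provenance metadata appended per transformation.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    import hashlib

    @dataclass
    class ProvenancedObject:
        content: bytes
        history: list = field(default_factory=list)

        def record(self, actor, operation):
            """Append a provenance record describing the latest change."""
            self.history.append({
                "actor": actor,
                "operation": operation,
                "when": datetime.now(timezone.utc).isoformat(),
                "digest": hashlib.sha256(self.content).hexdigest(),
            })

    doc = ProvenancedObject(b"raw sensor readings")
    doc.record("sensor-17", "created")
    doc.content = doc.content.upper()        # some transformation of the object
    doc.record("normalizer", "transformed")
    for entry in doc.history:
        print(entry["operation"], entry["actor"], entry["digest"][:12])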

Metadata labeling and granularity: A fine-grained approach to controlling data in repositories requires extensive labeling. For example, the label on the output of a transformation may reflect the type of transformation and its level of sensitivity. When a single object contains components constructed from different sources, OS-level mandatory access control is poorly suited to separate the components. Refining the granularity of provenance will be needed for documents that may have components constructed from different sources, whose provenance may need to be tracked separately. Manageable frameworks for fine-granularity provenance will require research drawing on ideas from secure OSs and language-based security, as well as interoperability and human usability.

Taxonomies, ontologies, and standards: Currently, there are no accepted information provenance standards, such as standard taxonomies and ontologies for classifying types of metadata. Metadata that support information provenance concepts and principles could have a powerful effect on the development of information systems. Assumptions about data used by these systems would be explicit and could be used in determining the amount of trust associated with the information. Reliable provenance information and tools for analyzing provenance will require ontologies for provenance metadata and corresponding operations.

Access controls: At the OS level, provenance may be based on mandatory access control, where the system controls the metadata labels applied to each field of data. Mandatory access control documents the data’s sensitivity and classification and assures that transformation and aggregation results are labeled at the highest level among the input sources. Multi-level security remains a coarse-grained approximation for provenance and ensures the separation of classification levels. For example, multi-level security guarantees that data labeled at one level of classification do not contain information marked at a higher level of classification.
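
The high-water-mark labeling rule described above can be illustrated with a toy example; the label ordering here is hypothetical:

    # Toy illustration: an aggregation result takes the most restrictive
    # label among its inputs (the "high-water mark").
    LEVELS = ["unclassified", "confidential", "secret", "top-secret"]

    def aggregate_label(*input_labels):
        """Return the highest (most restrictive) classification among inputs."""
        return max(input_labels, key=LEVELS.index)

    print(aggregate_label("unclassified", "secret"))      # secret
    print(aggregate_label("confidential", "top-secret"))  # top-secret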

Memory management: A small portion of allocated memory may contain sensitive information that may expand during processing. The balance of the allocated memory may be of lesser sensitivity. The provenance of these different portions of memory must be tracked separately to accurately label different outputs.

Code management: Provenance could also be used to determine the portions of a process that can be permitted to execute as code. For example, code that is downloaded from the Internet might not be permitted to execute in certain IT environments. Currently, mobile code systems use a variant of this approach to distinguish locally resident libraries that are trusted to execute with high privilege from downloaded code, which can be executed only with low privilege.


Digital rights management: Rights management languages such as eXtensible rights Markup Language may help to ensure that metadata are used uniformly and consistently across a range of systems. Digital libraries of documents with detailed provenance information could benefit from this approach, if it proves successful.

5.6 Information Integrity

Definition
Integrity is the attribute of information that addresses its authenticity, correctness, and reliability. Protecting and monitoring information integrity are the goals of technologies and tools that prevent tampering and detect unauthorized modification or destruction of information.

Importance
Information integrity is a prerequisite for trust throughout the IT infrastructure. Without integrity, data, information, messages, and systems cannot be trusted. Without trust in the underlying information, higher-level functionalities, including measures to protect and safeguard the system itself, cannot be relied upon.

Data integrity assures that unauthorized modification of a system’s data resources is detected and that messages or data in transit, including headers and content, are unchanged between the data’s source and destination. Data resources include system configurations, data structures, the code controlling the behavior of the operating system, and other system or application software. Integrity controls also provide non-repudiation – that is, proof of the origin and integrity of data that can be verified by a third party, which prevents an entity from successfully denying involvement in a previous action. Integrity is necessary for a system to provide reliable services, including security services. Many attacks begin by undermining system integrity.

State of the Art
Information integrity is an essential foundation for most security services. An authentication service, for example, cannot succeed if the information it relies on is inaccurate or unreliable. But systems and information have differing integrity requirements. As a result, security managers must ensure that information is appropriately protected, that the appropriate levels of security are met, and that resources are applied in proportion to the integrity requirements and their cost-effectiveness. Information integrity requirements are not static, however, and integrity must be reassessed throughout a system’s life cycle.

Information integrity can be compromised through accidental or intentional action by system developers, system administrators, operations and maintenance staff, end users, routine equipment failures, or malicious actors. To ensure effectiveness, information integrity planning and analysis must be coordinated with other concurrent engineering activities. To achieve and maintain information integrity, a variety of engineering methods and techniques may be implemented. For instance, using a keyed-hash message authentication code, information integrity is achieved by hashing the contents of each message with any header fields that also require protection. When the peer receives the message and the hash, the peer re-computes the hash value and checks that it equals the hash received.
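
The exchange just described can be sketched with Python's standard hmac module; the key and message contents below are illustrative placeholders, and key distribution is assumed to happen out of band.

    # Keyed-hash message authentication as described above.
    import hmac
    import hashlib

    key = b"shared-secret-key"          # distributed out of band in practice
    message = b"header-fields|payload"  # content plus protected header fields

    tag = hmac.new(key, message, hashlib.sha256).digest()

    # The peer recomputes the tag over what it received and compares in
    # constant time; any modification of the message changes the tag.
    received_ok = hmac.compare_digest(
        tag, hmac.new(key, message, hashlib.sha256).digest())
    print(received_ok)  # True for an unmodified message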

Assuring system integrity may require a broad range of tools, including hardware and software solutions. R&D in system integrity is needed to reinforce this holistic approach to integrity through new engineering techniques and methodologies.

Capability Gaps
Advances are needed in information integrity technologies to enable managers to continuously evaluate integrity throughout a system’s life cycle and to ensure that integrity is maintained regardless of system mode or state (e.g., start-up, shutdown, normal operations, preventive maintenance, emergency shutdown). Developing these capabilities will require new software engineering and analysis techniques for integrity. Key technical areas in which R&D is needed include:

Integrity across formats: Electronic information now includes not only text but audio, video, signals, and other forms of data, and information generated in one form can be transformed into others. Methods, techniques, and tools need to be developed for maintaining the integrity of information when it undergoes one or more format transformations.

Scalability: Because the volume of data processed in real time across the IT infrastructure will continue to increase, new highly scalable techniques extending from small networks to global information systems are needed for assuring information integrity.

Granularity: Some types of information (e.g., financial transactions, intelligence data, medical records) may require fine-grained scrutiny, while others (e.g., Internet search results) need less rigorous assessment. Techniques for assuring information integrity need to include capabilities at varying appropriate levels of granularity.

Integrity across security layers: Capabilities for checking information integrity should include techniques for every layer of system security.

5.7 Cryptography

Definition
Cryptography is the study of secret writing. In practice, cryptography applies mathematics, engineering, and information theory to provide data confidentiality, integrity, and authentication protections. Cryptanalysis – the other side of the same coin – is the art of defeating these protections. Designers of network and system security features employing cryptography must understand cryptanalysis to do their jobs effectively.

Importance
Cryptography is a foundation science for assuring information security. It is now widely used to protect both stored and transmitted information and to implement digital signatures for a variety of applications, such as electronic commerce. Strong cryptographic algorithms and protocols are built into even ordinary Web browsers and are widely used to secure communications and transactions. However, vulnerabilities in cryptographic protocols often lead to their being broken, and weak or badly implemented cryptography still occurs.

State of the Art
Several decades ago, cryptographic research in the U.S. was largely the purview of Federal intelligence agencies. Today, a substantial open cryptographic research community is well established, with academic and industrial as well as Federal participants.

Designing information systems that resist attack often consists largely of applying techniques that make it difficult for an attacker to break a cryptographic algorithm. The security of a well-designed cryptographic system rests only on maintaining the secrecy of its keys; if adversaries know all details of the system except the keys, a successful attack should still be unlikely. Emerging approaches for cryptography include multi-party computation (computing on encrypted data) and partitioning computations across separate hosts. A new thread of work is developing on engineering production-quality realizations of new algorithms to help ensure that the state of practice benefits from advances in cryptography research in a timely fashion.

The difficulty of vetting algorithms and designing robust modes with good, provable security properties argues for the development and use of carefully vetted standards rather than a more ad hoc approach to cryptography. Symmetric key (block ciphers and stream ciphers) and asymmetric key (or public key) are today’s main types of cryptographic algorithms. Symmetric key algorithms, in which both parties use the same key, are usually much faster than public key algorithms, in which there is a private key and a related public key, and are used for most bulk data encryption. Public key algorithms are typically used for key management, authentication, and digital signatures. Hash functions, which are also widely used, have no key but produce cryptographic checksums of messages.

In general, the security of cryptographic algorithms cannot be proven, so extensive cryptanalysis is required to vet them against attack. These heavily analyzed algorithms are then used as primitives in carefully constructed protocols or modes of operation for particular functions. It has now become standard practice to require analysis of the mathematical techniques to ensure that breaking these modes requires a large amount of computation or is equivalent to breaking the underlying cryptographic primitive.

Block ciphers encrypt fixed-size blocks of plaintext into fixed-size blocks of ciphertext. Stream ciphers generate a “keystream” of pseudo-random bits, which are logically combined with data to produce the ciphertext and can potentially be applied more quickly (i.e., with less latency) than block ciphers. Secure symmetric key block cipher algorithms are now available, along with asymmetric key algorithms for key transport, key agreement, and digital signatures.
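
To illustrate the keystream concept only, the following toy sketch derives a keystream (here from SHA-256 in counter mode, purely for demonstration; this is not a vetted cipher and must not be used to protect real data) and XORs it with the plaintext:

    # Toy stream-cipher illustration: keystream generation plus XOR combination.
    import hashlib

    def keystream(key: bytes, nbytes: int) -> bytes:
        """Expand a key into a pseudo-random keystream of the requested length."""
        out = b""
        counter = 0
        while len(out) < nbytes:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:nbytes]

    def xor_stream(key: bytes, data: bytes) -> bytes:
        """XOR data with the keystream; the same call encrypts and decrypts."""
        ks = keystream(key, len(data))
        return bytes(a ^ b for a, b in zip(data, ks))

    key = b"demo-key"
    ciphertext = xor_stream(key, b"attack at dawn")
    print(xor_stream(key, ciphertext))  # b'attack at dawn' (XOR is its own inverse)

Note that this sketch provides no authentication, which is exactly why the text below stresses binding stream ciphers to an authenticated mode.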

Capability Gaps
The algorithms described above perform well on today’s powerful desktop computers. Custom semiconductor devices provide even higher performance. Performance issues arise mainly at the extremes, such as in ultra-high-bandwidth communications trunks, and in computation-constrained environments, such as in RFID chips or other inexpensive, low-power devices. Both types of uses require more efficient algorithms than those currently in use.

Although high-performance (e.g., high-bandwidth) applications would benefit from the availability of parallelizable algorithms, existing standardized authenticating encrypting modes are not readily parallelizable. More efficient, parallelizable constructs are known, but they rely on block cipher primitives and have intellectual property conflicts. Stream ciphers, which may offer significantly better performance than block ciphers, can be easily compromised; therefore, they are most effective when tightly bound to a mode with authentication. One promising cryptographic research area is combined stream cipher/authentication algorithms. These algorithms stress speed and lightweight implementation in order to produce a secure, efficient, authenticated stream cipher that is robust against inadvertent misuse or misapplication.

Cryptographic hash functions, which have been called the Swiss army knives of cryptography, have been the subject of little published research. Recently, however, one widely used algorithm and several less widely used ones have been broken. Continued research is needed toward understanding these functions at a level comparable to that of block ciphers and improving their performance.

Identity-based public key cryptography is a promising and rapidly developing area. Although the technique was proposed decades ago, the first practical identity-based algorithm was developed in the last few years. In this scheme, a unique identifier associated with a person (for example, an e-mail address) is used as a public key, which can then be used to encrypt a message to that individual. An authority generates public encryption parameters to be used with the identifier to encrypt messages to that identifier and provides the private decryption key to the person with that identifier.

Quantum computing is potentially the greatest long-term threat to existing cryptography. Practical quantum computers today are only a research goal, and opinions differ over whether they can ever be built. While symmetric key cryptography effectively resists known quantum computing attacks if key sizes are large, quantum computing algorithms are capable of breaking known public key algorithms.

Quantum encryption is theoretically unbreakable even by quantum computers, but such encryption is today somewhat cumbersome. Moreover, quantum encryption does not replace conventional public key cryptography for digital signatures. Some functions are provably as difficult to compute on a quantum computer as on a conventional computer. However, there is some question as to whether a practical public key cryptosystem can be developed that is as hard to break on a quantum computer as on a conventional computer. If not, the arrival of mainstream quantum computing – which some believe will occur within a few decades – would mean the end of public key cryptography and digital signatures. The cryptographic community would then have to return to the use of more complicated symmetric key management methods, or perhaps to even more cumbersome quantum encryption key management methods. Alternatively, the community may turn to new classes of algorithms that have yet to be developed.


5.8 Multi-Level Security

Definition
Multi-level security (MLS) addresses the protection and processing of information at varied levels of security classification. MLS systems must permit simultaneous access to information by authorized users and processes while denying access to unauthorized ones.

Importance
Today, information-sharing requirements are global for both the Federal government and the private sector. Many elements of critical infrastructures rely on communications over untrusted networks with varying levels of security policies. Sharing information and maintaining distributed control among MLS systems using such networks are significant challenges.

State of the Art
Sensitive information systems that were previously isolated are now linked to other networks that, in many cases, have minimal levels of trust. Unfortunately, few multi-level or cross-domain secure systems support data sharing or transfers between networks with disparate security levels. Many MLS systems support one-way data transfer, filter or parse only specified, fixed-format data types, or require human intervention. Since the inception of net-centric warfare, the need for automated, higher-trust devices that permit transfer of additional data types has grown.

Capability Gaps
Designs for multi-level and cross-domain security systems must allow direct and efficient interaction between various users operating at differing security classification levels, while still enforcing the applicable security policies. These systems must be able to detect and prevent cyber attacks while ensuring availability to authorized users. The systems must support required functionality, operate in near-real time, and provide adequate mechanisms to ensure confidentiality, integrity, and availability of information. In addition, the systems must provide sufficient assurance to support system security certification for use in a target operational environment.

The challenge in MLS is to balance security against risk, cost, timeliness, and usability. In particular, research advances are needed in engineering tools and techniques for MLS application development, inherited trust, strong separation principles, and internationally recognized cross-domain solutions.

Information flow: In multi-level and cross-domain systems, procedures to control information flow assure that only individuals with appropriate authority are allowed to access data or affect associated processes. Security policies specify the rules governing these procedures and processes. Such systems require the analysis of data access policies and programs to protect against the opening of covert communication channels that can allow uncontrolled information flow.

The Internet-mediated interconnectivity of multiple networks operating at varying security classification levels increases the exposure of vulnerabilities to attack. Higher-trust networks must ensure data confidentiality, preventing the intentional or inadvertent disclosure of sensitive information while protecting against adversaries on lower-trust networks who may attempt to establish an outflow path for sensitive data. In addition, the integrity and authenticity of data used in higher-trust networks must be assured to mitigate potential compromises by malicious users.

Policy enforcement: Since there are no mechanisms to identify malicious intent and networks at all levels are susceptible to insider threats, strong security policies must be implemented. To address the security challenges of shared network-centric environments, data and resource protection policies must be enforced. Data protection services include non-discretionary policies that dictate how data are shared, discretionary access mechanisms that allow users to specify data access rights, and transfer policies. Resource protections include secure communication to prevent unauthorized modification or surveillance, mechanisms to ensure resistance to attacks, malicious content detection, and the use of trusted processes to manage separation between networks and data.

Trusted operating system base: Several trusted operating systems provide mechanisms sufficient to support a strong security policy. However, much of today’s security is based on application-level user functionality and often does not use security mechanisms present in an OS. For example, a system can enforce strong network separation but may continue passing sensitive data if transfer policy enforcement is weak. A trusted OS can be the foundation for building a secure system, but an overall system design also needs to integrate strong security policies across all components of the system.

Validation and verification: A secure system needs an effective security policy, according to which information flow and system behavior are governed. Under the current certification process, new multi-level and cross-domain systems must undergo penetration testing. Unfortunately, certification testing is time-consuming and does not guarantee that all security features are correctly implemented. Tools to automate this testing process are needed. Validation and verification software tools based on formal methods can provide evidence of the strengths and weaknesses of system security policies. Benefits resulting from the use of such tools may include reduced time from system design to implementation and improved system security.

Data labeling: Applying and correlating data labels among applications, OSs, and trusted database schemas are difficult because commercial products define and use labels in different ways. For example, mandatory access control is based on clearance levels associated with users and classification levels associated with data. In contrast, a single classification level – for example, sensitive – in the OS may correspond to a range of sensitivity labels in a trusted database. Mandatory access control labels are defined in Federal statute, while sensitivity (discretionary) labels are defined by the organization. Finally, untrusted applications cannot be relied on to support trusted labels. Models need to be developed to support interrelationships among mandatory and discretionary labels.

Coalition issues: Data integrity and authenticity are achievable only in certain environments. In many instances, the data source and whether the data have been modified in transit cannot be determined. Data communicated over the Internet are often not authenticated or provided integrity protection because of the expense of implementing these security policies. In some instances, non-scalable, system-unique authentication mechanisms are implemented or, alternatively, a higher level of risk is accepted. New, internationally recognizable solutions for communities of interest are needed. Token standards, certificate issuance, cross-certification, and inherited trust are just a few of the issues to be addressed.

5.9 Secure Software Engineering

Definition
Software engineering is the application of engineering methods, technologies, tools, and practices to the systematic development of computer software. Security is a component of software engineering, and security requirements must be met just as the requirements for functional correctness must be. The primary goal of secure software engineering is the design, development, verification, and validation of secure and correct software. Secure software engineering is required throughout the software engineering life cycle.

Importance
Cyberspace increases the importance of security in software engineering because the infrastructure that makes systems highly interconnected also makes them highly accessible to adversaries and, if not adequately secured, vulnerable to attacks. Exploitable vulnerabilities in the IT infrastructure frequently are traceable to failures in software development and engineering. Secure software engineering will become increasingly important as service-oriented architectures are more widely adopted. Service-oriented architectures are loosely coupled, interoperable application services, developed using small sets of standardized components that enable these services (e.g., Web services) to be broadly provided and incorporated.

State of the Art
Secure software engineering is a concern across all phases of a system’s life cycle: initiation, acquisition or development, implementation and assessment, operations and maintenance, and sunset or disposition. During the initiation phase, the confidentiality, integrity, and availability objectives are specified. Assets that need to be protected are specified and a preliminary risk assessment is performed. In the development phase, the security control baseline is selected and modified, as required, and the security controls are designed. Security imposes additional overhead on a system’s development and performance that must be balanced against the cost and risks of a system compromise.

To ensure the integration of security controls into a system during the implementation and assessment phase, programming languages, compilers, and libraries that are certified against specific security criteria should be used. Unfortunately, there are few widely used certified compilers and libraries. In addition, code should be verified against the applicable specifications to ensure that the confidentiality, integrity, and availability objectives have been met and that security has not been compromised.

Verification using formal methods still remains elusive despite significant advances. Testing software components and assessing a system require a combination of static methods (e.g., reviewing the software/firmware code and documentation to identify flaws and potential vulnerabilities) and dynamic methods (e.g., testing the software and firmware against suitably chosen scenarios). Frequently, unit testing and testing of distributed software are key development activities. Assessments may lead to refined security requirements and controls that necessitate revision of a system; this revision process continues during the operations and maintenance phase.

Capability Gaps
Research results in software engineering techniques that address security issues in areas such as programming languages, software development, and code analysis have proven effective in some phases of the system development life cycle. Further R&D should focus on incorporating the best security development practices, such as from language-based security, access control and trust management, and protocols, into software engineering methods. Advances in secure software engineering principles, methods, tools, techniques, and semantics of languages will be required. Ideally, secure software engineering practices should include the following elements:

Security policy models: Potential threats and vulnerabilities and their associated controls and countermeasures define the security environment in which security objectives must be met by the software under development. Security controls can be configured to achieve these objectives. Security specifications and architectures provide choices for an overall strategy for configuring security mechanisms. As software components are integrated and applications and services interoperate, the consistency and consolidation of security policies must be validated to assure that no new vulnerabilities are introduced and systems are not unduly constrained. Few, if any, development tools currently meet all these needs.

Certification: Secure software engineering should enable stakeholders to assess software security and thus should provide methods to demonstrate trust at each phase of the software life cycle. This requires testing and verification methods that are thorough, practical, scalable, and relevant to security, with static and dynamic code analyses tailored to exposing vulnerabilities in code and weaknesses in security models. Security certification becomes more complicated as programs grow larger, architectures become more adaptive, and software exhibits more polymorphism.

Human-computer interfaces/human-system interactions (HCI/HSI): While security functions in software are designed to be transparent to users, both users and system administrators need ways to understand the nature of security policies. System administrators need sophisticated graphical interfaces that ease the difficulties of configuring system security. Presenting an overview of cyber security for a distributed system is an HCI/HSI design challenge. Currently, HCI/HSI design is more art than science, and there are few broadly applicable guidelines for best practices in the context of IT system security.

Updates, upgrades, and system migration: The difficulty of facilitating change in an operational environment is frequently underestimated and is always complicated by the breadth and depth of understanding required to plan, design, build, and systematically introduce change. For example, frequent software updates and occasional software upgrades present a software engineering challenge. These updates and upgrades are often required to operate correctly and effectively across different platforms and applications and across the IT infrastructure without introducing new vulnerabilities. There are no tools and techniques currently available to facilitate this aspect of software engineering with sufficient assurance.

Interoperations and trust: Composability of assurances must be considered when assembling software components during development and when combining services dynamically at runtime. Service-oriented architectures present particular challenges to traditional validation, verification, certification, and security assurance because of their dynamic nature. Trustworthiness of a system cannot be automatically assumed but must be verified.

Trusted libraries: Code libraries have long been used in software development. Libraries facilitate the re-use of components, tools, techniques, and best practices, and should be extensible to adjust to new insights and developments. Today, the more successful a library is, the more valuable and trusted it is perceived to be; security, trust, and information provenance should also become explicitly evaluated aspects of libraries.

Ontologies and taxonomies: Libraries require structure and organization, which often come from ontologies and taxonomies. Taxonomy concerns the practice, principles, and rules of classification, while ontology refers to a model of components and their interactions in a given knowledge domain. The purpose of any taxonomy or ontology is to facilitate sharing of knowledge through a commonly accepted language of concepts and terminology.

Such sharing helps realize the full benefits of the IT infrastructure, especially for service-oriented architectures and for data exchange. However, coordination of cyber security taxonomies and ontologies is a fundamental challenge in the development of standards, guidelines, and best practices. Broadly accepted standards could have a strong positive effect on the development of IT systems. For example, there are currently no accepted standards for metadata for software re-use. Code available for re-use could have metadata describing the degree of trust that can be attributed to the code.

5.10 Fault-Tolerant and Resilient Systems

Definition
Fault-tolerant systems are systems designed to continue to function with predefined performance measures despite the malfunction or failure of one or more components. A resilient system will easily recover from disruptions within an acceptable period of time.

Importance
Fault tolerance is generally focused on mitigating the impacts of non-malicious events such as accidents and random failures. New principles need to be added to the concept in order to develop systems that are resilient in the face of malicious activity and hostile attacks. In a highly distributed system environment such as the Internet, component and node failures are common. Resilient systems (also referred to as “fail-secure” systems in the context of IT security) that retain their security properties amid component failures could mitigate potential risks that may arise as a result of such failures. Systems designed to maintain predictable timeliness properties must also be resilient against denial of service attacks and disruption of system resources.

In some mission- and safety-critical systems, such as the national power grid, an attacker who can manipulate control variables faster than the system can respond could potentially produce catastrophic results. “Resource-secure” systems are constructed to preclude such abuses. Resiliency and real-time fault-tolerance are crosscutting requirements in many mission- and safety-critical systems. Both these requirements as well as security requirements need to be effectively addressed without adversely affecting system functionality and performance. For example, a malicious attack should not cause service disruptions because the system failed to maintain timeliness of response or to recover from unexpected behavior.

State of the Art
System dependability can be achieved only by understanding the relationships between security and the properties of fault tolerance and resiliency. The relationships among these properties have been investigated primarily in studies of various types of synchrony (i.e., timeliness) assumptions to resolve fault-tolerance problems. However, the fundamental relationships between security and fault tolerance and between security and timeliness are largely unexplored, especially in Internet-enabled systems.

Capability Gaps
Building secure, dependable systems that simultaneously exhibit predictable timing behavior, withstand failures, and rapidly recover will require extending fault-tolerance computing techniques to system security properties. R&D challenges include:

Transforming techniques: Simply transferring – rather than transforming – existing fault-tolerance techniques to cyber security may introduce vulnerabilities that an attacker can exploit. The reason for this is that fault-tolerance techniques are developed based on the assumption that random failures are inevitable and can be mitigated, but this assumption does not necessarily hold for malicious attacks. The transformation approach, currently an active research area, requires protecting from malicious attacks each fault-tolerant computing technique placed in a cyber security scenario.

Safety and liveness must be exhibited at many levels of system abstraction. (A safety property stipulates that “nothing bad” will happen during the execution of a system. A liveness property stipulates that “something good” will happen, eventually, during the execution of a system.) A human is frequently involved at some level of an assured computing system to coordinate post-attack recovery. Unfortunately, human activity (whether an accidental error or an intentional action) can result in impairments to assured computing. Non-random and directed faults introduced by a malicious attacker challenge the transformation of fault tolerance to cyber security, because faults introduced by legitimate users may be indistinguishable from malicious attacks.

Performance guarantees: Research in real-time systems has traditionally focused on dedicated systems, using a task-scheduling model to capture the application workload and to guarantee absolute performance. In integrating real-time and security enforcement, real-time system technologies must be expanded to handle more flexible and dynamic workloads in order to guarantee differing levels of performance. Absolute guarantees must be provided to safeguard the responsiveness of critical control functions in cyberspace so that, even under attack, essential services can continue to operate. Other less critical infrastructure services can be designed with weaker levels of guarantees, such as an occasional timing failure whose frequency of occurrence is bounded. Research on typing, formal theorem proving, automated runtime monitoring, and other approaches is addressing resource-bounded safety, but more work is needed.
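
As one concrete instance of such a task-scheduling guarantee, the classical Liu-Layland utilization bound for rate-monotonic scheduling gives a sufficient (not necessary) schedulability test; the task set below is hypothetical:

    # Liu-Layland bound: a periodic task set is schedulable under
    # rate-monotonic priorities if U <= n * (2^(1/n) - 1).
    def rm_schedulable(tasks):
        """tasks: list of (worst-case execution time, period) pairs."""
        n = len(tasks)
        utilization = sum(c / t for c, t in tasks)
        bound = n * (2 ** (1.0 / n) - 1)
        return utilization, bound, utilization <= bound

    tasks = [(1, 4), (2, 8), (1, 10)]   # hypothetical workload
    u, bound, ok = rm_schedulable(tasks)
    print(f"U={u:.3f}, bound={bound:.3f}, guaranteed={ok}")  # U=0.600 < 0.780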

End-to-end security policies: Many future real-time multimedia applications will depend on IT infrastructure support for end-to-end security. The Internet currently provides building blocks, such as Internet Protocol security (IPsec) virtual private networking gateways, to support secure network operations. Mobile users can access mission-critical network resources by establishing secure connections to an office network’s IPsec-compliant gateway and firewalls. However, to protect an Internet telephony session, the security policies of different network security devices along the end-to-end route must be consistent and interoperable. Inconsistencies in a set of individually valid security and routing policies might introduce undesirable side effects such as unexpected violations of end-to-end security properties.

Dependability: Dependability is a necessity in both application and infrastructure software. Research is needed to devise new fault-tolerant techniques to localize and minimize the damage caused by untrusted malicious software, such as by using virtual channels or other methods. This research can benefit from advances in real-time systems research, including open-systems resource management and real-time queuing theory.

5.11 Integrated, Enterprise-Wide Security Monitoring and Management

Definition
An enterprise consists of one or more organizations cooperatively engaged in achieving a common goal. An enterprise is often large and may include multiple supply chain partners. Integrated security monitoring and management provide real-time situational awareness, decision support, and command and control over the security of the enterprise’s systems, networks, and information.

Importance
The absence of integrated, enterprise-wide security monitoring and management may impede an enterprise’s ability to respond rapidly and intelligently to cyber attacks and may leave its information systems more vulnerable, less trustworthy, and less usable.

State of the Art
Security monitoring and management for many enterprises are based on a layered defense. At a minimum, this may include the use of passwords, firewalls, anti-virus protection, and cryptographic protocols. Management of network security may or may not be organized hierarchically. But for an enterprise, any local information needs to be aggregated into a common operational security picture across all systems, networks, and information. Among the technical hurdles in building this capability are system complexity and interactivity. For example, network behaviors in large, highly interdependent infrastructures often contribute to the emergence of anomalous behaviors affecting the connected systems. Geographic dispersion of the components of an enterprise can also increase dependence on the IT infrastructure and can increase the complexity of administration. In addition, change control and configuration management may introduce system vulnerabilities if not implemented correctly.

Capability Gaps
Defining, understanding, quantifying, using, and managing the IT security of a large-scale enterprise are substantial challenges. The goal is a command and control capability for enterprise-wide information security that integrates monitoring and management systems to provide sophisticated decision support for the enterprise as a whole. Needed capabilities include:

Macro-models of network activity: Improving the understanding of activities taking place on enterprise networks is an open area for investigation. Varied models are available for monitoring and diagnosing global network behavior. For example, models of the Internet incorporate graphs, influence diagrams for causal analyses, fluid flow and other differential equations, and stochastic networks. Fluid flow analogies have been useful in identifying “sources” and “sinks” (dark or unused IP addresses) within the Internet. A recent research trend is toward more realistic scale-free Internet models that depict the Internet as consisting of both intensively connected hubs of nodes and sparsely connected nodes.

Analyses of these models have led to better understanding of Internet robustness against random node failures and fragility with respect to hub failures. To accompany the general Internet models, more specialized models focus exclusively on specific Internet problems. In one example, statistical mechanics and queuing theory are used to model Internet traffic congestion. Epidemiological models of computer virus and worm propagation are inspired by biological models of the spread of natural viruses; these include graphs, hidden Markov models, and human virus propagation mechanism analogies. These emerging capabilities need to be integrated into new models of system complexity and security applications in large-scale enterprises.
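
A minimal sketch of the epidemiological approach: a discrete-time SIR (susceptible-infected-recovered) model of worm spread, with hypothetical infection and remediation rates standing in for the far richer models described above.

    # Toy SIR worm-propagation model over fractions of a host population.
    def sir_step(s, i, r, beta=0.3, gamma=0.1):
        """One time step: beta is the contact-driven infection rate,
        gamma the patching/cleanup (recovery) rate."""
        new_infections = beta * s * i
        new_recoveries = gamma * i
        return (s - new_infections,
                i + new_infections - new_recoveries,
                r + new_recoveries)

    s, i, r = 0.99, 0.01, 0.0   # initial fractions: susceptible, infected, recovered
    for day in range(0, 100, 10):
        print(f"day {day:3d}: susceptible={s:.2f} infected={i:.2f} recovered={r:.2f}")
        for _ in range(10):
            s, i, r = sir_step(s, i, r)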

Enterprise situational awareness: An accurate, real-time view of policy, enforcement, and exceptions is necessary for administration of networked systems of any size, and the larger the organization, the more difficult the problem. Situational awareness of the application of enterprise policy helps ensure that malformed policies can be identified and corrected both as the enterprise grows and when anomalous behavior appears. Visual and graphical presentations used for situational awareness provide a limited picture when an enterprise is large, but there is no agreement on the best approach for integrating and synthesizing heterogeneous data in a real-time decision-support capability.

Enterprise-wide information infrastructure: The fact that organizational components of a distributed enterprise are often controlled and administered independently introduces a variety of challenges associated with operating an enterprise-wide information infrastructure. Enterprises will coordinate and in some cases consolidate IT systems and networks to gain better control over information resources. Technologies to facilitate integration of enterprise-wide security monitoring and management will be required to achieve the interoperability necessary to support network-centric operations.

Global identity management: Problems arise when enterprises need to interoperate between distinct administrative domains. Agreed-upon approaches to managing identity information are necessary to allow systems and people to interact with systems in multiple domains. Development of standardized identity formats and associated software would enhance both security and functionality. This effort should begin with investigations to develop a detailed understanding of various organizations’ requirements.

5.12 Analytical Techniques for Security Across the IT Systems Engineering Life Cycle

Definition
Security across the IT systems engineering life cycle must be defined, measured, and evaluated. Analytical techniques facilitate detecting, quantifying, measuring, visualizing, and understanding system security.

Importance
Security is a core development concern throughout the systems engineering life cycle, from initial conceptual design to retirement and disposal. Practical analytical techniques are needed to certify system security, estimate security costs, and evaluate the tradeoffs of various security controls and countermeasures at every stage of the life cycle.

State of the Art
Security is difficult not only to analyze thoroughly but also to effect. Assuring system security requires verifying that the system and its information cannot be compromised. Few IT systems are formally verified or rigorously tested from top to bottom. No system has been tested for every possible combination of configurations and events. No deployed system can be certified as completely secure. Furthermore, there is no assurance that a system composed of secure components is itself secure. New and unexpected vulnerabilities often emerge simply through the process of integrating a system’s components, and minor changes to a program may have unintended effects on security.

Research has been directed at understanding software as mathematical objects at an appropriate level of abstraction. Mathematical theory provides a framework in which to analyze software and security. Techniques come from logic, automated theorem proving, model checking, and operations research. Many programming languages still lack rigorous mathematical semantics for all their features. Game theory, with its foundations in operations research, logic, modeling, economics, and even sociology, views security as a game played between a system and an adversary, and offers new means to understand interactions between attackers and defenders, and to design security systems accordingly.

Formal methods for software engineering are mathematical techniques for the specification, development, and verification of systems. These methods are used to prove that a program meets its specification, including security requirements. Despite considerable effort, formal methods have been unable to prove properties of large complex programs because of the many combinations of events that must be analyzed – a time-consuming and computationally intensive exercise. Still, formal methods in general remain a promising approach to assuring correctness of code or the security properties of software systems.

Empirical approaches have often been used to understand and predict properties associated with software and its development. Such approaches have led to useful insights into software processes. Estimates based on empirical studies of large projects are useful in planning and managing software projects, although estimates can be subject to considerable variability and uncertainty.

Capability Gaps
Security is an important but often overlooked component of software engineering. Analytical techniques for understanding security requirements are being developed. Fundamental principles for software development and software engineering are difficult to identify and describe. Interdisciplinary tools for modeling, analysis, mitigation, and remediation that focus on the dynamic aspects of systems and networks are needed.

System models need to include multi-scale and multi-resolution capabilities with varying levels of abstraction and scale-free properties. These capabilities will enable more computationally efficient analyses that focus on only one level of abstraction. More accurate and efficient models are needed to understand the behavior and security of complex IT systems.

Current formal methods are not sufficiently robust for large-scale software development. Yet advances in these methods are needed to enable developers to address network environments and system design simultaneously. Formal methods must adapt to service-oriented architectures, incorporate social factors such as economics, and enable predictions of, for example, macro-level properties of system behaviors. In addition, analytical techniques and formal methods should lead to better approaches for certification of software and particularly of assurance properties of software systems. Analytical techniques addressing security must be founded upon fundamental principles and must be composable, scalable, usable, and widely applicable. Such capabilities will augment the tools and techniques for secure software engineering across the life cycle of software systems.


6. ENABLING TECHNOLOGIES FOR CYBER SECURITY AND INFORMATION ASSURANCE R&D

The topics in this category address more generic enabling technologies rather than those specific to IT system security. Such enabling technologies can be applied to the design, construction, and evaluation of IT systems in general and the Internet in particular in order to improve cyber security and information assurance. The topics in this category are:

❖ CSIA R&D testbeds
❖ IT system modeling, simulation, and visualization
❖ Internet modeling, simulation, and visualization
❖ Network mapping
❖ Red teaming

6.1 Cyber Security and Information Assurance R&D Testbeds

Definition
A testbed can be defined as a framework for experimentation for large development projects or as an infrastructural platform to support testing and deployment of technologies at a smaller scale. Unlike theoretical models and simulations, testbeds are made up of the physical hardware and software components of a real-world operational environment. In testbed environments, researchers can deploy experimental tools, generate execution scenarios involving multiple component interactions in real time, and collect and analyze the results. Because they allow researchers to investigate what can “break” a system, testbeds provide a uniquely rigorous way to test scientific theories and new technologies.

Importance
Given the scale and complexity of networks and enterprise systems and the constantly evolving variety of cyber threats and system vulnerabilities, testbeds have a particularly important role to play in cyber security and information assurance R&D. Disruption, denial of service, and denial of access attacks, for example, involve making networked resources unavailable to the people and systems that use them.

These attacks include insertion of tasks into a process stream to divert attention, saturate resources, displace capacity, or disrupt communications across both the cyber and physical infrastructures.

A testbed focusing on these threats can enable investigations of reconfiguration, redundancy, or re-routing options, and self-healing and self-sustaining capabilities to rapidly restore services or to provide a minimum level of service until full recovery actions can be implemented. Such a testbed would also enable researchers to generate and test models to help mitigate vulnerabilities by identifying optimal configurations in an emulated real-world environment.

An R&D testbed can also be used to help researchers develop methods to protect against infiltration or theft, modification, and destruction of data. Testbeds can provide environments for testing the effectiveness of tools and technologies for protection against cyber attacks and attack detection, mitigation, and recovery, both for the purpose of improving technologies still in R&D stages and for testing the effectiveness of existing commercial technologies.

Because of the difficulty of performing tests of malicious attacks at the largest scales of the Internet, researchers often turn to models and simulations to understand events such as the propagation of a rapidly spreading worm across the Internet. In addition to providing realistic environments for testing new protective and defensive technologies, testbeds also provide the means for validating models and simulations of IT systems and infrastructure at large scale, giving researchers greater confidence in the results provided by the models and simulations they use.

State of the Art
Disruption, DoS, and denial of access attacks, which are now commonplace across the Internet, were not generally considered in the planning and design of critical infrastructures and computer-based systems.


Today, methods for protecting against or mitigating these kinds of attacks are not universally effective. However, because such attacks generally employ known techniques, the current state of the art in prevention and mitigation, which includes the emergence of self-healing networks and systems, is narrowing the gap. Several Federally funded networking and cyber security testbeds are aiding research in this area.

The issues of data infiltration, tampering, destruction, and monitoring have been addressed in sectors such as banking and finance, but less effort has gone into identifying and communicating information about specific threats to control systems, facilities, and other critical infrastructure assets. Under ideal circumstances, only trusted personnel handle critical system information. But such behavior cannot be assured in the face of increased network connectivity, added exposure of system vulnerabilities, and the surreptitious gathering of information that exposure can make possible. System vulnerabilities can be exploited to open a door for an intruder to monitor, remove, change, or destroy data.

Currently, there is no widely used infrastructure for experimenting with a wide variety of threats to IT systems or for testing and validating the effectiveness of new security products or novel next-generation approaches to providing cyber security and information assurance that are still at R&D stages.

Capability Gaps
Several elements of an experimental infrastructure are needed in order to support research in and evaluation of cyber security and information assurance technologies, including approaches to creating, expanding, validating, and effectively using testbeds for evaluating R&D results.

Although a testbed is itself an inherently physical infrastructure, both application of existing technologies and development of new technologies are required to support the construction of testbeds. This includes the use of virtualization technologies to create virtual nodes, whereby one machine can appear as multiple machines on a network, thereby allowing a network with a given number of machines to perform and behave as a much larger, heterogeneous network would. Also needed are rapid reconfiguration techniques that allow the physical and/or logical connectivity of a testbed, as well as the state (e.g., of operating systems or disk images) associated with testbed nodes, to be easily and quickly modified. Geographically distributed networks need to be more easily integrated so that they can readily be combined to serve as a single, larger-scale testbed network.
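To make the virtualization idea concrete, the sketch below shows one way a testbed controller might place many virtual nodes onto a few physical hosts. It is a minimal illustration in Python; the class names, capacities, and greedy placement strategy are assumptions for this example, not the interface of any actual testbed facility.

    # Hypothetical sketch: mapping many virtual nodes onto a small physical
    # testbed so that a few machines emulate a larger network.
    from dataclasses import dataclass

    @dataclass
    class PhysicalHost:
        name: str
        capacity: int            # how many virtual nodes this host can carry

    @dataclass
    class VirtualNode:
        name: str
        ip: str
        host: str = ""           # physical host assigned during placement

    def place_nodes(hosts, nodes):
        # Greedy placement: always assign to the host with the most free slots.
        slots = {h.name: h.capacity for h in hosts}
        for node in nodes:
            target = max(slots, key=slots.get)
            if slots[target] == 0:
                raise RuntimeError("testbed capacity exceeded")
            node.host = target
            slots[target] -= 1
        return nodes

    hosts = [PhysicalHost("pc1", 8), PhysicalHost("pc2", 8)]
    nodes = [VirtualNode(f"n{i}", f"10.0.0.{i}") for i in range(1, 13)]
    for n in place_nodes(hosts, nodes):
        print(n.name, n.ip, "->", n.host)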

Software infrastructure is also needed to enable effective use of cyber security and information assurance R&D testbeds, while ensuring that the testbeds themselves do not put operational networks at risk. This includes the use of software frameworks for supporting and automating experimentation; collection, analysis, and visualization of data; and software for characterizing behavior and performance. In addition, effective measures are needed to validate testbeds and verify their scalability, in order to help ensure that their behavior is realistic and representative of much larger networks.

6.2 IT System Modeling, Simulation, and Visualization

Definition
The model of an IT system abstractly describes what each component does, alone and in concert with other components. Models can capture behavior at varying levels of detail and abstraction. The properties of most IT system components are dynamic; they must respond to and interact with users, and can have behavior that varies with time (e.g., their performance may degrade over time). This makes it difficult for system modelers to identify which details to capture and which to ignore when developing a model. An effective model captures the essential properties of the components while keeping complexity manageable by hiding or omitting some details.

The purpose of modeling IT systems is to reproduce relevant properties and predict the behavior of individual and networked components. Researchers, analysts, and managers need to succinctly describe how an IT component works under both normal and atypical conditions. Simulation and visualization are used to determine how the systems behave over time and under a variety of conditions, and to convey that information to humans. Few components of IT systems are static – the systems are constantly being modified and upgraded – so the models themselves must be dynamic.

Importance
IT systems can be flexible – components perform the same function under various circumstances – and malleable – components perform functions other than those for which they were originally designed. Modeling IT systems is fundamental to answering questions about performance, management, and security, and it enables researchers and analysts to draw conclusions about capability. For example, an accurate model of system call behavior in an operating system can be used to determine when malicious code is being executed. Such a model also enables trade-off analysis, such as to determine whether the overhead (e.g., time and cost) of checking for deviations from normal behavior patterns is worth the security it provides.
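As a minimal sketch of the kind of system-call model described above, the fragment below learns the short call sequences (n-grams) present in normal traces and scores new traces by the fraction of sequences never seen before. The traces and window size are invented for illustration; production detectors are considerably more sophisticated.

    # Sketch of system-call sequence modeling for anomaly detection: learn the
    # n-grams in normal traces, then flag traces containing unseen n-grams.

    def ngrams(trace, n=3):
        return {tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)}

    def train(normal_traces, n=3):
        model = set()
        for trace in normal_traces:
            model |= ngrams(trace, n)
        return model

    def anomaly_score(trace, model, n=3):
        grams = ngrams(trace, n)
        return len(grams - model) / max(len(grams), 1)   # fraction novel

    normal = [["open", "read", "write", "close"],
              ["open", "read", "read", "write", "close"]]
    model = train(normal)
    # A score above zero signals deviation from the learned normal behavior.
    print(anomaly_score(["open", "read", "exec", "socket", "write"], model))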

A key challenge of IT system modeling lies in the scope of its components – ranging from magnetic media to optical fiber and anything in between. Each system component presents its own problems for modelers deciding what to model. For example, while modeling the behavior of random access memory may not require an understanding of the underlying physics, knowing the physics of optical fiber is essential to answering questions about optical network throughput. When components are combined into enterprise-scale systems, complex interactions and the need for matching different levels of model abstraction add to the modeling challenge.

Simulations and visualizations based on system models can be put to a wide range of uses, including:

❖ Predicting system performance for a given configuration, or suggesting an optimal design

❖ Demonstrating the functionality of a given component in an enterprise

❖ Providing trade-off analysis for calls on system resources

❖ Exercising user interfaces
❖ Testing system defenses
❖ Monitoring routine system behavior to detect anomalies

By enabling better understanding of system functionalities and component interactions, such simulation capabilities can be used to design more secure systems.

State of the Art
IT systems require models that are able to scale up with increasing size and complexity. Today, models exist for small- to moderate-size systems. Some models for simple components are available (e.g., mean time between failure for data storage systems), as are complicated models for small-scale behavior. Sophisticated mathematical packages can provide analytical and compositional capabilities. Current research includes expanding the ability of complex system models to scale to enterprise size. Research on combining these models is on the horizon. For example, correlating models across scales (such as by using heuristics or artificial intelligence) may provide the ability to predict enterprise-wide behavior in order to detect failure, anomaly, or malicious activity.

Capability Gaps
The greatest capability gaps associated with modeling of IT systems are in scaling and composition. Few if any current modeling technologies can scale to realistic numbers of system components and throughputs for large IT systems. Complex systems theory suggests that, under certain circumstances, even simple systems cannot be modeled to great fidelity for long periods of time (chaos precludes predictability). IT systems, however, are engineered systems, so scalable modeling may be achievable. Current technologies are well suited to modeling individual components, but work is needed to compose models, such as embedding a detailed model of a component (e.g., a single computer) within an enterprise model. The difference in time scales associated with different behaviors (e.g., differing response times from electronic components, inter-network communications, and human action) increases the difficulty of composing multiple models into models of aggregate system behavior.

Human understanding of complex systems behavior is evolving. A multidisciplinary research community is currently focusing its attention on complex data networks, with some success. For example, highly optimized tolerance theory is being used to analyze the aggregate behavior of large collections of IT components. But the mathematics for describing IT systems is still being developed. Research in this area has a stochastic flavor and has yet to broadly address mobility and long-range dependency, as well as nonlinearity and non-stationarity issues.

6.3 Internet Modeling, Simulation, and Visualization

Definition
Internet modeling, simulation, and visualization enable researchers, analysts, and decision makers to model the Internet or a portion of it; simulate a mix of traffic and measure key parameters such as congestion, queuing delays, network transit time, packet loss, and jitter; and visualize network performance graphically (though not necessarily in real time).

Importance
The Internet comprises autonomous networks that are connected via network links, across which communications take place using the Border Gateway Protocol. As the Internet grows to carry more traffic that is sensitive to latency, jitter, and packet loss (e.g., voice over IP or streaming video), modeling of Internet behavior will become increasingly helpful for ensuring that critical traffic is delivered in a timely manner. For example, understanding how a router drops packets when network traffic is normal is essential to knowing how that router will behave when the network is congested. Advanced graphical tools will enable analysts to better investigate congestion under present conditions and also as the size and complexity of the Internet grow.

The development of a comprehensive Internet modeling and visualization capability will allow analysts to evaluate the effect of new protocols (such as those that handle larger packet sizes); develop and test techniques to thwart distributed network attacks such as distributed denial of service, physical disruption, and worm propagation; estimate performance of a network design under various types of load and operational conditions; study how congestion affects time-sensitive traffic; and facilitate the configuration of networks for increased robustness.

State of the Art
Mathematical modeling techniques such as queuing theory are being used in developing Internet performance models. These techniques can be readily combined with commercial visualization software to graphically portray network performance. They have been successfully run on IP networks with 1,000 nodes; the next step is the extension to a network of Internet scale and diversity. Simulations can be used to calibrate and validate models and to demonstrate their accuracy, but because they have long run times, they are not practical for many types of network performance analysis and prediction. Simulation tools also are needed for certain types of problems for which analytical capabilities are still under development. Analytical tools can be used to study private IP network performance under baseline conditions and can predict performance under several types of network attack, such as DDoS and worm attacks.
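As a small illustration of the queuing-theory building blocks such models use, the sketch below computes the mean packet delay of a single M/M/1 queue as utilization rises. The link speed and packet size are assumed figures, and the M/M/1 assumptions (Poisson arrivals, exponential service) are exactly the kind of simplification that heavy-tailed Internet traffic violates, as the capability gaps below note.

    # Illustrative M/M/1 delay calculation of the kind used as a building
    # block in analytical network models; rates are assumed figures.

    def mm1_mean_delay(arrival_rate, service_rate):
        """Mean time a packet spends in an M/M/1 queue (waiting + service)."""
        if arrival_rate >= service_rate:
            raise ValueError("unstable queue: utilization must be below 1")
        return 1.0 / (service_rate - arrival_rate)

    # A 1 Gb/s link moving 1,250-byte packets forwards about 100,000 packets/s.
    service_rate = 100_000.0
    for load in (0.5, 0.9, 0.99):
        delay = mm1_mean_delay(load * service_rate, service_rate)
        print(f"utilization {load:.2f}: mean delay {delay * 1e6:.0f} microseconds")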

Capability Gaps
Scaling these capabilities to Internet-size networks is a research problem. Today’s technologies enable only partial or targeted models of facets of Internet behaviors. A variety of capability gaps currently exist, including a need for analytical tools to address Internet-scale complexity problems such as: the nature of distribution functions for many quantities of interest that makes closed-form queuing equations unusable; estimating performance of Internet-size networks; difficulties with heavy-tailed distributions of network traffic parameters; analytical modeling of network traffic types and protocols; modeling of various classes of distributed cyber attacks; the lack of universally accepted topology, traffic, and protocol data associated with the Internet; and software integration of analytical and visualization tools.

6.4 Network Mapping

Definition
By design, a network map shows physical and/or logical network configurations, including information about hierarchies of connectivity (e.g., a main network and its subnetworks). Network maps are constructed using information either supplied by a human or obtained through automated discovery. Topological or logical network information can be displayed alone or superimposed on a geographical map. Network mapping can also include the automated processing and analysis of network traffic and topology data for presentation using network visualization tools.
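A minimal sketch of the data structure behind such a map appears below: links discovered by some probe are folded into an adjacency table and grouped into /24 subnets to expose the hierarchy. The link list is fabricated for illustration; real mappers ingest far richer traffic and topology feeds.

    # Sketch of a simple network map: discovered links become an adjacency
    # table, then nodes are grouped into /24 subnets to show the hierarchy.
    from collections import defaultdict

    links = [("10.1.1.1", "10.1.1.2"), ("10.1.1.2", "10.1.2.1"),
             ("10.1.2.1", "10.1.2.2")]

    adjacency = defaultdict(set)
    for a, b in links:
        adjacency[a].add(b)
        adjacency[b].add(a)

    def subnet24(ip):
        return ".".join(ip.split(".")[:3]) + ".0/24"

    subnets = defaultdict(set)
    for node in adjacency:
        subnets[subnet24(node)].add(node)

    for net, members in sorted(subnets.items()):
        print(net, "->", sorted(members))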

Importance
Network mapping is essential to network engineering, monitoring, maintenance, and repair. Network size and complexity require that analysts be able to quickly visualize congestion points using near-real-time traffic data. By graphically monitoring large networks and detecting when link utilization is growing faster than expected, it may be possible to quickly determine if the network is experiencing a physical disruption or a cyber attack. Network mapping is also used in research on network performance.

State of the Art
Several network mapping systems have implemented, to some degree, many of the features of interest to users, including: drawing detailed maps from user data; automated discovery or scanning of a range of IP addresses; drawing of subnet maps using standard computer-aided design drawing capabilities; automated scheduled network sweeps that automatically update a map; showing traffic on maps; listing device IP addresses; and user-initiated vulnerability scanning on networked devices. These capabilities, however, are available for the most part at the level of local networks, not at the level of large-scale networks or across autonomous systems.

Capability Gaps
Current network mapping capabilities require time to map and display small sections of networks using limited data. For real-time and near-real-time mapping of traffic flows and congestion, and to support automated network management and visualization of networks across domains, these capabilities need to be expanded to enable rapid mapping of much larger sections of the network. Network traffic data need to be parsed, scrubbed, and formatted before being accepted by the mapping software. Greater flexibility is needed for importing network data from database applications.

Other mapping capabilities needed include: improved mapping speeds; the ability to map larger networks; and the ability to map networks that are “hidden” behind network address translation devices. Additional needs include: real-time or near-real-time mapping of network traffic flows and congestion; the ability to more easily incorporate network topology changes into existing maps; rapid identification of attacks or other network problems; and automated analysis of network topology and recommendations of configuration changes to improve performance.


6.5 Red Teaming

Definition
Red teaming is a technique for analyzing IT system vulnerabilities by actually putting a system under attack. In a red team exercise, skilled outside experts plan and carry out surprise adversarial cyber attacks on an enterprise’s systems to find and exploit vulnerabilities and reveal flaws in security planning, policies, and defenses. Unlike role playing or tabletop exercises, the “hostile adversaries” in red teaming exercises make every effort to outthink defenders and “win” by overcoming real cyber defenses and gaining access to actual systems, networks, and information. The attack phase of the exercise is followed by a thorough analysis of what transpired. Red teaming can be combined with or used by other types of assessment such as risk, vulnerability, threat, consequence, system management, system security, accreditation, and certification.

Importance
An effective red teaming exercise should challenge security assumptions and strategies, expose operational and technical weaknesses, and stimulate fresh thinking about an enterprise’s security posture. Red teaming has been applied for varied purposes, including: testing cyber defenses and response plans; improving the design and implementation of a system and its security throughout its life cycle; system calibration; generating likely adversary actions to obtain signatures and test detection capabilities; technical analysis of adversarial scenarios; observing the effects of various decisions and prioritizations on an adversary’s response; demonstrating a scenario involving real systems and operational constraints; and training.

Red teaming can be an effective tool for IT system engineering or for evaluating the security of complex systems through an increased understanding of component and system function and behavior. Red teaming can encompass globally distributed systems, numerous distributed organizations, a range of technologies, and the effects of interdependencies among systems. While Federal and private-sector red teaming activities may take place independently in order to address their respective needs, the Federal government can facilitate cooperation to assess interdependencies and improve red teaming capabilities and effectiveness.

State of the Art
Red teaming is useful for identifying technical system vulnerabilities and managerial oversights. In industry it may be used to assess the security of high-consequence targets such as those in a banking or financial infrastructure. However, much information about red-teaming methods has not yet been documented. Dedicated red teams often do not share their knowledge with other teams, and temporary red teams rarely have the resources to capture their own knowledge for re-use. There is no easy way to measure a red team’s capability and performance to determine its effectiveness.

Federal and industry needs for skilled red teaming exceed the capacity of available resources. Derivatives of red teaming, such as structured self-assessments, may be used to address some issues with fewer resources. However, such an approach is insufficient for the complexity, scope, and scale of IT infrastructure security issues.

Capability Gaps
The September 2003 Final Report of the Defense Science Board Task Force on The Role and Status of DoD Red Teaming Activities recommended that DoD red teaming be expanded to deepen understanding of U.S. adversaries in the war on terrorism, and in particular their capabilities and potential responses to U.S. initiatives. The report also recommended developing best-practice guidelines and multiple forms of red teams.

Technical capability gaps identified in the report include:

❖ Security assessment metrics: A comprehensive set of security metrics for red teaming and assessments, including domain-specific metrics

❖ Red teaming tools: Specific procedures as well as hardware and software tools to support red teaming, enabling more efficient, consistent, measurable, and reproducible results. Needs include tools that: 1) assist and improve red teaming processes; 2) can be used for particular technical domains (e.g., network discovery); and 3) enable analysis of red teaming activities.

❖ Adversarial modeling: The ability to model the behavior of particular adversary classes, groups, or individuals to make the red teaming process more accurate and realistic, particularly when fine resolution is needed. Alternative approaches for unconstrained and innovative adversarial tactics also need to be developed.

❖ Techniques and methods: Techniques and methods are needed to generate efficient, measurable, and reproducible results; to ensure accurate composition of results from different red teams over space and time; and to red-team numerous widely distributed and complex systems.

❖ Qualification and certification: The capabilities and qualifications of red teams need to be better understood so that they can be more effectively selected and used for particular tasks.

7. ADVANCED AND NEXT-GENERATION SYSTEMS AND ARCHITECTURES

The topics in this category focus on methods, technologies, and architectures that will enable the creation of new generations of IT infrastructure components and systems that are designed and built to be inherently more secure than those in use today. The topics in this category are:

❖ Trusted computing base architectures
❖ Inherently secure, high-assurance, and provably secure systems and architectures
❖ Composable and scalable secure systems
❖ Autonomic (self-managing) systems
❖ Architectures for next-generation Internet infrastructure
❖ Quantum cryptography

7.1 Trusted Computing Base Architectures

Definition
The trusted computing base (TCB) is the set of all system hardware, firmware, and software that is relied upon to enforce the system’s security policy. The ability of a TCB to correctly enforce a security policy depends on the mechanisms within the TCB and on the correct input by system administrative personnel of parameters related to the security policy.

A TCB architecture is a description of the interrelationships among the hardware, firmware, and software that, in combination, enforce the desired security policies for the system. In principle, a TCB architecture enables analysis to determine if certain security properties hold, and it allows continuous monitoring and verification of the integrity and properties of the TCB (including the kernel, configuration files, secure memory, privileged applications, and running applications).
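One ingredient of such continuous monitoring can be sketched simply: hash the files that make up the TCB and compare against a baseline taken at a trusted point in time. The paths below are illustrative, and a real monitor would also have to protect the baseline and the checking code themselves, which is precisely the hard part.

    # Sketch of integrity monitoring for TCB files: hash kernel, configuration,
    # and privileged binaries against a known-good baseline. Paths are
    # illustrative placeholders.
    import hashlib

    TCB_FILES = ["/boot/vmlinuz", "/etc/security/policy.conf"]   # illustrative

    def snapshot(paths):
        digests = {}
        for path in paths:
            with open(path, "rb") as f:
                digests[path] = hashlib.sha256(f.read()).hexdigest()
        return digests

    def changed_files(baseline, current):
        return [p for p in baseline if baseline[p] != current.get(p)]

    # baseline = snapshot(TCB_FILES)        # taken at a trusted point in time
    # drift = changed_files(baseline, snapshot(TCB_FILES))
    # if drift:
    #     raise SystemExit(f"TCB integrity violation: {drift}")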

Importance
The TCB is critical to the secure operation of an IT system. If the security of any component of the TCB is compromised, then the security of the entire computing system is suspect and cannot be assured.

State of the Art
The TCB kernel must interact with many processes and applications, both locally and over complex networks. Increasing system code complexity makes analysis of components of the TCB, as well as interactions with untrusted components, increasingly difficult. For all but the simplest of computational components and systems, it can be impractical or impossible to determine whether the TCB operates as desired and enforces all desired system security policies at all times. It is equally difficult to analyze a TCB architecture to ensure that it provides the security functionalities that are desired of a system.

Capability Gaps
Currently, it is not known how to effectively test and validate the properties of a TCB architecture, how to ensure that the TCB is fault-tolerant for failures in portions of the TCB, how to ensure that the TCB degrades gracefully under adverse circumstances, or how a TCB can best be defined and assembled for future computing architectures. Although a TCB often consists of system components provided by multiple vendors, generic, broadly accepted methods for architecting and assembling a TCB in such situations do not exist.

Small high-assurance kernels and partitioning kernels are often rejected by developers due to a variety of factors (e.g., restricted functionality), but there is a lack of robust research into techniques for practical hardening of traditional commercial-off-the-shelf kernels. Research is needed to develop monolithic and distributed TCB architecture concepts as well as methods for assuring their security properties. Processes and tools are needed throughout the design, development, and deployment of computational systems to support the verification of TCB properties and the assurance that TCB properties are not compromised.


Additional work is needed to help developers identify the minimal set of components and functions needed for TCBs (either generally or for use in a given application), while providing methods for mapping desired security properties into architectural specifications. While TCBs are intended to be assured so that they can safely interact with untrusted processes, applications, and systems, architecting systems that provide secure sharing and interoperation between trusted systems (e.g., multiple independent TCBs) remains a research area. R&D is also needed in such emerging approaches as techniques to move trust from one part of a system to another and employing strong isolation (e.g., virtual machine management – see section 5.2, page 70).

7.2 Inherently Secure, High-Assurance, and Provably Secure Systems and Architectures

Definition
An inherently secure system fully integrates cyber security mechanisms with system functionality and other properties. This may involve tailoring security mechanisms to system characteristics to achieve both security and functionality. For example, partitioning can be used to both prevent corruption of data and enforce separation of access for processes that are not allowed to share data. Critical systems whose failure or corruption would cause severe consequences require high-assurance design and development. High-assurance systems should be subjected to rigorous design and implementation checking, and control steps should go beyond routine processes used in developing software. A provably secure system is one whose security mechanisms can be proven not to fail in certain modes that would allow inappropriate access or modification of data, or would otherwise disrupt the integrity of system function and data.
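The partitioning idea can be illustrated with a toy reference check: every process and data object carries a partition label, and access is allowed only within a partition. The names and labels below are invented; real partitioning kernels enforce this with hardware-assisted isolation, not in application code.

    # Toy reference check for partition-based separation: a process may touch
    # only data carrying its own partition label. Names are hypothetical.

    PARTITION_OF = {"flight_control": "A", "cabin_entertainment": "B"}
    DATA_LABEL = {"actuator_cmds": "A", "movie_cache": "B"}

    def may_access(process, data_object):
        return PARTITION_OF[process] == DATA_LABEL[data_object]

    assert may_access("flight_control", "actuator_cmds")
    assert not may_access("cabin_entertainment", "actuator_cmds")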

Importance
Critical infrastructures increasingly integrate information using hardware and software that interoperate over the Internet and depend on the IT infrastructure. Vast amounts of information are collected and shared within government and throughout the private sector using interdependent physical and IT infrastructure. Critical distributed information resources and Web services that support operations must be protected against inappropriate access and malicious attack. These resources include Federal and military systems, critical components of the banking and finance sector, agriculture, transportation, and national disease tracking and health care delivery systems. Control infrastructures (e.g., for aviation, power, and water) must operate correctly in a multi-system context and must not permit disruption by malicious attack. Disruptions of critical components of the IT infrastructure (e.g., air transport or finance) may affect citizen safety and consumer confidence, causing national economic ramifications. Thus, the ability to detect vulnerabilities and assure that systems operate as intended is vital.

State of the Art
Provably secure systems have long been sought, though some question whether the goal is attainable. But since R&D in provably secure systems enhances understanding of security properties, advances toward this objective can nevertheless serve as a foundation for improving security technologies.

IT system validation and verification are largely based on evaluation of the development process. For decades, Federal agencies have supported a product evaluation (and re-evaluation) program for critical components of the trusted computing base used in military systems. The Common Criteria (CC) standards provide an international classification system for security functionality and a process for evaluating security products. Product evaluation and re-evaluation remain a largely manual process using process-based functionality checklists. The CC framework relies on informal description and lacks formal semantics and models for capability evaluation.

Commercial tools audit systems for vulnerabilities and perform limited analyses. However, the technology base for designing and building inherently secure and assured or verified systems remains weak. Trusted computing base architectures for managing information domains that must remain separate have been proposed.

Improvements in analysis and model-checking tools enable evaluation of increasing numbers of lines of code for simple properties such as stack or buffer overflow and storage leaks, but precision remains a problem in these tools. The private sector now routinely applies “lightweight” verification technologies such as model checking to restricted classes of components such as device drivers.

Verification technologies have been applied selectively to check critical properties (e.g., isolation of virtual channels used for managing information flows). Research progress is seen in correct-by-construction methods for specific problems, such as programmable smart cards and encryption protocols, supported by security-domain-specific programming technologies.

Capability Gaps
Research in this area is addressing the information, testing, and verification technologies required to develop inherently more secure systems. Processes and tools are needed throughout the design, development, and deployment cycles of products and systems to support evaluation, acceptance, and (in some cases) certification of critical and non-critical IT infrastructures. Research is needed to extend and advance the maturity of high-confidence design and assurance capabilities. There is a need for formal methods to be applied to exemplary systems, such as separation kernels. Development of secure and highly assured systems requires evidence-based methods that can improve today’s process-oriented and largely after-the-fact evaluation. Monolithic automated theorem-proving tools, usable only by experts, must be replaced by technologies that enable skilled developers to provide specified levels of assurance. Robust research prototypes are needed to allow experimentation with new security concepts.

Examples of R&D needs include:

Security concepts: Including security models and techniques for systems of varying types and scales (e.g., enterprise and Web services, real-time embedded control, widely distributed sensor and actuator networks); models of risk and trust that address realistic and changing (e.g., dynamic and configurable) system needs; formal models of the properties of critical systems (e.g., security, fault tolerance, real-time response); design technologies to assess properties and manage their interactions; and novel security concepts that can be integrated into emerging information technologies.

System-level security: Including assurance services (e.g., reconfiguration and repair) for autonomic or self-managing systems; capabilities to certify properties of system-level components, with assumptions about the contexts in which these properties hold; capabilities to check end-to-end and emergent properties for composite and cooperating systems; and assurance in the presence of combined cyber and physical system behavior.

Design and analysis: Including environment- or context-aware design and assurance technologies; software composition and analysis technologies; composition tools that can check cross-layer properties; processes that integrate development and evidence-based assurance; and verification technologies that are automated, open, and interoperable and that can be tailored for key properties and domains.

Case studies, experimental testbeds, and prototypes: Including reference implementations of both systems and assurance technologies, together with their design evidence, assurance technology prototypes, and evaluation case studies.


7.3 Composable and Scalable Secure Systems

Definition
Composable secure systems are assembled from components and subsystems in ways that preserve the security properties of those constituent elements while satisfying system-wide and end-to-end security requirements. Systems can be composed by applying rigorous engineering measures to assure system properties, or alternatively, the systems can be federated in an ad hoc manner while ensuring that they interoperate securely, reliably, and sustainably. A highly scalable, secure critical infrastructure should be able to accommodate variation in system parameters (e.g., number of records, users, nodes, or clients, as well as the degree of geographic distribution, heterogeneity, or complexity) without failure of security or system reliability.

Importance
Composability and scalability are interlocking issues not only in cyber security and information assurance but more broadly in architectures for increasingly complex IT systems and networks that must nonetheless be far more robust and reliable than today’s. An underlying concept is modularity: if secure and demonstrably reliable components can be engineered to be fitted together reliably and without loss of security, the composed system that results can continue to be predictably scaled up through the addition of other secure, reliable components. While intuitively straightforward, composability and scalability are among the most difficult technical challenges in the development of IT systems because they require rethinking and re-integration of all elements of hardware and software engineering, including first principles, requirements, architecture, and development methods and practices. However, composability and scalability are deemed essential approaches for increasing the overall security and trustworthiness of the IT infrastructure.

State of the Art
At present, understanding of security properties and system composition methods is such that it is not possible to assure that a system composed of assured components will preserve desired security properties at the system level. Similarly, it is not possible to know a priori that methods to assure security at one level of scale will function effectively and robustly at greater scales. Some system architectures in use today are based on design assumptions and practices that are decades old. Many existing designs are not adequately robust against high failure rates or correlated failure modes, such as might occur under coordinated attacks. While research in some aspects of composability and scalability addresses increased security (e.g., trusted computing base, high-assurance system architectures, secure software engineering), few efforts to date focus on drawing together all the design and engineering principles, technical approaches, and component technologies that need to be integrated to achieve composable and scalable secure systems.

Capability Gaps
R&D, including long-term research, is needed to develop new and transitional system architectures, software engineering methods, techniques and tools, programming languages, and evaluation and assessment criteria that can enable the development of composable, scalable systems that are more secure and more robust than those of today. Areas in which R&D should focus include:

New frameworks and architectures: Including trusted computing base architectures to enable controlled sharing, interoperation, and coordination; trust management architectures for open, interoperable systems; computing and systems software architectures that integrate scalable and interoperable security measures (e.g., secure computing platforms, kernel-level, OS, and middleware services); architectures for authenticated human-computer, component-to-component, and system-to-system interaction, with tailored, easy-to-use security interfaces; coordination and interoperation platforms; and reference architectures, implementations, and experimental platforms.

Secure, composable, and scalable IT system technologies and development methodologies: Including guaranteed and secure cooperative global-scale dynamic resource management; distributed open systems and middleware technologies that enable robust, dynamically reconfigurable system operation; and system composition, integration, and management technologies for analyzing components and assembling complex systems.

Composable and scalable cyber security technologies: Including core security service architectures for systems at varied scales; security models and architectures for heterogeneous systems; limitation of misuse and damage by insiders; security status notification and coordination architectures; and scalable, assured system security services (e.g., cryptographic management, authentication, and access control).

7.4 Autonomic Systems

Definition
Autonomic or self-managing (i.e., predictive, self-aware, self-diagnosing, self-configuring, self-optimizing, and self-healing) systems increase the robustness, reliability, resilience, performance, and security of complex IT systems by: 1) exploiting expert knowledge and reasoning; 2) reacting quickly and automatically to events of interest; and 3) minimizing the need for human intervention and the adverse consequences associated with the response time needed for human intervention. An adaptive, autonomic system should ensure that services remain trustworthy and meet required service levels, even across multiple application domains within a system.

Importance
Both the private sector and the Federal government (including the military with its network-centric warfare requirements) need robust systems that respond automatically and dynamically to accidental and deliberate faults, while maintaining required levels of trustworthy service. Capabilities such as fault tolerance have made computing and information systems more resilient during failures due to accidental faults and errors. However, even with these advancements, a system can exhaust all resources in the face of prolonged failures or deliberate, sustained attacks. In addition, systems that rely on manual intervention can become more fragile and susceptible to accidental faults and errors over time if manual maintenance and updating are not performed in a regular and timely fashion. Adaptive, autonomic technologies can address this brittleness by automating activities that would otherwise require human intervention for responding to effects of problems such as imperfect software, human error, accidental hardware faults, or cyber attacks.

State of the Art
Most complex systems are installed, configured, initialized, integrated, optimized, repaired, and protected by human operators. This process is challenging, time-consuming, and error-prone, even for experts. Autonomic systems can facilitate self-configuration by automatically converting policies based on business objectives into configuration settings on system components. Within an autonomic system, a new component can incorporate itself seamlessly while the rest of the system adapts to its presence. Autonomic systems can facilitate self-optimization by continually monitoring system operation to identify parameter adjustments that can improve performance.
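The self-management behaviors described above are often organized as a monitor-analyze-plan-execute control loop. The sketch below shows only that loop shape; the sensors, thresholds, and corrective actions are placeholders rather than a real management interface.

    # Sketch of a monitor-analyze-plan-execute loop; all metrics, thresholds,
    # and actions are invented placeholders.
    import random

    def monitor():
        # Stand-in sensors; a real system would read live metrics.
        return {"cpu": random.uniform(0, 1), "error_rate": random.uniform(0, 0.1)}

    def analyze(state):
        problems = []
        if state["cpu"] > 0.9:
            problems.append("overload")
        if state["error_rate"] > 0.05:
            problems.append("failures")
        return problems

    def plan(problems):
        actions = {"overload": "add_capacity", "failures": "restart_component"}
        return [actions[p] for p in problems]

    def execute(actions):
        for action in actions:
            print("executing:", action)   # stand-in for real reconfiguration

    for _ in range(3):                    # a few iterations of the control loop
        execute(plan(analyze(monitor())))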

Failures in complex computing systems can in some instances take experts weeks to diagnose and repair, while autonomic systems facilitate self-healing by rapidly detecting, diagnosing, and repairing localized problems resulting from bugs or other hardware or software failures. Ongoing research is developing predictive diagnostic techniques that can avoid failures and mitigate problems before they impede system operation.

The current state of the art in quality of service (QoS) is communication-centric and does not consider QoS in a more generic sense, to cover all of the necessary computing elements (processing, data management, and communication) at the component, application, and system level. Many existing systems either do not adapt to changes or have ad hoc hardwired mechanisms that accommodate only a small, predefined set of changes.

Autonomic systems broadly include fault-tolerant systems, intrusion-tolerant systems, and autonomic computing. While traditional fault-tolerant systems generally focus on addressing accidental faults and errors, intrusion-tolerant systems address intentional faults caused by a malicious adversary. Autonomic computing uses automated management techniques to install software and patches, or to otherwise respond or adapt to changes in the computing environment such as failure-induced outages, changes in load characteristics, and addition of server capacity. Autonomic computing systems may not effectively respond to failures or changes in operating conditions due to malicious attacks without being deliberately designed to do so.

Capability Gaps
Today, human operators decide how to protect systems from inadvertent cascading failures and malicious attacks, which can occur even when some classes of cyber security tools are in place. Autonomic systems facilitate self-protection by: 1) monitoring systems and automatically improving system defenses (i.e., improving protective measures, configurations of IT, and cyber security systems); 2) using sensor reports to anticipate problems before they occur or identify them as they occur, and taking steps to avoid them or reduce their consequences; and 3) identifying emerging problems arising from failures or attacks that are not corrected by self-healing measures, and responding to mitigate their impact. For example, an autonomic security system might provide automated security patch identification and deployment, or might automatically correlate security information across an enterprise to facilitate management of distributed protective measures.

Autonomic system technologies are needed to better protect systems and continually improve reliability, respond to ongoing failures and attacks, isolate compromised portions of IT systems, reconfigure networks and resources, and reconstitute systems to recover from attacks. These systems must protect against accidental faults introduced by human and software errors as well as against misuse and malicious internal and external attacks.

Autonomic systems R&D and integration of key technologies are needed in several research areas, including information models, machine reasoning and learning, and autonomic platforms and infrastructure.

Autonomic systems require design and implementation of models that completely and accurately capture information about the state of a system and its components. Predictive models of autonomic systems are needed in order to help guarantee desired behavior. New languages may be needed to capture the data and semantics in these models, aggregate knowledge, parse and translate knowledge to support reasoning, and enable heterogeneous components to infer and share knowledge. Platforms and infrastructures for retrieving and manipulating models for distributed heterogeneous components need to be designed, developed, and validated through simulation. This will require the development of languages and APIs for querying components, and a communication infrastructure for facilitating interaction between distributed components.

Such advances will enable greater scalability, interoperability, security, scope, and robustness of IT systems, while reducing costs associated with operating, maintaining, and protecting them. Once this infrastructure exists, domain-specific applications will be needed to capitalize on the benefits of autonomic systems. Different domains may require the establishment of different policies to allow inferences about the operational state of a system across distributed heterogeneous components. Each application needs mechanisms for automated learning about the system, automated reasoning using knowledge about the system, establishing rules based on policies, propagating new or revised policies, and manipulating components.

Approaches to adaptive, autonomic systems include R&D in a variety of research areas. For example, biologically inspired cognitive response strategies can be developed to use machine learning and proactive contingency planning to automate cyber-based analogs to immune response and system regeneration. Work aimed at reducing the time required to achieve consistency among replicated information stores after an update can help increase the effectiveness of redundancy techniques to the point where they can enable self-healing when bodies of information are damaged or compromised. This work, being applied to centralized servers and distributed publish-and-subscribe settings, can include reasoning about the insider threat to preempt insider attacks; detecting system overrun by inferring user goals and intent; enabling anomaly detection; and combining and correlating information such as from system layers and direct user challenges.

Research is also needed on extended and refined end-to-end QoS models. Such models must provide a quantitative basis for efficient and effective resource management for adaptive, autonomic systems, enabling them to respond to changes due to overload, component failure, malicious attacks, evolving operational requirements, and/or a dynamic operational environment.

End users need the ability to define policies based on application-specific QoS metrics to control system operation in order to apply resources in the most appropriate manner. Research needs include: developing a policy specification language to capture application-specific QoS requirements; mapping high-level application QoS specifications into lower-level system metrics; predictable QoS-aware components for processing, data management, and communication; and composability for end-to-end assurance.
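A toy example of mapping an application-level QoS policy onto lower-level metrics is sketched below. The policy vocabulary and numeric limits are invented for illustration; a genuine policy specification language would be far more expressive.

    # Toy mapping from an application-level QoS policy to low-level metric
    # checks; all names and limits are hypothetical.

    POLICY = {"video_call": {"max_latency_ms": 150, "max_loss_pct": 1.0},
              "bulk_transfer": {"max_latency_ms": 2000, "max_loss_pct": 5.0}}

    def violations(app, measured):
        limits = POLICY[app]
        out = []
        if measured["latency_ms"] > limits["max_latency_ms"]:
            out.append("latency")
        if measured["loss_pct"] > limits["max_loss_pct"]:
            out.append("loss")
        return out

    # A resource manager would shift resources when violations appear:
    print(violations("video_call", {"latency_ms": 180, "loss_pct": 0.2}))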

7.5 Architectures for Next-Generation Internet Infrastructure

Definition
The next-generation Internet infrastructure will comprise the protocols, hardware, and software for secure, reliable communication across loose federations of networks. Its design and uses will be driven by a new generation of applications. The full range of applications is impossible to predict, and unanticipated forms of network usage can be expected.

Importance
The Internet has become an essential component of the Nation’s critical infrastructures. It inherits traditional requirements associated with supporting these infrastructures, but also needs to address new requirements such as real-time response, reliable and secure communications, and quality of service. Internet vulnerabilities include threats to availability (e.g., denial-of-service attacks), unauthorized access to and use of information (e.g., compromised privacy or integrity of personal or corporate records and theft of intellectual property), and potential disruption of operations for essential government and public services (e.g., financial systems, transportation, and power, water, and food supply and distribution). New architectures are essential to assured, secure operation of critical and non-critical Internet-based systems and services.

State of the Art
The Internet was designed for transparency, scalability, and redundancy to enable resilience and survivability. Packet-switched services underpin end-to-end delivery of information and communications. The early adoption of the simple, robust Transmission Control Protocol/Internet Protocol (TCP/IP) suite enabled diverse implementations and services such as the addition of virtual circuits via sockets. Performance assessment focused on statistical characteristics of aggregate traffic through the network, such as average message latency. The strong separation between network and application provided by the TCP/IP architecture enabled a variety of new networking media (e.g., wireless and optical networking) and new classes of applications (e.g., streaming video, IP telephony) to be added without major changes to the underlying architecture.

Internet enhancements have included augmentation of host-based physical addressing to include dynamic IP address assignment and virtual IP addressing. Virtual Private Network (VPN) architectures use the Border Gateway Protocol to set up tunneled networks that have separate address spaces. VPNs permit groups of trusted participants to exchange information securely, supported by encryption and key management that assures isolation for each VPN, while operating over the Internet fabric.

Most Internet services depend on a core infrastructure of backbone networks and routers, with local area networks and nodes at the edge delivering services to end users. Existing networking protocols continue to be improved and new ones are developed for specialized applications. For example, Internet Protocol version 6 (IPv6) is a variant of IP that offers an expanded address space, and protocols have been developed for mobile ad hoc networks. Specialized controller area networks for real-time control are being developed using time-triggered physical layers and time-triggered architectures.

In some cases, as with IPv6, enhancements to protocols include incorporation of security mechanisms. In other instances, new protocols are developed to provide security, as with the Wired Equivalent Privacy protocol that is intended to improve the security of certain classes of wireless communications. Security extensions to the Domain Name System are starting to transition into operational use; efforts aimed at developing security-based improvements to routing protocols are underway; and security is a key requirement in various areas of networking research such as wireless networking and optical networking.

Capability Gaps
Development of architectures for next-generation Internet infrastructures will require research in a wide variety of technical areas. These include: improved and/or new protocols that include enhancements for security and authentication; optical networking, including optical circuit-based networking, optical switching, and optical computing; network management and control (e.g., virtual control plane) technologies; new networking services for naming, addressing, and identity management; scalable, robust technologies to meet high-assurance, high-reliability, real-time computing needs; testbeds to support architecture research, development, analysis, testing, and evaluation; and new applications that take advantage of next-generation Internet architectures.

Research will also be needed on deployment issues, including development of technology transition and migration paths – taking compatibility issues into account – from current to next-generation infrastructure and services, interoperability of heterogeneous (IP-based, optical, and wireless) systems, scalability issues, and enhanced understanding of business cases and economics of commercial deployment.

7.6 Quantum Cryptography

Definition
Quantum cryptography is a set of methods for implementing cryptographic functions using the properties of quantum mechanics. These methods are based on quantum mechanics, but they need not, and currently do not, make use of quantum computing. Most quantum cryptography research is directed toward generating a shared key between two parties, a process known as quantum key distribution (QKD). Shared keys may be used directly as keys for a conventional symmetric cryptographic algorithm or as a one-time pad. A variety of protocols have been developed for QKD, but they generally share two basic features: 1) the idealized version of the protocol prevents an eavesdropper from obtaining enough information to intercept or decode messages (e.g., messages are encoded by using the shared key as a one-time pad); and 2) the communicating parties can detect the presence of an eavesdropper because eavesdropping will alter the quantum properties of the particles used in key distribution in a measurable way.

Importance
Quantum cryptography offers the potential for stronger information assurance, but QKD must be designed and implemented properly to deliver promised benefits. QKD systems may be subject to a number of attacks, depending on the implementation and the protocol, and as is always the case, even the strongest of cryptographic methods are vulnerable to flaws in design and implementation.

State of the Art
Quantum cryptographic products have been offered since 1999, with research ongoing to advance the state of the art. The most common type of quantum key distribution uses a scheme known as BB84, in which polarized photons are sent between the communicating parties and used to develop the shared key. The BB84 protocol has been shown to be secure for implementations that preserve assumptions about physical properties of the system. Many varieties of the BB84 scheme have been developed, and other forms of quantum key distribution have been proposed.
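The sifting step of BB84 is simple enough to sketch: sender and receiver each choose random measurement bases, and only positions where the bases happen to agree contribute key bits. The toy simulation below omits eavesdropping, noise, and error reconciliation, so it illustrates only why roughly half the transmitted bits survive sifting.

    # Toy BB84 sifting simulation; "+" and "x" stand for the two polarization
    # bases. No eavesdropper or channel noise is modeled.
    import random

    n = 32
    sender_bits = [random.randint(0, 1) for _ in range(n)]
    sender_bases = [random.choice("+x") for _ in range(n)]
    recv_bases = [random.choice("+x") for _ in range(n)]

    # With matching bases the receiver recovers the sent bit; with mismatched
    # bases the outcome is random, and the position is discarded in sifting.
    measured = [bit if sb == rb else random.randint(0, 1)
                for bit, sb, rb in zip(sender_bits, sender_bases, recv_bases)]

    sifted = [m for m, sb, rb in zip(measured, sender_bases, recv_bases)
              if sb == rb]
    print("sifted key bits:", sifted)   # about half the positions survive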

Rapid progress has led to products capable of key distribution through many kilometers of fiber-optic cable. Additional products include quantum random number generators, single photon detectors, and photon sources. However, vulnerabilities may be introduced in the physical systems, quantum protocols, application software, and operating systems used to process keys. Existing QKD systems are not able to guarantee the production and receipt of a single photon per time slice, as required by most quantum protocols. Multiple photons emitted in a single time slice may allow an attacker to obtain information on the shared key.

Capability Gaps
Existing quantum cryptographic protocols may also have weaknesses. Although BB84 is generally regarded as secure, researchers frequently introduce new protocols that differ radically from the BB84 scheme, and a number of these protocols are vulnerable to attack. Quantum cryptographic equipment must be integrated with an organization’s network, potentially leaving the QKD system and its software open to conventional network attacks. Methods for evaluating and certifying QKD systems have not yet been incorporated into existing security evaluation capabilities.

8. SOCIAL DIMENSIONS OF CYBER SECURITY AND INFORMATION ASSURANCE

Topics in this R&D category address the impacts of cyber security on people, organizations, and society, and the implications of cyber security for law, policy, and social systems. Topics in this category are:

❖ Trust in the Internet
❖ Privacy

8.1 Trust in the Internet

Definition
While the focus of CSIA R&D is necessarily on technical advances that improve cyber security and information assurance, such technical activities take place within a broader cultural context: the overall level of public confidence, or trust, in the Internet and the varied transactions and processes it makes possible. Public trust in the Internet can be defined as the degree to which individuals and organizations feel comfortable that their sensitive information will be handled securely, their privacy will be maintained, and their systems will be free from intrusion in any interactions and transactions over the Internet.

Importance
In the years ahead, economic innovation – including development of novel applications exploiting high-bandwidth connectivity – will depend heavily on a steady strengthening of the public’s trust in online transactions. Current data indicate, however, that as Internet use increases, so do the levels of electronic crime and malicious attacks that users experience. Consumers worry about identity theft, theft of credit-card information, and other fraudulent activities (e.g., through phishing, spyware, keylogging). Studies have found tens of millions of different spyware products in use, and the number of phishing attacks has been known to grow by double-digit rates from one month to the next.

State of the Art

Media accounts of cyber misbehavior and crime have had a positive impact, in that they have raised public awareness of security issues and useful protective measures and have spurred vendors of software and Internet services to improve their products. Practices to build public confidence now include password-protected Web sites, anti-virus software, firewalls, trustmarks (seals of approval from trusted parties posted on sites and products), certifications, digital signatures, and mechanisms for customer service and complaints. In addition, government, academic, and private-sector organizations have begun to more systematically collect and analyze data about Internet crime and transgressions of trust. This will clarify global trends and enable more informed decision making by regulators, hardware and software developers and vendors, and consumers.

Capability Gaps

Improved user awareness and sustained enforcement of cyber crime and IT-related consumer protection laws help maintain trust in the Internet. However, the development of new technologies can also contribute to maintaining this trust, for example by deterring or preventing online activities that erode it.

The financial services community has a long history of developing fraud-detection technologies based on pattern recognition and classification of purchasing behavior. Online fraud detection is a less mature area that requires additional R&D. Better detection of technologies used by criminals, such as keystroke loggers used to steal credit card, banking, or other sensitive information, is also needed.
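A minimal version of such behavior-based classification flags transactions that deviate sharply from a customer's established spending pattern. The Python sketch below is a toy illustration of the idea, not a production technique; the data, threshold, and names are hypothetical, and fielded systems combine many behavioral features with trained classifiers.

from statistics import mean, stdev

def flag_suspicious(history, incoming, threshold=3.0):
    # Flag any amount more than `threshold` standard deviations
    # from the customer's historical mean spend.
    mu, sigma = mean(history), stdev(history)
    return [amt for amt in incoming
            if sigma > 0 and abs(amt - mu) / sigma > threshold]

past_purchases = [22.50, 41.10, 18.75, 35.00, 27.30, 44.80, 31.25]
new_charges = [29.99, 1250.00, 38.40]
print(flag_suspicious(past_purchases, new_charges))   # [1250.0]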

Much malicious activity has its roots in social engineering, such as a Web site designed to trick individuals into installing a malicious piece of software on their computers or an e-mail designed to trick recipients into divulging sensitive or personal information. Better technologies for detecting social engineering attacks can help reduce the exposure of users to various classes of attacks that ultimately compromise trust in the Internet.

In addition, public- and private-sector studies and analyses of malicious and criminal cyber techniques, trends, and costs should be improved and refined to provide better guidance for decision making by policy makers and consumers alike.
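As one illustration of the detection technologies discussed above, many anti-phishing tools score a hyperlink for traits characteristic of socially engineered lures, such as a raw IP address in place of a host name or a trusted brand buried in an unrelated domain. The sketch below is a deliberately simplified heuristic scorer; the trait list and example URLs are illustrative assumptions, not a production filter.

import re
from urllib.parse import urlparse

TRAITS = [
    ("raw IP address as host",
     lambda host, url: re.fullmatch(r"[\d.]+", host) is not None),
    ("'@' obscures the true destination",
     lambda host, url: "@" in url),
    ("unusually deep subdomain nesting",
     lambda host, url: host.count(".") >= 4),
    ("brand name outside its real domain",
     lambda host, url: any(brand in host and not host.endswith(brand + ".com")
                           for brand in ("paypal", "ebay"))),
]

def lure_score(url):
    # Count how many suspicious traits the link exhibits.
    host = urlparse(url).hostname or ""
    hits = [name for name, test in TRAITS if test(host, url)]
    return len(hits), hits

print(lure_score("http://paypal.example.badhost.com/login"))
print(lure_score("http://192.0.2.7/account@verify"))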

8.2 Privacy

Definition

Privacy can be defined as freedom from surveillance, intrusion, and unauthorized access to information. In the context of the IT infrastructure, privacy can be defined as the expectation that personal and business information, communications, transactions, or other computer-mediated activities will not be viewed, intercepted, recorded, manipulated, or used without one's knowledge or permission, except in certain well-understood circumstances (e.g., information legitimately subpoenaed for law enforcement purposes).

Tensions can arise between expectations that privacy will be preserved and the use of authentication and the need for monitoring and analysis of information and communications to enhance cyber security. Effective cyber security must not only assure information and communication privacy but also, more broadly, protect the physical functionality of IT systems to process, store, and make available information with assured confidentiality and integrity. Good information management and system design should support both privacy and security; indeed, true protection of privacy is not possible without good security.

Most cyber security involves the use of technology to protect data or systems from improper access or alteration. Privacy issues include data access and accuracy but can extend further, to such questions as how data are used and whether citizens are informed about the collection and use of their personal data, as well as about their ability to correct inaccurate data. These are often policy issues that must be addressed using mechanisms that may or may not include technology.

Importance

Addressing privacy issues is critical to ensuring public trust in the integrity of the IT infrastructure. The Administration stated in its 2003 National Strategy to Secure Cyberspace that protecting privacy and civil liberties is a guiding principle of cyber security and that cyber security programs must strengthen, not weaken, such protections. The report stated that the Federal government should lead by example in implementing strong privacy policies and practices in cyberspace.

State of the Art

The scope of security in cyberspace includes the internal networks of an organization and its connections with the public, the private sector, and government entities. Each sector is subject to its own privacy laws, regulations, and/or industry practices. Given this interconnected environment, cyber security research should incorporate privacy principles that underscore the requirements and practices of both the public and private sectors. Domain-specific examples of lawmaking in the privacy arena include the Health Insurance Portability and Accountability Act, which governs use of personal medical information by third parties; the Federal Trade Commission (FTC) Act, which gives the FTC power to enforce companies' privacy promises about how they collect, use, and secure consumers' personal information; and the Financial Modernization Act of 1999 (also known as the Gramm-Leach-Bliley Act), which requires the administrative, technical, and physical safeguarding of personal information by businesses.

Although this is an evolving area of law, regulation, and institutional policy, it is clear that cyber security technologies can affect privacy. New technologies resulting from R&D have the potential to continue raising privacy issues requiring resolution; consequently, these issues should be considered as part of developing cyber security technologies.

Capability Gaps

Fully integrating privacy in cyber security R&D will entail development of principles and methods in the following areas:

Integrating privacy in IT system life cycles: Including privacy practices and processes at the earliest stages of R&D helps ensure the efficient deployment of systems that require privacy protections. Privacy risks that are identified at early stages of development are more easily mitigated, with reduced impact on the development effort. Resources and tools can include privacy-impact assessments and privacy audits, which together can establish objectives and evaluate privacy throughout the life cycle of the system and its data.

Privacy principles: The private sector, government, and citizens are each subject to privacy laws, regulations, and/or practices. Cyber security R&D should enable new technologies and their implementation to be consistent with privacy laws and widely accepted privacy principles. Examples include principles for assuring data quality and integrity; limits on data collection, use, disclosure, and retention; openness and accountability; and citizen participation and impact through notifications, accessibility, and avoiding or redressing harm from inaccurate data.

Privacy environments: The technical environments for privacy in cyber security are of two primary types: 1) the maintenance environment, which involves system architecture and the storage and protection of data; and 2) the transaction environment, which concerns how data are shared and exchanged within and across organizations. Cyber security and information assurance R&D should address privacy issues raised by both types of environments.

The maintenance environment provides for the collection and storage of information. Privacy issues concern the information itself – its scope, its accuracy, and how it is used – as well as how it is managed, such as in policies for storage, retention periods, and disposal. Privacy issues in data collection involve data sources, the type and quantity of data stored, and notification. Research questions include how to limit the scope of personal information needed and avoid collection of unnecessary data; methods to improve data accuracy; impacts of inaccurate data; concerns about data sources; and methods for providing notifications when information is collected. In storage, issues include methods for preventing privacy breaches resulting from events that range from lost or stolen computers to deliberate penetration of enterprise systems and data theft. Retention and disposal issues include how to ensure that data are not retained longer than needed, and that data are properly destroyed from all media at the end of the retention period.
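The retention question lends itself to simple automation: compare each record's age against the retention period assigned to its data category and queue overdue records for destruction. The sketch below is a minimal illustration under assumed categories and periods; genuine disposal must also reach backups and other media, as the text notes.

from datetime import date, timedelta

RETENTION = {                       # hypothetical retention policy
    "transaction_log": timedelta(days=365),
    "support_email": timedelta(days=180),
}

def overdue_records(records, today):
    # Return the IDs of records held past their category's limit.
    return [r["id"] for r in records
            if today - r["collected"] > RETENTION[r["category"]]]

records = [
    {"id": "r1", "category": "transaction_log", "collected": date(2004, 1, 15)},
    {"id": "r2", "category": "support_email", "collected": date(2005, 11, 1)},
]
print(overdue_records(records, today=date(2006, 4, 1)))   # ['r1']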

The transaction environment provides for information sharing and requires access controls, authentication, and technologies for sharing. Privacy issues relating to access controls include defining the categories of authorized users and the access rights and permissions appropriate for each category. Privacy issues associated with authentication methods include invasiveness, types of data required (e.g., collection and use of biometrics), and whether or how personal information can be linked across data stores or organizations. On the topic of forensics, one question is how IP addresses, audit trails, or other techniques can be used to ascertain whether transactions violate privacy policies or laws.
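Defining categories of authorized users and the rights appropriate to each maps naturally onto role-based access control (RBAC). The sketch below shows the core check in its simplest form; the roles and permissions are hypothetical placeholders.

ROLE_PERMISSIONS = {    # each user category gets only the rights appropriate to it
    "clerk": {"read_record"},
    "auditor": {"read_record", "read_audit_trail"},
    "admin": {"read_record", "write_record", "read_audit_trail"},
}

USER_ROLES = {"alice": "auditor", "bob": "clerk"}

def is_authorized(user, permission):
    # Permit an action only if the user's role grants it.
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_authorized("alice", "read_audit_trail")
assert not is_authorized("bob", "write_record")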

Data sharing issues include how to establish and evaluate technical and non-technical solutions to ensure that: 1) sharing practices are aligned with enterprise policies, such as any confidentiality promises made in privacy policies; 2) data will be shared only with approved parties; and 3) data shared with suppliers are subject to appropriate protections.

As information technologies and cyber security technologies evolve, a better understanding is also needed of how the privacy features of these technologies will be assessed and what metrics will be used to measure their effectiveness.


Appendices

APPENDIX A

CSIA IWG Agency Roles and Responsibilities

This appendix provides brief descriptions of the missions of the Federal agencies that participate in the Cyber Security and Information Assurance Interagency Working Group (CSIA IWG), as well as summaries of their CSIA R&D activities and interests.

Department of Commerce (DOC) and National Institute of Standards and Technology (NIST)

Building upon the Computer Security Act of 1987 (P.L. 100-35), the Paperwork Reduction Act of 1995 (P.L. 104-13), and the Information Technology Management Reform Act of 1996 (i.e., the Clinger-Cohen Act, P.L. 104-106, Division E), the Federal Information Security Management Act of 2002 (FISMA) (P.L. 107-347, Title III) provides the basic statutory requirements for securing Federal computer systems. FISMA requires each agency to inventory its major computer systems, identify and provide appropriate security protections, and develop, document, and implement an agency-wide information security program.

FISMA, the Cyber Security Research and Development Act of 2002 (P.L. 107-305), and OMB Circular A-130 authorize NIST to conduct research and develop standards and guidelines for use by Federal agencies in securing non-national security systems. NIST carries out this mission principally through technology transfer initiatives and the issuance of NIST Special Publications and Federal Information Processing Standards (FIPSs). NIST conducts research to determine the nature and extent of information security vulnerabilities, to develop techniques for providing cost-effective information security solutions, and to support its standards and guideline programs.

FISMA authorizes the Secretary of Commerce to choose which of these standards and guidelines to promulgate, and it authorizes the director of OMB to oversee the development and implementation of these security policies, principles, standards, and guidelines. FISMA authorizes the OMB director to: require agencies to follow the standards and guidelines developed by NIST and prescribed by the Secretary of Commerce; review agency security programs annually and approve or disapprove them; and take actions authorized by the Clinger-Cohen Act (including budgetary actions) to ensure compliance. These roles and responsibilities assigned to NIST and the Secretary of Commerce do not extend to computer systems identified as national security systems. The director of OMB has the authority to take budgetary actions and report to Congress on similar matters related to national security systems.

In addition to supporting Federal activities under FISMA, the security standards, guidelines, and research results developed by NIST are also frequently used by U.S. and global industries and foreign governments as sources for IT system security policies and methods.

Department of Defense (DoD)

DoD is concerned with protecting the security of all aspects of the IT infrastructure that affect critical military infrastructures, including private-sector infrastructures on which the warfighter relies. The assured exchange of information and communications is a crucial component of military activities supporting the primary mission of DoD.

As outlined in Joint Vision 2020 and the Defense Technology Area Plan, U.S. forces depend on interrelated capabilities that combine command, control, communications, and computers with intelligence, surveillance, and reconnaissance. All of these capabilities must be supported by an underlying foundation of information assurance to facilitate “decision superiority” on the battlefield. Successful application of these capabilities will enable full-spectrum dominance for U.S. forces in the future.

The DoD Science & Technology (S&T) program advances the S&T base for protecting critical defense infrastructures and develops tools and solutions to eliminate any significant vulnerability to cyber attacks. The program includes thrusts in the areas of analysis and assessment, mission assurance, indications and warning, threats and vulnerabilities, remediation, mitigation, response, and reconstitution. The program focuses on DoD requirements for protection that go well beyond what the private sector requires and commercial technologies provide.

The Director for Defense Research and Engineering (DDR&E) is responsible for DoD S&T. The DDR&E is also the Chief Technology Officer for the Secretary of Defense and the Under Secretary of Defense for Acquisition, Technology, and Logistics, responsible for scientific and technical matters and technology readiness of major acquisition programs. The three technical offices within the Office of the DDR&E are: the Office of the Deputy Under Secretary of Defense (DUSD) for Laboratories and Basic Sciences (LABS), the DUSD for Science and Technology (S&T), and the DUSD for Advanced Systems and Concepts (AS&C). In addition, DDR&E oversees the Defense Advanced Research Projects Agency (DARPA).

DDR&E oversees the Technology Area Review and Assessment process, which is DoD's mechanism to coordinate and review S&T programs throughout the department. Within DDR&E, DUSD (S&T) is responsible for policy, programmatic, financial, and strategic oversight of the department's applied research and advanced technology development. DUSD (LABS) is responsible for basic research and DoD laboratory management issues, and DUSD (AS&C) is responsible for technology demonstrations and transition programs such as the Advanced Concept Technology Demonstrations.

DoD strives for a balanced R&D program across basic and applied research and advanced development in academia, DoD laboratories, and industry. DoD's internal cyber security and information assurance research programs are concentrated at the National Security Agency (NSA), Army Research Laboratory, Naval Research Laboratory, Air Force Research Laboratory, and the Army's Communications and Electronics Research, Development, and Engineering Command. DARPA funds targeted research projects with three- to five-year lifetimes. The Army Research Office, Office of Naval Research, and Air Force Office of Scientific Research fund most of the DoD-sponsored university research. DoD laboratory and sponsored industrial R&D emphasize advanced defensive technologies that DoD requires but are not available in commercial systems.

DDR&E also collaborates and coordinates with the Office of the Assistant Secretary of Defense (Networks and Information Integration) Information Assurance Directorate, which is responsible for policy, oversight, and acquisition in operational information assurance to ensure linkage between operational needs and long-term S&T investments.

Department of Energy (DOE)

DOE is principally a national security agency, and all of its missions flow from this core mission. The department provides the scientific foundations, technologies, policies, and institutional leadership necessary to achieve efficiency in energy use, diversity in energy sources, a more productive and competitive economy, improved environmental quality, and a secure national defense.

DOE also works to ensure energy security, maintain the safety, security, and reliability of the nuclear weapons stockpile, clean up the environment from the legacy of the Cold War, and develop innovations in science and technology. Science and technology are the department's principal tools in the pursuit of its national security mission.

DOE’s research activities complement and are closelycoordinated with the activities of other Federalagencies, including DARPA, EPA, NASA, NIH,

Page 127: About the National Science and Technology Council - NITRD

111

NSA, and NSF. The department also promotes thetransfer of the results of its basic research to a broadset of application fields such as advanced materials,national defense, medicine, space science andexploration, and industrial processes. The departmenthas taken a deliberate and integrated approach to itsR&D portfolio, using the strengths of all programs toaddress its central mission. For example,environmental security and economic securityunderpin national security, and each is sustained byscience. Within the department, the Office of Sciencemanages fundamental research programs in basicenergy sciences, biological and environmentalsciences, and computational science.

The DOE Office of Science is the single largest supporter of basic research in the physical sciences in the United States. It oversees and is the principal Federal funding agency for the Nation's research programs in high-energy physics, nuclear physics, and fusion energy sciences. In addition, the Office of Science manages fundamental research programs in basic energy sciences, biological and environmental research, and advanced scientific computing research. The Office of Science also promotes workforce development by sponsoring programs that support the scientific advancement of students and educators.

Department of Homeland Security (DHS)

DHS is leading the Federal government's unified effort to secure the United States homeland. The department's organizations are focused on a variety of missions associated with preventing and deterring terrorist attacks and protecting against and responding to threats and hazards to the Nation. The responsibilities of the department's Science and Technology (S&T) Directorate include identifying priorities for, establishing, conducting, and coordinating basic and applied research, development, testing, and evaluation (RDT&E) activities that are relevant to all areas of the department's mission.

DHS’s cyber security R&D portfolio engages in cybersecurity RDT&E endeavors aimed at securing theNation’s critical infrastructures through coordinatedefforts to improve the security of today’s IT

infrastructure and to provide a foundation for a moresecure next-generation IT infrastructure. In thiscontext, the IT infrastructure is considered to includethe infrastructure that underlies Internetcommunications as well as the IT components thatare essential to the operations of the Nation’s criticalinfrastructure sectors.

Cyber security RDT&E activities funded by DHS S&T are carried out by a variety of organizations, including the private sector, universities, and national laboratories. The strategic approach taken by DHS cyber security portfolio and program managers emphasizes public-private partnerships and other forms of collaboration. The goal of this approach is to encourage widespread use of more secure IT systems and components through technology transfer and diffusion of Federally funded R&D into commercial products and services.

Department of Justice (DOJ)DOJ’s mission is “to enforce the law and defend theinterests of the United States according to the law; toensure public safety against threats foreign anddomestic; to provide Federal leadership in preventingand controlling crime; to seek just punishment forthose guilty of unlawful behavior; and to ensure fairand impartial administration of justice for allAmericans.”

To execute this mission, the department fields more than 110,000 employees in 39 separate component organizations led by the U.S. Attorney General. These include the U.S. Attorneys, who prosecute offenders and represent the U.S. government in court; investigative agencies – the Federal Bureau of Investigation, the Drug Enforcement Administration, and the Bureau of Alcohol, Tobacco, Firearms, and Explosives – that deter and investigate crimes and arrest criminal suspects; the U.S. Marshals Service, which protects the Federal judiciary, apprehends fugitives, and detains persons in Federal custody; and the Bureau of Prisons, which confines convicted offenders.

Litigating divisions represent the interests of the American people and enforce Federal criminal and civil laws, including civil rights, tax, antitrust, environmental, and civil justice statutes. The Office of Justice Programs and the Office of Community-Oriented Policing Services provide leadership and assistance to state, tribal, and local governments. Other major departmental components include the National Drug Intelligence Center, the United States Trustees, the Justice Management Division, the Executive Office for Immigration Review, the Community Relations Service, the Executive Office of the Attorney General, and the Office of the Inspector General. Headquartered in Washington, D.C., the department conducts most of its work in offices located throughout the country and overseas.

Department of State

The Department of State promotes international scientific and technical exchanges in the service of U.S. ideals and interests. Such international collaboration accelerates the progress of advanced research and creates global communities of like-minded researchers. International cooperation in cyber security research is especially critical, due to the global nature of networks and the ability of cyber attacks to move rapidly across borders.

While the department does not perform or sponsor its own research activities in science and technology, it works with other Federal agencies to facilitate international research collaborations and to coordinate the exchange of information about research activities with other nations. Cooperation often takes place under the auspices of umbrella agreements for research cooperation with other nations, negotiated bilaterally by the department. Working with other agencies, the State Department coordinates meetings and exchanges between U.S. and foreign government officials overseeing R&D and seeks to develop self-sustaining dialogues among researchers.

The goals of State Department efforts in cyber security are: 1) to foster international collaboration and technical exchange in cyber security R&D; 2) to encourage standardization and the adoption of best practices around the globe; and 3) to enhance national security by facilitating international communication about cyber threats.

The State Department has coordinated exchanges between U.S. cyber security and information assurance research officials and their counterparts in other nations and has facilitated the development of international collaborations in cyber security R&D. Some of these have been stand-alone initiatives, such as the U.S.-India Cyber Security Experts Group. Others have been components of larger collaborative R&D programs for critical infrastructure protection, such as the bilateral programs with Canada, the United Kingdom, and Japan. Federal agency participants have included NIST, NSF, DoD, DHS, and DOE. Recent programs have served to strengthen collaborative cyber security research relationships between the U.S. and Japan, India, Canada, the European Union, and several individual European countries. These activities have taken place in both bilateral and multilateral settings.

The State Department participates as an observer on the INFOSEC Research Council and provides oversight for the Technical Support Working Group.

Department of Transportation (DOT) and Federal Aviation Administration (FAA)

The mission of the FAA is to provide the safest, most efficient aerospace system in the world. In securing the national airspace system, the FAA supports DHS programs in emergency preparedness, crisis management, and continuity-of-government planning.

The FAA is a member of the Joint Planning and Development Office (JPDO), which is chartered by Congress to develop a vision of the aviation system in the year 2025 and a Next Generation Air Transportation System (NGATS) Implementation Plan. The JPDO includes DHS, DOC, DoD, DOT, NASA, and OSTP. Close partnerships with other Federal agencies on integration of security technologies and management of over-flight programs help ensure continuous operation of the national airspace system.

FAA cyber security and information assurance research activities seek to maximize budget effectiveness and leverage developments by other agencies. FAA's unique requirements are based on identification of security measures that provide for the safety and security of the FAA workforce, facilities, and critical infrastructure. Cyber-defense concept modeling plays a significant role in improving the security of FAA's information infrastructure. The agency's cyber security goal is mission survivability by achieving zero cyber events that disable or significantly degrade FAA services. The Director of Information Technology Research and Development (Chief Technology Officer) is responsible for developing, managing, and executing FAA's IT and information systems security R&D programs.

FAA cyber security and information assurance research includes such topics as:

❖ Adaptive quarantine methods to better isolate network damage

❖ Information detection techniques to help officials anticipate and thwart the plans of potential terrorists

❖ Topological vulnerability analysis to determine the topology of dependencies among network vulnerabilities and analyze possible attacks against the network

❖ Development of a time-phased (2005 to 2015) National Airspace System (NAS) Information Systems Security Architecture (ISSA)

❖ R&D to extend the software-costing model, COnstructive COst MOdel II (COCOMO II), to include security in the full FAA acquisition/operation cost for secure systems (illustrated in the sketch below)
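COCOMO II estimates effort as a power law in software size, scaled by a product of effort multipliers, so a security extension of the kind described above can be expressed as one additional multiplier. The sketch below uses the published COCOMO II.2000 nominal constants (A = 2.94, B = 0.91); the scale-factor values and the SECU security multiplier are illustrative assumptions, not FAA calibrations.

from math import prod

def cocomo_ii_effort(ksloc, scale_factors, effort_multipliers, a=2.94, b=0.91):
    # Person-months: PM = A * Size^E * product(EM),
    # where E = B + 0.01 * (sum of the five scale factors).
    exponent = b + 0.01 * sum(scale_factors)
    return a * ksloc ** exponent * prod(effort_multipliers)

nominal_sf = [3.72] * 5             # assumed mid-range scale factors
baseline = cocomo_ii_effort(100, nominal_sf, [1.0])
secured = cocomo_ii_effort(100, nominal_sf, [1.3])   # SECU = 1.3 assumed
print(f"100 KSLOC: {baseline:.0f} PM baseline, {secured:.0f} PM with security")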

Department of the Treasury

Though the Department of the Treasury does not have an R&D budget, through outreach to financial-sector chief information and chief technology officers the department has documented the sector's cyber security R&D requirements and identified near-, medium-, and long-term R&D projects to meet those requirements. In addition, the Federal Reserve has agreed to participate in a pilot project to develop a system for tracking the physical diversity of telecommunications circuits.

The suggested projects in Treasury's R&D agenda come from many sources and cover many areas. These projects are designed to identify security approaches that are easily transferable to the private sector and to increase the overall resiliency of the sector. The projects are also designed to encompass: 1) all facets of the critical infrastructure protection life cycle; 2) a wide variety of technology and business practice areas; 3) short- to long-term development; and 4) low- to high-risk research. In addition, Department personnel are continuing their discussions with experts and organizations both inside and outside the financial services sector to identify R&D projects that will help make the sector more resilient against external and internal cyber threats.

Intelligence Community

The Intelligence Community provides assessments, including national intelligence estimates, of foreign threats to the U.S. IT infrastructure. These assessments consider capabilities and intent both currently and looking forward five years and beyond.

National Aeronautics and Space Administration (NASA)

NASA is at the forefront of exploration and discovery as the world's preeminent organization for space and aeronautics R&D. NASA is currently undergoing a transformation to align its core competencies with its space exploration vision.

Given NASA’s focus on its core competencies, theagency’s cyber security and information security R&Dportfolio is limited. Currently, NASA is participatingin a multi-agency R&D project that seeks to securedata transmission and IT systems used by theNational Airspace System through securing onboardnetworks, protecting air/ground data links, andproviding improved situational awareness of theonboard environment.

NASA’s other areas of interest germane to cybersecurity and information assurance include themaintenance of availability for satellite and spacecrafttransmissions, protection of NASA’s space-based

Page 130: About the National Science and Technology Council - NITRD

114

assets, and security requirements for post-IP-basedspace communications.

National Institutes of Health (NIH)

The National Institutes of Health (NIH), a part of the U.S. Department of Health and Human Services, is the primary Federal agency that conducts and supports medical research. Helping to lead the way toward medical discoveries that improve people's health and save lives, NIH scientists investigate ways to prevent disease and to identify the causes, treatments, and even cures for common and rare diseases.

NIH’s work in cyber security and informationassurance R&D is aimed at supporting the mission ofthe Institutes, with an emphasis on continuing thedevelopment of the security infrastructure to supportdistributed multi-organization federated data andcomputation, including fine-grained access control forbiomedical information.

National Science Foundation (NSF)

NSF is an independent agency established to promote the progress of science through investment in research and education. The Federal government's only agency dedicated to the support of education and fundamental research in all scientific and engineering disciplines, NSF serves as the primary source of Federal investment in long-range, innovative research. A central goal of this investment is to ensure that the United States maintains leadership in scientific discovery and the development of new technologies. In addition, NSF's mission explicitly includes support for research to advance the national health, prosperity, and welfare, and to secure the national defense.

NSF primarily supports peer-reviewed, long-range, innovative research conducted by academic institutions and not-for-profit research laboratories. This is implemented through a wide range of investments, from grants to individual investigators, to multi-university teams, to large centers. The majority of NSF funding takes the form of research grants and cooperative agreements. When appropriate, NSF cooperates with other Federal and international agencies to enable co-funding of promising peer-reviewed research.

All aspects of IT infrastructure are included in the NSF research portfolio. NSF's cyber security R&D is managed in the Computer and Information Science and Engineering (CISE) Directorate. NSF's Cyber Trust initiative provides the central focus for cyber security research investment. Research related to other aspects of the IT infrastructure is also centered in CISE. This includes research to advance: networking and Internet technologies, communications, computer systems, operating systems and middleware, databases and information management, distributed systems, and embedded sensing and control. The NSF Engineering Directorate supports research in several related areas, including sensor networks and infrastructure risk analysis. In areas of common concern, CISE investment in IT research is closely coordinated with the other NSF science and engineering directorates.

Technical Support Working Group (TSWG)

The TSWG is a multi-agency organization that identifies, prioritizes, and coordinates interagency and international R&D requirements for combating terrorism. Overseen by the Department of State and including representatives from agencies across the Federal government, the TSWG rapidly develops technologies and equipment to meet the high-priority needs of communities engaged in combating terrorism, and addresses joint international operational requirements through cooperative R&D with major allies.

Since 1986, the TSWG has pursued technologies to combat terrorism in the broad context of national security by providing a cohesive interagency forum to define user-based technical requirements spanning Federal agencies. By enlisting the participation of U.S. and foreign industry, academic institutions, and Federal and private laboratories, the TSWG ensures a robust forum for developing technical solutions to the most pressing counterterrorism requirements. Participants in the 10 functional subgroup areas of the TSWG can come to a single table to articulate specific threats and user-defined approaches to the rapid prototyping and development of counterterrorism devices, training tools, reference materials, software, and other equipment.

The TSWG’s program development efforts seek tobalance investments across the four main capabilitiesneeded for combating terrorism:

❖ Antiterrorism – Defensive measures to reduce vulnerability to terrorist acts

❖ Counterterrorism – Offensive measures to prevent, deter, and respond to terrorism

❖ Intelligence support – Collection and dissemination of terrorism-related information regarding the entire spectrum of terrorist threats, including the use of chemical, biological, radiological, and nuclear materials or high-yield explosive devices

❖ Consequence management – Preparation for, and response to, the consequences of a terrorist event

APPENDIX B

The Networking and Information Technology Research and Development Program

The Networking and Information Technology Research and Development (NITRD) Program is authorized by Congress under the High-Performance Computing (HPC) Act of 1991 (P.L. 102-194) and the Next Generation Internet Research Act of 1998 (P.L. 105-305). The goals of the Program are to:

❖ Provide research and development foundations for assuring continued U.S. technological leadership in advanced networking, computing systems, software, and associated information technologies

❖ Provide research and development foundations for meeting the needs of the Federal government for advanced networking, computing systems, software, and associated information technologies

❖ Accelerate development and deployment of these technologies in order to maintain world leadership in science and engineering; enhance national defense and national and homeland security; improve U.S. productivity and competitiveness and promote long-term economic growth; improve the health of the U.S. citizenry; protect the environment; improve education, training, and lifelong learning; and improve the quality of life

Program Structure

The Cabinet-level National Science and Technology Council (NSTC) is the principal means by which the President coordinates the diverse science and technology programs across the Federal government. The Director of the Office of Science and Technology Policy (OSTP) manages the NSTC for the President. The NITRD Subcommittee, which reports to the NSTC Committee on Technology, coordinates planning, budgeting, and assessment for the NITRD Program. The Subcommittee is composed of representatives from 13 Federal agencies that participate in the formal NITRD budget crosscut, OSTP, the Office of Management and Budget, and the NITRD National Coordination Office. (In the NITRD Program, the term “agency” may also refer to a department, a major departmental subdivision, or a research office, institute, or laboratory.) The member agencies are:

AHRQ – Agency for Healthcare Research and Quality
DARPA – Defense Advanced Research Projects Agency
DHS – Department of Homeland Security
DOE/NNSA – Department of Energy/National Nuclear Security Administration
DOE/SC – Department of Energy/Office of Science
EPA – Environmental Protection Agency
NASA – National Aeronautics and Space Administration
NIH – National Institutes of Health
NIST – National Institute of Standards and Technology
NOAA – National Oceanic and Atmospheric Administration
NSF – National Science Foundation
NSA – National Security Agency
OSD and Service research organizations – Office of the Secretary of Defense and DoD Air Force, Army, and Navy research organizations

In addition to the CSIA agencies described in Appendix A, other agencies that participate in NITRD Program activities include:

FAA – Federal Aviation Administration
FDA – Food and Drug Administration
GSA – General Services Administration
NARA – National Archives and Records Administration


The HPC Act of 1991 authorizes the establishment of an advisory committee to provide guidance to the President on the Federal role in networking and information technology R&D to assure U.S. scientific leadership and competitiveness. Presidential Executive Order 13226 (September 30, 2001), as amended in 2005, assigns these responsibilities to the President's Council of Advisors on Science and Technology (PCAST).

The NITRD Program’s broad impact derives in partfrom its diversified and multidisciplinary researchstrategy, which funds fundamental scientificinvestigations across Federal laboratories and centers,research universities, nonprofit organizations, andpartnerships with industry. The NITRD Program is aleading source of not only fundamental technologicalbreakthroughs but also highly skilled human resourcesin the advanced computing, networking, software,and information management technologies thatunderpin U.S. infrastructures and quality of life.

NITRD Program Component Areas, Interagency Working Groups, and Coordinating Groups

The NITRD Program is organized into eight Program Component Areas (PCAs). The work of each PCA is guided by either an Interagency Working Group (IWG) or a Coordinating Group (CG) of agency program managers. The collaboration fostered in the IWGs and CGs results in more effective use of funding resources by leveraging agency strengths, avoiding duplication, and generating interoperable results that maximize the benefits of Federal networking and IT R&D investments to both agency missions and private-sector innovation. These groups, which report to the Subcommittee, meet monthly to coordinate planning and activities in their specialized R&D areas.

The NITRD PCAs evolve in response to changing research needs. In August 2005, the new Cyber Security and Information Assurance (CSIA) PCA was established when the NSTC's Critical Information Infrastructure Protection (CIIP) IWG was renamed and rechartered to report jointly to the NSTC's Subcommittee on Infrastructure and the NITRD Subcommittee. These steps were taken to facilitate better integration of CSIA R&D with NITRD activities, reflecting the broader impact of cyber security and information assurance beyond critical information infrastructure protection. The NITRD PCAs are:

High-End Computing Infrastructure and Applications (HEC I&A)

HEC I&A agencies coordinate Federal activities to provide advanced computing systems, applications software, data management, and HEC R&D infrastructure to meet agency mission needs and to keep the United States at the forefront of 21st century science, engineering, and technology. HEC capabilities enable researchers in academia, Federal laboratories, and industry to model and simulate complex processes in biology, chemistry, climate and weather, environmental sciences, materials science, nanoscale science and technology, physics, and other areas to address Federal agency mission needs.

High-End Computing Research and Development (HEC R&D)

HEC R&D agencies conduct and coordinate hardware and software R&D to enable the effective use of high-end systems to meet Federal agency mission needs, to address many of society's most challenging problems, and to strengthen the Nation's leadership in science, engineering, and technology. Research areas of interest include hardware (e.g., microarchitecture, memory subsystems, interconnect, packaging, I/O, and storage), software (e.g., operating systems, languages and compilers, development environments, algorithms), and systems technology (e.g., system architecture, programming models).

The HEC Interagency Working Group (IWG) coordinates the activities of both the HEC I&A and the HEC R&D PCAs.

Cyber Security and Information Assurance (CSIA)

CSIA focuses on research and advanced development to prevent, resist, detect, respond to, and/or recover from actions that compromise or threaten to compromise the availability, integrity, or confidentiality of computer-based systems. These systems provide both the basic infrastructure and advanced communications in every sector of the economy, including critical infrastructures such as power grids, emergency communications systems, financial systems, and air-traffic-control networks. These systems also support national defense, national and homeland security, and other vital Federal missions, and themselves constitute critical elements of the IT infrastructure. Broad areas of concern include Internet and network security; confidentiality, availability, and integrity of information and computer-based systems; new approaches to achieving hardware and software security; testing and assessment of computer-based systems security; and reconstitution and recovery of computer-based systems and data.

The CSIA Interagency Working Group coordinates the activities of the CSIA PCA.

Human Computer Interaction and Information Management (HCI&IM)

HCI&IM R&D aims to increase the benefit of computer technologies to humans, particularly the science and engineering R&D community. To that end, HCI&IM R&D invests in technologies for mapping human knowledge into computing systems, communications networks, and information systems and back to human beings, for human analysis, understanding, and use. R&D areas include: cognitive systems, data analysis in fields such as human health and the environment, information integration, multimodal and automated language translation, robotics, and user interaction technologies.

The HCI&IM Coordinating Group coordinates the activities of the HCI&IM PCA.

Large Scale Networking (LSN)

LSN members coordinate Federal agency networking R&D in leading-edge networking technologies, services, and enhanced performance, including programs in new architectures, optical network testbeds, security, infrastructure, middleware, end-to-end performance measurement, and advanced network components; grid and collaboration networking tools and services; and engineering, management, and use of large-scale networks for scientific and applications R&D. The results of this coordinated R&D, once deployed, can assure that the next generation of the Internet will be scalable, trustworthy, and flexible.

The LSN Coordinating Group coordinates the activities of the LSN PCA.

Three teams report to the LSN Coordinating Group:

The Joint Engineering Team (JET) coordinates the network architecture, connectivity, exchange points, and cooperation among Federal agency networks and other high-performance research networks, and provides close coordination of connectivity, interoperability, and services among government, academia, and industry to improve end-to-end user performance and avoid duplication of resources and efforts. The JET also coordinates international connectivity and interoperability.

The Middleware And Grid Infrastructure Coordination (MAGIC) Team coordinates cooperation among Federal agencies, researchers, and commercial entities to research, develop, widely deploy, and use interoperable grid and middleware technologies, tools, and services, and to provide a forum for international coordination.

The Networking Research Team (NRT) coordinates agency networking research programs and shares networking research information among Federal agencies. It provides outreach to end users by disseminating networking research information and coordinating activities among applications developers and end users.

High Confidence Software and Systems (HCSS)

The goal of HCSS R&D is to bolster the Nation's capability and capacity for engineering effective and efficient distributed, real-time, IT-centric systems that are certifiably and inherently dependable, reliable, safe, secure, fault-tolerant, survivable, and trustworthy. These systems, which are often embedded in larger physical and IT systems, are essential for the operation and evolution of the country's national defense, key industrial sectors, and critical infrastructures.

The HCSS Coordinating Group coordinates the activities of the HCSS PCA.

Social, Economic, and Workforce Implications of IT and IT Workforce Development (SEW)

The activities funded under the SEW PCA focus on the nature and dynamics of IT and its implications for social, economic, and legal systems, as well as the interactions between people and IT devices and capabilities; the workforce development needs arising from the growing demand for workers who are highly skilled in information technology; and the role of innovative IT applications in education and training. SEW also supports efforts to speed the transfer of networking and IT R&D results to the policy making and IT user communities at all levels in government and the private sector. A key goal of SEW research and dissemination activities is to enable individuals and society to better understand and anticipate the uses and consequences of IT, so that this knowledge can inform social policy making, IT designs, the IT user community, and broadened participation in IT education and careers.

The SEW Coordinating Group coordinates the activities of the SEW PCA.

Software Design and Productivity (SDP)

SDP R&D will lead to fundamental advances in concepts, methods, techniques, and tools for software design, development, and maintenance that can address the widening gap between the needs of Federal agencies and society for usable and dependable software-based systems and the ability to produce them in a timely, predictable, and cost-effective manner. The SDP R&D agenda spans both the engineering components of software creation (e.g., development environments, component technologies, languages, tools, system software) and the economics of software management (e.g., project management, schedule estimation and prediction, testing, document management systems) across diverse domains that include sensor networks, embedded systems, autonomous software, and highly complex, interconnected systems of systems.

The SDP Coordinating Group coordinates the activities of the SDP PCA.

APPENDIX C

Acronyms

ATDnet - Advanced Technology Demonstration Network
AFRL - Air Force Research Laboratory
AMPATH - AmericasPATH
API - Application programming interface
AS&C - Advanced Systems and Concepts
ATM - Automatic teller machine
BGP - Border Gateway Protocol
BIOS - Basic input/output system
CC - Common Criteria
CI - Critical infrastructure
CIH - Chernobyl virus
CIIP IWG - Critical Information Infrastructure Protection Interagency Working Group
CIO - Chief Information Officer
CISE - NSF's Computer and Information Science and Engineering directorate
COCOMO II - FAA's Constructive Cost Model II
COTS - Commercial off the shelf
CSIA - Cyber Security and Information Assurance
DARPA - Defense Advanced Research Projects Agency
DDoS - Distributed denial of service
DDR&E - DoD's Director for Defense Research & Engineering
DHS - Department of Homeland Security
DNS - Domain Name System
DNSSEC - Domain Name System Security Extensions
DOC - Department of Commerce
DoD - Department of Defense
DOE - Department of Energy
DOJ - Department of Justice
DoS - Denial of service
DOT - Department of Transportation
DREN - DoD's Defense Research and Engineering Network
DUSD - Deputy Under Secretary of Defense
EGP - Exterior Gateway Protocol
ESnet - DOE's Energy Sciences network
FAA - Federal Aviation Administration
FIPS - Federal Information Processing Standard
FISMA - Federal Information Security Management Act of 2002
FTC - Federal Trade Commission
GETS - Government Emergency Telecommunications Service
HCI/HSI - Human Computer Interfaces/Human-System Interactions
HI - Host identity
HIP - Host Identity Protocol
I/O - Input/output
IA - Information assurance
IC - Intelligence community
IDS - Intrusion detection system
IETF - Internet Engineering Task Force
IGP - Interior Gateway Protocol
IKE - Internet key exchange
Infosec - Information security
IP - Internet Protocol
IPsec - Internet Protocol security
IR - Infrared
IRC - INFOSEC Research Council
IRTF - Internet Research Task Force
ISO/OSI - International Organization for Standardization/Open System Interconnect
ISP - Internet service provider
ISSA - FAA's Information Systems Security Architecture
IT - Information technology
IWG - Interagency Working Group
JPDO - Joint Planning and Development Office


LABS - Laboratories and Basic Sciences
MANLAN - Manhattan Landing exchange point
MIP - Mobile Internet Protocol
MLS - Multi-Level Security
NAS - National Airspace System
NASA - National Aeronautics and Space Administration
NAT - Network Address Translation/Translator
NCS - National Communication System
NGATS - Next Generation Air Transportation System
NGIX - Next Generation Internet Exchange
NGN - Next-Generation Network
NIH - National Institutes of Health
NISN - NASA's Integrated Services Network
NIST - National Institute of Standards and Technology
NITRD - Networking and Information Technology Research and Development Program
NREN - NASA's Research and Education Network
NS/EP - National Security/Emergency Preparedness
NSA - National Security Agency
NSF - National Science Foundation
OMB - Office of Management and Budget
OS - Operating system
OSI - Open Systems Interconnect
OSTP - White House Office of Science and Technology Policy
PC - Personal computer
PCS - Process control system
PITAC - President's Information Technology Advisory Committee
PKI - Public key infrastructure
PSTN - Public switched telephone network
QKD - Quantum key distribution
QoS - Quality of service
R&D - Research and development
RBAC - Role-based access control
RDT&E - Research, development, test, and evaluation
RF - Radio frequency
RFID - Radio frequency identification
ROM - Read-only memory
S&T - Science and technology
SA - Situational awareness
SCADA - Supervisory control and data acquisition
SIM - Subscriber identification module
TCB - Trusted computing base
TCP/IP - Transmission Control Protocol/Internet Protocol
TRM - Technical reference model
TSWG - Technical Support Working Group
UPC - Universal Product Code
USD (AT&L) - Under Secretary of Defense for Acquisition, Technology, and Logistics
VPN - Virtual private network
WPS - Wireless priority service
XML - eXtensible Markup Language

ACKNOWLEDGEMENTS

The Federal Plan for Cyber Security and Information Assurance Research and Development is the product of extensive efforts by the co-chairs and members of the Interagency Working Group (IWG) on Cyber Security and Information Assurance. In addition, experts not formally affiliated with the IWG provided specialized technical information and feedback on drafts that were also essential to the completion of this Plan.

The National Coordination Office for Networking and Information Technology Research and Development played an instrumental role in the Plan's development, including research assistance and substantive technical input as well as intensive editorial review and publication of the final document.

Representatives of the following departments and agencies participated in developing the Plan and made multiple technical and editorial contributions to the content of this document:


Central Intelligence Agency
Defense Advanced Research Projects Agency
Department of Energy
Department of Homeland Security
Department of Justice
Department of State
Department of Transportation
Department of the Treasury
Disruptive Technology Office
Federal Aviation Administration
Federal Bureau of Investigation
National Aeronautics and Space Administration
National Institute of Standards and Technology
National Institutes of Health
National Science Foundation
National Security Agency
Office of the Secretary of Defense and Department of Defense Service research organizations
Technical Support Working Group
U.S. Postal Service


Cover Design, Graphics, and Printing

The cover design and graphics are the work of Scientific Designer/Illustrator James J. Caras of NSF's Design and Publishing Section. Printing was overseen by NSF Electronic Publishing Specialist Kelly DuBose.
