
The SEI Series in Software Engineering

The Addison-Wesley Software Security Series

Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and the publisher was aware of a trademark claim, the designations have been printed with initial capital letters or in all capitals.

CMM, CMMI, Capability Maturity Model, Capability Maturity Modeling, Carnegie Mellon, CERT, and CERT Coordination Center are registered in the U.S. Patent and Trademark Office by Carnegie Mellon University.

ATAM; Architecture Tradeoff Analysis Method; CMM Integration; COTS Usage-Risk Evaluation; CURE; EPIC; Evolutionary Process for Integrating COTS Based Systems; Framework for Software Product Line Practice; IDEAL; Interim Profile; OAR; OCTAVE; Operationally Critical Threat, Asset, and Vulnerability Evaluation; Options Analysis for Reengineering; Personal Software Process; PLTP; Product Line Technical Probe; PSP; SCAMPI; SCAMPI Lead Appraiser; SCAMPI Lead Assessor; SCE; SEI; SEPG; Team Software Process; and TSP are service marks of Carnegie Mellon University.

Special permission to reproduce portions of Build Security In, © 2005–2007 by Carnegie Mellon University, in this book is granted by the Software Engineering Institute.

Special permission to reproduce portions of Build Security In, © 2005–2007 by Cigital, Inc., in this book is granted by Cigital, Inc.

Special permission to reprint excerpts from the article “Software Quality at Top Speed,” © 1996 Steve McConnell, in this book is granted by Steve McConnell.

The authors and publisher have taken care in the preparation of this book, but make no express or implied warranty of any kind and assume no responsibility for errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of the use of the information or programs contained herein.

The publisher offers excellent discounts on this book when ordered in quantity for bulk purchases or special sales, which may include electronic versions and/or custom covers and content particular to your business, training goals, marketing focus, and branding interests. For more information, please contact: U.S. Corporate and Government Sales, (800) 382-3419, [email protected].

For sales outside the United States, please contact: International Sales, [email protected].

Visit us on the Web: informit.com/aw

Library of Congress Cataloging-in-Publication Data

Software security engineering : a guide for project managers / Julia H. Allen ... [et al.].
p. cm.
Includes bibliographical references and index.
ISBN 978-0-321-50917-8 (pbk. : alk. paper)
1. Computer security. 2. Software engineering. 3. Computer networks—Security measures. I. Allen, Julia H.

QA76.9.A25S654 2008 005.8—dc22

2008007000

Copyright © 2008 Pearson Education, Inc.

All rights reserved. Printed in the United States of America. This publication is protected by copyright, and permission must be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or likewise. For information regarding permissions, write to: Pearson Education, Inc., Rights and Contracts Department, 501 Boylston Street, Suite 900, Boston, MA 02116, Fax: (617) 671-3447.

ISBN-13: 978-0-321-50917-8
ISBN-10: 0-321-50917-X

Text printed in the United States on recycled paper at Courier in Stoughton, Massachusetts.

First printing, April 2008


Foreword

Everybody knows that software is riddled with security flaws. At first blush, this is surprising. We know how to write software in a way that provides a moderately high level of security and robustness. So why don’t software developers practice these techniques?

This book deals with two of the myriad answers to this question. The first is the meaning of secure software. In fact, the term “secure software” is a misnomer. Security is a product of software plus environment. How a program is used, under what conditions it is used, and what security requirements it must meet determine whether the software is secure. A term like “security-enabled software” captures the idea that the software was designed and written to meet specific security requirements, but in other environments where the assumptions underlying the software—and any implied requirements—do not hold, the software may not be secure. In a way that is easy to understand, this book presents the need for accurate and meaningful security requirements, as well as approaches for developing them. Unlike many books on the subject of secure software, this book does not assume the requirements are given a priori, but instead discusses requirements derivation and analysis. Equally important, it describes their validation.

The second answer lies in the roles of the executives, managers, and technical leaders of projects. They must support the introduction of security enhancements in software, as well as robust coding practices (which is really a type of security enhancement). Moreover, they must understand the processes and make allowances for them in their scheduling, budgeting, and staffing plans. This book does an excellent job of laying out the process for the people in these roles, so they can realistically assess its impact. Additionally, the book points out where the state of the art is too new or lacks enough experience to have approaches that are proven to work, or are not generally accepted to work. In those cases, the authors suggest ways to think about the issues in order to develop effective approaches. Thus, executives, managers, and technical leaders can figure out what should work best in their environment.


An additional, and in fact crucial, benefit of designing and implementing security in software from the very beginning of the project is the increase in assurance that the software will meet its requirements. This will greatly reduce the need to patch the software to fix security holes—a process that is itself fraught with security problems, undercuts the reputation of the vendor, and adversely impacts the vendor financially. Loss of credibility, while intangible, has tangible repercussions. Paying the extra cost of developing software correctly from the start reduces the cost of fixing it after it is deployed—and produces a better, more robust, and more secure product.

This book discusses several ways to develop software in such a way that security considerations play a key role in its development. It speaks to executives, to managers at all levels, and to technical leaders, and in that way, it is unique. It also speaks to students and developers, so they can understand the process of developing software with security in mind and find resources to help them do so.

The underlying theme of this book is that the software we all use could be made much better. The information in this book provides a foundation for executives, project managers, and technical leaders to improve the software they create and to improve the quality and security of the software we all use.

Matt Bishop
Davis, California
March 2008


Preface

The Problem Addressed by This Book

Software is ubiquitous. Many of the products, services, and processes that organizations use and offer are highly dependent on software to handle the sensitive and high-value data on which people’s privacy, livelihoods, and very lives depend. For instance, national security—and by extension citizens’ personal safety—relies on increasingly complex, interconnected, software-intensive information systems that, in many cases, use the Internet or Internet-exposed private networks as their means for communication and transporting data.

This ubiquitous dependence on information technology makes software security a key element of business continuity, disaster recovery, incident response, and national security. Software vulnerabilities can jeopardize intellectual property, consumer trust, business operations and services, and a broad spectrum of critical applications and infrastructures, including everything from process control systems to commercial application products.

The integrity of critical digital assets (systems, networks, applications, and information) depends on the reliability and security of the software that enables and controls those assets. However, business leaders and informed consumers have growing concerns about the scarcity of practitioners with requisite competencies to address software security [Carey 2006]. Specifically, they have doubts about suppliers’ capabilities to build and deliver secure software that they can use with confidence and without fear of compromise. Application software is the primary gateway to sensitive information. According to a Deloitte survey of 169 major global financial institutions, titled 2007 Global Security Survey: The Shifting Security Paradigm [Deloitte 2007], current application software countermeasures are no longer adequate. In the survey, Gartner identifies application security as the number one issue for chief information officers (CIOs).

Selected content in this preface is summarized and excerpted from Security in the Software Lifecycle: Making Software Development Processes—and Software Produced by Them—More Secure [Goertzel 2006].


The absence of security discipline in today’s software development practices often produces software with exploitable weaknesses. Security-enhanced processes and practices—and the skilled people to manage them and perform them—are required to build software that can be trusted to operate more securely than software being used today.

That said, there is an economic counter-argument, or at least the perception of one: Some business leaders and project managers believe that developing secure software slows the software development process and adds to the cost while not offering any apparent advantage. In many cases, when the decision reduces to “ship now” or “be secure and ship later,” “ship now” is almost always the choice made by those who control the money but have no idea of the risks. The opposite side of this argument, including how software security can potentially reduce cost and schedule, is discussed in Chapter 1 (Section 1.6, “The Benefits of Detecting Software Security Defects Early”) and Chapter 7 (Section 7.5.3, in the “Knowledge and Expertise” subsection discussing Microsoft’s experience with its Security Development Lifecycle) in this book.

Software’s Vulnerability to Attack

The number of threats specifically targeting software is increasing, and the majority of network- and system-level attacks now exploit vulnerabilities in application-level software. According to CERT analysts at Carnegie Mellon University,1 most successful attacks result from targeting and exploiting known, unpatched software vulnerabilities and insecure software configurations, a significant number of which are introduced during software design and development.

These conditions contribute to the increased risks associated with software-enabled capabilities and exacerbate the threat of attack. Given this atmosphere of uncertainty, a broad range of stakeholders need justifiable confidence that the software that enables their core business operations can be trusted to perform as intended.

Why We Wrote This Book: Its Purpose, Goals, and Scope

The Challenge of Software Security Engineering

Software security engineering entails using practices, processes, tools, and techniques to address security issues in every phase of the software development life cycle (SDLC). Software that is developed with security in mind is typically more resistant to both intentional attack and unintentional failures. One view of secure software is software that is engineered “so that it continues to function correctly under malicious attack” [McGraw 2006] and is able to recognize, resist, tolerate, and recover from events that intentionally threaten its dependability. Broader views that can overlap with software security (for example, software safety, reliability, and fault tolerance) include the notion of proper functioning in the face of unintentional failures or accidents and inadvertent misuse and abuse, as well as reducing software defects and weaknesses to the greatest extent possible regardless of their cause. This book addresses the narrower view.

1. CERT (www.cert.org) is registered in the U.S. Patent and Trademark Office by Carnegie Mellon University.

The goal of software security engineering is to build better, defect-free software. Software-intensive systems that are constructed using more securely developed software are better able to do the following:

• Continue operating correctly in the presence of most attacks by either resisting the exploitation of weaknesses in the software by attackers or tolerating the failures that result from such exploits

• Limit the damage resulting from any failures caused by attack-triggered faults that the software was unable to resist or tolerate, and recover as quickly as possible from those failures
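The second behavior—limiting damage and recovering—can be made concrete at the code level. The sketch below is our own illustrative C, not an example from the book (the function name and the default value are hypothetical): a routine that treats every malformed input as a signal to fall back to a known-safe default, rather than continuing with suspect data.

```c
#include <stdlib.h>

#define SAFE_DEFAULT_TIMEOUT 30  /* hypothetical conservative default */

/* Hypothetical config reader illustrating fail-safe recovery: on any
   failure it does not guess or continue with unvalidated data -- it
   falls back to a conservative default, limiting the damage a missing,
   malformed, or out-of-range input can cause. */
int load_timeout(const char *text) {
    if (text == NULL)
        return SAFE_DEFAULT_TIMEOUT;      /* missing input: fail safe */
    char *end = NULL;
    long v = strtol(text, &end, 10);
    if (end == text || *end != '\0' || v <= 0 || v > 3600)
        return SAFE_DEFAULT_TIMEOUT;      /* malformed or out of range */
    return (int)v;
}
```

The design choice here is that every failure path converges on a single, well-understood state instead of propagating an attack-triggered fault further into the system.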

No single practice offers a universal “silver bullet” for software security. With this caveat in mind, Software Security Engineering: A Guide for Project Managers provides software project managers with sound practices that they can evaluate and selectively adopt to help reshape their own development practices. The objective is to increase the security and dependability of the software produced by these practices, both during its development and during its operation.

What Readers Can Expect

Readers will increase their awareness and understanding of the security issues in the design and development of software. The book’s content will help readers recognize how software development practices can either contribute to or detract from the security of software.

The book (and material referenced on the Build Security In Web site described later in this preface) will enable readers to identify and compare potential new practices that can be adapted to augment a project’s current software development practices, thereby greatly increasing the likelihood of producing more secure software and meeting specified security requirements. As one example, assurance cases can be used to assert and specify desired security properties, including the extent to which security practices have been successful in satisfying security requirements. Assurance cases are discussed in Chapter 2 (Section 2.4, “How to Assert and Specify Desired Security Properties”).

Software developed and assembled using the practices described in this book should contain significantly fewer exploitable weaknesses. Such software can then be relied on to more capably resist or tolerate and recover from attacks and, therefore, to function more securely in an operational environment. Project managers responsible for ensuring that software and systems adequately address their security requirements throughout the SDLC should review, select, and tailor guidance from this book, the Build Security In Web site, and the sources cited throughout this book as part of their normal project management activities.

The five key take-away messages for readers of this book are as follows:

1. Software security is about more than eliminating vulnerabilities and conducting penetration tests. Project managers need to take a systematic approach to incorporate the sound practices discussed in this book into their development processes (all chapters).

2. Network security mechanisms and IT infrastructure security services do not sufficiently protect application software from security risks (Chapters 1 and 2).

3. Software security initiatives should follow a risk management approach to identify priorities and determine what is “good enough,” while understanding that software security risks will inevitably change throughout the SDLC (Chapters 1, 4, and 7).

4. Developing secure software depends on understanding the operational context in which it will be used (Chapter 6).

5. Project managers and software engineers need to learn to think like an attacker to address the range of things that software should not do and identify how software can better resist, tolerate, and recover when under attack (Chapters 2, 3, 4, and 5).


Who Should Read This Book

Software Security Engineering: A Guide for Project Managers is primarily intended for project managers who are responsible for software development and the development of software-intensive systems. Lead requirements analysts, experienced software and security architects and designers, system integrators, and their managers should also find this book useful. It provides guidance for those involved in the management of secure, software-intensive systems, either developed from scratch or through the assembly, integration, and evolution of acquired or reused software.

This book will help readers understand the security issues associated with the engineering of software and should help them identify practices that can be used to manage and develop software that is better able to withstand the threats to which it is increasingly subjected. It presumes that readers are familiar with good general systems and software engineering management methods, practices, and technologies.

How This Book Is Organized

This book is organized into two introductory chapters, four technical chapters, a chapter that describes governance and management considerations, and a concluding chapter on how to get started.

Chapter 1, Why Is Security a Software Issue?, identifies threats that target most software and the shortcomings of the software development process that can render software vulnerable to those threats. It describes the benefits of detecting software security defects early in the SDLC, including the current state of the practice for making the business case for software security. It closes by introducing some pragmatic solutions that are further elaborated in the chapters that follow.

Chapter 2, What Makes Software Secure?, examines the core and influential properties of software that make it secure and the defensive and attacker perspectives in addressing those properties, and discusses how desirable traits of software can contribute to its security. The chapter introduces and defines the key resources of attack patterns and assurance cases and explains how to use them throughout the SDLC.

Chapter 3, Requirements Engineering for Secure Software, describes practices for security requirements engineering, including processes that are specific to eliciting, specifying, analyzing, and validating security requirements. This chapter also explores the key practice of misuse/abuse cases.

Chapter 4, Secure Software Architecture and Design, presents the practice of architectural and risk analysis for reviewing, assessing, and validating the specification, architecture, and design of a software system with respect to software security and reliability.

Chapter 5, Considerations for Secure Coding and Testing, summarizes key practices for performing code analysis to uncover errors in and improve the quality of source code, as well as practices for security testing, white-box testing, black-box testing, and penetration testing. Along the way, this chapter references recently published works on secure coding and testing for further details.

Chapter 6, Security and Complexity: System Assembly Challenges, describes the challenges and practices inherent in the design, assembly, integration, and evolution of trustworthy systems and systems of systems. It provides guidelines for project managers to consider, recognizing that most new or updated software components are typically integrated into an existing operational environment.

Chapter 7, Governance, and Managing for More Secure Software, describes how to motivate business leaders to treat software security as a governance and management concern. It includes actionable practices for risk management and project management and for establishing an enterprise security framework.

Chapter 8, Getting Started, summarizes all of the recommended practices discussed in the book and provides several aids for determining which practices are most relevant and for whom, and where to start.

The book closes with a comprehensive bibliography and glossary.

Notes to the Reader

Navigating the Book’s Content

As an aid to the reader, we have added descriptive icons that mark the book’s sections and key practices in two practical ways:

• Identifying the content’s relative “maturity of practice”:

L1: The content provides guidance for how to think about a topic for which there is no proven or widely accepted approach. The intent of the description is to raise awareness and aid the reader in thinking about the problem and candidate solutions. The content may also describe promising research results that may have been demonstrated in a constrained setting.

L2: The content describes practices that are in early (pilot) use and are demonstrating some successful results.

L3: The content describes practices that are in limited use in industry or government organizations, perhaps for a particular market sector.

L4: The content describes practices that have been successfully deployed and are in widespread use. Readers can start using these practices today with confidence. Experience reports and case studies are typically available.

• Identifying the designated audiences for which each chapter section or practice is most relevant:

E: Executive and senior managers

M: Project and mid-level managers

L: Technical leaders, engineering managers, first-line managers, and supervisors

As the audience icons in the chapters show, we urge executive and senior managers to read all of Chapters 1 and 8, plus the following sections in other chapters: 2.1, 2.2, 2.5, 3.1, 3.7, 4.1, 5.1, 5.6, 6.1, 6.6, 7.1, 7.3, 7.4, 7.6, and 7.7.

Project and mid-level managers should be sure to read all of Chapters 1, 2, 4, 5, 6, 7, and 8, plus these sections in Chapter 3: 3.1, 3.3, and 3.7.

Technical leaders, engineering managers, first-line managers, and supervisors will find useful information and guidance throughout the entire book.

Build Security In: A Key Resource

Since 2004, the U.S. Department of Homeland Security Software Assurance Program has sponsored development of the Build Security In (BSI) Web site (https://buildsecurityin.us-cert.gov/), which was one of the significant resources used in writing this book. BSI content is based on the principle that software security is fundamentally a software engineering problem and must be managed in a systematic way throughout the SDLC.


BSI contains and links to a broad range of information about sound practices, tools, guidelines, rules, principles, and other knowledge to help project managers deploy software security practices and build secure and reliable software. Contributing authors to this book and the articles appearing on the BSI Web site include senior staff from the Carnegie Mellon Software Engineering Institute (SEI) and Cigital, Inc., as well as other experienced software and security professionals.

Several sections in the book were originally published as articles in IEEE Security & Privacy magazine and are reprinted here with the permission of IEEE Computer Society Press. Where an article occurs in the book, a statement such as the following appears in a footnote:

This section was originally published as an article in IEEE Security & Privacy [citation]. It is reprinted here with permission from the publisher.

These articles are also available on the BSI Web site.

Articles on BSI are referenced throughout this book. Readers can consult BSI for additional details, book errata, and ongoing research results.

Start the Journey

A number of excellent books address secure systems and software engineering. Software Security Engineering: A Guide for Project Managers offers an engineering perspective that has been sorely needed in the software security community. It puts the entire SDLC in the context of an integrated set of sound software security engineering practices.

As part of its comprehensive coverage, this book captures both standard and emerging software security practices and explains why they are needed to develop more security-responsive and robust systems. The book is packed with reasons for taking action early and revisiting these actions frequently throughout the SDLC.

This is not a book for the faint of heart or the neophyte software project manager who is confronting software security for the first time. Readers need to understand the SDLC and the processes in use within their organizations to comprehend the implications of the various techniques presented and to choose among the recommended practices to determine the best fit for any given project.


Other books are available that discuss each phase of secure software engineering. Few, however, cover all of the SDLC phases in as concise and usable a format as we have attempted to do here. Enjoy the journey!

Acknowledgments

We are pleased to acknowledge the support of many people who helped us through the book development process. Our organizations, the CERT Program at the Software Engineering Institute (SEI) and Cigital, Inc., encouraged our authorship of the book and provided release time as well as other support to make it possible. Pamela Curtis, our SEI technical editor, diligently read and reread each word of the entire manuscript and provided many valuable suggestions for improvement, as well as helping with packaging questions and supervising development of figures for the book. Jan Vargas provided SEI management support, tracked schedules and action items, and helped with meeting agendas and management. In the early stages of the process, Petra Dilone provided SEI administrative support as well as configuration management for the various chapters and iterations of the manuscript.

We also appreciate the encouragement of Joe Jarzombek, the sponsor of the Department of Homeland Security Build Security In (BSI) Web site. The Build Security In Web site content is a key resource for this book.

Much of the material in this book is based on articles published with other authors on the BSI Web site and elsewhere. We greatly appreciated the opportunity to collaborate with these authors, and their names are listed in the individual sections that they contributed to, directly or indirectly.

We had many reviewers, whose input was extremely valuable and led to many improvements in the book. Internal reviewers included Carol Woody and Robert Ferguson of the SEI. We also appreciate the inputs and thoughtful comments of the Addison-Wesley reviewers: Chris Cleeland, Jeremy Epstein, Ronda R. Henning, Jeffrey A. Ingalsbe, Ron Lichty, Gabor Liptak, Donald Reifer, and David Strom. We would like to give special recognition to Steve Riley, one of the Addison-Wesley reviewers who reviewed our initial proposal and all iterations of the manuscript.

We would like to recognize the encouragement and support of our contacts at Addison-Wesley. These include Peter Gordon, publishing partner; Kim Boedigheimer, editorial assistant; Julie Nahil, full-service production manager; and Jill Hobbs, freelance copyeditor. We also appreciate the efforts of the Addison-Wesley and SEI artists and designers who assisted with the cover design, layout, and figures.


Chapter 1

Why Is Security a Software Issue?

1.1 Introduction

Software is everywhere. It runs your car. It controls your cell phone. It’s how you access your bank’s financial services; how you receive electricity, water, and natural gas; and how you fly from coast to coast [McGraw 2006]. Whether we recognize it or not, we all rely on complex, interconnected, software-intensive information systems that use the Internet as their means for communicating and transporting information.

Building, deploying, operating, and using software that has not been developed with security in mind can be high risk—like walking a high wire without a net (Figure 1–1). The degree of risk can be compared to the distance you can fall and the potential impact (no pun intended).

This chapter discusses why security is increasingly a software problem. It defines the dimensions of software assurance and software security. It identifies threats that target most software and the shortcomings of the software development process that can render software vulnerable to those threats. It closes by introducing some pragmatic solutions that are expanded in the chapters to follow. This entire chapter is relevant for executives (E), project managers (M), and technical leaders (L).

Selected content in this chapter is summarized and excerpted from Security in the Software Lifecycle: Making Software Development Processes—and Software Produced by Them—More Secure [Goertzel 2006]. An earlier version of this material appeared in [Allen 2007].

1.2 The Problem

Organizations increasingly store, process, and transmit their most sensitive information using software-intensive systems that are directly connected to the Internet. Private citizens’ financial transactions are exposed via the Internet by software used to shop, bank, pay taxes, buy insurance, invest, register children for school, and join various organizations and social networks. The increased exposure that comes with global connectivity has made sensitive information and the software systems that handle it more vulnerable to unintentional and unauthorized use. In short, software-intensive systems and other software-enabled capabilities have provided more open, widespread access to sensitive information—including personal identities—than ever before.

Figure 1–1: Developing software without security in mind is like walking a high wire without a net

Concurrently, the era of information warfare [Denning 1998], cyberterrorism, and computer crime is well under way. Terrorists, organized crime, and other criminals are targeting the entire gamut of software-intensive systems and, through human ingenuity gone awry, are being successful at gaining entry to these systems. Most such systems are not attack resistant or attack resilient enough to withstand them.

In a report to the U.S. president titled Cyber Security: A Crisis of Prioritization [PITAC 2005], the President’s Information Technology Advisory Committee summed up the problem of nonsecure software as follows:

Software development is not yet a science or a rigorous discipline, and the development process by and large is not controlled to minimize the vulnerabilities that attackers exploit. Today, as with cancer, vulnerable software can be invaded and modified to cause damage to previously healthy software, and infected software can replicate itself and be carried across networks to cause damage in other systems. Like cancer, these damaging processes may be invisible to the lay person even though experts recognize that their threat is growing.

Software defects with security ramifications—including coding bugs such as buffer overflows and design flaws such as inconsistent error handling—are ubiquitous. Malicious intruders, and the malicious code and botnets1 they use to obtain unauthorized access and launch attacks, can compromise systems by taking advantage of software defects. Internet-enabled software applications are a commonly exploited target, with software’s increasing complexity and extensibility making software security even more challenging [Hoglund 2004].
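To illustrate the coding-bug side of that distinction, here is a minimal C sketch of our own (the function names are hypothetical, not drawn from the book): the classic unchecked copy that causes a buffer overflow, alongside a bounded alternative that rejects oversized input instead of silently corrupting memory.

```c
#include <stdio.h>
#include <string.h>

/* Vulnerable pattern: an unchecked strcpy into a fixed-size buffer.
   Input of 16 bytes or more writes past the end of buf -- the classic
   buffer overflow. Shown only as a comment so it is never executed:

   void greet(const char *name) {
       char buf[16];
       strcpy(buf, name);
       printf("Hello, %s\n", buf);
   }
*/

/* Bounded alternative: snprintf writes at most dstsz bytes and always
   NUL-terminates. Returns 0 on success, -1 if the input had to be
   truncated, so the caller can reject oversized input explicitly. */
int copy_name_safe(char *dst, size_t dstsz, const char *src) {
    int n = snprintf(dst, dstsz, "%s", src);
    return (n >= 0 && (size_t)n < dstsz) ? 0 : -1;
}
```

Design flaws such as inconsistent error handling, by contrast, rarely have a one-line fix of this kind; they must be addressed through architecture and design practices rather than point repairs to individual statements.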

The security of computer systems and networks has become increasingly limited by the quality and security of their software. Security defects and vulnerabilities in software are commonplace and can pose serious risks when exploited by malicious attacks. Over the past six years, this problem has grown significantly. Figure 1–2 shows the number of vulnerabilities reported to CERT from 1997 through 2006. Given this trend, “[T]here is a clear and pressing need to change the way we (project managers and software engineers) approach computer security and to develop a disciplined approach to software security” [McGraw 2006].

1. http://en.wikipedia.org/wiki/Botnet

In Deloitte’s 2007 Global Security Survey, 87 percent of survey respondents cited poor software development quality as a top threat in the next 12 months. “Application security means ensuring that there is secure code, integrated at the development stage, to prevent potential vulnerabilities and that steps such as vulnerability testing, application scanning, and penetration testing are part of an organization’s software development life cycle [SDLC]” [Deloitte 2007].

The growing Internet connectivity of computers and networks and the corresponding user dependence on network-enabled services (such as email and Web-based transactions) have increased the number and sophistication of attack methods, as well as the ease with which an attack can be launched. This trend puts software at greater risk. Another risk area affecting software security is the degree to which systems accept updates and extensions for evolving capabilities. Extensible systems are attractive because they provide for the addition of new features and services, but each new extension adds new capabilities, new interfaces, and thus new risks. A final software security risk area is the unbridled growth in the size and complexity of software systems (such as the Microsoft Windows operating system). The unfortunate reality is that, in general, more lines of code produce more bugs and vulnerabilities [McGraw 2006].

Figure 1–2: Vulnerabilities reported to CERT (total reported 1997–2006: 30,264)

    Year:            1997  1998  1999  2000  2001  2002  2003  2004  2005  2006
    Vulnerabilities:  311   262   417  1090  2437  4129  3784  3780  5990  8064

1.2.1 System Complexity: The Context within Which Software Lives

Building a trustworthy software system can no longer be predicated on constructing and assembling discrete, isolated pieces that address static requirements within planned cost and schedule. Each new or updated software component joins an existing operational environment and must merge with that legacy to form an operational whole. Bolting new systems onto old systems and Web-enabling old systems creates systems of systems that are fraught with vulnerabilities. With the expanding scope and scale of systems, project managers need to reconsider a number of development assumptions that are generally applied to software security:

• Instead of centralized control, which was the norm for large stand-alone systems, project managers have to consider multiple and often independent control points for systems and systems of systems.

• Increased integration among systems has reduced the capability to make wide-scale changes quickly. In addition, for independently managed systems, upgrades are not necessarily synchronized. Project managers need to maintain operational capabilities with appropriate security as services are upgraded and new services are added.

• With the integration among independently developed and operated systems, project managers have to contend with a heterogeneous collection of components, multiple implementations of common interfaces, and inconsistencies among security policies.

• With the mismatches and errors introduced by independently developed and managed systems, failure in some form is more likely to be the norm than the exception and so further complicates meeting security requirements.

There are no known solutions for ensuring a specified level or degree of software security for complex systems and systems of systems, assuming these could even be defined. This said, Chapter 6, Security and Complexity: System Assembly Challenges, elaborates on these points and provides useful guidelines for project managers to consider in addressing the implications.

1.3 Software Assurance and Software Security

The increasing dependence on software to get critical jobs done means that software’s value no longer lies solely in its ability to enhance or sustain productivity and efficiency. Instead, its value also derives from its ability to continue operating dependably even in the face of events that threaten it. The ability to trust that software will remain dependable under all circumstances, with a justified level of confidence, is the objective of software assurance.

Software assurance has become critical because dramatic increases in business and mission risks are now known to be attributable to exploitable software [DHS 2003]. The growing extent of the resulting risk exposure is rarely understood, as evidenced by these facts:

• Software is the weakest link in the successful execution of interdependent systems and software applications.

• Software size and complexity obscure intent and preclude exhaustive testing.

• Outsourcing and the use of unvetted software supply-chain components increase risk exposure.

• The sophistication and increasingly stealthy nature of attacks facilitates exploitation.

• Reuse of legacy software with other applications introduces unintended consequences, increasing the number of vulnerable targets.

• Business leaders are unwilling to make risk-appropriate investments in software security.

According to the U.S. Committee on National Security Systems’ “National Information Assurance (IA) Glossary” [CNSS 2006], software assurance is

the level of confidence that software is free from vulnerabilities, either intentionally designed into the software or accidentally inserted at any time during its life cycle, and that the software functions in the intended manner.

Software assurance includes the disciplines of software reliability² (also known as software fault tolerance), software safety,³ and software security. The focus of Software Security Engineering: A Guide for Project Managers is on the third of these, software security, which is the ability of software to resist, tolerate, and recover from events that intentionally threaten its dependability. The main objective of software security is to build more-robust, higher-quality, defect-free software that continues to function correctly under malicious attack [McGraw 2006].

Software security matters because so many critical functions are completely dependent on software. This makes software a very high-value target for attackers, whose motives may be malicious, criminal, adversarial, competitive, or terrorist in nature. What makes it so easy for attackers to target software is the virtually guaranteed presence of known vulnerabilities with known attack methods, which can be exploited to violate one or more of the software’s security properties or to force the software into an insecure state. Secure software remains dependable (i.e., correct and predictable) despite intentional efforts to compromise that dependability.

The objective of software security is to field software-based systems that satisfy the following criteria:

• The system is as vulnerability and defect free as possible.

• The system limits the damage resulting from any failures caused by attack-triggered faults, ensuring that the effects of any attack are not propagated, and it recovers as quickly as possible from those failures.

• The system continues operating correctly in the presence of most attacks by either resisting the exploitation of weaknesses in the software by the attacker or tolerating the failures that result from such exploits.

2. Software reliability means the probability of failure-free (or otherwise satisfactory) software operation for a specified/expected period/interval of time, or for a specified/expected number of operations, in a specified/expected environment under specified/expected operating conditions. Sources for this definition can be found in [Goertzel 2006], appendix A.1.

3. Software safety means the persistence of dependability in the face of accidents or mishaps—that is, unplanned events that result in death, injury, illness, damage to or loss of property, or environmental harm. Sources for this definition can be found in [Goertzel 2006], appendix A.1.


Software that has been developed with security in mind generally reflects the following properties throughout its development life cycle:

• Predictable execution. There is justifiable confidence that the software, when executed, functions as intended. The ability of malicious input to alter the execution or outcome in a way favorable to the attacker is significantly reduced or eliminated.

• Trustworthiness. The number of exploitable vulnerabilities is intentionally minimized to the greatest extent possible. The goal is no exploitable vulnerabilities.

• Conformance. Planned, systematic, and multidisciplinary activities ensure that software components, products, and systems conform to requirements and applicable standards and procedures for specified uses.

These objectives and properties must be interpreted and constrained based on the practical realities that you face, such as what constitutes an adequate level of security, what is most critical to address, and which actions fit within the project’s cost and schedule. These are risk management decisions.

In addition to predictable execution, trustworthiness, and conformance, secure software and systems should be as attack resistant, attack tolerant, and attack resilient as possible. To ensure that these criteria are satisfied, software engineers should design software components and systems to recognize both legitimate inputs and known attack patterns in the data or signals they receive from external entities (humans or processes) and reflect this recognition in the developed software to the extent possible and practical.

To achieve attack resilience, a software system should be able to recover from failures that result from successful attacks by resuming operation at or above some predefined minimum acceptable level of service in a timely manner. The system must eventually recover full service at the specified level of performance. These qualities and properties, as well as attack patterns, are described in more detail in Chapter 2, What Makes Software Secure?

1.3.1 The Role of Processes and Practices in Software Security

A number of factors influence how likely software is to be secure. For instance, software vulnerabilities can originate in the processes and practices used in its creation. These sources include the decisions made by software engineers, the flaws they introduce in specification and design, and the faults and other defects they include in developed code, inadvertently or intentionally. Other factors may include the choice of programming languages and development tools used to develop the software, and the configuration and behavior of software components in their development and operational environments. It is increasingly observed, however, that the most critical difference between secure software and insecure software lies in the nature of the processes and practices used to specify, design, and develop the software [Goertzel 2006].

The return on investment when security analysis and secure engineering practices are introduced early in the development cycle ranges from 12 percent to 21 percent, with the highest rate of return occurring when the analysis is performed during application design [Berinato 2002; Soo Hoo 2001]. This return on investment occurs because there are fewer security defects in the released product and hence reduced labor costs for fixing defects that are discovered later.

A project that adopts a security-enhanced software development process is adopting a set of practices (such as those described in this book’s chapters) that initially should reduce the number of exploitable faults and weaknesses. Over time, as these practices become more codified, they should decrease the likelihood that such vulnerabilities are introduced into the software in the first place. More and more, research results and real-world experiences indicate that correcting potential vulnerabilities as early as possible in the software development life cycle, mainly through the adoption of security-enhanced processes and practices, is far more cost-effective than the currently pervasive approach of developing and releasing frequent patches to operational software [Goertzel 2006].

1.4 Threats to Software Security

In information security, the threat—the source of danger—is often a person intending to do harm, using one or more malicious software agents. Software is subject to two general categories of threats:

• Threats during development (mainly insider threats). A software engineer can sabotage the software at any point in its development life cycle through intentional exclusions from, inclusions in, or modifications of the requirements specification, the threat models, the design documents, the source code, the assembly and integration framework, the test cases and test results, or the installation and configuration instructions and tools. The secure development practices described in this book are, in part, designed to help reduce the exposure of software to insider threats during its development process. For more information on this aspect, see “Insider Threats in the SDLC” [Cappelli 2006].

• Threats during operation (both insider and external threats). Any software system that runs on a network-connected platform is likely to have its vulnerabilities exposed to attackers during its operation. Attacks may take advantage of publicly known but unpatched vulnerabilities, leading to memory corruption, execution of arbitrary exploit scripts, remote code execution, and buffer overflows. Software flaws can be exploited to install spyware, adware, and other malware on users’ systems that can lie dormant until it is triggered to execute.⁴

Weaknesses that are most likely to be targeted are those found in the software components’ external interfaces, because those interfaces provide the attacker with a direct communication path to the software’s vulnerabilities. A number of well-known attacks target software that incorporates interfaces, protocols, design features, or development faults that are well understood and widely publicized as harboring inherent weaknesses. That software includes Web applications (including browser and server components), Web services, database management systems, and operating systems. Misuse (or abuse) cases can help project managers and software engineers see their software from the perspective of an attacker by anticipating and defining unexpected or abnormal behavior through which a software feature could be unintentionally misused or intentionally abused [Hope 2004]. (See Section 3.2.)

Today, most project and IT managers responsible for system operation respond to the increasing number of Internet-based attacks by relying on operational controls at the operating system, network, and database or Web server levels while failing to directly address the insecurity of the application-level software that is being compromised. This approach has two critical shortcomings:

4. See the Common Weakness Enumeration [CWE 2007] for additional examples.

1. The security of the application depends completely on the robustness of operational protections that surround it.

2. Many of the software-based protection mechanisms (controls) can easily be misconfigured or misapplied. Also, they are as likely to contain exploitable vulnerabilities as the application software they are (supposedly) protecting.

The wide publicity about the literally thousands of successful attacks on software accessible from the Internet has merely made the attacker’s job easier. Attackers can study numerous reports of security vulnerabilities in a wide range of commercial and open-source software programs and access publicly available exploit scripts. More experienced attackers often develop (and share) sophisticated, targeted attacks that exploit specific vulnerabilities. In addition, the nature of the risks is changing more rapidly than the software can be adapted to counteract those risks, regardless of the software development process and practices used. To be 100 percent effective, defenders must anticipate all possible vulnerabilities, while attackers need find only one to carry out their attack.

1.5 Sources of Software Insecurity

Most commercial and open-source applications, middleware systems, and operating systems are extremely large and complex. In normal execution, these systems can transition through a vast number of different states. These characteristics make it particularly difficult to develop and operate software that is consistently correct, let alone consistently secure. The unavoidable presence of security threats and risks means that project managers and software engineers need to pay attention to software security even if explicit requirements for it have not been captured in the software’s specification.

A large percentage of security weaknesses in software could be avoided if project managers and software engineers were routinely trained in how to address those weaknesses systematically and consistently. Unfortunately, these personnel are seldom taught how to design and develop secure applications and conduct quality assurance to test for insecure coding errors and the use of poor development techniques. They do not generally understand which practices are effective in recognizing and removing faults and defects or in handling vulnerabilities when software is exploited by attackers. They are often unfamiliar with the security implications of certain software requirements (or their absence). Likewise, they rarely learn about the security implications of how software is architected, designed, developed, deployed, and operated. The absence of this knowledge means that security requirements are likely to be inadequate and that the resulting software is likely to deviate from specified (and unspecified) security requirements. In addition, this lack of knowledge prevents the manager and engineer from recognizing and understanding how mistakes can manifest as exploitable weaknesses and vulnerabilities in the software when it becomes operational.

Software—especially networked, application-level software—is most often compromised by exploiting weaknesses that result from the following sources:

• Complexities, inadequacies, and/or changes in the software’s processing model (e.g., a Web- or service-oriented architecture model).

• Incorrect assumptions by the engineer, including assumptions about the capabilities, outputs, and behavioral states of the software’s execution environment or about expected inputs from external entities (users, software processes).

• Flawed specification or design, or defective implementation of

  – The software’s interfaces with external entities. Development mistakes of this type include inadequate (or nonexistent) input validation, error handling, and exception handling.

  – The components of the software’s execution environment (from middleware-level and operating-system-level to firmware- and hardware-level components).

• Unintended interactions between software components, including those provided by a third party.

Mistakes are unavoidable. Even if they are avoided during requirements engineering and design (e.g., through the use of formal methods) and development (e.g., through comprehensive code reviews and extensive testing), vulnerabilities may still be introduced into software during its assembly, integration, deployment, and operation. No matter how faithfully a security-enhanced life cycle is followed, as long as software continues to grow in size and complexity, some number of exploitable faults and other weaknesses are sure to exist.

In addition to the issues identified here, Chapter 2, What Makes Software Secure?, discusses a range of principles and practices, the absence of which contributes to software insecurity.

1.6 The Benefits of Detecting Software Security Defects Early⁵

Limited data is available that discusses the return on investment (ROI) of reducing security flaws in source code (refer to Section 1.6.1 for more on this subject). Nevertheless, a number of studies have shown that significant cost benefits are realized through improvements to reduce software defects (including security flaws) throughout the SDLC [Goldenson 2003]. The general software quality case is made in this section, including reasonable arguments for extending this case to include software security defects.

Proactively tackling software security is often under-budgeted and dismissed as a luxury. In an attempt to shorten development schedules or decrease costs, software project managers often reduce the time spent on secure software practices during requirements analysis and design. In addition, they often try to compress the testing schedule or reduce the level of effort. Skimping on software quality⁶ is one of the worst decisions an organization that wants to maximize development speed can make; higher quality (in the form of lower defect rates) and reduced development time go hand in hand. Figure 1–3 illustrates the relationship between defect rate and development time.

Projects that achieve lower defect rates typically have shorter schedules. But many organizations currently develop software with defect levels that result in longer schedules than necessary. In the 1970s, studies performed by IBM demonstrated that software products with lower defect counts also had shorter development schedules [Jones 1991]. After surveying more than 4000 software projects, Capers Jones [1994] reported that poor quality was one of the most common reasons for schedule overruns. He also reported that poor quality was a significant factor in approximately 50 percent of all canceled projects. A Software Engineering Institute survey found that more than 60 percent of organizations assessed suffered from inadequate quality assurance [Kitson 1993]. On the curve in Figure 1–3, the organizations that experienced higher numbers of defects are to the left of the “95 percent defect removal” line.

5. This material is extracted and adapted from a more extensive article by Steven Lavenhar of Cigital, Inc. [BSI 18]. That article should be consulted for more details and examples. In addition, this article has been adapted with permission from “Software Quality at Top Speed” by Steve McConnell. For the original article, see [McConnell 1996]. While some of the sources cited in this section may seem dated, the problems and trends described persist today.

6. A similar argument could be made for skimping on software security if the schedule and resources under consideration include software production and operations, when security patches are typically applied.

The “95 percent defect removal” line is significant because that level of prerelease defect removal appears to be the point at which projects achieve the shortest schedules for the least effort and with the highest levels of user satisfaction [Jones 1991]. If more than 5 percent of defects are found after a product has been released, then the product is vulnerable to the problems associated with low quality, and the organization takes longer to develop its software than necessary. Projects that are completed with undue haste are particularly vulnerable to shortchanging quality assurance at the individual developer level. Any developer who has been pushed to satisfy a specific deadline or ship a product quickly knows how much pressure there can be to cut corners because “we’re only three weeks from the deadline.” As many as four times the average number of defects are reported for released software products that were developed under excessive schedule pressure. Developers participating in projects that are in schedule trouble often become obsessed with working harder rather than working smarter, which gets them into even deeper schedule trouble.

Figure 1–3: Relationship between defect rate and development time. The curve plots development time against the percentage of defects removed before release: development time falls as prerelease defect removal rises, reaching the fastest (“best”) schedule at roughly 95 percent removal; most organizations sit to the left of that point.

One aspect of quality assurance that is particularly relevant during rapid development is the presence of error-prone modules—that is, modules that are responsible for a disproportionate number of defects. Barry Boehm reported that 20 percent of the modules in a program are typically responsible for 80 percent of the errors [Boehm 1987]. On its IMS project, IBM found that 57 percent of the errors occurred in 7 percent of the modules [Jones 1991]. Modules with such high defect rates are more expensive and time-consuming to deliver than less error-prone modules. Normal modules cost about $500 to $1000 per function point to develop, whereas error-prone modules cost about $2000 to $4000 per function point to develop [Jones 1994]. Error-prone modules tend to be more complex, less structured, and significantly larger than other modules. They often are developed under excessive schedule pressure and are not fully tested. If development speed is important, then identification and redesign of error-prone modules should be a high priority.

If an organization can prevent defects or detect and remove them early, it can realize significant cost and schedule benefits. Studies have found that reworking defective requirements, design, and code typically accounts for 40 to 50 percent of the total cost of software development [Jones 1986b]. As a rule of thumb, every hour an organization spends on defect prevention reduces repair time for a system in production by three to ten hours. In the worst case, reworking a software requirements problem once the software is in operation typically costs 50 to 200 times what it would take to rework the same problem during the requirements phase [Boehm 1988]. It is easy to understand why this phenomenon occurs. For example, a one-sentence requirement could expand into 5 pages of design diagrams, then into 500 lines of code, then into 15 pages of user documentation and a few dozen test cases. It is cheaper to correct an error in that one-sentence requirement at the time requirements are specified (assuming the error can be identified and corrected) than it is after design, code, user documentation, and test cases have been written. Figure 1–4 illustrates that the longer defects persist, the more expensive they are to correct.

The savings potential from early defect detection is significant: Approximately 60 percent of all defects usually exist by design time [Gilb 1988]. A decision early in a project to exclude defect detection amounts to a decision to postpone defect detection and correction until later in the project, when defects become much more expensive and time-consuming to address. That is not a rational decision when time and development dollars are at a premium. According to empirical software quality assurance research, $1 required to resolve an issue during the design phase grows into $60 to $100 required to resolve the same issue after the application has shipped [Soo Hoo 2001].

When a software product has too many defects, including security flaws, vulnerabilities, and bugs, software engineers can end up spending more time correcting these problems than they spent on developing the software in the first place. Project managers can achieve the shortest possible schedules with a higher-quality product by addressing security throughout the SDLC, especially during the early phases, to increase the likelihood that software is more secure the first time.

Figure 1–4: Cost of correcting defects by life-cycle phase. The chart plots the cost to correct a defect against the phase in which it is corrected (requirements engineering, architecture, design, implementation and testing, deployment and operations), grouped by the phase in which the defect was created; cost rises steeply the later a defect is corrected relative to when it was introduced.

1.6.1 Making the Business Case for Software Security: Current State⁷

As software project managers and developers, we know that when we want to introduce new approaches in our development processes, we have to make a cost–benefit argument to executive management to convince them that this move offers a business or strategic return on investment. Executives are not interested in investing in new technical approaches simply because they are innovative or exciting. For profit-making organizations, we need to make a case that demonstrates we will improve market share, profit, or other business elements. For other types of organizations, we need to show that we will improve our software in a way that is important—in a way that adds to the organization’s prestige, that ensures the safety of troops in the battlefield, and so on.

In the area of software security, we have started to see some evidence of successful ROI or economic arguments for security administrative operations, such as maintaining current levels of patches, establishing organizational entities such as computer security incident response teams (CSIRTs) to support security investment, and so on [Blum 2006; Gordon 2006; Huang 2006; Nagaratnam 2005]. In their article “Tangible ROI through Secure Software Engineering,” Kevin Soo Hoo and his colleagues at @stake state the following:

Findings indicate that significant cost savings and other advantages are achieved when security analysis and secure engineering practices are introduced early in the development cycle. The return on investment ranges from 12 percent to 21 percent, with the highest rate of return occurring when analysis is performed during application design.

Since nearly three-quarters of security-related defects are design issues that could be resolved inexpensively during the early stages, a significant opportunity for cost savings exists when secure software engineering principles are applied during design.

However, except for a few studies [Berinato 2002; Soo Hoo 2001], we have seen little evidence presented to support the idea that investment during software development in software security will result in commensurate benefits across the entire life cycle.

7. Updated from [BSI 45].


Results of the Hoover project [Jaquith 2002] provide some case study data that support the ROI argument for investment in software security early in software development. In his article “The Security of Applications: Not All Are Created Equal,” Jaquith says that “the best-designed e-business applications have one-quarter as many security defects as the worst. By making the right investments in application security, companies can out-perform their peers—and reduce risk by 80 percent.”

In their article “Impact of Software Vulnerability Announcements on the Market Value of Software Vendors: An Empirical Investigation,” the authors state that “On average, a vendor loses around 0.6 percent value in stock price when a vulnerability is reported. This is equivalent to a loss in market capitalization values of $0.86 billion per vulnerability announcement.” The purpose of the study described in this article is “to measure vendors’ incentive to develop secure software” [Telang 2004].

We believe that in the future Microsoft may well publish data reflecting the results of using its Security Development Lifecycle [Howard 2006, 2007]. We would also refer readers to the business context discussion in chapter 2 and the business climate discussion in chapter 10 of McGraw’s recent book [McGraw 2006] for ideas.

1.7 Managing Secure Software Development

The previous section put forth useful arguments and identified emerging evidence for the value of detecting software security defects as early in the SDLC as possible. We now turn our attention to some of the key project management and software engineering practices to aid in accomplishing this goal. These are introduced here and covered in greater detail in subsequent chapters of this book.

1.7.1 Which Security Strategy Questions Should I Ask?

Achieving an adequate level of software security means more than complying with regulations or implementing commonly accepted best practices. You and your organization must determine your own definition of “adequate.” The range of actions you must take to reduce software security risk to an acceptable level depends on what the product, service, or system you are building needs to protect and what it needs to prevent and manage.

Consider the following questions from an enterprise perspective. Answers to these questions aid in understanding the security risks to achieving project goals and objectives.

• What is the value we must protect?
• To sustain this value, which assets must be protected? Why must they be protected? What happens if they’re not protected?
• What potential adverse conditions and consequences must be prevented and managed? At what cost? How much disruption can we stand before we take action?
• How do we determine and effectively manage residual risk (the risk remaining after mitigation actions are taken)?
• How do we integrate our answers to these questions into an effective, implementable, enforceable security strategy and plan?

Clearly, an organization cannot protect and prevent everything. Interaction with key stakeholders is essential to determine the project’s risk tolerance and its resilience if the risk is realized. In effect, security in the context of risk management involves determining what could go wrong, how likely such events are to occur, what impact they will have if they do occur, and which actions might mitigate or minimize both the likelihood and the impact of each event to an acceptable level.
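One way to make these questions operational is a simple risk-exposure calculation: for each event that could go wrong, estimate its likelihood and its impact, then rank by the product. The sketch below is illustrative only; the risk descriptions, likelihood estimates, and dollar impacts are invented for the example and are not from the book.

```python
# Illustrative sketch: rank candidate risks by
# exposure = likelihood of occurrence * impact if realized.
# All risk names and figures below are hypothetical.

def risk_exposure(likelihood, impact):
    """Expected loss: probability (0-1) times impact in dollars."""
    return likelihood * impact

risks = [
    # (description, estimated likelihood per year, impact in dollars)
    ("SQL injection against customer database", 0.30, 500_000),
    ("Denial of service on public web front end", 0.50, 80_000),
    ("Insider tampering with build pipeline", 0.05, 1_200_000),
]

# Highest-exposure risks first: these are the candidates for
# mitigation investment the surrounding questions ask about.
ranked = sorted(risks, key=lambda r: risk_exposure(r[1], r[2]), reverse=True)
for desc, likelihood, impact in ranked:
    print(f"{desc}: exposure ${risk_exposure(likelihood, impact):,.0f}")
```

A ranking like this does not replace stakeholder judgment, but it gives the "how much, where, and how fast to invest" discussion a shared starting point.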

The answers to these questions can help you determine how much to invest, where to invest, and how fast to invest in an effort to mitigate software security risk. In the absence of answers to these questions (and a process for periodically reviewing and updating them), you (and your business leaders) will find it difficult to define and deploy an effective security strategy and, therefore, may be unable to effectively govern and manage enterprise, information, and software security.8

8. Refer to Managing Information Security Risks: The OCTAVE Approach [Alberts 2003] for more information on managing information security risk; “An Introduction to Factor Analysis of Information Risk (FAIR)” [Jones 2005] for more information on managing information risk; and “Risk Management Approaches to Protection” [NIAC 2005] for a description of risk management approaches for national critical infrastructures.

The next section presents a practical way to incorporate a reasoned security strategy into your development process. The framework described is a condensed version of the Cigital Risk Management Framework, a mature process that has been applied in the field for almost ten years. It is designed to manage software-induced business risks. Through the application of five simple activities (further detailed in Section 7.4.2), analysts can use their own technical expertise, relevant tools, and technologies to carry out a reasonable risk management approach.

1.7.2 A Risk Management Framework for Software Security9

A necessary part of any approach to ensuring adequate software security is the definition and use of a continuous risk management process. Software security risk includes risks found in the outputs and results produced by each life-cycle phase during assurance activities, risks introduced by insufficient processes, and personnel-related risks. The risk management framework (RMF) introduced here and expanded in Chapter 7 can be used to implement a high-level, consistent, iterative risk analysis that is deeply integrated throughout the SDLC.

Figure 1–5 shows the RMF as a closed-loop process with five activity stages. Throughout the application of the RMF, measurement and reporting activities occur. These activities focus on tracking, displaying, and understanding progress regarding software risk.

1.7.3 Software Security Practices in the Development Life Cycle

Managers and software engineers should treat all software faults and weaknesses as potentially exploitable. Reducing exploitable weaknesses begins with the specification of software security requirements, along with considering requirements that may have been overlooked (see Chapter 3, Requirements Engineering for Secure Software). Software that includes security requirements (such as security constraints on process behaviors and the handling of inputs, and resistance to and tolerance of intentional failures) is more likely to be engineered to remain dependable and secure in the face of an attack. In addition, exercising misuse/abuse cases that anticipate abnormal and unexpected behavior can aid in gaining a better understanding of how to create secure and reliable software (see Section 3.2).
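A misuse/abuse case can be exercised directly as a concrete check. The sketch below is a hypothetical illustration, not an example from the book: a `lookup_user` routine with an assumed whitelist constraint is run against the kinds of abnormal inputs abuse cases typically anticipate, and is expected to reject each one.

```python
# Hypothetical example: turning misuse/abuse cases into checks.
# The lookup_user function and its constraint are illustrative
# assumptions, not taken from the book.

import re

VALID_USERNAME = re.compile(r"[A-Za-z0-9_]{1,32}")

def lookup_user(username):
    """Reject any username that falls outside the predefined constraint."""
    if not VALID_USERNAME.fullmatch(username):
        raise ValueError("rejected: abnormal or malicious input")
    return f"record for {username}"

# Each abuse case anticipates abnormal or unexpected attacker behavior.
abuse_cases = [
    "alice' OR '1'='1",  # SQL-injection-style input
    "A" * 10_000,        # oversized input
    "",                  # empty input
]

for case in abuse_cases:
    try:
        lookup_user(case)
        print("FAILED to reject:", case[:20])
    except ValueError:
        print("rejected as expected")
```

Writing the abuse cases down in this form makes the expected failure behavior explicit and repeatable in the test suite.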

9. This material is extracted and adapted from a more extensive article by Gary McGraw, Cigital, Inc. [BSI 33].


Developing software from the beginning with security in mind is more effective by orders of magnitude than trying to validate, through testing and verification, that the software is secure. For example, attempting to demonstrate that an implemented system will never accept an unsafe input (that is, proving a negative) is impossible. You can prove, however, using approaches such as formal methods and function abstraction, that the software you are designing will never accept an unsafe input. In addition, it is easier to design and implement the system so that input validation routines check every input that the software receives against a set of predefined constraints. Testing the input validation function to demonstrate that it is consistently invoked and correctly performed every time input enters the system is then included in the system’s functional testing.
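The design approach just described, a single validation routine that checks every input against predefined constraints, might be sketched as follows; the field names and constraint patterns are illustrative assumptions, not from the book.

```python
# Sketch of a single input-validation choke point: every input the
# software receives passes through validate() and is checked against
# predefined constraints. Field names and patterns are illustrative.

import re

CONSTRAINTS = {
    "username": re.compile(r"^[A-Za-z0-9_]{1,32}$"),
    "zip_code": re.compile(r"^\d{5}$"),
    "quantity": re.compile(r"^[1-9]\d{0,3}$"),
}

def validate(field, value):
    """Accept only values matching the field's predefined constraint."""
    pattern = CONSTRAINTS.get(field)
    if pattern is None or not pattern.fullmatch(value):
        raise ValueError(f"rejected: {field!r} violates input constraints")
    return value

# Every entry point routes input through the same routine, so the
# functional test suite can verify the routine is consistently invoked.
print(validate("username", "alice_42"))    # accepted
try:
    validate("quantity", "0; DROP TABLE")  # rejected: fails the whitelist
except ValueError as err:
    print(err)
```

Because validation is centralized, demonstrating that it is correctly performed reduces to testing one routine rather than auditing every input path separately.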

Figure 1–5: A software security risk management framework. [The figure shows the RMF as a closed loop of five stages: (1) Understand the Business Context; (2) Identify and Link the Business and Technical Risks; (3) Synthesize and Rank the Risks; (4) Define the Risk Mitigation Strategy; and (5) Carry Out Fixes and Validate. Business context and artifact analysis feed the loop, and Measurement and Reporting spans all five stages.]

Analysis and modeling can serve to better protect your software against the more subtle, complex attack patterns involving externally forced sequences of interactions among components or processes that were never intended to interact during normal software execution. Analysis and modeling can help you determine how to strengthen the security of the software’s interfaces with external entities and increase its tolerance of all faults. Methods in support of analysis and modeling during each life-cycle phase, such as attack patterns, misuse and abuse cases, and architectural risk analysis, are described in subsequent chapters of this book.

If your development organization’s time and resource constraints prevent secure development practices from being applied to the entire software system, you can use the results of a business-driven risk assessment (as introduced earlier in this chapter and further detailed in Section 7.4.2) to determine which software components should be given highest priority.

A security-enhanced life-cycle process should (at least to some extent) compensate for security inadequacies in the software’s requirements by adding risk-driven practices and checks for the adequacy of those practices during all software life-cycle phases. Figure 1–6 depicts one example of how to incorporate security into the SDLC using the concept of touchpoints [McGraw 2006; Taylor 2005]. Software security best practices (touchpoints shown as arrows) are applied to a set of software artifacts (the boxes) that are created during the software development process. The intent of this particular approach is that it is process neutral and, therefore, can be used with a wide range of software development processes (e.g., waterfall, agile, spiral, Capability Maturity Model Integration [CMMI]).

Figure 1–6: Software development life cycle with defined security touchpoints [McGraw 2006]. [The figure maps touchpoints to development artifacts: security requirements and abuse cases to requirements and use cases; risk analysis to architecture and design; risk-based security tests to test plans; code review (tools) to code; risk analysis and penetration testing to tests and test results; and security operations to feedback from the field, with external review spanning the life cycle.]

Security controls in the software’s life cycle should not be limited to the requirements, design, code, and test phases. It is important to continue performing code reviews, security tests, strict configuration control, and quality assurance during deployment and operations to ensure that updates and patches do not add security weaknesses or malicious logic to production software.10 Additional considerations for project managers, including the effect of software security requirements on project scope, project plans, estimating resources, and product and process measures, are detailed in Chapter 7.

1.8 Summary

It is a fact of life that software faults, defects, and other weaknesses affect the ability of software to function securely. These vulnerabilities can be exploited to violate software’s security properties and force the software into an insecure, exploitable state. Dealing with this possibility is a particularly daunting challenge given the ubiquitous connectivity and explosive growth and complexity of software-based systems.

Adopting a security-enhanced software development process that includes secure development practices will reduce the number of exploitable faults and weaknesses in the deployed software. Correcting potential vulnerabilities as early as possible in the SDLC, mainly through the adoption of security-enhanced processes and practices, is far more cost-effective than attempting to diagnose and correct such problems after the system goes into production. It just makes good sense.

Thus, the goals of using secure software practices are as follows:

• Exploitable faults and other weaknesses are eliminated to the greatest extent possible by well-intentioned engineers.

• The likelihood is greatly reduced or eliminated that malicious engineers can intentionally implant exploitable faults and weaknesses, malicious logic, or backdoors into the software.

10. See the Build Security In Deployment & Operations content area for more information [BSI 01].


• The software is attack resistant, attack tolerant, and attack resilient to the extent possible and practical in support of fulfilling the organization’s mission.

To ensure that software and systems meet their security requirements throughout the development life cycle, review, select, and tailor guidance from this book, the BSI Web site, and the sources cited throughout this book as part of normal project management activities.

317

Index

Index terms should be read within the context of software security engineering. Forexample, “requirements engineering” refers to security requirements engineering.

Numbers90 percent right, 167“95 percent defect removal,” 14100-point method, 108

AAbsolute code metrics, 159–160Abstraction, 102Abuse cases. See Misuse/abuse casesAccelerated Requirements Method

(ARM), 103Access control, identity management, 197Accountability, 27, 283Active responses, 245Ad hoc testing, 170Adequate security, defining, 236–238, 278Age verification case study, 199AHP, 110Ambiguity analysis, 128–129, 283Analyzing

failures. See Failure analysisrisks. See Architectural risk analysis;

RMF (risk management framework)

source code. See Source code analysis; Testing security

Application defense techniques, 39–41Application security, 11, 12, 39Architectural risk analysis

architectural vulnerability assessmentambiguity, 128–129attack resistance, 127–128dependency, 129–130known bad practices, 127–128mapping threats and

vulnerabilities, 130

overview, 126–127vulnerability classification, 130

attack patterns, 147–148complete mediation principle, 142defense in depth principle, 140definition, 283economy of mechanism principle, 141failing security principle, 139–140“forest-level” view, 120–123least common mechanism principle,

141least privilege principle, 139never assuming... [safety] principle,

141–142one-page diagram, 122overview, 119–120project management, 247–249promoting privacy principle, 142psychological acceptability principle,

142reluctance to trust principle, 141risk exposure statement, 133–134risk impact determination, 132–134risk likelihood determination, 130–132risk mitigation plans

contents of, 134–136costs, 135definition, 123detection/correction strategies, 135risk impact, reducing, 135risk likelihood, reducing, 135

securing the weakest link principle, 140security guidelines, 143–147, 273security principles, 137–143, 273separation of privilege principle, 140software characterization, 120–123summary of, 273

Index318

Architectural risk analysis (Continued)threats

actors, 126analyzing, 123–126business impact, identifying,

132–133capability for, 131“hactivists,” 126from individuals, 126mapping, 130motivation, 131organized nonstate entities, 126script kiddies, 123state-sponsored, 126structured external, 126summary of sources, 123–125transnational, 126unstructured external, 126virtual hacker organizations, 126vulnerable assets, identifying, 132

useful artifacts, 121zones of trust, 123

Architectural vulnerability assessmentambiguity, 128–129attack resistance, 127–128dependency, 129–130known bad practices, 127–128mapping threats and vulnerabilities,

130overview, 126–127vulnerability classification, 130

Architecturedefinition, 288integration mechanism errors, 187

Architecture and design phasesattack patterns, 55–56issues and challenges, 117–118management concerns, 118objectives, 116risk assessment. See Architectural risk

analysisrole in developing secure software,

115–117source of vulnerabilities, 117summary of, 273

Arguments, assurance casesdeveloping, 66vs. other quality arguments, 70

overview, 62reusing, 69soundness of, 63

ARM (Accelerated Requirements Method), 103

Artifactsarchitectural risk analysis, 121description, 78developing, 88–89, 93–94example, 93–94for measuring security, 256–257

Assetscritical, 236definition, 236threats, identifying, 132

Assurancequality, 13–15role in security, 6–7security. See Measuring security;

Testing securitysoftware, 7, 289source code quality. See Source code

analysisAssurance cases

argumentsdeveloping, 66vs. other quality arguments, 70overview, 62reusing, 69soundness of, 63

benefits of, 69–70building, 62–63CC (Common Criteria), 69claims, 62–63definition, 61, 283diamond symbol, 66EAL (Evaluation Assurance Level), 69evidence, 62–63, 66example, 63–67flowchart of, 64–65GSN (Goal Structuring Notation), 65maintaining, 69–70mitigating complexity, 214–215notation for, 65parallelogram symbol, 66protection profiles, 69reasonable degree of certainty,

69–70

Index 319

recording defects, 63responding to detected errors, 63reusing elements of, 69rounded box symbol, 65in the SDLC, 67–68security evaluation standard, 69security-privacy, laws and

regulations, 68subclaims, 62summary of, 270symbols used in, 65–67

AT&T network outage, 188Attack patterns

architectural risk analysis, 147–148benefits of, 46components, 47–49, 52–53definition, 45, 283MITRE security initiatives, 47recent advances, 46resistance to using, 46in the SDLC

architecture phase, 55–56automated analysis tools, 57–58black-box testing, 59–60code reviews, 57–58coding phase, 57–58design phase, 55–56environmental testing, 59–60implementation phase, 57–58integration testing, 58operations phase, 59requirements phase, 53–54system testing, 58–59testing phase, 58–61unit testing, 58white-box testing, 59–60

specificity level, 51summary of, 270, 273testing security, 173

Attack resilience, 8, 283Attack resistance

architectural vulnerability assessment, 127–128

defensive perspective, 43definition, 8, 283

Attack resistance analysis, 283Attack surface, 35, 284Attack tolerance, 8, 43, 284

Attack trees, 284Attacker’s perspective. See also

Defensive perspective; Functional perspective

attacker’s advantage, 44–45definition, 189identity management, 198–201misuse/abuse cases, 80–81representing. See Attack patternsrequirements engineering, 76–77Web services, 192–196

Attackersbehavior, modeling, 188–189. See also

Attack patterns; Attacker’s perspective

motivation, 131script kiddies, 123unsophisticated, 123vs. users, 165virtual hacker organizations, 126

Attacks. See also Threats; specific attacksbotnets, 284cache poisoning, 285denial-of-service, 285DNS cache poisoning, 153, 286elevation (escalation) of privilege, 286illegal pointer values, 287integer overflow, 287known, identifying. See Attack

patternsmalware, 287phishing, 287recovering from. See Attack resiliencereplay, 288resistance to. See Attack resistancesources of. See Sources of problemsspoofing, 289SQL injection, 155, 289stack-smashing, 33, 289tampering, 289tolerating. See Attack toleranceusurpation, 196

Attributes. See Properties of secure software

Audits, 197, 260–261Authentication, 284Availability, 27, 284Availability threats, 200

Index320

BBank of America, security management,

226Benefits of security. See Costs/benefits of

security; ROI (return on investment)Best practices, 36–37, 284Bishop, Matt, 138, 202Black hats. See AttackersBlack-box testing

definition, 284in the SDLC, 177–178uses for, 59–60

Blacklist technique, 168–169Booch, Grady, 215Botnets, 3, 284Boundary value analysis, 170Brainstorming, 81BSI (Build Security In) Web site,

xix–xx, 46BST (Binary Search Tree), 107Buffer overflow, 154–155, 284–285Bugs in code. See also Flaws in code

buffer overflow, 154–155deadlocks, 156defect rate, 152–153definition, 152, 285exception handling, 154infinite loops, 156input validation, 153–154introduced by fixes, 169known, solutions for, 153–156race conditions, 156resource collisions, 156SQL injection, 155

Building security invs. adding on, 21misuse/abuse cases, 80processes and practices for, 41–42requirements engineering, 79–80

Build Security In (BSI) Web site, xix–xx, 46

Business stresses, 206

CCache poisoning, 285CAPEC (Common Attack Pattern

Enumeration and Classification), 46

Case studiesage verification, 199AT&T network outage, 188Bank of America, security

management, 226complexity, 184, 188, 199, 206exposure of customer data, 133governance and management, 226identity management, 199managing secure development, 226Microsoft, security management, 252Network Solutions data loss, 188power grid failure, 184, 206Skype phone failure, 184TJX security breach, 133voting machine failure, 184

Categorizing requirements, 272Causes of problems. See Sources of

problemsCC (Common Criteria), 69CDA (critical discourse analysis), 103CERT

reported vulnerabilities (1997–2006), 3–4

Secure Coding Initiative, 161sources of vulnerabilities, 161

Characteristics of secure software. SeeProperties of secure software

Claims, assurance cases, 62–63CLASP (Comprehensive Lightweight

Application Security Process), 77, 248

Code. See Source code analysisCode-based testing, 171. See also

White-box testingCoding phase, attack patterns, 57–58Coding practices

books about, 162–163overview, 160–161resources about, 161–163summary of, 274–275

Common Attack Pattern Enumeration and Classification (CAPEC), 46

Common Weakness Enumeration (CWE), 127–128

Complete mediation principle, 142Complexity

business stresses, 206

Index 321

case studiesage verification, 199AT&T network outage, 188identity management, 199Network Solutions data loss, 188power grid failure, 184, 206–207Skype phone failure, 184voting machine failure, 184

component interactions, 206conflicting/changing goals,

205, 213–214deep technical problems, 215–216discrepancies, 206–208evolutionary development,

205, 212–213expanded scope of security, 204–205global work process perspective, 208incremental development, 205, 212–213integrating multiple systems

delegating responsibilities, 209–210failure analysis, consolidating with

mitigation, 211generalizing the problem, 211–212interoperability, 209mitigation, consolidating with

failure analysis, 211mitigations, 209–212“Not my job” attitude, 210operational monitoring,

210–211, 215partitioning security analysis, 208recommended practices, 210–212

interaction stresses, 206less development freedom, 205mitigating

CbyC (Correctness by Construction) method, 216

change support, 215changing/conflicting goals, 214–215continuous risk assessment, 215coordinating security efforts,

219, 277deep technical problems, 216–217end-to-end analysis, 216, 218, 276failure analysis, 218–219, 276–277failure analysis, consolidating with

mitigation, 211flexibility, 215

integrating multiple systems, 209–213

mitigation, consolidating with failure analysis, 211

operational monitoring, 215recommended practices, 218–219summary of, 275tackling hard problems first,

216, 218, 276unavoidable risks, 218–219, 276

overview, 203–205partitioning security analysis, 208people stresses, 206–207recommended practices, 218–219reduced visibility, 204service provider perspective, 208as source of problems, 5–6stress-related failures, 206unanticipated risks, 204, 207user actions, 207wider spectrum of failures, 204

Component interaction errors, 188Component interactions, 206Component-specific integration errors,

187Comprehensive Lightweight

Application Security Process (CLASP), 77, 248

Concurrency management, 144–147Confidentiality, 26, 285Conformance, 8, 285Consolidation, identity management,

190Control-flow testing, 170Controlled Requirements Expression

(CORE), 101–102Convergence, 261–262Coordinating security efforts, 277CORE (Controlled Requirements

Expression), 101–102Core security properties, 26–28. See also

Influential security properties; Properties of secure software

Correctnessdefinition, 30, 285influence on security, 30–34specifications, 30–34under unanticipated conditions, 30

Index322

Costs/benefits of security. See also ROI (return on investment)

budget overruns, 74business case, 17–18business impacts, 133defect rate, effect on development

time, 14lower stock prices, 18preventing defects

“95 percent defect removal,” 14early detection, 13–18error-prone modules, 15inadequate quality assurance, 14vs. repairing, 15

repairing defectscosts of late detection, 15–16by life-cycle phase, 16over development time, 14–15vs. preventing, 15

requirements defects, 74risk mitigation plans, 135schedule problems

canceled projects, 14defect prevention vs. repair, 15poor quality, effect on, 14–16requirements defects, 74time overruns, 14

skimping on quality, 13–14Coverage analysis, 175–176Crackers. See AttackersCritical assets, definition, 236Critical discourse analysis (CDA), 103Cross-site scripting, 285Culture of security, 221–225, 227CWE (Common Weakness

Enumeration), 127–128

DData-flow testing, 170Deadlocks, 156Deception threats, 195Decision table testing, 170Defect density, 257Defect rates

bugs in code, 152–153and development time, 14error-prone modules, 15

flaws in code, 152–153and repair time/cost, 15

Defectsdefinition, 3, 285preventing. See Preventing defectsrepairing. See Repairing defects

Defense in depth, 140, 285Defensive perspective. See also

Attacker’s perspective; Functional perspective

application defense techniques, 39–41application security, 39architectural implications, 37–43attack resistance, 43attack tolerance, 43expected issues, 37–39software development steps, 37unexpected issues, 39–43

Delegating responsibilities, 209–210Denial-of-service attack, 285Dependability, 29–30, 286Dependency, 129–130Dependency analysis, 286Design phase. See Architecture and

design phasesDeveloping software. See SDLC

(software development life cycle)Diamond symbol, assurance cases, 66Discrepancies caused by complexity,

206–208Disruption threats, 195DNS cache poisoning, 286Domain analysis, 102Domain provisioning, 197–198

EEAL (Evaluation Assurance Level), 69Early detection, cost benefits, 13–18Economy of mechanism principle, 141Elevation (escalation) of privilege, 286Elicitation techniques

comparison chart, 105evaluating, 103–104selecting, 89, 97

Eliciting requirementsmisuse/abuse cases, 100–101overview, 100

Index 323

SQUARE, 89, 97–98summary of, 272tools and techniques, 100–106

Endpoint attacks, 201End-to-end analysis, 276Enterprise software security framework

(ESSF). See ESSF (enterprise software security framework)

Environmental testing, 59–60Equivalence partitioning, 170Error-prone modules, 15Errors

architecture integration mechanisms, 187

categories of, 187–188component interaction, 188component-specific integration, 187definition, 186, 286operations, 188specific interfaces, 187system behavior, 188usage, 188

ESSF (enterprise software security framework)

assurance competency, 234best practices, lack of, 228–229creating a new group, 228decision making, 229definition, 230focus, guidelines, 233goals, lack of, 227–228governance competency, 235knowledge management, 234pitfalls, 227–229required competencies, 233–235roadmaps for, 235security touchpoints, 234summary of, 280tools for, 229training, 234“who, what, when” structure, 230–233

Evaluation Assurance Level (EAL), 69Evidence, assurance cases, 62–63, 66Evolutionary development, 205, 212–213Examples of security problems. See Case

studiesException handling, 154Executables, testing, 175–176

Exploits, 286Exposure of information, 194

FFagan inspection, 90, 98, 286Failing security principle, 139–140Failure, definition, 186, 286Failure analysis

attacker behavior, 188–189consolidating with mitigation,

211hardware vs. software, 205–208recommended practices, 218–219summary of, 276–277

Fault-based testing, 171Faults, 186, 286Fault-tolerance testing, 170Flaws in code, 152–153, 286. See also Bugs

in codeFODA (feature-oriented domain

analysis), 102“Forest-level” view, 120–123Formal specifications, 78Functional perspective. See also

Attacker’s perspective; Defensive perspective

definition, 189identity management

access control, 197auditing, 197challenges, 197–198consolidation, 190domain provisioning, 197–198identity mapping, 197interoperability, 190legislation and regulation, 198objectives, 190privacy concerns, 198reporting, 197technical issues, 197–198trends, 197–198

Web services, 190–192Functional security testing

ad hoc testing, 170boundary value analysis, 170code-based testing, 171control-flow testing, 170data-flow testing, 170

Index324

Functional security testing (Continued)decision table testing, 170equivalence partitioning, 170fault-based testing, 171fault-tolerance testing, 170known code internals. See White-box

testinglimitations, 168–169load testing, 171logic-based testing, 170model-based testing, 170overview, 167–168performance testing, 171protocol performance testing, 171robustness testing, 170specification-based testing, 170state-based testing, 170test cases, 170, 173, 274–275tools and techniques, 170–171unknown code internals.

See Black-box testingusage-based testing, 171use-case-based testing, 171

GGeneralizing the problem, 211–212Global work process perspective, 208Goal Structuring Notation (GSN), 65Governance and management. See also

Project managementadequate security, defining,

236–238, 278adopting a security framework. See

ESSF (enterprise software security framework)

analysis, 21–22assets, definition, 236at Bank of America, 226building in vs. adding on, 21case study, 226critical assets, definition, 236definition, 223–224, 288effective, characteristics of, 224–225,

279maturity of practice

audits, 260–261convergence, 261–262exemplars, 265legal perspective, 263

operational resilience, 261–262overview, 259protecting information, 259–260software engineering perspective,

263–264modeling, 21–22overview, 221–222process, definition, 237protection strategies, 236risks

impact, 237–238likelihood, 237–238measuring, 242–243reporting, 242–243tolerance for, 237–238

RMF (risk management framework)business context, understanding,

239–240business risks, identifying, 240–241fixing problems, 242identifying risks, 240–241mitigation strategy, defining,

241–242multi-level loop nature of, 243–244overview, 238–239prioritizing risks, 241synthesizing risks, 241technical risks, identifying, 240–241validating fixes, 242

security, as cultural norm, 221–222, 279security strategy, relevant questions,

18–20security-enhanced SDLC, 20–23summary of, 278–280touchpoints, 22–23

Greenspan, Alan, 134GSN (Goal Structuring Notation), 65

HHackers. See Attackers“Hactivists,” 126. See also AttackersHardened services and servers, 200–201Hardening, 286Howard, Michael, 138, 264

IIBIS (issue-based information systems),

102Icons used in this book, xix, 268

Index 325

Identity managementavailability threats, 200case study, 199endpoint attacks, 201functional perspective

access control, 197auditing, 197challenges, 197–198consolidation, 190domain provisioning, 197–198identity mapping, 197interoperability, 190legislation and regulation, 198objectives, 190privacy concerns, 198reporting, 197technical issues, 197–198trends, 197–198

hardened services and servers, 200–201incident response, 201information leakage, 199–200risk mitigation, 200–201security vs. privacy, 199software development, 201–203usability attacks, 201

Identity mapping, 197Identity spoofing, 286Illegal pointer values, 287Implementation phase, attack patterns,

57–58Incident response, 201Incremental development, 205, 212–213Infinite loops, 156Information warfare, 3Influential security properties. See also

Core security properties; Properties of secure software

complexity, 35–36correctness, 30–34definition, 28dependability, 29–30predictability, 34reliability, 34–35safety, 34–35size, 35–36traceability, 35–36

Information leakage, 199–200Input validation, 153–154Inspections

code. See Source code analysis

Fagan, 90, 98, 286measuring security, 258requirements, 90, 98–99, 272SQUARE (Security Quality

Requirements Engineering), 90, 98–99, 272

Integer overflow, 287Integrating multiple systems

delegating responsibilities, 209–210failure analysis, consolidating with

mitigation, 211generalizing the problem, 211–212interoperability, 209mitigation, consolidating with failure

analysis, 211mitigations, 209–212“Not my job” attitude, 210operational monitoring, 210–211,

215partitioning security analysis, 208recommended practices, 210–212

Integration testing, 58, 176Integrity, 27, 287Interaction stresses, 206Interoperability

identity management, 190integrating multiple systems, 209Web services, 190–191

Issue-based information systems (IBIS), 102

JJAD (Joint Application Development),

102

KKnown

attacks, identifying. See Attackpatterns

bad practices, 127–128bugs, solutions for, 153–156code internals, testing. See White-box

testingvulnerabilities, 187

LLeast common mechanism principle,

141Least privilege principle, 139

Index326

LeBlanc, David, 138
Legal perspective, 263
Libraries, testing, 175–176
Lipner, Steve, 264
Load testing, 171
Logic-based testing, 170

M
MacLean, Rhonda, 226
Make the Client Invisible pattern, 49–51, 56
Malware, 287
Managing security. See Governance and management; Project management
Manual code inspection, 157
Mapping
  identities, 197
  threats and vulnerabilities, 130
Maturity of practice, governance and management
  audits, 260–261
  convergence, 261–262
  exemplars, 265
  legal perspective, 263
  operational resilience, 261–262
  overview, 259
  protecting information, 259–260
  software engineering perspective, 263–264
McGraw, Gary, 138
Measuring security. See also Source code analysis; Testing security
  defect density, 257
  inspections, 258
  Making Security Measurable program, 47
  objectives, 255–256
  overview, 254–255
  phase containment of defects, 257
  process artifacts, 256–257
  process measures, 256–257
  product measures, 257–259
  summary of, 280
Messages, Web services, 191–192, 194
Methods. See Tools and techniques
Metric code analysis, 159–160
Microsoft, security management, 252
Minimizing threats. See Mitigation
Misuse/abuse cases
  attacker’s perspective, 80–81
  brainstorming, 81
  classification, example, 96–97
  creating, 81–82
  definition, 31, 287
  eliciting requirements, 100–101
  example, 82–84
  security, built in vs. added on, 79–80
  system assumptions, 80–81
  templates for, 83
Mitigation. See also Risk mitigation plans
  causing new risks, 254
  complexity. See Complexity, mitigating
  consolidating with failure analysis, 211
  definition, 287
  identity management risks, 200–201
  integrating multiple systems, 209–212
  threats, 194–196
MITRE security initiatives, 47
Model-based testing, 170
Modeling, 90, 102

N
Negative requirements, 77, 172–173
Network Solutions data loss, 188
Neumann, Peter, 184
Never assuming... [safety] principle, 141–142
Nonfunctional requirements, 75–76
Non-repudiation, 27, 287
“Not my job” attitude, 210
Notation for assurance cases, 65
Numeral assignment, 107

O
Operational monitoring, 210–211, 215
Operational resilience, 261–262
Operations errors, 188
Operations phase, attack patterns, 59
Organized nonstate threats, 126
Origins of problems. See Sources of problems

P
Parallelogram symbol, assurance cases, 66
Partitioning security analysis, 208


Passive responses, 245
Penetration testing, 178–179
People stresses, 206–207
Performance testing, 171
Perspectives on security
  attacker’s point of view. See Attacker’s perspective
  complexity, 208
  defender’s point of view. See Defensive perspective
  functional point of view. See Functional perspective
  global work process, 208
  legal, 263
  service provider, 208
  software engineering, 263–264
Phase containment of defects, 257
Phishing, 287
Planning game, 107–108
Positive requirements, 167–168
Power grid failure, 184, 206
Predictability, 34
Predictable execution, 8, 287
Preventing defects. See also Mitigation
  “95 percent defect removal,” 14
  costs of
    “95 percent defect removal,” 14
    early detection, 13–18
    error-prone modules, 15
    inadequate quality assurance, 14
    vs. repair, 15
Prioritizing
  requirements
    overview, 106
    requirements prioritization framework, 109
    SQUARE, 90, 98
    summary of, 272
    tools and techniques, 106–111
  risks, 241
Privacy, 198–199. See also Identity management
Problems, causes. See Sources of problems
Process, definition, 237
Processes and practices
  role in security, 8–9
  in the SDLC, 20–23
Project management. See also Governance and management
  active responses, 245
  architectural risk analysis, 247–249
  continuous risk management, 247–249, 278
  knowledge and expertise, 250–251
  measuring security
    defect density, 257
    inspections, 258
    objectives, 255–256
    overview, 254–255
    phase containment of defects, 257
    process artifacts, 256–257
    process measures, 256–257
    product measures, 257–259
    summary of, 280
  at Microsoft, 252
  miscommunications, 246–247
  passive responses, 245
  planning, 246–250
  recommended practices, 266
  required activities, 249–250
  resource estimation, 251–253
  resources, 250–251
  risks, 253–254
  scope, 245–246
  security practices, 247–249
  tools, 250
Promoting privacy principle, 142
Properties of secure software
  accountability, 27
  asserting. See Assurance cases
  attack resilience, 8
  attack resistance, 8
  attack tolerance, 8
  availability, 27
  confidentiality, 26
  conformance, 8
  core properties, 26–28
  influencing, 36–37
  influential properties
    complexity, 35–36
    correctness, 30–34
    definition, 28
    dependability, 29–30
    predictability, 34
    reliability, 34–35
    safety, 34–35
    size, 35–36
    traceability, 35–36
  integrity, 27
  non-repudiation, 27
  perspectives on. See Attacker’s perspective; Defensive perspective
  predictable execution, 8
  specifying. See Assurance cases
  summary of, 270
  trustworthiness, 8

Protection profiles, 69, 287
Protection strategies, 236
Protocol performance testing, 171
Prototyping, 90
Psychological acceptability principle, 142

Q
QFD (Quality Function Deployment), 101
Qualities. See Properties of secure software
Quality assurance, 13–15. See also Measuring security; Source code analysis; Testing security
Quality requirements. See Requirements engineering

R
Race conditions, 156
Reasonable degree of certainty, 69–70, 288
Recognize, 288
Recover, 288
Red teaming, 288
Reduced visibility, 204
Reducing threats. See Mitigation
Refinement, 102
Relative code metrics, 159–160
Reliability, 7, 34–35, 289
Reluctance to trust principle, 141
Repairing defects, 14–16
Replay attack, 288
Reporting, identity management, 197
Repudiation, 288
Requirements engineering. See also SQUARE (Security Quality Requirements Engineering)
  artifacts
    description, 78
    developing, 88–89, 93–94
    example, 93–94
  attacker’s perspective, 76–77
  common problems, 74–75
  elicitation techniques
    comparison chart, 105
    evaluating, 103–104
    selecting, 89, 97
  eliciting requirements
    misuse/abuse cases, 100–101
    overview, 100
    summary of, 272
    tools and techniques, 100–106
  formal specifications, 78
  importance of, 74–75
  negative requirements, 77
  nonfunctional requirements, 75–76
  prioritizing requirements
    overview, 106
    recommendations for managers, 111–112
    requirements prioritization framework, 109
    SQUARE, 90, 98
    summary of, 272
    tools and techniques, 106–111
  quality requirements, 75–76
  recommendations for managers, 111–112
  security, built in vs. added on, 79–80
  security requirements, 76–78
  summary of, 271–272
  tools and techniques, 77–78. See also Attack patterns; Misuse/abuse cases
Requirements phase
  attack patterns, 53–54
  positive vs. negative requirements, 53–54
  security testing, 173–174
  vague specifications, 53–54
Requirements prioritization framework, 109
Requirements triage, 108–109
Resilience
  attack, 8, 283
  operational, 261–262


Resist, 288
Resource collisions, 156
Resources for project management, 250–253
Return on investment (ROI), 9, 13, 18. See also Costs/benefits of security
Reusing assurance case arguments, 69
Reviewing source code. See Source code analysis
Risk assessment
  in architecture and design. See Architectural risk analysis
  preliminary, 173
  SQUARE
    description, 89
    sample output, 94–97
    techniques, ranking, 94–96
  summary of, 271
  tools and techniques, 94–97
Risk-based testing, 169–172, 275
Risk exposure statement, 133–134
Risk management framework (RMF). See RMF (risk management framework)
Risk mitigation plans. See also Mitigation
  contents of, 134–136
  costs, 135
  definition, 123
  detection/correction strategies, 135
  risk impact, reducing, 135
  risk likelihood, reducing, 135
Risks
  exposure, lack of awareness, 6
  impact, 132–134, 237–238
  likelihood, 130–132, 237–238
  managing. See RMF (risk management framework)
  measuring, 242–243
  project management, 253–254
  protective measures. See Mitigation; Risk mitigation plans
  reporting, 242–243
  tolerance for, 237–238
  unavoidable, 276
  Web services, 193–194
Rittel, Horst, 102
RMF (risk management framework)
  business context, understanding, 239–240
  business risks, identifying, 240–241
  continuous management, 247–249, 278
  fixing problems, 242
  identifying risks, 240–241
  mitigation strategy, defining, 241–242
  multi-level loop nature of, 243–244
  overview, 238–239
  prioritizing risks, 241
  synthesizing risks, 241
  technical risks, identifying, 240–241
  validating fixes, 242
Robustness testing, 170
ROI (return on investment), 13, 17–18. See also Costs/benefits of security
Rounded box symbol, assurance cases, 65
Rubin, Avi, 184

S
Safety, 7, 34–35, 289
Saltzer, Jerome, 138
Schedules
  canceled projects, 14
  costs of security problems, 14–16
  poor quality, effect on, 14–16
  prevention vs. repair, 15
  time overruns, 14
Schneier, Bruce, 138
Schroeder, Michael, 138
SCR (Software Cost Reduction), 78
Script kiddies, 123. See also Attackers
SDLC (software development life cycle). See also specific activities
  architecture and design phases
    attack patterns, 55–56
    issues and challenges, 117–118
    management concerns, 118
    objectives, 116
    risk assessment. See Architectural risk analysis
    role in developing secure software, 115–117
    source of vulnerabilities, 117
  assurance cases, 67–68
  attack patterns
    architecture phase, 55–56
    automated analysis tools, 57–58
    black-box testing, 59–60
    code reviews, 57–58
    coding phase, 57–58
    design phase, 55–56
    environmental testing, 59–60
    implementation phase, 57–58
    integration testing, 58
    operations phase, 59
    requirements phase, 53–54
    system testing, 58–59
    testing phase, 58–61
    unit testing, 58
    white-box testing, 59–60
  black-box testing, 177–178
  coding phase, attack patterns, 57–58
  design phase. See Architecture and design phases
  implementation phase, attack patterns, 57–58
  managing secure development, 20–23
  operations phase, attack patterns, 59
  phase containment of defects, 257
  practices by life cycle phase, summary of, 270–280
  processes and practices, 20–23
  requirements phase, 53–54, 173–174. See also Requirements engineering
  reviewing source code. See Source code analysis
  testing for security. See Testing security
  touchpoints, 22–23
  white-box testing, 175
  writing code. See Coding practices

Secure Coding Initiative, 161
  goal, xv
Securing the weakest link principle, 140
Security
  application, 11, 12, 39
  vs. assurance, 6–7
  as cultural norm, 221–222, 279
  definition, 289
  development assumptions, 5
  goals, 7
  guidelines, 143–147, 273
  nonsecure software, 3
  objective, 7
  practices, summary of, 270–280
  principles, 137–143, 273
  vs. privacy, 199
  problem overview
    complexity of systems, 5–6
    report to the U.S. President, 3
    survey of financial services industry, 4
    vulnerabilities reported to CERT (1997–2006), 3–4
  strategy, 18–19
  system criteria, 7

Security architecture. See Architectural; Architecture and design

Security profiles, 288
Security Quality Requirements Engineering (SQUARE). See SQUARE (Security Quality Requirements Engineering)
Security touchpoints
  definition, 36–37, 290
  ESSF (enterprise software security framework), 234
  in the SDLC, 22–23
Separation of privilege principle, 140
Service provider perspective, 208
Shirey’s threat categories, 192, 194–196
Simple Script Injection pattern, 58
Size, influence on security, 35–36. See also Complexity
Skype phone failure, 184
SOA (service-oriented architecture), 191
SOAP (Simple Object Access Protocol), 192–193
SOAR report, 78
Social engineering, 189, 288
Soft Systems Methodology (SSM), 101
Software
  assurance, 6–7, 289
  characterization, 120–123
  development. See SDLC
  reliability, 7, 34–35, 289
  safety, 7, 34–35, 289
  security. See Security
  trustworthy, 5
Software Cost Reduction (SCR), 78
Software development life cycle (SDLC). See SDLC (software development life cycle)
Software engineering perspective, 263–264


“Software Penetration Testing,” 179
“Software Security Testing,” 179
Sound practice, 289
Source code analysis
  bugs
    buffer overflow, 154–155
    deadlocks, 156
    defect rate, 152–153
    definition, 152
    exception handling, 154
    infinite loops, 156
    input validation, 153–154
    known, solutions for, 153–156
    race conditions, 156
    resource collisions, 156
    SQL injection, 155
  flaws, 152–153
  process diagrams, 160
  source code review
    absolute metrics, 159–160
    automatic analysis, 57–58
    manual inspection, 157
    metric analysis, 159–160
    overview, 156–157
    relative metrics, 159–160
    static code analysis tools, 157–159
    summary of, 274
  vulnerabilities, 152–156. See also Bugs in code; Flaws in code
Sources of problems. See also Attacks; Complexity; Threats
  architecture and design, 117
  bugs in code, 152–153
  CERT analysis, 161
  complexity of systems, 5–6
  flaws in code, 152–153
  implementation defects, 12
  incorrect assumptions, 12
  mistakes, 12–13
  requirements defects, 74
  specification defects, 12
  summary of, 12
  threats, 123–125
  unintended interactions, 12
Specific interface errors, 187
Specification-based testing, 170
Specifications
  properties of secure software. See Assurance cases
  security requirements. See Requirements engineering
Spoofing, 289
SQL injection, 155, 289
SQUARE (Security Quality Requirements Engineering). See also Requirements engineering
  artifacts, developing, 88–89, 93–94
  categorizing requirements, 89, 97–98, 272
  defining terms, 88, 92
  definition, 84
  elicitation techniques, selecting, 89, 97
  eliciting requirements, 89, 97–98
  goals, identifying, 88, 92
  history of, 84
  inspecting requirements, 90, 98–99, 272
  prioritizing requirements, 90, 98
  process overview, 85–87
  process steps, 88–90
  prototyping, 90
  results, expected, 90–91
  results, final, 99
  risk assessment
    description, 89
    sample output, 94–97
    techniques, ranking, 94–96
  sample outputs, 92–99
  summary of, 271
SSM (Soft Systems Methodology), 101
Stack-smashing attack, 33, 289
Standards
  CC (Common Criteria), 69
  credit card industry, 260
  security evaluation, 69
  software engineering terminology, 34
State-based testing, 170
State-sponsored threats, 126
“Static Analysis for Security,” 179
Static code analysis tools, 157–159
Steven, John, 224–225, 281
Stock prices, effects of security problems, 18
Strategies for security, 18–20
Stress testing, 177
Stress-related failures, 206
Structured external threats, 126
Subclaims, 62


System behavior errors, 188
System testing, 58–59, 176–179

T
Tackling hard problems first, 276
Tampering, 289
Techniques. See Tools and techniques
Telephone outages, 184, 188
Templates
  misuse/abuse cases, 83
  testing security, 172

Test cases, functional testing, 170, 173, 274–275

Testing security. See also Measuring security; Source code analysis
  90 percent right, 167
  attack patterns, 58–61, 173
  black-box testing, 177–178
  blacklist technique, 168–169
  books and publications, 164, 179–180
  bugs, introduced by fixes, 169
  coverage analysis, 175–176
  executables, testing, 175–176
  functional
    ad hoc testing, 170
    boundary value analysis, 170
    code-based testing, 171. See also White-box testing
    control-flow testing, 170
    data-flow testing, 170
    decision table testing, 170
    equivalence partitioning, 170
    fault-based testing, 171
    fault-tolerance testing, 170
    limitations, 168–169
    load testing, 171
    logic-based testing, 170
    model-based testing, 170
    overview, 167–168
    performance testing, 171
    protocol performance testing, 171
    robustness testing, 170
    specification-based testing, 170
    state-based testing, 170
    tools and techniques, 170–171
    usage-based testing, 171
    use-case-based testing, 171
    white-box testing, 171
  goals of, 164
  integration testing, 176
  known internals. See White-box testing
  libraries, testing, 175–176
  limitations, 168–169
  methods for, 167
  negative requirements, 172–173
  penetration testing, 178–179
  positive requirements, 167–168
  preliminary risk assessment, 173
  required scenarios, 33
  requirements phase, 173–174
  resources about, 179–180
  risk-based, 169–172, 275
  scenarios, from past experiences, 172–173
  vs. software testing, 165–167
  stress testing, 177
  summary of, 274–275
  system testing, 176–179
  templates for, 172
  threat modeling, 173
  throughout the SDLC
    black-box testing, 177–178
    coverage analysis, 175–176
    executables, testing, 175–176
    integration testing, 176
    libraries, testing, 175–176
    penetration testing, 178–179
    preliminary risk assessment, 173
    requirements phase, 173–174
    stress testing, 177
    system testing, 176–179
    unit testing, 174–175
    white-box testing, 175
  unique aspects, summary of, 274
  unit testing, 174–175
  unknown internals. See Black-box testing
  users vs. attackers, 165
  white-box testing, 175
Theory-W, 108
Threats. See also Attacks; Sources of problems
  actors, 126
  analyzing, 123–126, 289
  business impact, identifying, 132–133
  capability for, 131
  categories of, 9–10, 194–196
  deception, 195
  definition, 289
  development stage, 9–10
  disruption, 195
  exposure of information, 194
  “hactivists,” 126
  from individuals, 126
  mapping, 130
  modeling, 173, 290
  motivation, 131
  operations stage, 10
  organized nonstate entities, 126
  protective measures. See Mitigation
  script kiddies, 123
  Shirey’s categories, 192, 194–196
  state-sponsored, 126
  structured external, 126
  summary of sources, 123–125
  transnational, 126
  unstructured external, 126
  usurpation, 196
  virtual hacker organizations, 126
  vulnerable assets, identifying, 132
  weaknesses. See Vulnerabilities

TJX security breach, 133
Tolerance
  attacks, 8, 43, 284
  definition, 290
  faults, 170
  risks, 237–238
Tools and techniques. See also specific tools and techniques
  100-point method, 108
  AHP, 110
  ARM (Accelerated Requirements Method), 103
  automated analysis, 57–58
  bad practices reference, 127–128
  BST (binary search tree), 107
  CDA (critical discourse analysis), 103
  CLASP (Comprehensive Lightweight Application Security Process), 77, 248
  CORE (Controlled Requirements Expression), 101–102
  CWE (Common Weakness Enumeration), 127–128
  domain analysis, 102
  eliciting requirements, 100–106
  ESSF (enterprise software security framework), 229
  FODA (feature-oriented domain analysis), 102
  functional security testing, 170–171
  IBIS (issue-based information systems), 102
  JAD (Joint Application Development), 102
  numeral assignment, 107
  planning game, 107–108
  prioritizing requirements, 106–111
  project management, 250
  prototyping, 90
  QFD (Quality Function Deployment), 101
  requirements engineering, 77–78. See also SQUARE
  requirements prioritization framework, 109
  requirements triage, 108–109
  risk assessment, 94–97
  SSM (Soft Systems Methodology), 101
  static code analysis, 157–159
  Theory-W, 108
  Wiegers’ method, 109
  win-win, 108
Touchpoints
  definition, 36–37, 290
  ESSF (enterprise software security framework), 234
  in the SDLC, 22–23
Traceability, 35–36
Transnational threats, 126
Trust boundaries, 290
Trustworthiness, 8, 290

U
Unanticipated risks, 204, 207
Unintended interactions, 12
Unit testing, 58, 174–175
Unstructured external threats, 126
Usability attacks, 201
Usage errors, 188
Usage-based testing, 171
Use cases, 290. See also Misuse/abuse cases


Use-case diagrams. See Misuse/abuse cases

Use-case-based testing, 171
User actions, effects of complexity, 207
Users vs. attackers, 165
Usurpation threats, 196

V
Viega, John, 138
Voting machine failure, 184
Vulnerabilities. See also Architectural vulnerability assessment
  architectural vulnerability assessment, 130
  classifying, 130
  in code, 152. See also Bugs in code; Flaws in code
  definition, 290
  known, list of, 187
  perceived size, 33
  reported to CERT (1997–2006), 3–4
  tolerating, based on size, 32–34
Vulnerable assets, identifying, 132

W
Weaknesses. See Vulnerabilities
Web services
  attacker’s perspective, 192–196
  distributed systems risks, 193–194
  functional perspective, 190–192
  goal of, 191
  interoperability, 190–191
  message risks, 194
  messages, 191–192
  risk factors, 193–194
  Shirey’s threat categories, 192, 194–196
  SOA (service-oriented architecture), 191
  SOAP (Simple Object Access Protocol), 192–193
White-box testing. See also Code-based testing
  attack patterns, 59–60
  definition, 290
  functional security testing, 171
  in the SDLC, 175
Whitelist, 290
“Who, what, when” structure, 230–233
Wicked problems, 102
Wiegers’ method, 109
Win-win technique, 108

Z
Zones of trust, 123, 290