A Taxonomy of Computer Program Security Flaws

CARL E. LANDWEHR, ALAN R. BULL, JOHN P. MCDERMOTT, AND WILLIAM S. CHOI

Information Technology Division, Naval Research Laboratory, Washington, D.C. 20375-5337

An organized record of actual flaws can be useful to computer system designers, programmers, analysts, administrators, and users. This survey provides a taxonomy for computer program security flaws, with an Appendix that documents 50 actual security flaws. These flaws have all been described previously in the open literature, but in widely separated places. For those new to the field of computer security, they provide a good introduction to the characteristics of security flaws and how they can arise. Because these flaws were not randomly selected from a valid statistical sample of such flaws, we make no strong claims concerning the likely distribution of actual security flaws within the taxonomy. However, this method of organizing security flaw data can help those who have custody of more representative samples to organize them and to focus their efforts to remove and, eventually, to prevent the introduction of security flaws.

Categories and Subject Descriptors: D.2.0 [Software Engineering]: General—protection mechanisms; D.2.9 [Software Engineering]: Management—life cycle; software configuration management; D.4.6 [Operating Systems]: Security and Protection—access controls; authentication; information flow controls; invasive software; K.6.3 [Management of Computing and Information Systems]: Software Management—software development; software maintenance; K.6.5 [Management of Computing and Information Systems]: Security and Protection—authentication; invasive software

General Terms: Security

Additional Key Words and Phrases: Error/defect classification, security flaw, taxonomy

INTRODUCTION

Knowing how systems have failed can help us build systems that resist failure. Petroski [1992] makes this point eloquently in the context of engineering design, and although software failures may be less visible than those of the bridges he describes, they can be equally damaging. But the history of software failures, apart from a few highly visible ones [Leveson and Turner 1992; Spafford 1989], is relatively undocumented. This survey collects and organizes a number of actual security flaws that have caused failures, so that designers, programmers, and analysts may do their work with a more precise knowledge of what has gone before.

Computer security flaws are any conditions or circumstances that can result in denial of service, unauthorized disclosure, unauthorized destruction of data, or unauthorized modification of data [Landwehr 1981]. Our taxonomy attempts to organize information about flaws so that, as new flaws are added, readers will gain a fuller understanding of which parts of systems and which parts of the system life cycle are generating more security flaws than others.



CONTENTS

INTRODUCTION
    What Is a Security Flaw in a Program?
    Why Look for Security Flaws in Computer Programs?
1. PREVIOUS WORK
2. TAXONOMY
    2.1 By Genesis
    2.2 By Time of Introduction
    2.3 By Location
3. DISCUSSION
    3.1 Limitations
    3.2 Inferences
APPENDIX: SELECTED SECURITY FLAWS
ACKNOWLEDGMENTS
REFERENCES

This information should be useful not only to designers, but also to those faced with the difficult task of assessing the security of a system already built. To assess accurately the security of a computer system, an analyst must find its vulnerabilities. To do this, the analyst must understand the system thoroughly and recognize that computer security flaws that threaten system security may exist anywhere in the system.

There is a legitimate concern that this kind of information could assist those who would attack computer systems. Partly for this reason, we have limited the cases described here to those that already have been publicly documented elsewhere and are relatively old. We do not suggest that we have assembled a representative random sample of all known computer security flaws, but we have tried to include a wide variety. We offer the taxonomy for the use of those who are presently responsible for repelling attacks and correcting flaws. Their data, organized this way and abstracted, could be used to focus efforts to remove security flaws and prevent their introduction.

Other taxonomies [Brehmer and Carl 1993; Chillarege et al. 1992; Florac 1992] have recently been developed for organizing data about software defects and anomalies of all kinds. These are primarily oriented toward collecting data during the software development process for the purpose of improving it. We are primarily concerned with security flaws that are detected only after the software has been released for operational use; our taxonomy, while not incompatible with these efforts, reflects this perspective.

What is a Security Flaw in a Program?

This question is akin to "what is a bug?" In fact, an inadvertently introduced security flaw in a program is a bug. Generally, a security flaw is a part of a program that can cause the system to violate its security requirements. Finding security flaws, then, demands some knowledge of system security requirements. These requirements vary according to the system and the application, so we cannot address them in detail here. Usually, they concern identification and authentication of users, authorization of particular actions, and accountability for actions taken.

We have tried to keep our use of the term "flaw" intuitive without conflicting with standard terminology. The IEEE Standard Glossary of Software Engineering Terminology [IEEE Computer Society 1990] includes the following definitions:

error: human action that produces an incorrect result (such as software containing a fault),

fault: an incorrect step, process, or data definition in a computer program, and

failure: the inability of a system or component to perform its required functions within specified performance requirements.

A failure may be produced when a fault is encountered. This glossary lists bug as a synonym for both error and fault. We use flaw as a synonym for bug, hence (in IEEE terms) as a synonym for fault, except that we include flaws that have been inserted into a system intentionally, as well as accidental ones.

IFIP WG10.4 has also published a taxonomy and definitions of terms [Laprie et al. 1992] in this area. These define faults as the cause of errors that may lead to failures. A system fails when the delivered service no longer complies with the specification. This definition of "error" seems more consistent with its use in "error detection and correction" as applied to noisy communication channels or unreliable memory components than the IEEE one. Again, our notion of flaw corresponds to that of a fault, with the possibility that the fault may be introduced either accidentally or maliciously.

Why Look for Security Flaws in Computer Programs?

Early work in computer security was based on the paradigm of "penetrate and patch": analysts searched for security flaws and attempted to remove them. Unfortunately, this task was, in most cases, unending: more flaws always seemed to appear [Neumann 1978; Schell 1979]. Sometimes the fix for a flaw introduced new flaws, and sometimes flaws were found that could not be repaired because system operation depended on them (e.g., cases I3 and B1 in the Appendix).

This experience led researchers to seek better ways of building systems to meet security requirements in the first place instead of attempting to mend the flawed systems already installed. Although some success has been attained in identifying better strategies for building systems [Department of Defense 1985; Landwehr 1983], these techniques are not universally applied. More importantly, they do not eliminate the need to test a newly built or modified system (for example, to be sure that flaws avoided in initial specification have not been introduced in implementation).

1. PREVIOUS WORK

Most of the serious efforts to locate security flaws in computer programs through penetration exercises have used the Flaw Hypothesis Methodology developed in the early 1970s [Linde 1975]. This method requires system developers first (1) to become familiar with the details of the way the system works (its control structure), then (2) to generate hypotheses as to where flaws might exist in a system, (3) to use system documentation and tests to confirm the presence of a flaw, and (4) to generalize the confirmed flaws and use this information to guide further efforts. Although Linde [1975] provides lists of generic system functional flaws and generic operating system attacks, he does not provide a systematic organization for security flaws.

In the mid-70s both the Research in Secured Operating Systems (RISOS) project, conducted at Lawrence Livermore Laboratories, and the Protection Analysis project, conducted at the Information Sciences Institute of the University of Southern California (USC/ISI), attempted to characterize operating system security flaws. The RISOS final report [Abbott et al. 1976] describes seven categories of operating system security flaws:

(1) incomplete parameter validation,
(2) inconsistent parameter validation,
(3) implicit sharing of privileged/confidential data,
(4) asynchronous validation/inadequate serialization,
(5) inadequate identification/authentication/authorization,
(6) violable prohibition/limit, and
(7) exploitable logic error.

The report describes generic examples for each flaw category and provides reasonably detailed accounts for 17 actual flaws found in three operating systems: IBM OS/MVT, Univac 1100 Series, and TENEX. Each flaw is assigned to one of the seven categories.

The goal of the Protection Analysis (PA) project was to collect error examples and abstract patterns from them that, it was hoped, would be useful in automating the search for flaws. According to the final report [Bisbey and Hollingworth 1978], more than 100 errors that could permit system penetrations were recorded from six different operating systems (GCOS, MULTICS, and Unix, in addition to those investigated under RISOS). Unfortunately, this error database was never published and no longer exists [Bisbey 1990]. However, the researchers did publish some examples, and they did develop a classification scheme for errors. Initially, they hypothesized 10 error categories; these were eventually reorganized into four "global" categories:

● domain errors, including errors of exposed representation, incomplete destruction of data within a deallocated object, or incomplete destruction of its context,
● validation errors, including failure to validate operands or to handle boundary conditions properly in queue management,
● naming errors, including aliasing and incomplete revocation of access to a deallocated object, and
● serialization errors, including multiple reference errors and interrupted atomic operations.

Although the researchers felt that they had developed a very successful method for finding errors in operating systems, the technique resisted automation. Research attention shifted from finding flaws in systems to developing methods for building systems that would be free of such errors.

Our goals are more limited than those of these earlier efforts in that we seek primarily to provide an understandable record of security flaws that have occurred. They are also more ambitious, in that we seek to categorize not only the details of the flaw, but also the genesis of the flaw and the time and place it entered the system.

2. TAXONOMY

A taxonomy is not simply a neutral structure for categorizing specimens. It implicitly embodies a theory of the universe from which those specimens are drawn. It defines what data are to be recorded and how like and unlike specimens are to be distinguished. In creating a taxonomy of computer program security flaws, we are in this way creating a theory of such flaws, and if we seek answers to particular questions from a collection of flaw instances, we must organize the taxonomy accordingly.

Because we are fundamentally concerned with the problems of building and operating systems that can enforce security policies, we ask three basic questions about each observed flaw:

● How did it enter the system?

● When did it enter the system?

● Where in the system is it manifest?

Each of these questions motivates a subsection of the taxonomy, and each flaw is recorded in each subsection. By reading case histories and reviewing the distribution of flaws according to the answers to these questions, designers, programmers, analysts, administrators, and users will, we hope, be better able to focus their respective efforts to avoid introducing security flaws during system design and implementation, to find residual flaws during system evaluation or certification, and to administer and operate systems securely.

Figures 1-3 display the details of the taxonomy by genesis (how), time of introduction (when), and location (where). Note that the same flaw will appear at least once in each of these categories. Divisions and subdivisions are provided within the categories; these, and their motivation, are described in detail later. Where feasible, these subdivisions define sets of mutually exclusive and collectively exhaustive categories. Often, however, especially at the finer levels, such a partitioning is infeasible, and completeness of the set of categories cannot be assured. In general, we have tried to include categories only where they might help an analyst searching for flaws or a developer seeking to prevent them.

The description of each flaw category refers to applicable cases (listed in the Appendix).


[Figure 1. Security flaw taxonomy: flaws by genesis; parenthesized entries indicate secondary assignments. The figure gives a case count and case IDs for each genesis category: intentional flaws (malicious: non-replicating Trojan horse, replicating Trojan horse (virus), trapdoor, logic/time bomb; nonmalicious: covert channel, other) and inadvertent flaws (validation error (incomplete/inconsistent); domain error, including object reuse, residuals, and exposed representation errors; serialization/aliasing, including TOCTTOU errors; inadequate identification/authentication; boundary condition violation, including resource exhaustion and violable constraint errors; other exploitable logic error).]

Open-literature reports of security flaws are often abstract and fail to provide a realistic view of system vulnerabilities. Where studies do provide examples of actual vulnerabilities in existing systems, they are sometimes sketchy and incomplete lest hackers abuse the information. Our criteria for selecting cases are:

(1) the case must present a particular type of vulnerability clearly enough that a scenario or program that threatens system security can be understood by the classifier, and

(2) the potential damage resulting from the vulnerability described must be more than superficial.

Each case includes the name of the author or investigator, the type of system involved, and a description of the flaw.

A given case may reveal several kinds of security flaws. For example, if a system programmer inserts a Trojan horse that exploits a covert channel¹ to disclose sensitive information, both the Trojan horse and the covert channel are flaws in the operational system; the former will probably have been introduced maliciously, the latter inadvertently. Of course, any system that permits a user to invoke an uncertified program is vulnerable to Trojan horses. Whether the fact that a system permits users to install programs also represents a security flaw is an interesting question. The answer seems to depend on the context in which the question is asked.

¹ Covert channel: a communication path in a computer system not intended as such by the system's designers.



[Figure 2. Security flaw taxonomy: flaws by time of introduction. The figure gives a case count and case IDs for flaws introduced during development (requirement/specification/design, source code, object code), during maintenance, and during operation.]

[Figure 3. Security flaw taxonomy: flaws by location. The figure gives a case count and case IDs for software flaws in the operating system (system initialization, memory management, process management, device management including I/O and networking, file management, identification/authentication, other/unknown), in support software (privileged utilities, unprivileged utilities), and in application software, and for flaws in hardware.]


Permitting users of, say, an air traffic control system or, less threateningly, an airline reservation system, to install their own programs seems intuitively unsafe; it is a flaw. On the other hand, preventing owners of PCs from installing their own programs would seem ridiculously restrictive.

The cases selected for the Appendix are a small sample, and we caution against unsupported generalizations based on the flaws they exhibit. In particular, readers should not interpret the flaws recorded in the Appendix as indications that the systems in which they occurred are necessarily more or less secure than others. In most cases, the absence of a system from the Appendix simply reflects the fact that it has not been tested as thoroughly or had its flaws documented as openly as those we have cited. Readers are encouraged to communicate additional cases to the authors so that we can better understand where security flaws really occur.

The balance of this section describes the taxonomy in detail. The case histories can be read prior to the details of the taxonomy, and readers may wish to read some or all of the Appendix at this point. Particularly if you are new to computer security issues, the case descriptions are intended to illustrate the subtlety and variety of security flaws that have actually occurred, and an appreciation of them may help you grasp the taxonomic categories.

2.1 By Genesis

How does a security flaw find its way into a program? It may be introduced intentionally or inadvertently. Different strategies can be used to avoid, detect, or compensate for accidental flaws as opposed to those inserted intentionally. For example, if most security flaws turn out to be accidentally introduced, increasing the resources devoted to code reviews and testing may be reasonably effective in reducing the number of flaws. But if most significant security flaws are introduced maliciously, these techniques are much less likely to help, and it may be more productive to take measures to hire more trustworthy programmers, devote more effort to penetration testing, and invest in virus detection packages. Our goal in recording this distinction is, ultimately, to collect data that will provide a basis for deciding which strategies to use in a particular context.

Characterizing intention is tricky: some features intentionally placed in programs can at the same time introduce security flaws inadvertently (e.g., a feature that facilitates remote debugging or system maintenance may at the same time provide a trapdoor to a system). Where such cases can be distinguished, they are categorized as intentional but nonmalicious. Not wishing to endow programs with intentions, we use the terms "malicious flaw," "malicious code," and so on, as shorthand for flaws, code, etc., that have been introduced into a system by an individual with malicious intent. Although some malicious flaws could be disguised as inadvertent flaws, this distinction should be possible to make in practice—inadvertently created Trojan horse programs are hardly likely! Inadvertent flaws in requirements or specifications manifest themselves ultimately in the implementation; flaws may also be introduced inadvertently during maintenance.

Both malicious flaws and nonmalicious flaws can be difficult to detect: the former because they have been intentionally hidden, and the latter because residual flaws may be more likely to occur in rarely invoked parts of the software. One may expect malicious code to attempt to cause significant damage to a system, but an inadvertent flaw that is exploited by a malicious intruder can be equally dangerous.

2.1.1 Malicious Flaws

Malicious flaws have acquired colorful names, including Trojan horse, trapdoor, time-bomb, and logic-bomb. The term "Trojan horse" was introduced by Dan Edwards and recorded by Anderson [1972] to characterize a particular computer security threat; it has been redefined many times [Anderson 1972; Denning 1982; Gasser 1988; Landwehr 1981]. It refers generally to a program that masquerades as a useful service but exploits rights of the program's user—rights not possessed by the author of the Trojan horse—in a way the user does not intend.

Since the author of malicious code needs to disguise it somehow so that it will be invoked by a nonmalicious user (unless the author means also to invoke the code, in which case he or she presumably already possesses the authorization to perform the intended sabotage), almost any malicious code can be called a Trojan horse. A Trojan horse that replicates itself by copying its code into other program files (see case MA1) is commonly referred to as a virus [Cohen 1984; Pfleeger 1989]. One that replicates itself by creating new processes or files to contain its code, instead of modifying existing storage entities, is often called a worm [Schoch and Hupp 1982]. Denning [1988] provides a general discussion of these terms; differences of opinion about the term applicable to a particular flaw or its exploitations sometimes occur [Cohen 1984; Spafford 1989].

A trapdoor is a hidden piece of code that responds to a special input, allowing its user access to resources without passing through the normal security enforcement mechanism (see case U1). For example, a programmer of automated teller machines (ATMs) might be required to check a personal identification number (PIN) read from a card against the number keyed in by the user. If the numbers match, the user is to be permitted to enter transactions. By adding a disjunct to the condition that implements this test, the programmer can provide a trapdoor; the added disjunct is the second condition below:

    if PINcard == PINkeyed OR PINkeyed == 9999 then {permit transactions}

In this example, 9999 would be a universal PIN that would work with any bank card submitted to the ATM. Of course, the code in this example would be easy for a code reviewer, although not an ATM user, to spot, so a malicious programmer would need to take additional steps to hide the code that implements the trapdoor. If passwords are stored in a system file rather than on a user-supplied card, a special password known to an intruder mixed in a file of legitimate ones might be difficult for reviewers to find.
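To make the hiding step concrete, the sketch below renders the same trapdoor in C. It is purely illustrative: the function name, constant, and surrounding logic are our own invention, not code from any actual ATM system.

    #include <string.h>

    /* Hypothetical PIN check.  The first comparison is the intended test;
     * the second disjunct is the trapdoor: the hard-coded value "9999"
     * authorizes transactions for any card, bypassing the normal check. */
    static int pin_ok(const char *pin_on_card, const char *pin_keyed)
    {
        return strcmp(pin_on_card, pin_keyed) == 0   /* intended check  */
            || strcmp(pin_keyed, "9999") == 0;       /* hidden trapdoor */
    }

Buried among hundreds of lines of legitimate validation code, a one-line disjunct like this is easy to overlook unless reviewers are looking specifically for it.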

It might be argued that a login program with a trapdoor is really a Trojan horse in the sense defined above, but the two terms are usually distinguished [Denning 1982]. Thompson [1984] describes a method for building a Trojan horse compiler that can install both itself and a trapdoor in a Unix password-checking routine in future versions of the Unix system.

A time-bomb or logic-bomb is a piece of code that remains dormant in the host system until a certain "detonation" time or event occurs (see case I8). When triggered, a time-bomb may deny service by crashing the system, deleting files, or degrading system response time. A time-bomb might be placed within either a replicating or nonreplicating Trojan horse.

2.1.2 Intentional, Nonmalicious Flaws

A Trojan horse program may convey sensitive information to a penetrator over covert channels. A covert channel is simply a path used to transfer information in a way not intended by the system's designers [Lampson 1973]. Since covert channels, by definition, are channels not placed there intentionally, they should perhaps appear in the category of inadvertent flaws. We categorize them as intentional but nonmalicious flaws because they frequently arise in resource-sharing services that are intentionally part of the system. Indeed, the most difficult ones to eliminate are those that arise in the fulfillment of essential system requirements. Unlike their creation, their exploitation is likely to be malicious. Exploitation of a covert channel usually involves a service program, most likely a Trojan horse. Generally, this program has access to confidential data and can encode that data for transmission over the covert channel. There will also be a receiver program that "listens" to the chosen covert channel and decodes the message for a penetrator. If the service program could communicate confidential data directly to a penetrator without being monitored, of course, there would be no need for it to use a covert channel.

Covert channels are frequently classified as either storage or timing channels. A storage channel transfers information through the setting of bits by one program and the reading of those bits by another. What distinguishes this case from that of ordinary operation is that the bits are used to convey encoded information. Examples would include using a file intended to hold only audit information to convey user passwords—using the name of a file, or perhaps status bits associated with it that can be read by all users, to signal the contents of the file. Timing channels convey information by modulating some aspect of system behavior over time, so that the program receiving the information can observe system behavior (e.g., the system's paging rate, the time a certain transaction requires to execute, the time it takes to gain access to a shared bus) and infer protected information.
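A minimal sketch of a storage channel follows, under assumptions of our own (the file name, the one-second pacing, and the helper names are invented): the sender signals each bit of a secret through the existence of a file that any process can observe, and the receiver polls for it.

    #include <fcntl.h>
    #include <unistd.h>

    #define FLAG_FILE "/tmp/.cc_flag"   /* shared, world-observable state */

    /* Sender side (e.g., inside a Trojan horse with access to the secret):
     * signal a "1" by creating the file, a "0" by removing it. */
    static void send_bit(int bit)
    {
        if (bit)
            close(open(FLAG_FILE, O_CREAT | O_WRONLY, 0644));
        else
            unlink(FLAG_FILE);
        sleep(1);                        /* crude sender/receiver pacing */
    }

    /* Receiver side (run by the penetrator): read the bit back by
     * observing whether the file exists. */
    static int receive_bit(void)
    {
        int bit = (access(FLAG_FILE, F_OK) == 0);
        sleep(1);
        return bit;
    }

The storage medium here is nothing more than a directory entry both parties can see; any shared attribute that one program can set and another can read (file names, status bits, quotas) can serve the same purpose.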

The distinction between storage and timing channels is not sharp. Exploitation of either kind of channel requires some degree of synchronization between the sender and receiver. It also requires the ability to modulate the behavior of some shared resource. In practice, covert channels are often distinguished on the basis of how they can be detected: those detectable by information flow analysis of specifications or code are considered storage channels.

Other kinds of intentional but nonmalicious security flaws are possible. Functional requirements that are written without regard to security requirements can lead to such flaws; one of the flaws exploited by the "Internet worm" [Spafford 1989] (case U10) could be placed in this category.


2.1.3 Inadvertent Flaws

Inadvertent flaws may occur in requirements; they may also find their way into software during specification and coding. Although many of these are detected and removed through testing, some flaws can remain undetected and later cause problems during operation and maintenance of the software system. For a software system composed of many modules and involving many programmers, flaws are often difficult to find and correct because module interfaces are inadequately documented and because global variables are used. The lack of documentation is especially troublesome during maintenance, when attempts to fix existing flaws often generate new flaws because maintainers lack understanding of the system as a whole. Although inadvertent flaws may not pose an immediate threat to the security of the system, the weakness resulting from a flaw may be exploited by an intruder (see case D1).

There are many possible ways to organize flaws within this category. Recently, Chillarege et al. [1992] and Sullivan and Chillarege [1992] published classifications of defects (not necessarily security flaws) found in commercial operating systems and databases. Florac's [1992] framework supports counting problems and defects but does not attempt to characterize defect types. The efforts of Bisbey and Hollingworth [1978] and Abbott et al. [1976], reviewed in Section 1, provide classifications specifically for security flaws.

Our goals for this part of the taxonomy are primarily descriptive: we seek a classification that provides a reasonable map of the terrain of computer program security flaws, permitting us to group intuitively similar kinds of flaws and separate different ones. Providing secure operation of a computer often corresponds to building fences between different pieces of software (or different instantiations of the same piece of software), to building gates in those fences, and to building mechanisms to control and monitor traffic through the gates.



Our taxonomy, which draws primarily on the work of Bisbey and Abbott, reflects this view. Knowing the type and distribution of actual, inadvertent flaws among these kinds of mechanisms should provide information that will help designers, programmers, and analysts focus their activities.

Inadvertent flaws can be classified as flaws related to the following:

● validation errors,
● domain errors,
● serialization/aliasing errors,
● errors of inadequate identification/authentication,
● boundary condition errors, and
● other exploitable logic errors.

Validation flaws may be likened to a lazy gatekeeper: one who fails to check all the credentials of a traveler seeking to pass through a gate. They occur when a program fails to check that the parameters supplied or returned to it conform to its assumptions about them, or when these checks are misplaced, so they are ineffectual. These assumptions may include the number of parameters provided, the type of each, the location or maximum length of a buffer, or the access permissions on a file. We lump together cases of incomplete validation (where some but not all parameters are checked) and inconsistent validation (where different interface routines to a common data structure fail to apply the same set of checks).
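The C sketch below shows one common shape such a flaw takes; the routine and its parameters are hypothetical, invented only to illustrate incomplete validation of a caller-supplied length.

    #include <string.h>

    #define NAME_MAX_LEN 64

    /* Hypothetical service routine with an incomplete validation flaw:
     * the pointer is checked, but the caller-supplied length is never
     * compared against the size of the local buffer, so a caller can
     * overrun `local` and corrupt adjacent storage. */
    int set_terminal_name(const char *name, size_t len)
    {
        char local[NAME_MAX_LEN];

        if (name == NULL)           /* partial check: pointer only          */
            return -1;
        memcpy(local, name, len);   /* missing check: len <= NAME_MAX_LEN   */
        /* ... use local ... */
        return 0;
    }

An inconsistent-validation variant of the same flaw would apply the length check in one entry point to the data structure but not in another.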

Domain flaws, which correspond to "holes in the fences," occur when the intended boundaries between protection environments are porous. For example, a user who creates a new file and discovers that it contains information from a file deleted by a different user has discovered a domain flaw. (This kind of error is sometimes referred to as a problem with object reuse or with residuals.) We also include in this category flaws of exposed representation [Bisbey and Hollingworth 1978], in which the lower-level representation of an abstract object, intended to be hidden in the current domain, is in fact exposed (see cases B1 and DT1). Errors classed by Abbott as "implicit sharing of privileged/confidential data" will generally fall in this category.
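As a sketch of how a domain (object reuse) flaw arises, the hypothetical allocator below hands storage freed by one user to the next user without clearing it; all names and sizes are invented for the example.

    #include <stddef.h>

    #define BLOCKS      16
    #define BLOCK_SIZE  512

    static char pool[BLOCKS][BLOCK_SIZE];
    static int  in_use[BLOCKS];

    /* Returns a block for a new object.  FLAW: the block is not zeroed
     * (no memset before return), so whatever the previous owner stored
     * there remains readable by the new owner, crossing the intended
     * protection boundary. */
    char *alloc_block(void)
    {
        for (int i = 0; i < BLOCKS; i++) {
            if (!in_use[i]) {
                in_use[i] = 1;
                return pool[i];
            }
        }
        return NULL;
    }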

A serialization flaw permits the asynchronous behavior of different system components to be exploited to cause a security violation. In terms of the "fences" and "gates" metaphor, these reflect a forgetful gatekeeper—one who perhaps checks all credentials, but then gets distracted and forgets the result. These flaws can be particularly difficult to discover. A security-critical program may appear to validate all of its parameters correctly, but the flaw permits the asynchronous behavior of another program to change one of those parameters after it has been checked but before it is used. Many time-of-check-to-time-of-use (TOCTTOU) flaws will fall in this category, although some may be classed as validation errors if asynchrony is not involved. We also include in this category aliasing flaws, in which the fact that two names exist for the same object can cause its contents to change unexpectedly and, consequently, invalidate checks already applied to it.
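The classic TOCTTOU pattern can be sketched as below; this is an illustrative fragment under our own assumptions (a privileged program appending to a user-named file), not code from any of the cited cases.

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    /* Privileged routine with a TOCTTOU flaw: the permission check and
     * the open() refer to the same path name but not necessarily to the
     * same object.  Between the check and the use, another process can
     * re-bind the name (e.g., via a symbolic link) to a protected file. */
    int append_message(const char *path, const char *msg)
    {
        if (access(path, W_OK) != 0)              /* time of check */
            return -1;
        /* window: `path` may now be redirected by an attacker */
        int fd = open(path, O_WRONLY | O_APPEND); /* time of use   */
        if (fd < 0)
            return -1;
        write(fd, msg, strlen(msg));
        close(fd);
        return 0;
    }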

An identification/authentication flaw is one that permits a protected operation to be invoked without sufficiently checking the identity and authority of the invoking agent. These flaws could perhaps be counted as validation flaws, since presumably some routine is failing to validate authorizations properly. However, a sufficiently large number of cases have occurred in which checking the identity and authority of the user initiating an operation was simply neglected that we keep this as a separate category.

Typically, boundary condition flaws reflect omission of checks to assure that constraints (e.g., on table size, file allocation, or other resource consumption) are not exceeded. These flaws may lead to system crashes or degraded service, or they may cause unpredictable behavior. A gatekeeper who, when his queue becomes full, decides to lock the gate and go home might represent this situation.
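A sketch of such an omitted constraint check appears below; the spooler, table size, and routine are invented for illustration.

    #include <stdlib.h>

    #define MAX_JOBS 32

    static struct job { int id; } *queue[MAX_JOBS];
    static int njobs;

    /* Hypothetical spooler routine with a boundary condition flaw:
     * nothing prevents njobs from exceeding MAX_JOBS, so a user who
     * floods the spooler writes past the end of `queue`, crashing the
     * service or corrupting adjacent data. */
    int submit_job(int id)
    {
        struct job *j = malloc(sizeof *j);
        if (j == NULL)
            return -1;
        j->id = id;
        queue[njobs++] = j;        /* missing check: njobs < MAX_JOBS */
        return 0;
    }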



Finally, we include as a catchall a category for other exploitable logic errors. Bugs that can be invoked by users to cause system crashes, but that do not involve boundary conditions, would be placed in this category, for example.

2.2 By Time of Introduction

The software engineering literature includes a variety of studies (e.g., Weiss and Basili [1985] and Chillarege et al. [1992]) that have investigated the general question of how and when errors are introduced into software. Part of the motivation for these studies has been to improve the process by which software is developed: if the parts of the software development cycle that produce the most errors can be identified, efforts to improve the software development process can be focused to prevent or remove these errors. But is the distribution of when in the life cycle security flaws are introduced the same as the distribution for errors generally? Classifying identified security flaws, both intentional and inadvertent, according to the phase of the system life cycle in which they were introduced can help us find out.

Models of the system life cycle and the software development process have proliferated in recent years. To permit us to categorize security flaws from a wide variety of systems, we need a relatively simple and abstract structure that will accommodate a variety of such models. Consequently, at the highest level we distinguish only three different phases in the system life cycle when security flaws may be introduced: the development phase, which covers all activities up to the release of the initial operational version of the software; the maintenance phase, which covers activities leading to changes in the software performed under configuration control after the initial release; and the operational phase, which covers activities to patch software while it is in operation, including unauthorized modifications (e.g., by a virus). Although the periods of the operational and maintenance phases are likely to overlap, if not coincide, they reflect distinct activities, and the distinction seems to fit best in this part of the overall taxonomy.

2.2.1 During Development

Although iteration among the phases of software development is a recognized fact of life, the different phases still comprise distinguishable activities. Requirements are defined; specifications are developed based on new (or changed) requirements; source code is developed from specifications; and object code is generated from the source code. Even when iteration among phases is made explicit in software process models, these activities are recognized, separate categories of effort, so it seems appropriate to categorize flaws introduced during software development as originating in requirements and specifications, source code, or object code.

Requirements and Specifications

Ideally, software requirements describe what a particular program or system of programs must do. How the program or system is organized to meet those requirements (i.e., the software design) is typically recorded in a variety of documents, referred to collectively as specifications. Although we would like to distinguish flaws arising from faulty requirements from those introduced in specifications, this information is lacking for many of the cases we can report, so we ignore that distinction in this work.

A major flaw in a requirement is not unusual in a large software system. If such a flaw affects security, and its correction is not deemed to be cost effective, the system and the flaw may remain. For example, an early multiprogramming operating system performed some I/O-related functions by having the supervisor program execute code located in user memory while in supervisor state (i.e., with full system privileges). By the time this was recognized as a security flaw, its removal would have caused major incompatibilities with other software, and it was not fixed. Case I3 reports a related flaw.

Requirements and specifications are relatively unlikely to contain maliciously introduced flaws. Normally they are reviewed extensively, so a specification for a trapdoor or a Trojan horse would have to be well disguised to avoid detection. More likely are flaws that arise because of competition between security requirements and other functional requirements (see case I7). For example, security concerns might dictate that programs never be modified at an operational site. But if the delay in repairing errors detected in system operation is perceived to be too great, there will be pressure to provide mechanisms in the specification to permit on-site reprogramming or testing (see case U10). Such mechanisms can provide built-in security loopholes. Also possible are inadvertent flaws that arise because of missing requirements or undetected conflicts among requirements.

Source Code

The source code implements the design of the software system given by the specifications. Most flaws in source code, whether inadvertent or intentional, can be detected through a careful examination of it. The classes of inadvertent flaws described previously apply to source code.

Inadvertent flaws in source code are frequently a by-product of inadequately defined module or process interfaces. Programmers attempting to build a system from inadequate specifications are likely to misunderstand the meaning (if not the type) of parameters to be passed across an interface or the requirements for synchronizing concurrent processes. These misunderstandings manifest themselves as source code flaws. Where the source code is clearly implemented as specified, we assign the flaw to the specification (cases I3 and MU6, for example). Where the flaw is manifest in the code and we also cannot confirm that it corresponds to the specification, we assign the flaw to the source code (see cases MU1, U4, and U8). Readers should be aware of the difficulty of making some of these assignments.

Intentional but nonmalicious flaws can be introduced in source code for several reasons. A programmer may introduce mechanisms that are not included in the specification but that are intended to help in debugging and testing the normal operation of the code. However, if the test scaffolding circumvents security controls and is left in place in the operational system, it provides a security flaw. Efforts to be "efficient" can also lead to intentional but nonmalicious source code flaws, as in case DT1. Programmers may also decide to provide undocumented facilities that simplify maintenance but provide security loopholes—the inclusion of a "patch area" that facilitates reprogramming outside the scope of the configuration management system would fall in this category.

Technically sophisticated malicious flaws can be introduced at the source code level. A programmer working at the source code level, whether an authorized member of a development team or an intruder, can invoke specific operations that will compromise system security. Although malicious source code can be detected through manual review of software, much software is developed without any such review; source code is frequently not provided to purchasers of software packages (even if it is supplied, the purchaser is unlikely to have the resources necessary to review it for malicious code). If the programmer is aware of the review process, he may well be able to disguise the flaws he introduces.

A malicious source code flaw may be introduced directly by any individual who gains write access to source code files, but source code flaws can also be introduced indirectly. For example, if a programmer who is authorized to write source code files unwittingly invokes a Trojan horse editor (or compiler, linker, loader, etc.), the Trojan horse could use the programmer's privileges to modify source code files. Instances of subtle indirect tampering with source code are difficult to document, but Trojan horse programs that grossly modify all a user's files, and hence the source code files, have been created (see cases PC1 and PC2).

Object Code

Object code programs are generated by compilers or assemblers and represent the machine-readable form of the source code. Because most compilers and assemblers are subjected to extensive testing and formal validation procedures before release, inadvertent flaws in object programs that are not simply a translation of source code flaws are rare, particularly if the compiler or assembler is mature and has been widely used. When such errors do occur as a result of errors in a compiler or assembler, they typically show themselves through incorrect behavior of programs in unusual cases, so they can be quite difficult to track down and remove.

Because this kind of flaw is rare, the primary security concern at the object code level is with malicious flaws. Because object code is difficult for a human to make sense of (if it were easy, software companies would not have different policies for selling source code and object code for their products), it is a good hiding place for malicious security flaws (again, see case U1 [Thompson 1984]).

Lacking system and source code documentation, an intruder will have a hard time patching source code to introduce a security flaw without simultaneously altering the visible behavior of the program. The insertion of a malicious object code module, or replacement of an existing object module by a version of it that incorporates a Trojan horse, is a more common threat. Writers of self-replicating Trojan horses (viruses) [Pfleeger 1989] have typically taken this approach: a bogus object module is prepared and inserted in an initial target system. When it is invoked, perhaps during system boot or running as a substitute version of an existing utility, it can search the disks mounted on the system for a copy of itself and, if it finds none, insert one. If it finds a related, uninfected version of a program, it can replace it with an infected copy. When a user unwittingly moves an infected program to a different system and executes it, the virus gets another chance to propagate itself. Instead of replacing an entire program, a virus may append itself to an existing object program, perhaps as a segment to be executed first (see cases PC4 and CA1). Creating a virus generally requires some knowledge of the operating system and programming conventions of the target system; viruses, especially those introduced as object code, typically cannot propagate to different host hardware or operating systems.

2.2.2 During Maintenance

Inadvertent flaws introduced during maintenance are often attributable to the maintenance programmer's failure to understand the system as a whole. Since software production facilities often have a high personnel turnover rate, and because system documentation is often inadequate, maintenance actions can have unpredictable side effects. If a flaw is fixed on an ad hoc basis without performing a backtracking analysis to determine the origin of the flaw, it will tend to induce other flaws, and this cycle will continue. Software modified during maintenance should be subjected to the same review as newly developed software; it is subject to the same kinds of flaws. Case D1 shows graphically that system upgrades, even when performed in a controlled environment and with the best of intentions, can introduce new flaws. In this case, a flaw was inadvertently introduced into a subsequent release of a DEC operating system following its successful evaluation at the C2 level of the Trusted Computer System Evaluation Criteria (TCSEC) [Department of Defense 1985].

System analysts should also be aware of the possibility of malicious intrusion during the maintenance stage. In fact, viruses are more likely to be present during the maintenance stage, since viruses by definition spread the infection through executable code.

2.2.3 During Operation

The well-publicized instances of virus programs [Denning 1988; Elmer-Dewitt 1988; Ferbrache 1992] dramatize the need for the security analyst to consider the possibilities for unauthorized modification of operational software during its operational use. Viruses are not the only means by which modifications can occur: depending on the controls in place in a system, ordinary users may be able to modify system software or install replacements; with a stolen password, an intruder may be able to do the same thing. Furthermore, software brought into a host from a contaminated source (e.g., software from a public bulletin board that has, perhaps unknown to its author, been altered) may be able to modify other host software without authorization (see case MA1).

2.3 By Location

A security flaw can be classified according to where in the system it is introduced or found. Most computer security flaws occur in software, but flaws affecting security may occur in hardware as well. Although this taxonomy addresses software flaws principally, programs can with increasing facility be cast in hardware. This fact and the possibility that malicious software may exploit hardware flaws motivate a brief section addressing them. A flaw in a program that has been frozen in silicon is still a program flaw to us; it would be placed in the appropriate category under "Operating System" rather than under "Hardware." We reserve the use of the latter category for cases in which hardware exhibits security flaws that did not originate as errors in programs.

2.3.1 Software Flaws

In classifying the place a software flaw is introduced, we adopt the view of a security analyst who is searching for such flaws. Thus we ask: "Where should one look first?"

Because the operating system typically defines and enforces the basic security architecture of a system—the fences, gates, and gatekeepers—flaws in those security-critical portions of the operating system are likely to have the most far-reaching effects, so perhaps this is the best place to begin. But the search needs to be focused. The taxonomy for this area suggests particular system functions that should be scrutinized closely. In some cases, implementation of these functions may extend outside the operating system perimeter into support and application software; in this case, that software must also be reviewed.

Software flaws can occur in operating system programs, support software, or application (user) software. This is a rather coarse division, but even so the boundaries are not always clear.

Operating System Programs

Operating system functions normally include memory and processor allocation, process management, device handling, file management, and accounting, although there is no standard definition. The operating system determines how the underlying hardware is used to define and separate protection domains, authenticate users, control access, and coordinate the sharing of all system resources. In addition to functions that may be invoked by user calls, traps, or interrupts, operating systems often include programs and processes that operate on behalf of all users. These programs provide network access and mail service, schedule invocation of user tasks, and perform other miscellaneous services. Systems must often grant privileges to these utilities that they deny to individual users. Finally, the operating system has a large role to play in system initialization. Although in a strict sense initialization may involve programs and processes outside the operating system boundary, this software is usually intended to be run only under highly controlled circumstances and may have many special privileges, so it seems appropriate to include it in this category.

We categorize operating system security flaws according to whether they occur in the functions for

● system initialization,
● memory management,
● process management,
● device management (including networking),
● file management, or
● identification/authentication.

We include an other/unknown category for flaws that do not fall into any of the preceding classes. It would be possible to orient this portion of the taxonomy more strongly toward specific, security-related functions of the operating system: access checking, domain definition and separation, object reuse, and so on. We have chosen the categorization above partly because it comes closer to reflecting the actual layout of typical operating systems, so that it will correspond more closely to the physical structure of the code a reviewer examines. The code for even a single security-related function is sometimes distributed in several separate parts of the operating system (regardless of whether this ought to be so). In practice, it is more likely that a reviewer will be able to draw a single circle around all of the process management code than around all of the discretionary access control code. A second reason for our choice is that the first taxonomy (by genesis) provides, in the subarea of inadvertent flaws, a structure that reflects some security functions, and repeating this structure would be redundant.

System initialization, although it may be handled routinely, is often complex. Flaws in this area can occur either because the operating system fails to establish the initial protection domains as specified (for example, it may set up ownership or access control information improperly) or because the system administrator has not specified a secure initial configuration for the system. In case U5, improperly set permissions on the mail directory led to a security breach. Viruses commonly try to attach themselves to system initialization code, since this provides the earliest and most predictable opportunity to gain control of the system (see cases PC1-PC4, for example).

Memory management and process management are functions the operating system provides to control storage space and CPU time. Errors in these functions may permit one process to gain access to another improperly, as in case I6, or to deny service to others, as in case MU5.

Device management often includes complex programs that operate in parallel with the CPU. These factors make the writing of device-handling programs both challenging and prone to subtle errors that can lead to security flaws (see case I2). Often, these errors occur when the I/O routines fail to respect parameters provided them (case U12) or when they validate parameters provided in storage locations that can be altered, directly or indirectly, by user programs after checks are made (case I3).

File systems typically use the process, memory, and device management functions to create long-term storage structures. With few exceptions, the operating system boundary includes the file system, which often implements access controls to permit users to share and protect their files. Errors in these controls, or in the management of the underlying files, can easily result in security flaws (see cases I1, MU8, and U2).

Usually, the identification and authentication functions of the operating system maintain special files for user IDs and passwords and provide functions to check and update those files as appropriate. It is important to scrutinize not only these functions, but also all of the possible ports of entry into a system, to ensure that these functions are invoked before a user is permitted to consume or control other system resources.

Support Software

Support software comprises compilers, editors, debuggers, subroutine or macro libraries, database management systems, and any other programs not properly within the operating system boundary that many users share. The operating system may grant special privileges to some such programs; these we call privileged utilities. In Unix, for example, any "setuid" program owned by "root" in effect runs with access-checking controls disabled. This means that any such program will need to be scrutinized for security flaws, since during its execution one of the fundamental security-checking mechanisms is disabled.

Privileged utilities are often complex and sometimes provide functions that were not anticipated when the operating system was built. These characteristics make them difficult to develop and likely to have flaws that, because they are also granted privileges, can compromise security. For example, daemons, which may act on behalf of a sequence of users and on behalf of the system as well, may have privileges for reading and writing special system files or devices (e.g., communication lines, device queues, mail queues) as well as for files belonging to individual users (e.g., mailboxes). They frequently make heavy use of operating system facilities, and their privileges may turn a simple programming error into a penetration path. Flaws in daemons providing remote access to restricted system capabilities have been exploited to permit unauthenticated users to execute arbitrary system commands (case U12) and to gain system privileges by writing the system authorization file (case U13).
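A sketch of how a simple programming error in a privileged utility becomes a penetration path appears below. The utility, its name, and its command are hypothetical; the pattern (a setuid-root program handing unvalidated user input to a shell) is the point.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical setuid-root print utility.  Because the program runs
     * with root privileges, the ordinary access checks no longer protect
     * the system from its mistakes.  FLAW: argv[1] is interpolated into
     * a shell command with no validation, so an argument such as
     * "file; rm -rf /" executes arbitrary commands as root. */
    int main(int argc, char *argv[])
    {
        char cmd[256];

        if (argc < 2)
            return 1;
        snprintf(cmd, sizeof cmd, "lpr %s", argv[1]);
        return system(cmd);
    }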

Even unprivileged software can represent a significant vulnerability because these programs are widely shared, and users tend to rely on them implicitly. The damage inflicted by flawed, unprivileged support software (e.g., by an embedded Trojan horse) is normally limited to the user who invokes that software. In some cases, however, since it may be used to compile a new release of a system, support software can even sabotage operating system integrity (case U1). Inadvertent flaws in support software can also cause security flaws (case I7); intentional but nonmalicious flaws in support software have also been recorded (case B1).

Application Software

We categorize programs that have no special system privileges and are not widely shared as application software. Damage caused by inadvertent software flaws at the application level is usually restricted to the executing process, since most operating systems can prevent one process from damaging another. This does not mean that application software cannot do significant damage to a user's own stored files, however, as many victims of Trojan horse and virus programs have painfully discovered. An application program generally executes with all the privileges of the user who invokes it, including the ability to modify permissions, read, write, or delete any files that user owns. In the context of most personal computers now in use, this means that an errant or malicious application program can, in fact, destroy all the information on an attached hard disk or writeable floppy disk.

Inadvertent flaws in application software that cause program termination or incorrect output, or can generate undesirable conditions such as infinite looping, have been discussed previously. Malicious intrusion at the application software level usually requires access to the source code (although a virus could conceivably attach itself to application object code) and can be accomplished in various ways, as discussed in Section 2.2.

2.3.2 Hardware

Issues of concern at the hardware level include the design and implementation of processor hardware, microprograms, and supporting chips, and any other hardware or firmware functions used to realize the machine's instruction set architecture. It is not uncommon for even widely distributed processor chips to be incompletely specified, to deviate from their specifications in special cases, or to include undocumented features. Inadvertent flaws at the hardware level can cause problems such as improper synchronization and execution, bit loss during data transfer, or incorrect results after execution of arithmetic or logical instructions (see case MU9). Intentional but nonmalicious flaws can occur in hardware, particularly if the manufacturer includes undocumented features (for example, to assist in testing or quality control). Hardware mechanisms for resolving resource contention efficiently can introduce covert channels (see case D2). Generally, malicious modification of installed hardware (e.g., installing a bogus replacement chip or board) requires physical access to hardware components, but microcode flaws can be exploited without physical access. An intrusion at the hardware level may result in improper execution of programs, system shutdown, or, conceivably, the introduction of subtle flaws exploitable by software.

3. DISCUSSION

We have suggested that a taxonomy defines a theory of a field, but an unpopulated taxonomy teaches us little. For this reason, the security flaw examples in the Appendix are as important to this survey as the taxonomy. Reviewing the examples should help readers understand the distinctions that we have made among the various categories and how to apply those distinctions to new examples. In this section, we comment briefly on the limitations of the taxonomy and the set of examples, and we suggest techniques for summarizing flaw data that could help answer the questions we used in Section 2 to motivate the taxonomy.

3.1 Limitations

The development of this taxonomy focused largely, though not exclusively, on flaws in operating systems. We have not tried to distinguish or categorize the many kinds of security flaws that might occur in application programs for database management, word processing, electronic mail, and so on. We do not suggest that there are no useful structures to be defined in those areas; rather, we encourage others to identify and document them. Although operating systems tend to be responsible for enforcing fundamental system security boundaries, the detailed, application-dependent access control policies required in a particular environment are in practice often left to the application to enforce. In this case, application system security policies can be compromised even when operating system policies are not.

While we hope that this taxonomy will stimulate others to collect, abstract, and report flaw data, readers should recognize that this is an approach for evaluating problems in systems as they have been built. Used intelligently, information collected and organized this way can help us build stronger systems in the future, but some factors that affect the security of a system are not captured by this approach. For example, any system in which there is a great deal of software that must be trusted is more likely to contain security flaws than one in which only a relatively small amount of code could conceivably cause security breaches.

Security failures, like accidents, often are triggered by an unexpected combination of events. In such cases, the assignment of a flaw to a category may rest on relatively fine distinctions. So, we should avoid drawing strong conclusions from the distribution of a relatively small number of flaw reports.

Finally, the flaws reported in the Appendix are selected, not random or comprehensive, and they are not recent. Flaws in networks and applications are becoming increasingly important, and the distribution of flaws among the categories we have defined may not be stationary. So, any conclusions based strictly on the flaws captured in the Appendix must remain tentative.

3.2 Inferences

Despite these limitations, it is important to consider what kinds of inferences we could draw from a set of flaw data organized according to the taxonomy. Probably the most straightforward way to display such data is illustrated in Figures 1-3. By listing the case identifiers and counts within each category, the frequency of flaws across categories is roughly apparent, and this display can be used to give approximate answers to the three questions that motivated the taxonomy: how, when, and where do security flaws occur? But this straightforward approach makes it difficult to perceive relationships among the three taxonomies: determining whether there is any relationship between the time a flaw is introduced and its location in the system, for example, is relatively difficult.

Figure 4. Example flaws: Genesis vs. location, over life cycle. (Scatter plot: flaw location on the x axis, flaw genesis on the y axis; the plotted symbol indicates whether the flaw was introduced during requirements/specification/design, source code, object code, maintenance, or operation.)

To provide more informative views of collected data, we propose the set of scatter plots shown in Figures 4-7. Figure 4 captures the position of each case in all three of the taxonomies (by genesis, time, and location). Flaw location and genesis are plotted on the x and y axes, respectively, while the symbol plotted reflects the time the flaw was introduced. By choosing an appropriate set of symbols, we have made it possible to distinguish cases that differ in any single parameter. If two cases are categorized identically, however, their points will coincide exactly, and only a single symbol will appear in the plot. Thus from Figure 4 we can distinguish those combinations of all categories that never occur from those that do, but information about the relative frequency of cases is obscured.

Figures 5-7 remedy this problem. In each of these figures, two of the three categories are plotted on the x and y axes, and the number of observations corresponding to a pair of categories controls the diameter of the circle used to plot that point. Thus a large circle indicates several different flaws in a given category, and a small circle indicates only a single occurrence. If a set of flaw data reveals a few large-diameter circles, efforts at flaw removal or prevention might be targeted on the problems these circles reflect. Suppose for a moment that data plotted in Figures 4-7 were in fact a valid basis for inferring the origins of security flaws generally. What actions might be indicated? The three large circles in the lower left corner of Figure 6 might, for example, be taken as a signal that more emphasis should be placed on domain definition and on parameter validation during the early stages of software development.

Figure 5. Example flaws: Genesis vs. location; N equals number of examples in Appendix. (Scatter plot: flaw location on the x axis, flaw genesis on the y axis; circle diameter reflects the number of examples N for each category pair.)

Figure 6. Example flaws: Genesis vs. time introduced; N equals number of examples in Appendix. (Scatter plot: time in life cycle when flaw was introduced on the x axis, flaw genesis on the y axis; circle diameter reflects the number of examples N.)

Figure 7. Example flaws: Location vs. time of introduction; N equals number of examples in Appendix. (Scatter plot: time in life cycle when flaw was introduced on the x axis, flaw location on the y axis; circle diameter reflects the number of examples N.)
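
As an illustration of how data for such plots might be assembled, the following sketch tallies flaw records into a genesis-by-location count matrix whose entries would determine the circle diameters in a plot like Figure 5. The category names and sample records are hypothetical, not the Appendix data.

    #include <stdio.h>

    /* Hypothetical category axes for a Figure 5-style tally; only a few
       genesis and location categories are shown. */
    enum genesis  { G_VALIDATION, G_DOMAIN, G_SERIALIZATION, G_IDENT_AUTH, N_GENESIS };
    enum location { L_SYS_INIT, L_PROCESS_MGMT, L_DEVICE_MGMT, L_FILE_MGMT, N_LOCATION };

    struct flaw_record { enum genesis g; enum location l; };

    int main(void) {
        /* A few illustrative records; a real data set would be read from a file. */
        struct flaw_record flaws[] = {
            { G_SERIALIZATION, L_FILE_MGMT },
            { G_VALIDATION,    L_DEVICE_MGMT },
            { G_VALIDATION,    L_DEVICE_MGMT },
            { G_DOMAIN,        L_PROCESS_MGMT },
        };
        int count[N_GENESIS][N_LOCATION] = {0};

        for (int i = 0; i < (int)(sizeof flaws / sizeof flaws[0]); i++)
            count[flaws[i].g][flaws[i].l]++;

        /* Each nonzero count[g][l] would be drawn as a circle whose diameter
           grows with the count. */
        for (int g = 0; g < N_GENESIS; g++)
            for (int l = 0; l < N_LOCATION; l++)
                if (count[g][l] > 0)
                    printf("genesis=%d location=%d N=%d\n", g, l, count[g][l]);
        return 0;
    }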

Because we do not claim that this selection of security flaws is statistically representative, we cannot use these plots to draw strong conclusions about how, when, or where security flaws are most likely to be introduced. However, we believe that the kinds of plots shown would be an effective way to abstract and present information from more extensive, and perhaps more sensitive, data sets.

We also have some observations based on our experiences in creating the taxonomy and applying it to these examples. It seems clear that security breaches, like accidents, typically have several causes. Often, unwarranted assumptions about some aspect of system behavior lead to security flaws. Problems arising from asynchronous modification of a previously checked parameter illustrate this point: the person who coded the check assumed that nothing could cause that parameter to change before its use—when an asynchronously operating process could in fact do so. Perhaps the most dangerous assumption is that security need not be addressed—that the environment is fundamentally benign, or that security can be added later. Both Unix and PC operating systems illustrate clearly the cost of this assumption. One cannot be surprised when systems designed without particular regard to security requirements exhibit security flaws. Those who use such systems live in a world of potentially painful surprises.

APPENDIX: SELECTED SECURITY FLAWS

The following case studies exemplify security flaws. Without making claims as to the completeness or representativeness of this set of examples, we believe they will help designers know what pitfalls to avoid and security analysts know where to look when examining code, specifications, and operational guidance for security flaws.

All of the cases documented here (except possibly one) reflect actual flaws in released software or hardware. For each case, a source (usually with a reference to a publication) is cited, the software/hardware system in which the flaw occurred is identified, the flaw and its effects are briefly described, and the flaw is categorized according to the taxonomy.


Table 1. The Codes Used to Refer to Systems

Flaw code   System                  Page no.
I1          IBM OS/360              232
I2          IBM VM/370              232
I3          IBM VM/370              233
I4          IBM VM/370              233
I5          IBM MVS                 234
I6          IBM MVS                 234
I7          IBM MVS                 234
I8          IBM                     235
I9          IBM KVM/370             235
MT1         MTS                     235
MT2         MTS                     236
MT3         MTS                     236
MT4         MTS                     236
MU1         Multics                 237
MU2         Multics                 237
MU3         Multics                 238
MU4         Multics                 238
MU5         Multics                 238
MU6         Multics                 239
MU7         Multics                 239
MU8         Multics                 239
MU9         Multics                 239
B1          Burroughs               240
UN1         Univac                  240
DT1         DEC Tenex               241
U1          Unix                    242
U2          Unix                    242
U3          Unix                    243
U4          Unix                    243
U5          Unix                    244
U6          Unix                    244
U7          Unix                    245
U8          Unix                    245
U9          Unix                    246
U10         Unix                    246
U11         Unix                    246
U12         Unix                    247
U13         Unix                    247
U14         Unix                    248
D1          DEC VMS                 248
D2          DEC Security Kernel     249
IN1         Intel 80386/7           249
PC1         IBM PC                  250
PC2         IBM PC                  251
PC3         IBM PC                  251
PC4         IBM PC                  251
MA1         Apple Macintosh         252
MA2         Apple Macintosh         252
CA1         Commodore Amiga         252
AT1         Atari                   253

Where it has been difficult to determine with certainty the time or place a flaw was introduced, the most probable category (in the judgment of the authors) has been chosen, and the uncertainty is annotated by a question mark (?). In some cases, a flaw is not fully categorized. For example, if the flaw was introduced during the requirements/specification phase, then the place in the code where the flaw is located may be omitted.

The cases are grouped according to the system on which they occurred. (Unix, which accounts for about a third of the flaws reported here, is considered a single system.) The systems are ordered roughly chronologically. Since readers may not be familiar with the details of all of the architectures included here, brief introductory discussions of relevant details are provided as appropriate.

Table 1 lists the code used to refer to each flaw and the number of the page on which the flaw is described.

IBM /360 and /370 Systems

In the IBM System/360 and /370 architecture, the Program Status Word (PSW) defines the key components of the system state. These include the current machine state (problem state or supervisor state) and the current storage key. Two instruction subsets are defined: the problem state instruction set, which excludes privileged instructions (loading the PSW, initiating I/O operations, etc.), and the supervisor state instruction set, which includes all instructions. Attempting to execute a privileged operation while in problem state causes an interrupt. A problem state program that wishes to invoke a privileged operation does so normally by issuing the Supervisor Call (SVC) instruction, which also causes an interrupt.

Main storage is divided into 4KB pages; each page has an associated 4-bit storage key. Typically, user memory pages are assigned storage key 8, while a system storage page will be assigned a storage key from 0 to 7. A task executing with a nonzero key is permitted unlimited access to pages with storage keys that match its own. It can also read pages with other storage keys that are not marked as fetch-protected. An attempt to write into a page with a nonmatching key causes an interrupt. A task executing with a storage key of zero is allowed unrestricted access to all pages, regardless of their key or fetch-protect status. Most operating system functions execute with a storage key of zero.
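
The storage-key rules just described can be summarized in a short sketch (our illustrative model, not IBM's implementation):

    #include <stdbool.h>

    /* Illustrative model of the System/370 storage-key access rules. */
    struct page { unsigned key : 4; bool fetch_protected; };

    /* Returns true if a task running with task_key may perform the access. */
    bool storage_access_allowed(unsigned task_key, const struct page *p, bool is_write) {
        if (task_key == 0)            /* key 0: unrestricted access              */
            return true;
        if (task_key == p->key)       /* matching key: read and write allowed    */
            return true;
        if (!is_write && !p->fetch_protected)
            return true;              /* nonmatching key: read only, and only if
                                         the page is not fetch-protected         */
        return false;                 /* otherwise the access causes an interrupt */
    }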

The I/O subsystem includes a variety of channels that are, in effect, separate, special-purpose computers that can be programmed to perform data transfers between main storage and auxiliary devices (tapes, disks, etc.). These channel programs are created dynamically by device driver programs executed by the CPU. The channel is started by issuing a special CPU instruction that provides the channel with an address in main storage from which to begin fetching its instructions. The channel then operates in parallel with the CPU and has independent and unrestricted access to main storage. Thus, any controls on the portions of main storage that a channel could read or write must be embedded in the channel programs themselves. This parallelism, together with the fact that channel programs are sometimes (intentionally) self-modifying, provides complexity that must be carefully controlled if security flaws are to be avoided.

OS/360 and MVS (Multiple Virtual Storages) are multiprogramming operating systems developed by IBM for this hardware architecture. The Time Sharing Option (TSO) under MVS permits users to submit commands to MVS from interactive terminals. VM/370 is a virtual machine monitor operating system for the same hardware, also developed by IBM. The KVM/370 system was developed by the U.S. Department of Defense as a high-security version of VM/370. MTS (Michigan Terminal System), developed by the University of Michigan, is an operating system designed especially to support both batch and interactive use of the same hardware.

MVS supports a category of privileged, non-MVS programs through its Authorized Program Facility (APF). APF programs operate with a storage key of 7 or less and are permitted to invoke operations (such as changing to supervisor mode) that are prohibited to ordinary user programs. In effect, APF programs are assumed to be trustworthy, and they act as extensions to the operating system. An installation can control which programs are included under APF. RACF (Resource Access Control Facility) and Top Secret are security packages designed to operate as APF programs under MVS.

Case: I1

Source: Tanenbaum, A. S. Operating Systems Design and Implementation. Prentice-Hall, 1987.

System: IBM OS/360

Description: In OS/360 systems, the file-access-checking mechanism could be subverted. When a password was required for access to a file, the filename was read, and the user-supplied password was checked. If it was correct, the file name was reread, and the file was opened. It was possible, however, for the user to arrange that the filename be altered between the first and second readings. First, the user would initiate a separate background process to read data from a tape into the storage location that was also used to store the filename. The user would then request access to a file with a known password. The system would verify the correctness of the password. While the password was being checked, the tape process replaced the original filename with a file for which the user did not have the password, and this file would be opened. The flaw is that the user can cause parameters to be altered after they have been checked (this kind of flaw is sometimes called a time-of-check-to-time-of-use (TOCTTOU) flaw). It could probably have been corrected by copying the parameters into operating system storage that the user could not cause to be altered.

Genesis: Inadvertent: Serialization

Time: During Development: Requirement/Specification/Design

Place: Operating System: File Management
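
The general pattern behind this and similar TOCTTOU flaws can be sketched as follows (hypothetical routines, not the actual OS/360 code; the stub helpers stand in for operating system services). The vulnerable version re-reads a name that still lives in user-modifiable storage; the suggested correction copies it into system storage before checking.

    #include <stdbool.h>
    #include <string.h>

    #define NAME_MAX_LEN 44   /* illustrative filename length limit */

    /* Stub helpers standing in for operating system services. */
    static bool password_ok(const char *filename, const char *password) {
        (void)filename; (void)password;
        return true;                     /* stub: always "correct" for illustration */
    }
    static void open_file(const char *filename) { (void)filename; }

    /* Vulnerable pattern: the filename is read again from user storage after
       the check, so a concurrent process can change it between check and use. */
    void open_with_password_vulnerable(char *user_filename, const char *password) {
        if (password_ok(user_filename, password))   /* time of check */
            open_file(user_filename);               /* time of use: name may have changed */
    }

    /* Suggested correction: copy the parameter into storage the user cannot
       alter, then perform both the check and the use on that private copy. */
    void open_with_password_fixed(const char *user_filename, const char *password) {
        char system_copy[NAME_MAX_LEN + 1];
        strncpy(system_copy, user_filename, NAME_MAX_LEN);
        system_copy[NAME_MAX_LEN] = '\0';
        if (password_ok(system_copy, password))
            open_file(system_copy);
    }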

Case: I2

Source: Attanasio, C. R., Markstein, P. W., and Phillips, R. J. Penetrating an operating system: A study of VM/370 integrity. IBM Syst. J. (1976), 102-116.

System: IBM VM/370

Description: By carefully exploiting an "oversight in condition-code checking," a retrofit in the basic VM/370 design, and the fact that CPU and I/O channel programs could execute simultaneously, a penetrator could gain control of the system. Further details of this flaw are not provided in the cited source, but it appears that a logic error ("oversight in condition-code checking") was at least partly to blame.

Genesis: Inadvertent: Serialization

Time: During Development: Requirement/Specification/Design

Place: Operating System: Device Management

Case: I3

Source: Attanasio, C. R., Markstein, P. W., and Phillips, R. J. Penetrating an operating system: A study of VM/370 integrity. IBM Syst. J. (1976), 102-116.

System: IBM VM/370

Description: As a virtual machine monitor, VM/370 was required to provide I/O services to operating systems executing in individual domains under its management, so that their I/O routines would operate almost as if they were running on the bare IBM/370 hardware. Because the OS/360 operating system (specifically, the Indexed Sequential Access Method (ISAM) routines) exploited the ability of I/O channel programs to modify themselves during execution, VM/370 included an arrangement whereby portions of channel programs were executed from the user's virtual machine storage rather than from VM/370 storage. This permitted a penetrator, mimicking an OS/360 channel program, to modify the commands in user storage before they were executed by the channel and thereby to overwrite arbitrary portions of VM/370.

Genesis: Inadvertent: Domain (?) This flaw might also be classed as (Intentional, Nonmalicious, Other), if it is considered to reflect a conscious compromise between security and both efficiency in channel program execution and compatibility with an existing operating system.

Time: During Development: Requirement/Specification/Design

Place: Operating System: Device Management

Case: I4

Source: Attanasio, C. R., Markstein, P. W., and Phillips, R. J. Penetrating an operating system: A study of VM/370 integrity. IBM Syst. J. (1976), 102-116.

System: IBM VM/370

Description: In performing static analysis of a channel program issued by a client operating system for the purpose of translating it and issuing it to the channel, VM/370 assumed that the meaning of a multiword channel command remained constant throughout the execution of the channel program. In fact, channel commands vary in length, and the same word might, during execution of a channel program, act both as a separate command and as the extension of another (earlier) command, since a channel program could contain a backward branch into the middle of a previous multiword channel command. By careful construction of channel programs to exploit this blind spot in the analysis, a user could deny service to other users (e.g., by constructing a nonterminating channel program), read restricted files, or even gain complete control of the system.

Genesis: Inadvertent: Validation (?) The flaw seems to reflect an omission in the channel program analysis logic. Perhaps additional analysis techniques could be devised to limit the specific set of channel commands permitted, but determining whether an arbitrary channel program halts or not appears equivalent to solving the Turing machine halting problem. On this basis, this could also be argued to be a design flaw.

Time: During Development: Requirement/Specification/Design

Place: Operating System: Device Management


Case: I5

Source: Opaska, W. A security loophole in the MVS operating system. Comput. Fraud Sec. Bull. (May 1990), 4-5.

System: IBM/370 MVS (TSO)

Description: Time Sharing Option (TSO) is an interactive development system that runs on top of MVS. Input/Output operations are only allowed on allocated files. When files are allocated (via the TSO ALLOCATE function), for reasons of data integrity the requesting user or program gains exclusive use of the file. The flaw is that a user is allowed to allocate files whether or not he or she has access to the files. A user can use the ALLOCATE function on files such as SMF (System Monitoring Facility) records, the TSO log-on procedure lists, the ISPF user profiles, and the production and test program libraries to deny service to other users.

Genesis: Inadvertent: Validation (?) The flaw apparently reflects omission of an access permission check in program logic.

Time: During Development: Requirement/Specification/Design (?) Without access to design information, we cannot be certain whether the postulated omission occurred in the coding phase or prior to it.

Place: Operating System: File Management

Case: I6

Source: Paans, R. and Bonnes, G. Surreptitious security violation in the MVS operating system. In Security, IFIP/Sec '83, V. Fak, Ed. North Holland, 1983, 95-105.

System: IBM MVS (TSO)

Description: Although TSO attempted to prevent users from issuing commands that would operate concurrently with each other, it was possible for a program invoked from TSO to invoke multitasking. Once this was achieved, another TSO command could be issued invoking a program that executed under the Authorized Program Facility (APF). The concurrent user task could detect when the APF program began authorized execution (i.e., with storage key value less than 8). At this point the entire user's address space (including both tasks) was effectively privileged, and the user-controlled task could issue privileged operations and subvert the system. The flaw here seems to be that when one task gained APF privilege, the other task was able to do so as well—that is, the domains of the two tasks were insufficiently separated.

Genesis: Inadvertent: Domain

Time: Development: Requirement/Specification/Design (?)

Place: Operating System: Process Management

Case: I7

Source: Paans, R. and Bonnes, G. Surreptitious security violation in the MVS operating system. In Security, IFIP/Sec '83, V. Fak, Ed. North Holland, 1983, 95-105.

System: IBM MVS

Description: Commercial software packages, such as database management systems, must often be installed so that they execute under the Authorized Program Facility. In effect, such programs operate as extensions of the operating system, and the operating system permits them to invoke operations that are forbidden to ordinary programs. The software package is trusted not to use these privileges to violate protection requirements. In some cases, however (the referenced source cites as examples the Cullinane IDMS database system and some routines supplied by Cambridge Systems Group for servicing Supervisor Call (SVC) interrupts), the package may make operations available to its users that do permit protection to be violated. This problem is similar to the problem of faulty Unix programs that run as SUID programs owned by root (see case U5): there is a class of privileged programs developed and maintained separately from the operating system proper that can subvert operating system protection mechanisms. It is also similar to the general problem of permitting "trusted applications." It is difficult to point to specific flaws here without examining some particular APF program in detail. Among others, the source cites an SVC provided by a trusted application that permits an address space to be switched from non-APF to APF status; subsequently all code executed from that address space can subvert system protection. We use this example to characterize this kind of flaw.

Genesis: Intentional: Nonmalicious: Other (?) Evidently, the SVC performed this function intentionally, but not for the purpose of subverting system protection, even though it had that effect. Might also be classed as Inadvertent: Domain.

Time: Development: Requirement/Specification/Design (?) (During development of the trusted application)

Place: Support: Privileged Utilities

Case: I8

Source: Burgess, J. Searching for a better computer shield. The Washington Post, Nov. 13, 1988, H1.

System: IBM

Description: A disgruntled employee created a number of programs that each month were intended to destroy large portions of data and then copy themselves to other places on the disk. He triggered one such program after being fired from his job, and was later convicted of this act. Although this certainly seems to be an example of malicious code introduced into a system, it is not clear what, if any, technical flaw led to this violation. It is included here simply in order to provide one example of a "time-bomb."

Genesis: Intentional: Malicious: Logic/Time-Bomb

Time: During Operation


Place: Application (?)

Case: I9

Source: Schaefer, M., Gold, B., Linde, R., and Scheid, J. Program confinement in KVM/370. In Proc. ACM National Conf., Oct. 1977.

System: KVM/370

Description: Because virtual machines shared a common CPU under a round-robin scheduling discipline and had access to a time-of-day clock, it was possible for each virtual machine to detect at what rate it received service from the CPU. One virtual machine could signal another by either relinquishing the CPU immediately or using its full quantum; if the two virtual machines operated at different security levels, information could be passed illicitly in this way. A straightforward, but costly, way to close this channel is to have the scheduler wait until the quantum is expired to dispatch the next process.

Genesis: Intentional: Nonmalicious: Covert timing channel.

Time: During Development: Requirements/Specification/Design. This channel occurs because of a design choice in the scheduler algorithm.

Place: Operating System: Process Management (Scheduling)
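
A rough sketch of the signaling scheme described above (our illustration, using modern POSIX calls rather than anything from KVM/370): the sender encodes one bit per scheduling quantum by either yielding the CPU immediately or consuming its full time slice, and the receiver infers the bit from how much CPU service it obtained in the same wall-clock interval.

    #include <sched.h>
    #include <time.h>

    /* Sender: transmit one bit per scheduling quantum. */
    void send_bit(int bit, double quantum_seconds) {
        if (bit == 0) {
            sched_yield();                      /* give up the CPU immediately */
        } else {
            clock_t start = clock();            /* busy-wait through the quantum */
            while ((double)(clock() - start) / CLOCKS_PER_SEC < quantum_seconds)
                ;
        }
    }

    /* Receiver: compare CPU time obtained against wall-clock time elapsed.
       A low service rate implies the sender burned its quantum (bit 1);
       a high rate implies the sender yielded (bit 0). */
    int receive_bit(double interval_seconds, double threshold) {
        clock_t cpu_start = clock();
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        do {
            clock_gettime(CLOCK_MONOTONIC, &t1);
        } while ((t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9 < interval_seconds);
        double cpu_used = (double)(clock() - cpu_start) / CLOCKS_PER_SEC;
        return (cpu_used / interval_seconds < threshold) ? 1 : 0;
    }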

Case: MT1

Source: Hebbard, B., et al. A penetration analysis of the Michigan Terminal System. ACM SIGOPS Oper. Syst. Rev. 14, 1 (Jan. 1980), 7-20.

System: Michigan Terminal System

Description: A user could trick system subroutines into changing bits in the system segment that would turn off all protection checking and gain complete control over the system. The flaw was in the parameter-checking method used by (several) system subroutines. These subroutines retrieved their parameters via indirect addressing. The subroutine would check that the (indirect) parameter addresses lay within the user's storage area. If not, the call was rejected, but otherwise the subroutine proceeded. However, a user could defeat this check by constructing a parameter that pointed into the parameter list storage area itself. When such a parameter was used by the system subroutine to store returned values, the (previously checked) parameters would be altered, and subsequent use of those parameters (during the same invocation) could cause the system to modify areas (such as system storage) to which the user lacked write permission. The flaw was exploited by finding subroutines that could be made to return at least two controllable values: the first one to modify the address where the second one would be stored, and the second one to alter a sensitive system variable. This is another instance of a time-of-check-to-time-of-use problem.

Genesis: Inadvertent: Validation

Time: During Development: Source Code (?) (Without access to design information, we cannot be sure that the parameter-checking mechanisms were adequate as designed.)

Place: Operating System: Process Management

Case: MT2

Source: Hebbard, B., et al. A penetration analysis of the Michigan Terminal System. ACM SIGOPS Oper. Syst. Rev. 14, 1 (Jan. 1980), 7-20.

System: Michigan Terminal System

Description: A user could direct the operating system to place its data (specifically, addresses for its own subsequent use) in an unprotected location. By altering those addresses, the user could cause the system to modify its sensitive variables later so that the user would gain control of the operating system.

Genesis: Inadvertent: Domain

Time: During Development: Requirement/Specification/Design

Place: Operating System: Process Management

Case: MT3

Source: Hebbard, B., et al. A penetration analysis of the Michigan Terminal System. ACM SIGOPS Oper. Syst. Rev. 14, 1 (Jan. 1980), 7-20.

System: Michigan Terminal System

Description: Certain sections of memory readable by anyone contained sensitive information including passwords and tape identification. Details of this flaw are not provided in the source cited; possibly this represents a failure to clear shared input/output areas before they were reused.

Genesis: Inadvertent: Domain (?)

Time: During Development: Requirement/Specification/Design (?)

Place: Operating System: Memory Management (possibly also Device Management)

Case: MT4

Source: Hebbard, B., et al. A penetration analysis of the Michigan Terminal System. ACM SIGOPS Oper. Syst. Rev. 14, 1 (Jan. 1980), 7-20.

System: Michigan Terminal System

Description: A bug in the MTS supervisor could cause it to loop indefinitely in response to a "rare" instruction sequence that a user could issue. Details of the bug are not provided in the source cited.

Genesis: Inadvertent: Boundary Condition Violation

Time: During Development: Source Code (?)

Place: Operating System: Other/Unknown

Multics (GE-645 and Successors)

The Multics operating system was developed as a general-purpose "information utility" and successor to MIT's Compatible Time Sharing System (CTSS) as a supplier of interactive computing services. The initial hardware for the system was the specially designed General Electric GE-645 computer. Subsequently, Honeywell acquired GE's computing division and developed the HIS 6180 and its successors to support Multics. The hardware supported "master" mode, in which all instructions were legal, and a "slave" mode, in which certain instructions (such as those that modify machine registers that control memory mappings) are prohibited. Additionally, the hardware of the HIS 6180 supported eight "rings" of protection (implemented by software in the GE-645), to permit greater flexibility in organizing programs according to the privileges they required. Ring 0 was the most privileged ring, and it was expected that only operating system code would execute in ring 0. Multics also included a hierarchical scheme for files and directories similar to that which has become familiar to users of the Unix system, but Multics file structures were integrated with the storage hierarchy, so that files were essentially the same as segments. Segments currently in use were recorded in the Active Segment Table (AST). Denial of service flaws like the ones listed for Multics below could probably be found in a great many current systems.

Case: MU1

Source: Tanenbaum, A. S. Operating Systems Design and Implementation. Prentice-Hall, 1987.

System: Multics

Description: Perhaps because it was designed with interactive use as the primary consideration, initially Multics permitted batch jobs to read card decks into the file system without requiring any user authentication. This made it possible for anyone to insert a file in any user's directory through the batch stream. Since the search path for locating system commands and utility programs normally began with the user's local directories, a Trojan horse version of (for example) a text editor could be inserted and would very likely be executed by the victim, who would be unaware of the change. Such a Trojan horse could simply copy the file to be edited (or change its permissions) before invoking the standard system text editor.

Genesis: Inadvertent: Inadequate Identification/Authentication. According to one of the designers, the initial design actually called for the virtual card deck to be placed in a protected directory, and mail would be sent to the recipient announcing that the file was available for copying into his or her space. Perhaps the implementer found this mechanism too complex and decided to omit the protection. This seems simply to be an error of omission of authentication checks for one mode of system access.

Time: During Development: Source Code

Place: Operating System: Identification/Authentication

Case: MU2

Source: Karger, P. A., and Schell, R. R. Multics Security Evaluation: Vulnerability Analysis. ESD-TR-74-193, Vol II, U.S. Air Force Electronic Systems Div. (ESD), Hanscom AFB, Mass., June 1974.

System: Multics

Description: When a program executing in a less privileged ring passes parameters to one executing in a more privileged ring, the more privileged program must be sure that its caller has the required read or write access to the parameters before it begins to manipulate those parameters on the caller's behalf. Since ring-crossing was implemented in software in the GE-645, a routine to perform this kind of argument validation was required. Unfortunately, this program failed to anticipate one of the subtleties of indirect addressing modes available on the Multics hardware, so the argument validation routine could be spoofed.

Genesis: Inadvertent: Validation. Failed to check arguments completely.

Time: During Development: Source Code


Place: Operating System: Process Management

Case: MU3

Source: Karger, P. A., and Schell, R. R. Multics Security Evaluation: Vulnerability Analysis. ESD-TR-74-193, Vol II, June 1974.

System: Multics

Description: In early designs of Multics, the stack base (sb) register could only be modified in master mode. After Multics was released to users, this restriction was found unacceptable, and changes were made to allow the sb register to be modified in other modes. However, code remained in place which assumed that the sb register could only be changed in master mode. It was possible to exploit this flaw and insert a trapdoor. In effect, the interface between master mode and other modes was changed, but some code that depended on that interface was not updated.

Genesis: Inadvertent: Domain. The characterization of a domain was changed, but code that relied on the former definition was not modified as needed.

Time: During Maintenance: Source Code

Place: Operating System: Process Management

Case: MU4

Source: Karger, P. A. and Schell, R. R. Multics Security Evaluation: Vulnerability Analysis. ESD-TR-74-193, Vol II, June 1974.

System: Multics

Description: Originally, Multics designers had planned that only processes executing in ring 0 would be permitted to operate in master mode. However, on the GE-645, code for the signaler module, which was responsible for processing faults to be signaled to the user and required master mode privileges, was permitted to run in the user ring for reasons of efficiency. When entered, the signaler checked a parameter, and if the check failed, it transferred, via a linkage register, to a routine intended to bring down the system. However, this transfer was made while executing in master mode and assumed that the linkage register had been set properly. Because the signaler was executing in the user ring, it was possible for a penetrator to set this register to a chosen value and then make an (invalid) call to the signaler. After detecting the invalid call, the signaler would transfer to the location chosen by the penetrator while still in master mode, permitting the penetrator to gain control of the system.

Genesis: Inadvertent: Validation

Time: During Development: Requirement/Specification/Design

Place: Operating System: Process Management

Case: MU5

Source: Gligor, V. D. Some thoughts on denial-of-service problems. Electrical Engineering Dept., Univ. of Maryland, College Park, Md., Sept. 1982.

System: Multics

Description: A problem with the Active Segment Table (AST) in Multics version 18.0 caused the system to crash in certain circumstances. It was required that whenever a segment was active, all directories superior to the segment also be active. If a user created a directory tree deeper than the AST size, the AST would overflow with unremovable entries. This would cause the system to crash.

Genesis: Inadvertent: Boundary Condition Violation: Resource Exhaustion. Apparently, programmers omitted a check to determine when the AST size limit was reached.

Time: During Development: Source Code

Place: Operating System: Memory Management
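
A minimal sketch of the kind of bound check that was omitted (our illustration; the actual Multics AST code is not shown in the cited source):

    #include <stdbool.h>

    #define AST_CAPACITY 4096            /* illustrative table size */

    struct ast { int entries_in_use; };

    /* Activating a segment requires that all of its superior directories also
       be active; without a capacity check, a directory tree deeper than the
       table can hold fills the AST with unremovable entries and crashes the
       system. */
    bool activate_segment(struct ast *t, int superior_directories) {
        int needed = superior_directories + 1;   /* directories plus the segment */
        if (t->entries_in_use + needed > AST_CAPACITY)
            return false;                        /* the omitted check: refuse the
                                                    request instead of overflowing */
        t->entries_in_use += needed;
        return true;
    }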

Case: MU6


Source: Gligor, V. D. Some thoughts on denial-of-service problems. Electrical Engineering Dept., Univ. of Maryland, College Park, Md., Sept. 1982.

System: Multics

Description: Because Multics originally imposed a global limit on the total number of login processes, but no other restriction on an individual's use of login processes, it was possible for a single user to login repeatedly and thereby block logins by other authorized users. A simple (though restrictive) solution to this problem would have been to limit individual logins as well.

Genesis: Inadvertent: Boundary Condition Violation: Resource Exhaustion

Time: During Development: Requirement/Specification/Design

Place: Operating System: Process Management

Case: MU7

Source: Gligor, V. D. Some thoughts on denial-of-service problems. Electrical Engineering Dept., Univ. of Maryland, College Park, Md., Sept. 1982.

System: Multics

Description: In early versions of Multics, if a user generated too much storage in his or her process directory, an exception was signaled. The flaw was that the signaler used the wrong stack, thereby crashing the system.

Genesis: Inadvertent: Other Exploitable Logic Error

Time: During Development: Source Code

Place: Operating System: Process Management

Case: MU8

Source: Gligor, V. D. Some thoughts on denial-of-service problems. Electrical Engineering Dept., Univ. of Maryland, College Park, Md., Sept. 1982.

System: Multics

Description: In early versions of Multics, if a directory contained an entry for a segment with an all-blank name, the deletion of that directory would cause a system crash. The specific flaw that caused a crash is not known, but, in effect, the system depended on the user to avoid the use of all-blank segment names.

Genesis: Inadvertent: Validation

Time: During Development: Source Code

Place: Operating System: File Management. (In Multics, segments were equivalent to files.)

Case: MU9

Source: Karger, P. A., and Schell, R. R. Multics Security Evaluation: Vulnerability Analysis. ESD-TR-74-193, Vol II, June 1974.

System: Multics

Description: A piece of software written to test Multics hardware protection mechanisms (called the Subverter by its authors) found a hardware flaw in the GE-645: if an execute instruction in one segment had as its target an instruction in location zero of a different segment, and the target instruction used index register, but not base register, modifications, then the target instruction executed with protection checking disabled. By choosing the target instruction judiciously, a user could exploit this flaw to gain control of the machine. When informed of the problem, the hardware vendor found that a field service change to fix another problem in the machine had inadvertently added this flaw. The change that introduced the flaw was in fact installed on all other machines of this type.

Genesis: Inadvertent: Other

Time: During Maintenance: Hardware

Place: Hardware

Burroughs B6700

Burroughs advocated a philosophy in which users of its systems were expected never to write assembly language programs, and the architecture of many Burroughs computers was strongly influenced by the idea that they would primarily execute programs that had been compiled (especially ALGOL programs).

Case: B1

Source: Wilkinson, A. L., et al. A penetration analysis of a Burroughs large system. ACM SIGOPS Oper. Syst. Rev. 15, 1 (Jan. 1981), 14-25.

System: Burroughs B6700

Description: The hardware of the Burroughs B6700 controlled memory access according to bounds registers that a program could set for itself. A user who could write programs to set those registers arbitrarily could effectively gain control of the machine. To prevent this, the system implemented a scheme designed to assure that only object programs generated by authorized compilers (which would be sure to include code to set the bounds registers properly) would ever be executed. This scheme required that every file in the system have an associated type. The loader would check the type of each file submitted to it in order to be sure that it was of type "code-file," and this type was only assigned to files produced by authorized compilers. Thus it would be possible for a user to create an arbitrary file (e.g., one that contained malicious object code that reset the bounds registers and assumed control of the machine), but unless its type code were also assigned to be "code-file," it still could not be loaded and executed. Although the normal file-handling routines prevented this, there were utility routines that supported writing files to tape and reading them back into the file system. The flaw occurred in the routines for manipulating tapes: it was possible to modify the type label of a file on tape so that it became "code-file." Once this was accomplished, the file could be retrieved from the tape and executed as a valid program.

Genesis: Intentional: Nonmalicious: Other. System support for tape drives generally requires functions that permit users to write arbitrary bit patterns on tapes. In this system, providing these functions sabotaged security.

Time: During Development: Requirement/Specification/Design

Place: Support: Privileged Utilities

Univac 1108

This large-scale mainframe provided timesharing computing resources to many laboratories and universities in the 1970s. Its main storage was divided into "banks" of some integral multiple of 512 words in length. Programs normally had two banks: an instruction (I-) bank and a data (D-) bank. An I-bank containing a reentrant program would not be expected to modify itself; a D-bank would be writable. However, hardware storage protection was organized so that a program would either have write permission for both its I-bank and D-bank or neither.

Case: UN1

Source: Stryker, D. Subversion of a "secure" operating system. NRL Memo. Rep. 2821, June 1974.

System: Univac 1108/Exec 8

Description: The Exec 8 operating system provided a mechanism for users to share reentrant versions of system utilities, such as editors, compilers, and database systems, that were outside the operating system proper. Such routines were organized as "Reentrant Processors" or REPs. The user would supply data for the REP in his or her own D-bank; all current users of a REP would share a common I-bank for it. Exec 8 also included an error recovery scheme that permitted any program to trap errors (i.e., to regain control when a specified error, such as divide by zero or an out-of-bounds memory reference, occurs). When the designated error-handling program gained control, it would have access to the context in which the error occurred. On gaining control, an operating system call (or a defensively coded REP) would immediately establish its own context for trapping errors. However, many REPs did not do this. So, it was possible for a malicious user to establish an error-handling context, prepare an out-of-bounds D-bank for the victim REP, and invoke the REP, which immediately caused an error. The malicious code regained control at this point with both read and write access to both the REP's I- and D-banks. It could then alter the REP's code (e.g., by adding Trojan horse code to copy a subsequent user's files into a place accessible to the malicious user). This Trojan horse remained effective as long as the modified copy of the REP (which is shared by all users) remained in main storage. Since the REP was supposed to be reentrant, the modified version would never be written back out to a file, and when the storage occupied by the modified REP was reclaimed, all evidence of it would vanish. The flaws in this case are in the failure of the REP to establish its error handling and in the hardware restriction that I- and D-banks have the same write protection. These flaws were exploitable because the same copy of the REP was shared by all users. A fix was available that relaxed the hardware restriction.

Genesis: Inadvertent: Domain: It was possible for the user's error handler to gain access to the REP's domain.

Time: During Development: Requirements/Specification/Design

Place: Operating System: Process Management. (Alternatively, this could be viewed as a hardware design flaw.)

DEC PDP-10

The DEC PDP-10 was a medium-scale computer that became the standard supplier of interactive computing facilities for many research laboratories in the 1970s. DEC offered the TOPS-10 operating system for it; the TENEX operating system was developed by Bolt, Beranek, and Newman, Inc. (BBN), to operate in conjunction with a paging box and minor modifications to the PDP-10 processor also developed by BBN.

Case: DT1

Source: Tanenbaum, A. S. Operating Systems Design and Implementation. Prentice-Hall, 1987, and Abbott, R. P., et al. Security analysis and enhancements of computer operating systems. Final Rep. of the RISOS Project, NBSIR-76-1041, National Bureau of Standards, April 1976 (NTIS PB-257 087), 49-50.

System: TENEX

Description: In TENEX systems, passwords were used to control access to files. By exploiting details of the storage allocation mechanisms and the password-checking algorithm, it was possible to guess the password for a given file. The operating system checked passwords character by character, stopping as soon as an incorrect character was encountered. Further, it retrieved the characters to be checked sequentially from storage locations chosen by the user. To guess a password, the user placed a trial password in memory so that the first unknown character of the password occupied the final byte of a page of virtual storage resident in main memory, and the following page of virtual storage was not currently in main memory. In response to an attempt to gain access to the file in question, the operating system would check the password supplied. If the character before the page boundary was incorrect, password checking was terminated before the following page was referenced, and no page fault occurred. But if the character just before the page boundary was correct, the system would attempt to retrieve the next character and a page fault would occur. By checking a system-provided count of the number of page faults this process had incurred just before and again just after the password check, the user could deduce whether or not a page fault had occurred during the check, and, hence, whether or not the guess for the next character of the password was correct. In effect, this technique reduces the search space for an N-character password over an alphabet of size m from m^N to Nm. The flaw was that the password was checked character by character from the user's storage. Its exploitation required that the user also be able to position a string in a known location with respect to a physical page boundary and that a program be able to determine (or discover) which pages are currently in memory.

Genesis: Intentional: Nonmalicious: Covert Storage Channel (could also be classed as Inadvertent: Domain: Exposed Representation)

Time: During Development: Source Code

Place: Operating System: Identification/Authentication
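
The heart of the flaw is the early-exit, character-at-a-time comparison against a string fetched from user-controlled storage. A minimal sketch of that vulnerable pattern (our illustration, not TENEX code):

    #include <stdbool.h>
    #include <string.h>

    /* Vulnerable pattern: compare the stored password against a guess held in
       the caller's own storage, one character at a time, stopping at the first
       mismatch.  Because the routine stops early, how far it reads into the
       guess (and therefore whether it touches the next virtual-memory page)
       reveals how many leading characters of the guess were correct. */
    bool check_password_vulnerable(const char *stored, const char *guess) {
        for (size_t i = 0; stored[i] != '\0'; i++) {
            if (guess[i] != stored[i])
                return false;          /* early exit: later guess bytes never read */
        }
        return guess[strlen(stored)] == '\0';
        /* A check performed on a copy held in system storage, comparing every
           character regardless of mismatches, would close this channel. */
    }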

Unix

The Unix operating system was originally developed at Bell Laboratories as a "single-user Multics" to run on DEC minicomputers (PDP-8 and successors). Because of its original goals—to provide useful, small-scale, interactive computing to a single user in a cooperative laboratory environment—security was not a strong concern in its initial design. Unix includes a hierarchical file system with access controls, including a designated owner for each file, but for a user with userID "root" (also known as the "superuser"), access controls are turned off. Unix also supports a feature known as "setUID" or "SUID." If the file from which a program is loaded for execution is marked "setUID," then it will execute with the privileges of the owner of that file, rather than the privileges of the user who invoked the program. Thus a program stored in a file that is owned by "root" and marked "setUID" is highly privileged (such programs are often referred to as being "setUID to root"). Several of the flaws reported here occurred because programs that were "setUID to root" failed to include sufficient internal controls to prevent themselves from being exploited by a penetrator. This is not to say that the setUID feature is only of concern when "root" owns the file in question: any user can cause the setUID bit to be set on files he or she creates. A user who permits others to execute the programs in such a file without exercising due caution may have an unpleasant surprise.
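
In modern Unix terms, the distinction at work is between the real user ID (the invoker) and the effective user ID (the owner of the file when the setuid bit is set). A brief sketch of how a setuid program can inspect and shed its extra privilege (illustrative only, using standard POSIX calls):

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        uid_t real = getuid();        /* the user who invoked the program        */
        uid_t effective = geteuid();  /* the owner of the setuid file, e.g. root */

        printf("real uid = %d, effective uid = %d\n", (int)real, (int)effective);

        /* A careful setuid program drops its extra privilege as soon as it no
           longer needs it, so that later flaws cannot be exploited with the
           file owner's rights. */
        if (setuid(real) != 0) {
            perror("setuid");
            return 1;
        }
        return 0;
    }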

Case: U1

Source: Thompson, K. Reflections on trusting trust. Commun. ACM 27, 8 (Aug. 1984), 761-763.

System: Unix

Description: Ken Thompson’s ACMTuring Award Lecture describes a proce-dure that uses a virus to install a trap-door in the Unix login program. The virusis placed in the C compiler and performstwo tasks. If it detects that it is compil-ing a new version of the C compiler, thevirus incorporates itself into the objectversion of the new C compiler. This en-sures that the virus propagates to newversions of the C compiler. If the virusdetermines it is compiling the Iogin pro-gram, it adds a trapdoor to the objectversion of the login program. The objectversion of the login program then con-tains a trapdoor that allows a specifiedpassword to work for a specific account.Whether this virus was ever actually in-stalled as described has not been re-vealed. We classify this according to thevirus in the compiler; the trapdoor couldbe counted separately.

Genesis: Intentional: Replicating Tro-jan horse (virus)

Time: During Development: Object Code

Place: Support: Unprivileged Utilities (compiler)

Case: U2

Source: Tanenbaum, A. S. Operating Systems Design and Implementation. Prentice-Hall, 1987.

System: Unix

Description: The “lpr” program is aUnix utility that enters a file to be printedinto the appropriate print queue. The -roption to lpr causes the file to be re-moved once it has been entered into theprint queue. In early versions of Unix,

ACM Computing Surveys, Vol 26, No. 3, September 1994

the -r option did not adequately checkthat the user invoking lpr -r had therequired permissions to remove the spec-ified file, so it was possible for a user toremove, for instance, the password fileand prevent anyone from logging into thesystem.

Genesis: Inadvertent: Identification andAuthentication. Apparently, lpr was aSetUID (SUID) program owned by root

(i.e., it executed without access controls)and so was permitted to delete any fileon the system. A missing or improperaccess check probably led to this flaw.

Time: During Development: Source Code

Place: Operating System: File Management
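
A setuid-root utility of this kind can guard against such misuse by checking the invoking user's permission before unlinking the file. A minimal sketch using standard POSIX calls (our illustration, not the historical lpr code):

    #include <stdio.h>
    #include <unistd.h>

    /* In a setuid-root utility, access() checks permission against the real
       uid/gid (the invoking user), not the effective uid (root).  Checking the
       real user's write permission on the file before unlinking it prevents
       the utility from deleting files the user could not touch; a fuller check
       would also consider write permission on the containing directory. */
    int remove_if_permitted(const char *path) {
        if (access(path, W_OK) != 0) {
            perror("access");
            return -1;               /* invoking user lacks permission: refuse */
        }
        if (unlink(path) != 0) {
            perror("unlink");
            return -1;
        }
        return 0;
    }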

Case: U3

Source: Tanenbaum, A. S. Operating Systems Design and Implementation. Prentice-Hall, 1987.

System: Unix

Description: In some versions of Unix, "mkdir" was an SUID program owned by root. The creation of a directory required two steps. First, the storage for the directory was allocated with the "mknod" system call. The directory created would be owned by root. The second step of "mkdir" was to change the owner of the newly created directory from "root" to the ID of the user who invoked "mkdir." Because these two steps were not atomic, it was possible for a user to gain ownership of any file in the system, including the password file. This could be done as follows: the "mkdir" command would be initiated, perhaps as a background process, and would complete the first step, creating the directory, before being suspended. Through another process, the user would then remove the newly created directory before the suspended process could issue the "chown" command and would create a link to the system password file with the same name as the directory just deleted. At this time the original "mkdir" process would resume execution and complete the "mkdir" invocation by issuing the "chown" command. However, this command would now have the effect of changing the owner of the password file to be the user who had invoked "mkdir." As the owner of the password file, that user could now remove the password for root and gain superuser status.
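The non-atomic sequence can be sketched as follows (illustrative C of our own, not the historical mkdir source); the comment marks the window during which the newly created directory can be replaced by a link to another file before the chown takes effect.

    /* Sketch of the two-step, non-atomic directory creation that       */
    /* created the race (simplified; directories were then made with    */
    /* mknod by a setUID-to-root program and given away with chown).    */
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int old_mkdir(const char *name, uid_t invoker_uid, gid_t invoker_gid)
    {
        if (mknod(name, S_IFDIR | 0755, 0) != 0)  /* step 1: created, owned by root */
            return -1;

        /* RACE WINDOW: if the process is suspended here, another process
           can remove "name" and create a link with the same name to,
           say, the password file; the chown below then gives that file
           to the invoker.                                               */

        return chown(name, invoker_uid, invoker_gid);   /* step 2: give it away */
    }

A single atomic mkdir system call, as later provided in Berkeley Unix, removes the window entirely.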

Genesis: Intentional: Nonmalicious: Other. (Might also be classified as Inadvertent: Serialization.) The developer probably realized the need for (and lack of) atomicity in mkdir, but could not find a way to provide this in the version of Unix with which he or she was working. Later versions of Unix (Berkeley Unix) introduced a system call to achieve this.

Time: During Development: Source Code

Place: Operating System: File Management. The flaw is really the lack of a needed facility at the system call interface.

Case: U4

Source: Discolo, A. V. 4.2 BSD Unix security. Computer Science Dept., Univ. of California, Santa Barbara, April 26, 1985.

System: Unix

Description: Using the Unix command "sendmail," it was possible to display any file in the system. Sendmail has a -C option that allows the user to specify the configuration file to be used. If lines in the file did not match the required syntax for a configuration file, sendmail displayed the offending lines. Apparently, sendmail did not check to see if the user had permission to read the file in question, so to view a file for which he or she did not have permission (unless it had the proper syntax for a configuration file), a user could simply give the command "sendmail -Cfile_name."

Genesis: Inadvertent: Identification and Authentication. The probable cause of this flaw is a missing access check, in combination with the fact that the sendmail program was an SUID program owned by root, and so was allowed to bypass all access checks.

Time: During Development: Source Code

Place: Support: Privileged Utilities

Case: U5

Source: Bishop, M. Security problems with the UNIX operating system. Computer Science Dept., Purdue Univ., West Lafayette, Ind., March 31, 1982.

System: Unix

Description: Improper use of an SUID program and improper setting of permissions on the mail directory led to this flaw, which permitted a user to gain full system privileges. In some versions of Unix, the mail program changed the owner of a mail file to be the recipient of the mail. The flaw was that the mail program did not remove any preexisting SUID permissions that file had when it changed the owner. Many systems were set up so that the mail directory was writable by all users. Consequently, it was possible for user X to remove any other user's mail file. The user X wishing superuser privileges would remove the mail file belonging to root and replace it with a file containing a copy of /bin/csh (the command interpreter or shell). This file would be owned by X, who would then change permissions on the file to make it SUID and executable by all users. X would then send a mail message to root. When the mail message was received, the mail program would place it at the end of root's current mail file (now containing a copy of /bin/csh and owned by X) and then change the owner of root's mail file to be root (via Unix command "chown"). The change owner command did not, however, alter the permissions of the file, so there now existed an SUID program owned by root that could be executed by any user. User X would then invoke the SUID program in root's mail file and have all the privileges of superuser.

Genesis: Inadvertent: Identification and Authentication. This flaw is placed here because the programmer failed to check the permissions on the file in relation to the requester's identity. Other flaws contribute to this one: having the mail directory writable by all users is in itself a questionable approach. Blame could also be placed on the developer of the "chown" function. It would seem that it is never a good idea to allow an SUID program to have its owner changed, and when "chown" is applied to an SUID program, many Unix systems now automatically remove all the SUID permissions from the file.
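The remedy mentioned above can be sketched in a few lines of illustrative C (not the code of any particular system): when ownership of a file is transferred, any setUID and setGID bits are stripped first, so the new owner never inherits a privileged executable.

    /* Sketch: strip setUID/setGID bits before giving a file away.      */
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int give_file_away(const char *path, uid_t new_owner, gid_t new_group)
    {
        struct stat st;

        if (stat(path, &st) != 0)
            return -1;
        /* Clear the setUID and setGID permission bits first.            */
        if (chmod(path, (st.st_mode & 07777) & ~(S_ISUID | S_ISGID)) != 0)
            return -1;
        return chown(path, new_owner, new_group);
    }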

Time: During Development: Source Code

Place: Operating System: System Initialization

Case: U6

Source: Bishop, M. Security problems with the UNIX operating system. Computer Science Dept., Purdue Univ., West Lafayette, Ind., March 31, 1982.

System: Unix (Version 6)

Description: The "su" command in Unix permits a logged-in user to change his or her userID, provided the user can authenticate himself by entering the password for the new userID. In Version 6 Unix, however, if the "su" program could not open the password file it would create a shell with real and effective UID and GID set to those of root, providing the caller with full system privileges. Since Unix also limits the number of files an individual user can have open at one time, "su" could be prevented from opening the password file by running a program that opened files until the user's limit was reached. By invoking "su" at this point, the user gained root privileges.

Genesis: Intentional: Nonmalicious: Other. The designers of "su" may have considered that if the system were in a state where the password file could not be opened, the best option would be to initiate a highly privileged shell to allow the problem to be fixed. A check of default actions might have uncovered this flaw. When a system fails, it should default to a secure state.
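A fail-secure default of the kind this flaw lacked can be sketched as follows (illustrative C, not the Version 6 source): if the password file cannot be consulted, the program refuses to proceed rather than granting privilege.

    /* Sketch: default to a secure state when authentication data       */
    /* cannot be read, instead of spawning a privileged shell.          */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        FILE *pw = fopen("/etc/passwd", "r");
        if (pw == NULL) {
            /* The flawed su did the opposite here: it created a root shell. */
            fprintf(stderr, "su: cannot read password file; giving up\n");
            exit(1);
        }
        /* ... normal authentication against the password file follows ...  */
        fclose(pw);
        return 0;
    }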

Time: During Development: Design

Place: Operating System: Identification/Authentication

Case: U7

Source: Bishop, M. Security problems with the Unix operating system. Computer Science Dept., Purdue Univ., West Lafayette, Ind., March 31, 1982.

System: Unix

Description: Uux is a Unix support software program that permits the remote execution of a limited set of Unix programs. The command line to be executed is received by the uux program at the remote system, parsed, checked to see if the commands in the line are in the set uux is permitted to execute, and if so, a new process is spawned (with userID uucp) to execute the commands. Flaws in the parsing of the command line, however, permitted unchecked commands to be executed. Uux effectively read the first word of a command line, checked it, and skipped characters in the input line until a ";", "^", or a "|" was encountered, signifying the end of this command. The first word following the delimiter would then be read and checked, and the process would continue in this way until the end of the command line was reached. Unfortunately, the set of delimiters was incomplete ("&" and "`" were omitted), so a command following one of the ignored delimiters would never be checked for legality. This flaw permitted a user to invoke arbitrary commands on a remote system (as user uucp). For example, the command

uux "remote_computer !rmail rest_of_command & command2"

would execute two commands on the remote system, but only the first (rmail) would be checked for legality.
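The parsing problem can be made concrete with a small sketch (illustrative C of our own, not the uux source): the scanner must treat every character that can terminate one command and introduce another as a delimiter, and must check the word that follows each one. The historical flaw was that the delimiter set was incomplete.

    /* Sketch: scanning a command line with a complete delimiter set.   */
    #include <string.h>

    /* Every character that can end one command and begin another.      */
    static const char DELIMITERS[] = ";^|&`";

    /* Return a pointer just past the next delimiter, or NULL at end;   */
    /* the caller must legality-check the word found at that point.     */
    static const char *next_command(const char *p)
    {
        p += strcspn(p, DELIMITERS);   /* skip over the current command  */
        return (*p == '\0') ? NULL : p + 1;
    }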

Genesis: Inadvertent: Validation. This flaw seems simply to be an error in the implementation of "uux," though it might be argued that the lack of a standard command line parser in Unix or the lack of a standard, shared set of command termination delimiters (to which "uux" could have referred) contributed to the flaw.

Time: During Development: Requirement/Specification/Design (?). Determining whether this was a specification flaw or a flaw in programming is difficult without examination of the specification (if a specification ever existed) or an interview with the programmer.

Place: Support: Privileged Utilities

Case: U8

Source: Bishop, M. Security problems with the UNIX operating system. Computer Science Dept., Purdue Univ., West Lafayette, Ind., March 31, 1982.

System: Unix

Description: On many Unix systems it is possible to forge mail. Issuing the following command

mail user1 < message_file > device_of_user2

creates a message addressed to user1 with contents taken from message_file but with a FROM field containing the login name of the owner of device_of_user2, so user1 will receive a message that is apparently from user2. This flaw is in the code implementing the "mail" program. It uses the Unix "getlogin" system call to determine the sender of the mail message, but in this situation, "getlogin" returns the login name associated with the current standard output device (redefined by this command to be device_of_user2) rather than the login name of the user who invoked "mail." While this flaw does not permit a user to violate access controls or gain system privileges, it is a significant security problem if one wishes to rely on the authenticity of Unix mail messages. (Even with this flaw repaired, however, it would be foolhardy to place great trust in the "from" field of an email message, since the Simple Mail Transfer Protocol (SMTP) used to transmit email on the Internet was never intended to be secure against spoofing.)

Genesis: Inadvertent: Other Exploitable Logic Error. Apparently, this flaw resulted from an incomplete understanding of the interface provided by the "getlogin" function. While "getlogin" functions correctly, the values it provides do not represent the information desired by the caller.
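A sketch of the safer alternative (illustrative C, not the mail source): derive the sender from the real user ID of the invoking process, which cannot be changed by redirecting standard output, rather than from getlogin().

    /* Sketch: identify the sender from the process's real user ID.     */
    #include <sys/types.h>
    #include <pwd.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        struct passwd *pw = getpwuid(getuid());
        if (pw == NULL) {
            fprintf(stderr, "cannot identify invoking user\n");
            return 1;
        }
        printf("From: %s\n", pw->pw_name);   /* not affected by redirection */
        return 0;
    }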

Time: During Development: Source Code

Place: Support: Privileged Utilities

Case: U9

Source: Unix Programmer's Manual, 7th ed., vol. 2B. Bell Telephone Laboratories, 1979.

System: Unix

Description: There are resource exhaustion flaws in many parts of Unix that make it possible for one user to deny service to all others. For example, creating a file in Unix requires the creation of an "i-node" in the system i-node table. It is straightforward to compose a script that puts the system into a loop creating new files, eventually filling the i-node table, and thereby making it impossible for any other user to create files.
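The exhaustion pattern itself requires nothing clever; the following sketch (illustrative C, equivalent to the script described above) simply creates files until creation fails, by which point the i-node table is full for everyone.

    /* Sketch: i-node exhaustion by unbounded file creation.            */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        char name[64];
        for (long i = 0; ; i++) {
            snprintf(name, sizeof name, "junk.%ld", i);
            int fd = open(name, O_CREAT | O_WRONLY, 0600);
            if (fd < 0)
                break;      /* creation fails once i-nodes are exhausted */
            close(fd);
        }
        return 0;
    }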

Genesis: Inadvertent: Boundary Condition Violation: Resource Exhaustion (or Intentional: Nonmalicious: Other). This flaw can be attributed to the design philosophy used to develop the Unix system, namely, that its users are benign: they will respect each other and not abuse the system. The lack of resource quotas was a deliberate choice, and so Unix is relatively free of constraints on how users consume resources: a user may create as many directories, files, or other objects as needed. This design decision is the correct one for many environments but leaves the system open to abuse where the original assumption does not hold. It is possible to place some restrictions on a user, e.g., by limiting the amount of storage he or she may use, but this is rarely done in practice.

Time: During Development: Requirement/Specification/Design

Place: Operating System: File Management

Case: U10

Source: Spafford, E. H. Crisis and aftermath. Commun. ACM 32, 6 (June 1989), 678-687.

System: Unix

Description: In many Unix systems the sendmail program was distributed with the debug option enabled, allowing unauthorized users to gain access to the system. A user who opened a connection to the system's sendmail port and invoked the debug option could send messages addressed to a set of commands instead of a user's mailbox. A judiciously constructed message addressed in this way could cause commands to be executed on the remote system on behalf of an unauthenticated user; ultimately, a Unix shell could be created, circumventing normal login procedures.

Genesis: Intentional: Nonmalicious: Other (or Malicious: Trapdoor, if intentionally left in the distribution). This feature was deliberately inserted in the code, presumably as a debugging aid. When it appeared in distributions of the system intended for operational use, it provided a trapdoor. There is some evidence that it reappeared in operational versions after having been noticed and removed at least once.

Time: During Development: Requirement/Specification/Design

Place: Support: Privileged Utilities

Case: U11

Source: Gwyn, D. Unix-Wizards Digest 6, 15 (Nov. 10, 1988).


System: Unix

Description: The Unix chfn function permits a user to change the full name associated with his or her userID. This information is kept in the password file, so a change in a user's full name entails writing that file. Apparently, chfn failed to check the length of the input buffer it received, and merely attempted to rewrite it to the appropriate place in the password file. If the buffer was too long, the write to the password file would fail in such a way that a blank line would be inserted in the password file. This line would subsequently be replaced by a line containing only "::0:0:::", which corresponds to a null-named account with no password and root privileges. A penetrator could then log in with a null userID and password and gain root privileges.
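The missing control can be sketched as a bounded, validated read (illustrative C of our own, not the chfn source); the length limit and the rejection of the password-file field delimiter are assumptions chosen for the example.

    /* Sketch: validate the new full name before it is written into     */
    /* the password file.                                                */
    #include <stdio.h>
    #include <string.h>

    #define NAME_MAX_LEN 64            /* assumed limit for the field    */

    static int read_full_name(char *out, size_t outlen)
    {
        char buf[256];

        if (fgets(buf, sizeof buf, stdin) == NULL)
            return -1;
        buf[strcspn(buf, "\n")] = '\0';

        /* Reject over-long input and the field delimiter ':'.           */
        if (strlen(buf) >= NAME_MAX_LEN || strlen(buf) >= outlen ||
            strchr(buf, ':') != NULL)
            return -1;

        strcpy(out, buf);              /* safe: length checked above      */
        return 0;
    }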

Genesis: Inadvertent: Validation

Time: During Development: Source Code

Place: Operating System: Identification/Authentication. From one view, this was a flaw in the chfn routine that ultimately permitted an unauthorized user to log in. However, the flaw might also be considered to be in the routine that altered the blank line in the password file to one that appeared valid to the login routine. At the highest level, perhaps the flaw is in the lack of a specification that prohibits blank userIDs and null passwords, or in the lack of a proper abstract interface for modifying /etc/passwd.

Case: U12

Source: Rochlis, J. A. and Eichin, M. W. With microscope and tweezers: The worm from MIT's perspective. Commun. ACM 32, 6 (June 1989), 689-699.

System: Unix (4.3BSD on VAX)

Description: The "fingerd" daemon in Unix accepts requests for user information from remote systems. A flaw in this program permitted users to execute code on remote machines, bypassing normal access checking. When fingerd read an input line, it failed to check whether the record returned had overrun the end of the input buffer. Since the input buffer was predictably allocated just prior to the stack frame that held the return address for the calling routine, an input line for fingerd could be constructed so that it overwrote the system stack, permitting the attacker to create a new Unix shell and have it execute commands on his or her behalf. This case represents a (mis)use of the Unix "gets" function.
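The repair is the now-familiar replacement of gets() by a bounded read; the sketch below is illustrative C, not the fingerd source.

    /* Sketch: bounded input instead of gets().  gets() copies an       */
    /* arbitrarily long line into a fixed buffer; fgets() stops at      */
    /* the buffer size, so the stack cannot be overrun this way.        */
    #include <stdio.h>

    int main(void)
    {
        char line[512];

        /* Flawed pattern (do not use):  gets(line);                     */
        if (fgets(line, sizeof line, stdin) == NULL)
            return 1;
        /* ... parse the finger request held in line[] ...               */
        return 0;
    }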

Genesis: Inadvertent: Validation

Time: During Development: Source Code

Place: Support: Privileged Utilities

Case: U13

Source: Robertson, S. Sec. Distrib. List 1, 14 (June 22, 1989).

System: Unix

Description: Rwall is a Unix network utility that allows a user to send a message to all users on a remote system. /etc/utmp is a file that contains a list of all currently logged-in users. Rwall uses the information in /etc/utmp on the remote system to determine the users to which the message will be sent, and the proper functioning of some Unix systems requires that all users be permitted to write the file /etc/utmp. In this case, a malicious user can edit the /etc/utmp file on the target system to contain the entry:

../etc/passwd.

The user then creates a password file that is to replace the current password file (e.g., so that his or her account will have system privileges). The last step is to issue the command:

rwall hostname < newpasswordfile.

The rwall daemon (having root privileges) next reads /etc/utmp to determine which users should receive the message. Since /etc/utmp contains an entry ../etc/passwd, rwalld writes the message (the new password file) to that file as well, overwriting the previous version.


Genesis: Inadvertent: Validation

Time: During Development: Requirement/Specification/Design. The flaw occurs because users are allowed to alter a file on which a privileged program relied.

Place: Operating System: System Initialization. This flaw is considered to be in system initialization because proper setting of permissions on /etc/utmp at system initialization can eliminate the problem.

Case: U14

Source: Purtilo, J. Risks-Forum Dig. 7, 2 (June 2, 1988).

System: Unix (SunOS)

Description: The program rpc.rexd is a daemon that accepts requests from remote workstations to execute programs. The flaw occurs in the authentication section of this program, which appears to base its decision on userID (UID) alone. When a request is received, the daemon determines if the request originated from a superuser UID. If so, the request is rejected. Otherwise, the UID is checked to see whether it is valid on this workstation. If it is, the request is processed with the permissions of that UID. However, if a user has root access to any machine in the network, it is possible for him to create requests that have any arbitrary UID. For example, if a user on computer 1 has a UID of 20, the impersonator on computer 2 becomes root and generates a request with a UID of 20 and sends it to computer 1. When computer 1 receives the request it determines that it is a valid UID and executes the request. The designers seem to have assumed that if a (locally) valid UID accompanies a request, the request came from an authorized user. A stronger authentication scheme would require the user to supply some additional information, such as a password. Alternatively, the scheme could exploit the Unix concept of "trusted host." If the host issuing a request is in a list of trusted hosts (maintained by the receiver) then the request would be honored; otherwise it would be rejected.

Genesis: Inadvertent: Identification and Authentication

Time: During Development: Requirement/Specification/Design

Place: Support: Privileged Utilities

DEC VAX Computers

DEC's VAX series of computers can be operated with the VMS operating system or with a UNIX-like system called ULTRIX; both are DEC products. In VMS there is a system authorization file that records the privileges associated with a userID. A user who can alter this file arbitrarily effectively controls the system. DEC also developed the VAX Security Kernel, a high-security operating system for the VAX based on the virtual machine monitor approach. Although the results of this effort were never marketed, two hardware-based covert timing channels discovered in the course of its development have been documented clearly in the literature and are included below.

Case: D1

Source: VMS code patch eliminates security breach. Dig. Rev. (June 1, 1987), 3.

System: DEC VMS

Description: This flaw is of particular interest because the system in which it occurred was a new release of a system that had previously been closely scrutinized for security flaws. The new release added system calls that were intended to permit authorized users to modify the system authorization file. To determine whether the caller has permission to modify the system authorization file, that file must itself be consulted. Consequently, when one of these system calls was invoked, it would open the system authorization file and determine whether the user was authorized to perform the requested operation. If the user was not authorized to perform the requested operation, the call would return with an error message. The flaw was that when certain second parameters were provided with the system call, the error message was returned, but the system authorization file was inadvertently left open. It was then possible for a knowledgeable (but unauthorized) user to alter the system authorization file and eventually gain control of the entire machine.
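The underlying discipline can be sketched in C (illustrative only; the actual flaw was in VMS system-call code, and the file name and check shown here are hypothetical): the privileged file must be released on every exit path, including the path that rejects the request.

    /* Sketch: close the authorization file on the rejection path too.  */
    #include <stdio.h>

    /* Hypothetical stand-in for the real authorization check.          */
    static int caller_is_authorized(FILE *auth, const char *request)
    {
        (void)auth; (void)request;
        return 0;               /* stub: always reject, for illustration */
    }

    int modify_authorization(const char *request)
    {
        FILE *auth = fopen("authorization.file", "r+");  /* hypothetical */
        if (auth == NULL)
            return -1;
        if (!caller_is_authorized(auth, request)) {
            fclose(auth);       /* the flawed code returned without this */
            return -1;
        }
        /* ... perform the authorized modification ... */
        fclose(auth);
        return 0;
    }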

Genesis: Inadvertent: Domain: Residuals. In the case described, the access to the authorization file represents a residual.

Time: During Maintenance: Source Code

Place: Operating System: Identification/Authentication

Case: D2

Source: Hu, W.-M. Reducing timing channels with fuzzy time. In Proc. of the 1991 IEEE Computer Society Symposium on Research in Security and Privacy. 1991, pp. 8-20.

System: VAX Security Kernel

Description: When several CPUs share a common bus, bus demands from one CPU can block those of others. If each CPU also has access to a clock of any kind, it can also detect whether its requests have been delayed or immediately satisfied. In the case of the VAX Security Kernel, this interference permitted a process executing on a virtual machine at one security level to send information to a process executing on a different virtual machine, potentially executing at a lower security level. The cited source describes a technique developed and applied to limit this kind of channel.

Genesis: Intentional: Nonmalicious: Covert timing channel

Time: During Development: Requirement/Specification/Design. This flaw arises because of a hardware design decision.

Place: Hardware

Intel 80386/80387 Processor/Coprocessor Set

Case: IN1

Source: EE's tools & toys. IEEE Spectrum 25, 8 (Aug. 1988), 42.

System: All systems using Intel 80386 processor and 80387 coprocessor.

Description: It was reported that systems using the 80386 processor and 80387 coprocessor may halt if the 80387 coprocessor sends a certain signal to the 80386 processor when the 80386 processor is in paging mode. This seems to be a hardware or firmware flaw that can cause denial of service. The cited reference does not provide details as to how the flaw could be evoked from software. It is included here simply as an example of a hardware flaw in a widely marketed commercial system.

Genesis: Inadvertent: Other Exploitable Logic Error (?)

Time: During Development: Requirement/Specification/Design (?)

Place: Hardware

Personal Computers: IBM PC's and Compatibles, Apple Macintosh, Amiga, and Atari

This class of computers poses an interesting classification problem: can a computer be said to have a security flaw if it has no security policy? Most personal computers, as delivered, do not restrict (or even identify) the individuals who use them. Therefore, there is no way to distinguish an authorized user from an unauthorized one or to discriminate an authorized access request by a program from an unauthorized one. In some respects, a personal computer that is always used by the same individual is like a single user's domain within a conventional time-shared interactive system: within that domain, the user may invoke programs as he or she wishes. Each program a user invokes can employ the full privileges of that user to read, modify, or delete data within that domain. Nevertheless, it seems to us that even if personal computers do not have explicit security policies, they do have implicit ones. Users normally expect certain properties of their machines, for example, that running a new piece of commercially produced software should not cause all of one's files to be deleted.

For this reason, we include a few examples of viruses and Trojan horses that exploit the weaknesses of IBM PC's, their non-IBM equivalents, Apple Macintoshes, Atari computers, and Commodore Amigas. The fundamental flaw in all of these systems is the fact that the operating system, application packages, and user-provided software programs inhabit the same protection domain and therefore have the same privileges and information available to them. Thus, if a user-written program goes astray, either accidentally or maliciously, it may not be possible for the operating system to protect itself or other programs and data in the system from the consequences. Effective attempts to remedy this situation generally require hardware modifications, and some such modifications have been marketed. Additionally, software packages capable of detecting the presence of certain kinds of malicious software are marketed as "virus detection/prevention" mechanisms. Such software can never provide complete protection in such an environment, but it can be effective against some specific threats.

The fact that PC's normally provide only a single protection domain (so that all instructions are available to all programs) is probably attributable to the lack of hardware support for multiple domains in early PC's, to the culture that led to the production of PC's, and to the environments in which they were intended to be used. Today, the processors of many, if not most, PC's could support multiple domains, but frequently the software (perhaps for reasons of compatibility with older versions) does not exploit the hardware mechanisms that are available.

When powered up, a typical PC (e.g., running MS-DOS) loads ("boots") its operating system from predefined sectors on a disk (either floppy or hard). In many of the cases listed next, the malicious code strives to alter these boot sectors so that it is automatically activated each time the system is rebooted; this gives it the opportunity to survey the status of the system and decide whether or not to execute a particular malicious act. A typical malicious act that such code could execute would be to destroy a file allocation table, which will delete the filenames and pointers to the data they contained (though the data in the files may actually remain intact). Alternatively, the code might initiate an operation to reformat a disk; in this case, not only the file structures, but also the data, are likely to be lost.

MS-DOS files have two-part names: a filename (usually limited to eight characters) and an extension (limited to three characters) which is normally used to indicate the type of the file. For example, files containing executable code typically have names like MYPROG.EXE. The basic MS-DOS command interpreter is normally kept in a file named COMMAND.COM. A Trojan horse may try to install itself in this file or in files that contain executable code for common MS-DOS commands, since it may then be invoked by an unwary user. (See case MU1 for a related attack on Multics.)

Readers should understand that it is very difficult to be certain of the complete behavior of malicious code. In most of the cases listed below, the author of the malicious code has not been identified, and the nature of that code has been determined by others who have (for example) read the object code or attempted to "disassemble" it. Thus the accuracy and completeness of these descriptions cannot be guaranteed.

IBM PC’s and Compatibles

Case: PC1

Source: Richardson, D. Risks Forum Dig. 4, 48 (Feb. 18, 1987).


System: IBM PC or compatible

Description: A modified version of a word processing program (PC-WRITE, version 2.71) was found to contain a Trojan horse after having been circulated to a number of users. The modified version contained a Trojan horse that both destroyed the file allocation table of a user's hard disk and initiated a low-level format, destroying the data on the hard disk.

Genesis: Malicious: Nonreplicating Trojan horse

Time: During Operation

Place: Support: Privileged Utilities

Case: PC2

Source: Joyce, E. J. Software viruses: PC-health enemy number one. Datamation (Oct. 15, 1988), 27-30.

System: IBM PC or compatible

Description: This virus places itself in the stack space of the file COMMAND.COM. If an infected disk is booted, and then a command such as TYPE, COPY, DIR, etc., is issued, the virus will gain control. It checks to see if the other disk contains a COMMAND.COM file, and if so, it copies itself to it, and a counter on the infected disk is incremented. When the counter equals 4, every disk in the PC is erased. The boot tracks and the File Access Tables are nulled.

Genesis: Malicious: Replicating Trojan horse (virus)

Time: During Operation

Place: Operating System: System Initialization

Case: PC3

Source: Malpass, D. Risks Forum Dig. 1, 2 (Aug. 28, 1985).

System: IBM-PC or compatible

Description: This Trojan horse program was described as a program to enhance the graphics of IBM programs. In fact, it destroyed data on the user's disks and then printed the message "Arf! Arf! Got You!"

Genesis: Malicious: Nonreplicating Trojan horse

Time: During Operation

Place: Support: Privileged Utilities (?)

Case: PC4

Source: Radai, Y. Info-IBM PC Dig. 7, 8 (Feb. 8, 1988). Also ACM SIGSOFT Softw. Eng. Notes 13, 2 (Apr. 1988), 13-14.

System: IBM-PC or compatible

Description: The so-called "Israeli" virus infects both COM and EXE files. When an infected file is executed for the first time, the virus inserts its code into memory so that when interrupt 21h occurs the virus will be activated. Upon activation, the virus checks the currently running COM or EXE file. If the file has not been infected, the virus copies itself into the currently running program. Once the virus is in memory it does one of two things: it may slow down execution of the programs on the system or, if the date it obtains from the system is Friday the 13th, it is supposed to delete any COM or EXE file that is executed on that date.

Genesis: Malicious: Replicating Trojan horse (virus)

Time: During Operation

Place: Operating System: System Initialization

Apple Macintosh

An Apple Macintosh application presents quite a different user interface from that of a typical MS-DOS application on a PC, but the Macintosh and its operating system share the primary vulnerabilities of a PC running MS-DOS. Every Macintosh file has two "forks": a data fork and a resource fork, although this fact is invisible to most users. Each resource fork has a type (in effect a name) and an identification number. An application that occupies a given file can store auxiliary information, such as the icon associated with the file, menus it uses, error messages it generates, etc., in resources of appropriate types within the resource fork of the application file. The object code for the application itself will reside in resources within the file's resource fork. The Macintosh operating system provides utility routines that permit programs to create, remove, or modify resources. Thus any program that runs on the Macintosh is capable of creating new resources and applications or altering existing ones, just as a program running under MS-DOS can create, remove, or alter existing files. When a Macintosh is powered up or rebooted, its initialization may differ from MS-DOS initialization in detail, but not in kind, and the Macintosh is vulnerable to malicious modification of the routines called during initialization.

Case: MA1

Source: Tizes, B. R. Beware the Trojan bearing gifts. MacGuide Mag. 1 (1988), 110-114.

System: Macintosh

Description: NEWAPP.STK, a Macintosh program posted on a commercial bulletin board, was found to include a virus. The program modifies the System program located on the disk to include an INIT called "DR." If another system is booted with the infected disk, the new system will also be infected. The virus is activated when the date of the system is March 2, 1988. On that date the virus will print out the following message: "RICHARD BRANDOW, publisher of MacMag, and its entire staff would like to take this opportunity to convey their UNIVERSAL MESSAGE OF PEACE to all Macintosh users around the world."

Genesis: Malicious: Replicating Trojan horse (virus)

Time: During Operation

Place: Operating System: System Initialization

Case: MA2

Source: Stefanac, S. Mad Macs. Macworld 5, 11 (Nov. 1988), 93-101.

System: Macintosh

Description: The Macintosh virus, commonly called "scores," seems to attack application programs with VULT or ERIC resources. Once infected, the scores virus stays dormant for a number of days and then begins to affect programs with VULT or ERIC resources, causing attempts to write to the disk to fail. Signs of infection by this virus include an extra CODE resource of size 7026, the existence of two invisible files titled Desktop and Scores in the system folder, and added resources in the Note Pad file and Scrapbook file.

Genesis: Malicious: Replicating Trojan horse (virus)

Time: During Operation

Place: Operating System: System Initialization (?)

Commodore Amiga

Case: CA1

Source: Koester, B. Risks Forum Dig. 5, 71 (Dec. 7, 1987); also ACM SIGSOFT Softw. Eng. Notes 13, 1 (Jan. 1988), 11-12.

System: Amiga personal computer

Description: This Amiga virus uses the boot block to propagate itself. When the Amiga is booted from an infected disk, the virus is copied into memory. The virus initiates the warm-start routine. Instead of performing the normal warm start, the virus code is activated. When a warm start occurs, the virus code checks to determine if the disk in drive 0 is infected. If not, the virus copies itself into the boot block of that disk. If a certain number of disks have been infected, a message is printed revealing the infection; otherwise the normal warm start occurs.

Genesis: Malicious: Replicating Trojan horse (virus)


Time: During Operation

Place: Operating System: System Initialization

Atari

Case: AT1

Source: Jainschigg, J. Unlocking the secrets of computer viruses. Atari Expl. 8, 5 (Oct. 1988), 28-35.

System: Atari

Description: This Atari virus infects the boot block of floppy disks. When the system is booted from an infected floppy disk, the virus is copied from the boot block into memory. It attaches itself to the function getbpd so that every time getbpd is called the virus is executed. When executed, first the virus checks to see if the disk in drive A is infected. If not, the virus copies itself from memory onto the boot sector of the uninfected disk and initializes a counter. If the disk is already infected the counter is incremented. When the counter reaches a certain value the root directory and file access tables for the disk are overwritten, making the disk unusable.

Genesis: Malicious: Replicating Trojan horse (virus)

Time: During Operation

Place: Operating System: System Initialization

ACKNOWLEDGMENTS

The idea for this survey was conceived several years ago when we were considering how to provide automated assistance for detecting security flaws. We found that we lacked a good characterization of the things we were looking for. It has had a long gestation, and many have assisted in its delivery. We are grateful for the participation of Mark Weiser (then of the University of Maryland) and LCDR Philip Myers of the Space and Naval Warfare Combat Systems Command (SPAWAR) in this early phase of the work. We also thank the National Computer Security Center and SPAWAR for their continuing financial support. The authors gratefully acknowledge the assistance provided by the many reviewers of earlier drafts of this survey. Their comments helped us refine the taxonomy, clarify the presentation, distinguish the true computer security flaws from the mythical ones, and place them accurately in the taxonomy. Comments from Gene Spafford, Matt Bishop, Paul Karger, Steve Lipner, Robert Morris, Peter Neumann, Philip Porras, James P. Anderson, and Preston Mullen were particularly extensive and helpful. Jurate Maciunas Landwehr suggested the form of Figure 4. Thomas Beth, Richard Bisbey II, Vronnie Hoover, Dennis Ritchie, Mike Stolarchuck, Andrew Tanenbaum, and Clark Weissman also provided useful comments and encouragement; we apologize to any reviewers we have inadvertently omitted. Finally, we thank the SURVEYS referees who asked several questions that helped us focus the presentation. Any remaining errors are, of course, our responsibility.

REFERENCES

ABBOTT, R. P., CHIN, J. S., DONNELLEY, J. E., KONIGSFORD, W. L., TOKUBO, S., AND WEBB, D. A. 1976. Security analysis and enhancements of computer operating systems. NBSIR 76-1041, National Bureau of Standards, ICST, Washington, D.C.

ANDERSON, J. P. 1972. Computer security technology planning study. ESD-TR-73-51, vols. I and II, NTIS AD758206, Hanscom Field, Bedford, Mass.

BISBEY, R., II AND HOLLINGWORTH, D. 1978. Protection analysis project final report. ISI/RR-78-13, DTIC AD A056816, USC/Information Sciences Inst., 1978.

BREHMER, C. L. AND CARL, J. R. 1993. Incorporating IEEE Standard 1044 into your anomaly tracking process. CrossTalk, J. Def. Softw. Eng. 6, 1 (Jan.), 9-16.

CHILLAREGE, R., BHANDARI, I. S., CHAAR, J. K., HALLIDAY, M. J., MOEBUS, D. S., RAY, B. K., AND WONG, M.-Y. 1992. Orthogonal defect classification: A concept for in-process measurements. IEEE Trans. Softw. Eng. 18, 11 (Nov.), 943-956.

COHEN, F. 1984. Computer viruses: Theory and experiments. In the 7th DoD/NBS Computer Security Conference. 240-263.

DEPARTMENT OF DEFENSE. 1985. Trusted computer system evaluation criteria. DoD 5200.28-STD, U.S. Dept. of Defense, Washington, D.C.

DENNING, D. E. 1982. Cryptography and Data Security. Addison-Wesley, Reading, Mass.

DENNING, P. J. 1988. Computer viruses. Am. Sci. 76 (May-June), 236-238.

ELMER-DEWITT, P. 1988. Invasion of the data snatchers. TIME Mag. (Sept. 26), 62-67.

FERBRACHE, D. 1992. A Pathology of Computer Viruses. Springer-Verlag, New York.

FLORAC, W. A. 1992. Software quality measurement: A framework for counting problems and defects. CMU/SEI-92-TR-22, Software Engineering Inst., Pittsburgh, Pa.

GASSER, M. 1988. Building a Secure Computer System. Van Nostrand Reinhold, New York.

IEEE COMPUTER SOCIETY. 1990. Standard glossary of software engineering terminology. ANSI/IEEE Standard 610.12-1990, IEEE Press, New York.

LAMPSON, B. W. 1973. A note on the confinement problem. Commun. ACM 16, 10 (Oct.), 613-615.

LANDWEHR, C. E. 1983. The best available technologies for computer security. IEEE Comput. 16, 7 (July), 86-100.

LANDWEHR, C. E. 1981. Formal models for computer security. ACM Comput. Surv. 13, 3 (Sept.), 247-278.

LAPRIE, J. C., ED. 1992. Dependability: Basic Concepts and Terminology. Springer-Verlag Series in Dependable Computing and Fault-Tolerant Systems, vol. 6, Springer-Verlag, New York.

LEVESON, N. AND TURNER, C. S. 1992. An investigation of the Therac-25 accidents. UCI TR-92-108, Information and Computer Science Dept., Univ. of California, Irvine, Calif.

LINDE, R. R. 1975. Operating system penetration. In the AFIPS National Computer Conference. AFIPS, Arlington, Va., 361-368.

MCDERMOTT, J. P. 1988. A technique for removing an important class of Trojan horses from high order languages. In Proceedings of the 11th National Computer Security Conference. NBS/NCSC, Gaithersburg, Md., 114-117.


NEUMANN, P. G. 1978. Computer security evaluation. In the 1978 National Computer Conference, AFIPS Conference Proceedings 47. AFIPS, Arlington, Va., 1087-1095.

PETROSKI, H. 1992. To Engineer Is Human: The Role of Failure in Successful Design. Vintage Books, New York.

PFLEEGER, C. P. 1989. Security in Computing. Prentice-Hall, Englewood Cliffs, N.J.

ROCHLIS, J. A. AND EICHIN, M. W. 1989. With microscope and tweezers: The worm from MIT's perspective. Commun. ACM 32, 6 (June), 689-699.

SCHELL, R. R. 1979. Computer security: The Achilles heel of the electronic Air Force? Air Univ. Rev. 30, 2 (Jan.-Feb.), 16-33.

SCHOCH, J. F. AND HUPP, J. A. 1982. The "worm" programs: Early experience with a distributed computation. Commun. ACM 25, 3 (Mar.), 172-180.

SPAFFORD, E. H. 1989. Crisis and aftermath. Commun. ACM 32, 6 (June), 678-687.

SULLIVAN, M. R. AND CHILLAREGE, R. 1992. A comparison of software defects in database management systems and operating systems. In Proceedings of the 22nd International Symposium on Fault-Tolerant Computing (FTCS-22) (July). IEEE Computer Society, Boston, Mass.

THOMPSON, K. 1984. Reflections on trusting trust. Commun. ACM 27, 8 (Aug.), 761-763.

WEISS, D. M. AND BASILI, V. R. 1985. Evaluating software development by analysis of changes: Some data from the Software Engineering Laboratory. IEEE Trans. Softw. Eng. SE-11, 2 (Feb.), 157-168.
