


REG 07/01/87 TECHNICAL REFERENCE ON SOFTWARE DEVELOPMENT ACTIVITIES

Reference Materials And Training Aids For Investigators

July 1987

The United States Food And Drug Administration
Division of Field Investigations
Office of Regional Operations
Associate Commissioner for Regulatory Affairs

PREFACE

The use of computerized systems to perform process control and quality assurance activities within the industries regulated by the Food And Drug Administration is becoming more prevalent as the size and cost of this technology decreases. Computer systems are now commonly found controlling activities ranging from the manufacture of animal feeds to the operation of medical device products destined for implant. As these systems become instrumental in assuring the quality, safety, and integrity of FDA regulated products, it becomes extremely important for the Agency to verify that proper controls were employed to assure the correct performance of the computer system prior to its implementation and for the maintenance and monitoring of the system once it has been installed.

This is the second in a series of references designed to assist the investigator in his/her understanding of computerized systems and their controls. The first, "Guide to Inspection of Computerized Systems in Drug Processing," provided an overview of a complete computer processing system. This second publication is intended to focus on methods and techniques for the development and management of software.

It is not the intent of this guide to serve as a medium for the establishment of new software development procedures, standards, or requirements. Ample industry standards and procedures have already been published by such organizations as the American National Standards Institute (ANSI); Institute of Electrical and Electronics Engineers, Inc. (IEEE); International Organization for Standardization (ISO); National Bureau of Standards (NBS); Department of Defense (DOD); and American Nuclear Society (ANS). This document is intended to provide a synopsis of these requirements in a straightforward manner for use as a technical reference by the field staff of the Food and Drug Administration.

CHAPTER 1--WHAT IS SOFTWARE?

There was a time when the components of a computer system were clearly defined. Hardware consisted of those elements which you could see, hold, or physically touch. Software made up the intangible components, the computer programs and data. But computer technology is a dynamic and changing field in which new discoveries, methods, and techniques are encountered each day. With changes in technology it often becomes necessary to change the established definitions to enable a more accurate reflection of the true meaning of a word. Software is one such word.

It is now realized that software encompasses far more than just computer programs and/or data. Today, software is recognized as consisting of not only the computer code, but also the documentation necessary to allow the execution and understanding of the program by other knowledgeable individuals. The International Organization for Standardization (ISO) supported this concept when it defined SOFTWARE as "programs, procedures, rules and any associated documentation pertaining to the operation of a computer system" (see References, Item #1). The idea that software consists of more than just the computer program is fundamental to a sound software development program and is the basis of this technical guide.

Categories of Software

This manual considers two basic categories of software, operating systems and application programs. The distinction between the two is determined by the function which they perform. An OPERATING SYSTEM is "software that provides services such as resource allocation, scheduling, input-output control, and data management" (see References, Item #1). APPLICATION SOFTWARE is "software specifically produced for the functional use of a computer system" (see References, Item #1). By definition then, operating system software controls the activities of the computer system while the application program accomplishes the desired task.
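The division of responsibilities in these definitions can be illustrated with a short sketch in a modern programming language. The batch-summary task, file name, and values below are hypothetical; the open() call stands in for the operating system services (input-output control, resource allocation) on which the application program relies.

```python
import os
import tempfile

# Application software: code produced for the functional use of the
# system -- here, a hypothetical batch temperature summary.
def summarize_batch(temperatures):
    return {"min": min(temperatures), "max": max(temperatures)}

# Recording the summary delegates input-output control and resource
# allocation (file handles, disk space) to operating system services,
# reached here through the open() call.
def write_record(summary):
    path = os.path.join(tempfile.gettempdir(), "batch_record.txt")
    with open(path, "w") as handle:
        handle.write(repr(summary))
    return path

record_path = write_record(summarize_batch([70.0, 71.5, 69.8]))
```

The same operating system services would serve any other application program unchanged, just as one operating system may be in use in hundreds of different facilities.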

An example of an operating system and an application program can be seen in the review of a computerized process control system. In most instances, the operating system software for the process controller has been written by the hardware manufacturer and purchased as part of the computer system package. Unless specially ordered, the same operating system software being used in our example could also be in use in hundreds of different manufacturing facilities producing many different types of products. The actual processes being controlled by the computer system are generally transparent to the operating system software. The operating system software is concerned only with the function and operation of the computer system.

The application program, on the other hand, is almost always unique to the specific process being controlled. This program, which may have been written by the supplier of the computer system, a third party software vendor, or the user's own programmers, has been developed to meet the specifications of the process. It is the application program that actually monitors and controls the temperature, pressures, times, and all other important elements associated with the process. It is also the application program that sounds the process deviation alarms and generates the desired reports and records.

From this example it may appear that our quality assurance concerns can be limited to the procedures and controls established for the application software. However, this is not the case. The operating system software and the application system software must work together if the system is to properly function. A revision to the operating system could be as significant to the proper function of the process control activities as a revision to the application program.

In our process control example, we were discussing a system with distinct hardware and software components employing classical operating system and application program software. These distinctions, however, are not always easily discernible. With the desire to reduce the size of microprocessing systems (especially in the area of medical devices), many of the elements of the hardware and software have been combined both physically and functionally.

For example, many of the medical device products being marketed today have combined the software and the hardware together forming a hybrid which is referred to as firmware. FIRMWARE is "hardware that contains a computer program and data that cannot be changed in its user environment" (see References, Item #1). (A detailed explanation of firmware is contained within Appendix A.) Similarly, there are a number of software products marketed today which do not have a clear cut distinction between the operating system and the application software. In these devices, the two categories of software have been combined in such a manner that the device is actually driven by the operating software and no true application program exists.

Software Characteristics

There is no mystery or magic to a computer program. Quite simply, a computer program consists of nothing more than an extremely detailed set of instructions which are executed by the computer system in a specific predetermined order. The apparent magic results from the intertwining of the instructions and the ability to branch from one set of instructions to another. This would almost appear as a contradiction. How can the order of the execution of the instructions be predetermined while at the same time maintaining the freedom to branch from location to location or instruction to instruction within the program? There is, however, no contradiction. The computer program is so structured that even the branching capabilities are all predetermined. It is the branching ability of a computer program that gives it its versatility. This same branching ability also results in its complexity.
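The point can be made concrete with a minimal sketch (the readings and set point are invented for illustration): every branch is fixed in the program text, yet the path actually taken depends on the data presented at run time.

```python
def classify_reading(value, set_point):
    # Each branch below is predetermined when the program is written;
    # which one executes is decided only when data arrives.
    if value < set_point:
        return "normal"
    elif value == set_point:
        return "at set point"
    else:
        return "deviation"

# Three different inputs exercise three different, but fixed, paths.
paths = [classify_reading(v, 100) for v in (95, 100, 103)]
```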

Let us compare for a moment the function of an electro-mechanical controller in a process and the function of a computerized controller in a process. In the electro-mechanical system the activities or actions performed are accomplished in exactly the same manner each and every time. If a temperature reaches a predetermined set point, a relay is triggered, an electrical circuit is completed, and a fan motor is turned on to cool the process. The electro-mechanical system accomplishes this task following exactly the same procedures time after time after time. This is not necessarily the case for a computer driven system.

In the computer driven system, the software monitoring and controlling the process temperature may also be monitoring and controlling other processing functions. As a result we don't know exactly where we will be in the execution of the computer program when a process deviation occurs. Similarly, the program may be written in such a manner that we may have more than one type of corrective action available to us. If the temperature is approaching the set point the program may be written to spend more time monitoring the temperature; if the temperature reaches the set point the program may be written to take some form of corrective action; and if the set point is exceeded the program may be written to divert the process while at the same time sounding an alarm and preparing a process deviation report.
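A sketch of the three responses just described follows; the band width, set point handling, and action names are invented for illustration and are not drawn from any actual control system.

```python
APPROACH_BAND = 2.0  # hypothetical width of the "approaching" band

def temperature_response(temp, set_point):
    if temp > set_point:
        # Set point exceeded: divert, alarm, and record the deviation.
        return ["divert process", "sound alarm", "prepare deviation report"]
    elif temp == set_point:
        # Set point reached: take some form of corrective action.
        return ["take corrective action"]
    elif temp >= set_point - APPROACH_BAND:
        # Approaching the set point: spend more time monitoring.
        return ["increase monitoring frequency"]
    return ["continue normal monitoring"]
```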

Through the use of the computer driven controller, the options available to us have increased beyond simply turning on the fan. Through the diversity of the program, there is also more than one way (or path) available for reaching the temperature monitoring portion of the program. Consequently, the actions taken by the program may follow one set of instructions one time and a completely different set the next. This is an extremely important difference between electro-mechanical systems and computerized systems, particularly when we consider the validation of the system. As will be discussed in later chapters of this guide, it is important to realize that although a computerized system may function properly during two, three, four, or more test runs, this in itself does not reflect the proper function of the system. It is possible that we followed the same paths or branches through the program during each of our test runs and that the remaining paths or branches contain errors or faults. For this reason, the validation of most computer controlled systems must consist of far more than simply testing the computer through the performance of production runs.
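The danger of relying on production-style test runs can be sketched as follows. The branch names and thresholds are hypothetical; the point is that three successful runs leave two branches entirely unexercised.

```python
executed = set()  # records which branches the test runs actually took

def adjust(temp, pressure):
    if temp > 100:
        executed.add("high-temp branch")
        action = "cool"
    else:
        executed.add("normal-temp branch")
        action = "hold"
    if pressure > 30:
        executed.add("high-pressure branch")  # a fault here would go unseen
        action = "vent"
    return action

# Three "production runs" that all happen to follow the same path:
for temp in (101, 105, 110):
    adjust(temp, pressure=10)

all_branches = {"high-temp branch", "normal-temp branch", "high-pressure branch"}
untested = all_branches - executed
```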

There are two additional software characteristics which should also be discussed as they relate directly to the software's ability to follow different instruction paths during execution. It is important for us to realize that unlike the electro-mechanical systems with which we are familiar, software:

a. can be improved over time; and

b. fails without advance warning.

From what we've discussed to this point about software, the reason for these characteristics should be apparent. It would stand to reason that software can be improved with time as the number of possible branches or paths within the program that are actually executed will normally increase with the use of the program. Consequently, latent or previously undetected program errors are found and changes or corrections to the program can be made. Also contributing to this characteristic is the fact that unlike electro-mechanical systems, software does not wear out from use. A computer program can be executed hundreds of thousands of times with the physical condition of the software remaining unchanged from the first to the last run. (It should be noted that this is not the case for computer hardware--disk, tape, module--which serves as the media to hold the software. Hardware again is a physical element that deteriorates with time.)

Similarly, the characteristic that software failures occur without advance warning should also be understood at this point. With mechanical components we realize that they physically deteriorate from use. For this reason we establish maintenance schedules for the replacement of parts that are showing wear and we determine from our experience with the device when or how often these parts will need replacing. We anticipate the wear and subsequent failure. With software, however, there is no physical wear on the program. Consequently, the failure of the software does not result from deterioration but rather from unforeseen or undetected programming errors. These types of errors may be encountered at any time and occur without any advance warning.

THE SOFTWARE LIFE CYCLE

Software can be very complex. A process control program may consist of thousands of lines of code with numerous paths and branches from instruction to instruction. Nevertheless, the logic of the program must be understood by those responsible for writing, testing, and maintaining it. Because of this complexity, and the fact that computer systems have progressed from performing only basic arithmetic calculations to life dependent and life sustaining systems, computer software development has evolved into a recognized science of established procedures and disciplines. Among the basic teachings of this science are two premises:

1. Quality begins at the design level.

2. You cannot test quality into your software.

In an attempt to assure that these premises are followed, all categories of software should go through an organized development process consisting of a series of distinct phases which are collectively referred to as the SOFTWARE LIFE CYCLE. The existence of these identified phases allows reviews and tests to be conducted at distinct points during the development and installation of software. This in turn helps build quality into the software and provides a measure of that quality.

Although a consensus has not developed as to the naming of the phases, it is generally recognized that the Software Life Cycle is the period of time that starts when the software product is conceived and includes at least:

a. a requirements phase;

b. a design phase;

c. an implementation phase;

d. a test phase;

e. an installation and checkout phase; and

f. an operation and maintenance phase.

Chapters 3 through 8 will focus on each of the phases of the software life cycle. But first, Chapter 2 will discuss the standards and procedures which should be in place prior to entering the Software Life Cycle.

CHAPTER 2--STANDARDS AND PROCEDURES

Development Standards

As the development of a computer system generally requires the coordination of talent between a number of different individuals or divisions within the structure of an organization, each division must fully understand its authority and responsibility to the project. Each participant must be aware of what the others are doing and of their own specific responsibilities. For this purpose standards should be established and approved which clearly define:

a. the individuals by position description or the divisions within the organization which are to interact in the development process;

b. the specific responsibilities and the authority of each individual or division; and

c. the sequence of steps to be taken, including the procedures to be followed and the approvals which must be obtained, in the submission, review, and approval of a proposal for the development, implementation, or modification of a computer system.

In many instances the standards and procedures which are established for the development of software address the activities of the engineers, programmers, analysts, and managers. These are the individuals most commonly associated with the software development task. It is important, however, that the quality assurance group not be omitted from consideration at this point. The development process should be periodically audited to assure that the established requirements are actually being adhered to. These audit procedures should describe the involvement of a quality assurance group detailing all reporting requirements and responsibilities for reviews and audits at various stages during the development process.

Programming Standards

Equally important in the development of quality software is the establishment of standards and procedures for the actual writing or coding of the computer program. A computer program is often a lengthy and very complex document which must be understood by those responsible for writing, testing, and maintaining it. To facilitate this understanding every effort should be made to maintain uniformity and clarity from module to module, routine to routine, or program to program regardless of who wrote them. For this purpose, written standards should be in place which detail the requirements and restrictions which are to be followed in writing computer programs. These standards may address:

1. Structural standards (e.g., structured programming with primary top-down logic flow, modular program design, etc.).

2. Software documentation procedures including:

a. description of the supporting documentation which is to be used (e.g., functional and/or process flow charts, data dictionaries, decision trees, pseudocode, etc.); and

b. a statement as to where the documentation is to be maintained and who is responsible for keeping it current.

3. Coding standards covering:

a. conventions for naming programs, modules, and variables;

b. conventions for designating revision levels;

c. identification of applicable programming manuals;

d. restrictions and conventions on code format;

e. methods for measuring the complexity of the program; and

f. complexity controls (e.g., limiting the number of lines of code per module, control of branching instructions, etc.).

4. Software testing standards and procedures which include:

a. designation of who will be responsible for the testing of the software;

b. discussion of how the test data is to be generated;

c. description of the test records and error logs which must be maintained; and

d. a discussion of how errors or outstanding problems are to be resolved.
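Some of the standards listed above lend themselves to mechanical checking. The sketch below, with an invented line limit and naming convention, shows how standards such as items 3a and 3f might be enforced; actual limits and conventions would come from the written standard itself.

```python
MAX_LINES_PER_MODULE = 50  # hypothetical limit from a coding standard

def check_module(name, source_lines):
    # Returns the list of standard violations found in one module.
    problems = []
    if not name.islower():
        problems.append("module name is not lower case")
    if len(source_lines) > MAX_LINES_PER_MODULE:
        problems.append("module exceeds the line limit")
    return problems

clean = check_module("temp_control", ["..."] * 20)
flagged = check_module("TempControl", ["..."] * 80)
```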

Standards and procedures such as these should be in place prior to beginning the development of the software if a quality product with a reasonable degree of safety and performance confidence is to be produced. Clearly everyone involved in the development project should have a sound understanding of his/her responsibilities and should be able to readily determine the responsibilities of the other members of the team. All expectations should be defined with nothing being left to chance.

CHAPTER 3--REQUIREMENTS PHASE

The first phase in the life cycle of any software product should be a definition or REQUIREMENTS PHASE. In defining software requirements, care should be taken to assure that they are as correct and complete as possible. Consider the process control program which we discussed in Chapter 1. If the process temperature exceeded an established set point, an alarm would sound notifying the operator of a process deviation. If the requirements for this program were written to reflect that when a deviation occurs, an alarm sounds, the programmer who knows little about the technical aspects of the process we are controlling would write the code to meet this requirement. The simple sounding of the alarm, however, would probably not meet the needs or requirements of the operator. In our fictitious process control system we stated that the program is monitoring and controlling any number of important factors such as temperature, pressure, time, and product flow. You can imagine the confusion in the production area as the operator tries to determine where the deviation in the process occurred each time the alarm sounded. There is also a good chance that by the time the operator identified the specific deviation, it may be too late to save the product. In addition, this particular manufacturing activity may require that records of process deviations be maintained. If this is not detailed in the program requirements, provisions for a deviation report and a method for alarm identification will more than likely not be included in the software.

Ideally, problems such as this will be realized and resolved while our project is still in the requirements phase. The failure to do so will eventually result in a need to rewrite, add, or delete sections of program. As each revision introduces the possibility for error, the modified code must be accompanied by new documentation and retested to the extent necessary to establish a new degree of confidence in the revised program. This process takes time. Meanwhile, the operator in our example is becoming worn out trying to determine the meaning of each alarm, process deviation reports are not being maintained, and finished product is being discarded.
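The difference between the bare requirement and the complete one can be sketched in a few lines. The parameter names and limits are invented; the point is that alarm identification and the deviation record exist only because the requirements call for them.

```python
deviation_log = []  # the process deviation records the activity requires

def raise_alarm(parameter, value, limit):
    # "When a deviation occurs, an alarm sounds" yields only an alarm;
    # naming the parameter and logging the record must themselves be
    # stated as requirements before a programmer will provide them.
    message = "ALARM: %s = %s (limit %s)" % (parameter, value, limit)
    deviation_log.append(message)
    return message

message = raise_alarm("temperature", 126.0, 121.0)
```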

Identifying the actual requirements for the finished product is one of the most difficult tasks associated with software development. It requires the ability to understand both the process or activity being computerized as well as the needs of the individuals or operators who will be using the system. In many cases, these needs may not even be known by the users of the system at this phase, but they must nevertheless be anticipated to the maximum degree possible by those responsible for planning and coordinating the project.

To this point only the importance of defining the project requirements has been discussed. It is, however, equally important to document the desired functions and performance capabilities for the software product during this phase. Such documentation is essential for the proper conveyance of ideas and needs to those individuals responsible for other phases of the software life cycle.

Prior to proceeding to the next phase, time should be taken for the review and approval of the documents generated during this phase to assure that they are complete, and that the requirements decided upon are correct and compatible with the system being developed. Any unresolved problems at this point should be documented within these records along with a description of the actions being taken to rectify them, the identity of the individual responsible for tracking them, and the points or times at which they will be periodically reviewed.

The requirements phase should not be taken lightly. As stated in Chapter 1, quality begins at the design level, and the establishment of the complete requirements is the first step in the design process.

CHAPTER 4--DESIGN PHASE

Software design is "the process of defining the software architecture, components, modules, interfaces, test approach, and test data for a software system according to system requirements" (see References, Item #1). The keyword is "defining."

From the Requirements Phase the needs and expectations for the finished software should be known. During the DESIGN PHASE these requirements will be converted into specifications.

For each system application, a formally developed and approved design specification should be established. This document, which should be written as clearly and precisely as possible, is the guide/instructions to those responsible for writing and maintaining the computer programs. The design specifications should include all of the details and specifics for the program. These may be:

1. A description of the activities, operations, and processes that are being controlled, monitored, or reported by the computer system.

2. A detailed description of the hardware which is to be used.

3. A detailed description of the software which will be used including the algorithms which are to be used in processing the data.

4. A detailed description of the files which will be created/accessed by the system and the reports which are to be generated.

5. A description of all limits and parameters which are to be measured by the system and the points at which the alarms are to sound or the data is to be rejected.

6. A description as to how often or when the limits and specified parameters are to be measured.

7. A description of all error and alarm messages, their cause, and the corrective actions which must be taken should they occur.

8. A description of all interfaces, connections, and communication capabilities both between the processor and the external world and between parts of the software.

9. A description of the security measures which are to be followed to verify the accuracy of the program being used and to protect the program from both accidental and intentional abuse.
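Items 5 through 7 of such a specification might be captured as structured data as well as prose. The parameters, limits, and actions below are invented for illustration only.

```python
# Hypothetical design specification fragment: limits (item 5), sampling
# interval (item 6), and the alarm message and corrective action (item 7).
SPEC = {
    "temperature": {"low": 118.0, "high": 124.0, "interval_s": 5,
                    "alarm": "temperature out of range",
                    "action": "divert product flow"},
    "pressure":    {"low": 10.0, "high": 30.0, "interval_s": 10,
                    "alarm": "pressure out of range",
                    "action": "open relief valve"},
}

def evaluate(parameter, value):
    limits = SPEC[parameter]
    if limits["low"] <= value <= limits["high"]:
        return None  # within limits; nothing to report
    return (limits["alarm"], limits["action"])

deviation = evaluate("temperature", 130.0)
```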

Prior to the acceptance of these design specifications, a review should be conducted to determine if the design is following accepted standards and is consistent with the software requirements.

The design specifications should be updated with each software revision and documented so that they can be followed during later phases of the life cycle.

CHAPTER 5--IMPLEMENTATION PHASE

By now the needs and expectations for the system, as well as the procedures and specifications which are to be followed in meeting these requirements, are fully known. The documentation which has been prepared in the earlier phases of the life cycle is now ready to be provided to the programmer(s) for conversion into a software product. But there is far more to the implementation phase than the conversion of concepts and ideas into a computer program. It is during this phase that the software developers must also concern themselves with documenting the program which they are writing and begin taking the steps necessary to "debug" or eliminate faults (errors) from it.

In most instances, software is developed by the programmer as SOURCE CODE. The source code may be in any number of different languages including BASIC, PASCAL, C, ADA, ASSEMBLER, FORTRAN, or COBOL to name a few. Different portions of the source code may even be written in different languages.

Source codes were developed as an aid to the programmer as they allow the program to be written in a mathematical or English based format. As written, however, the source code cannot be understood or executed by the computer. Consequently, prior to being run (executed) the source code must be converted into a machine executable format using another software package which is referred to as an assembler, compiler, or interpreter depending on the language being used. The result of this conversion process is an object code version of our program.

The OBJECT CODE (or machine level binary code) is "a fully compiled or assembled program that is ready to be loaded directly into the computer" (ISO). Where source code is easily readable, object code is represented by a series of 1's and 0's which would require a great deal of knowledge and understanding to interpret.
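A loose modern analogy can be seen in an interpreted language such as Python, whose compile() function turns readable source text into a code object of numeric instructions. This is bytecode for a virtual machine rather than true machine-level object code, but the relationship between the two forms is the same.

```python
import dis

source = "def add(a, b):\n    return a + b\n"

# The compiler converts readable source into a code object whose numeric
# instructions play the role the text assigns to object code.
code_object = compile(source, "<example>", "exec")
namespace = {}
exec(code_object, namespace)

result = namespace["add"](2, 3)
# Disassembly recovers mnemonic names for the otherwise opaque instructions.
opnames = [ins.opname for ins in dis.get_instructions(namespace["add"])]
```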

It is important to understand that programming, even in source code, is difficult. Computer programs are difficult for the programmer to write and are even more difficult for someone else to understand. As a result, efforts are made to establish at least some degree of continuity from programmer to programmer. This is the purpose of the Programming Standards and Procedures which we discussed in Chapter 2. There are also established rules and procedures, called SYNTAX, which must be followed based on the programming language which we are using. But even with rules, procedures, standards, and syntax requirements, no two programmers will write a computer program in exactly the same way. As a result, the documentation of the computer program is extremely important.

Documentation

Software documentation refers to the records, documents, and other information prepared to support the program. This information ranges from the program requirements which were defined in the Requirements Phase of the software life cycle through to the manuals which explain the program to its users. Historical documentation reflecting the history of the software product, including records of all changes or revisions, is referred to as an AUDIT TRAIL and is further discussed in Chapter 8.

Entire books have been written on the subject of software documentation. To cover this subject in detail is beyond the scope of this guide. Essentially, however, computer software should be supported with sufficient information to allow a knowledgeable individual to understand the program requirements and specifications, as well as the logic and algorithms being used.

To assure this "understanding," the basic information maintained should include the documents and records generated during the Requirements Phase and the Design Phase of the software life cycle; the records and documents which reflect the test protocols which were used to test the software; test data; test results; and records explaining the rationale behind the tests which were or were not conducted. The information required in support of the software logic, however, will vary from program to program.

The type and degree of documentation needed to support the software logic is dependent upon both the language being used and the complexity of the program. For example, high level languages such as Basic and Pascal may require less support documentation than languages such as Assembler or Fortran. The reason for this is that languages such as Basic and Pascal are more English oriented, making them easier to read and understand. This is not to say that such software does not require support in the form of flowcharts, data dictionaries, comment statements, etc., but rather that the support may not need to be as extensive as it would be with other programming languages.

Regardless of the language used, the deciding factor as to the degree of documentation needed rests in the ability of those responsible for maintaining the code to understand it. It should be possible for these individuals and their supervisors to locate sections of the program responsible for the performance of specific control functions and explain the basic logic being followed without undue difficulty.
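A short sketch of what "able to locate and explain" might look like in practice follows; the control function and set point are hypothetical, and the convention shown (a header comment naming the control function each section performs) is only one of many possibilities.

```python
def control_fan(temperature, set_point=100.0):
    """Temperature control section.

    Turns the cooling fan on when the process temperature reaches
    the set point (hypothetical design specification item).
    """
    # Corrective action: energize the fan at or above the set point.
    return temperature >= set_point

# A maintainer (or supervisor) can locate this section by its header.
section_header = control_fan.__doc__.splitlines()[0]
```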

Although the focus of this guide is on software, it is important to realize that systems documentation and user documentation are also important to the quality of the finished product.

As computer systems become more complex, it becomes extremely difficult to retain an understanding of how the various peripheral devices, software, and the established records and files interact. SYSTEMS DOCUMENTATION assists in this area by providing pictorial and/or narrative explanations of the interaction of hardware and software. The need for such documentation is again proportionate to the complexity of the system.

USERS DOCUMENTATION consists of operating manuals containing instructions written in lay terminology which clearly define the activities and processes being performed by the computer and those which must be performed manually. This documentation should explain the procedures to be followed in running the system, with detailed discussions on the meaning and significance of all advisories, alarms, and error messages, as well as the corrective action which must be taken.

Documentation, such as that discussed in this section, should be developed in conjunction with the Implementation Phase of the software life cycle. If prepared at the time that the program is actually written, it is likely to be clearer and more accurate than if prepared at the conclusion of the project.

Debugging

The process of "debugging," or "locating, analyzing, and correcting suspected faults" (see References, Item #1), in the computer program should begin with the actual writing of the code. Control of programming is achieved by keeping the scope of any part of a program small and functional. Breaking a large program into workable pieces called modules allows for this degree of control. It is both more efficient and far easier to review and analyze small modules of code, locating and correcting potential errors, than it is to try to locate these same faults in the finished complex program.

The concept of testing the software will be discussed in detail within the next chapter. It is important, however, that you realize that it is at THIS point in the software development process that we begin to review the code for suspected errors. You should not wait until the program is completed or, as has been found in several instances, until the program is placed into the finished process control system or medical device to begin analyzing the software for possible faults.

CHAPTER 6--TEST PHASE

The Test Phase is actually "the period of time in the software life cycle during which the components of a software product are evaluated and integrated, and the software product is evaluated to determine whether or not requirements have been satisfied" (see References, Item #1). This is NOT the point at which we first begin thinking of testing the program. By now, the software should have already undergone some basic debugging by the programmer. In addition, a complete test plan should be based on the program requirements and specifications as detailed in the earlier Requirements and Design Phases of the project.

Although some discussions of the Software Life Cycle distinguish between the "debugging" of the software during the Implementation Phase and the "testing" of the software during the Test Phase, this document will deal with both the testing and debugging of software as a single entity.

Software Errors

Programming errors can be classified into one of two types: syntax errors or logic errors.

SYNTAX ERRORS are language or format errors. They are caused by typos or oversights in the structure of expressions in a programming language. Examples of syntax errors include the omission of a comma, period, or parenthesis in a program instruction. The failure to properly structure a portion of the program or, on some computer systems, attempting to access a file that does not exist can also result in a syntax error. Syntax errors are relatively straightforward and in most cases are identified by the computer system as it attempts to convert the source code into object code. Most computer systems are programmed to identify syntax errors and will not attempt to execute the program until they have been corrected.
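The behavior described above can be sketched in a modern language. In this illustrative example, Python's built-in compile() plays the role of the translator: a statement missing its closing parenthesis is rejected before any part of the program executes.

```python
# Illustrative sketch: the language processor rejects a syntax error
# (a missing closing parenthesis) before any statement is executed.

good = "total = 1 + 2"
bad = "total = (1 + 2"   # missing parenthesis -- a syntax error

compile(good, "<example>", "exec")  # accepted: converts without complaint

try:
    compile(bad, "<example>", "exec")
except SyntaxError as err:
    # The system identifies the fault and refuses to run the program.
    print("rejected before execution:", err.msg)
```

The key point is that the fault is caught mechanically, without the program ever running, which is why syntax errors are the easier of the two classes to find.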

LOGIC ERRORS, by contrast, are difficult to detect as they reflect an error in the programmer's reasoning or method of problem solving. Logic errors sometimes go undetected because the programmer is unable to see the mistake in his or her own reasoning. Of further concern is the fact that logic errors are totally transparent to the computer. If, in the process control example which we discussed in Chapter 1, we were to program our system to look for a temperature of 250 F with 0 psi, the system would look for exactly that temperature and pressure. The system is completely ignorant of the fact that this temperature and pressure combination is an impossibility.
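A minimal sketch of this kind of fault follows. The function name and pressure values are hypothetical, not from the source document; the point is that the code is syntactically perfect and the computer executes it exactly as written, impossible physics and all.

```python
# Hypothetical sketch of the logic error described above: the program
# faithfully tests for exactly 250 F at 0 psi, a physically impossible
# combination, and the computer raises no objection.

def sterilizer_ready(temp_f, pressure_psi):
    # Logic error: the programmer meant a realistic pressure (say 15 psi)
    # but wrote 0 psi. No syntax checker can catch this mistake.
    return temp_f == 250 and pressure_psi == 0

print(sterilizer_ready(250, 0))    # True  -- the impossible condition is "satisfied"
print(sterilizer_ready(250, 15))   # False -- the realistic condition is rejected
```

Only review by someone who understands the process being controlled, not the computer, can reveal such a fault.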

Test Plan

The intent of software debugging/testing is to identify and correct as many errors as possible and to minimize the impact of the unforeseen fault when it does occur. The testing of software is an extremely complex task requiring a great degree of planning. The steps taken in testing the software should include the preparation of a test plan, which is "a document prescribing the approach to be taken for intended testing activities. The plan typically identifies the items to be tested, the testing to be performed, test schedules, personnel requirements, reporting requirements, evaluation criteria, and any risks requiring contingency planning" (see References, Item #1).

Often, the philosophy used in the development of the test plan is critical to the success or failure of the test exercise. The definition of the word "testing" is worthy of consideration. If the purpose of the test plan is to demonstrate that the software performs its intended function or that the program does what it is supposed to do, then the plan will undoubtedly be designed to show that the program works. This concept is not satisfactory for finding errors. A more appropriate approach is to begin the design of the test plan by acknowledging that the program contains errors and then design the plan with the intent of finding as many as possible. From this viewpoint, testing is more correctly defined as "the process of executing a program with the intent of finding errors" (see References, Item #3).

Ideally, the test plan and the selection of the test data will be assigned to a team of individuals rather than to the programmer. There is good reason for this approach. As was mentioned in our discussion of logic errors, there is a good chance that the logic errors contained within the program are totally transparent to the programmer. If the programmer were able to see the error, it would already be corrected. In addition, the testing approach taken by the programmer will almost always be from the position of trying to show what the program will do. The programmer is undoubtedly proud of his/her work and will have a great deal of difficulty developing an impartial test plan designed to reveal program failures.

This is not to say that the programmer should be totally excluded from the development of the test plan. An appropriately designed plan must take into consideration the basic structure of the software when selecting test methods and test data. The programmer will have an intimate knowledge of this structure which he/she will be able to contribute to the project.

Although the team approach is preferred, it is not an absolute requirement. In situations where the programmer is also responsible for developing the test plan, additional care must be taken to arrive at a plan that is as objective as possible.

Regardless of whether the test plan is developed using the team approach or by the individual programmer, consideration should be given to basic testing principles such as the following, which were developed by Myers (see References, Item #3):

1. Do not plan a testing effort under the tacit assumption that no errors will be found.

2. The probability of the existence of more errors in a section of a program is proportional to the number of errors already found in that section.

3. The test plan should include a definition of the expected output or result.

4. The results of each test should be thoroughly reviewed and evaluated.

5. The results of each test should be retained and not discarded upon completion of thetest.

6. Test cases must be written for invalid and unexpected, as well as valid and expected, input conditions.

7. The test plan should be designed to challenge the program not only to assure that it does what it is supposed to do, but also to assure that it does not do what it is not supposed to do.
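Principles 3, 6, and 7 above can be sketched together as a small table of test cases. The function under test (a dose-entry check) and its acceptance range are hypothetical, invented for illustration; note that every expected result is written down before the tests are run, and that invalid and unexpected inputs are challenged alongside valid ones.

```python
# Sketch of Myers' principles: expected output defined in advance (3),
# invalid and unexpected inputs included (6), and the program challenged
# to show what it must NOT do (7). The dose check is hypothetical.

def accept_dose(milligrams):
    if not isinstance(milligrams, (int, float)) or isinstance(milligrams, bool):
        return False                      # unexpected type is rejected
    return 0 < milligrams <= 500          # assumed valid range

# (input, expected result) pairs -- the expected output is part of the plan
cases = [
    (250, True),      # valid and expected
    (500, True),      # on the boundary
    (0, False),       # invalid: no dose
    (-10, False),     # invalid: negative
    (501, False),     # invalid: just past the limit
    ("abc", False),   # unexpected type
]

for given, expected in cases:
    assert accept_dose(given) == expected, (given, expected)
print("all", len(cases), "cases behaved as defined")
```

The results of such a run would then be reviewed and retained, per principles 4 and 5.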

Test Methods

To fully test the capabilities and limitations of software, the software must be tested in simulation. Only through simulation can the necessary test cases be generated to adequately challenge the ability of the software to function under both normal and abnormal conditions. Appropriate testing cannot be accomplished solely through the performance of production or quality assurance runs or through the evaluation of the finished device.

The proper testing of the software includes the challenge of EACH decision path within the program as well as all alarms, error routines, process specifications, etc. This is not to say that all possible combinations of paths must be challenged, but rather that each individual path within the program must be reviewed and evaluated as to its correctness and effect on subsequent program execution.

As this degree of testing is often hampered by both the size and complexity of the finished software package, large programs should be divided into smaller units, by module or subroutine, which can be evaluated individually to assure that the design specifications have been met and that the code structure is challenged according to the defined test criteria. This procedure is referred to as MODULAR TESTING and may be initiated within either the Implementation Phase or the Test Phase of the Software Life Cycle.

Once the modules or routines have been tested individually, they should be integrated and tested to identify design inconsistencies and to remove errors which may result from data transfers and interfaces. This INTEGRATION TESTING should not repeat the module tests but, rather, should build upon the previous test results.

There are two different methods of integrating the modules during the performance of integration testing. The first, and preferred, method is INCREMENTAL INTEGRATION, which is the reassembly of the program module by module or function by function, with an integration test being performed following each addition.

The second method, NONINCREMENTAL INTEGRATION, consists of immediately relinking the entire program following the testing of each independent module. Integration testing is then conducted on the program as a whole. This method is not recommended, as errors are usually very hard to isolate and correct.
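The modular and incremental integration approach described above can be sketched with two small hypothetical modules: each is tested in isolation first, then they are linked and an integration test is performed on the pair, building on rather than repeating the module tests.

```python
# Sketch of modular testing followed by incremental integration.
# Both modules and their limits are hypothetical.

def read_sensor(raw):          # module 1: convert a raw reading to degrees
    return raw / 10.0

def within_limits(value):      # module 2: check a process limit
    return 20.0 <= value <= 30.0

# Module tests -- performed first, each unit in isolation
assert read_sensor(250) == 25.0
assert within_limits(25.0) and not within_limits(35.0)

# Incremental integration: link module 1 to module 2 and test the pair,
# looking for errors in the data transfer between them.
def reading_acceptable(raw):
    return within_limits(read_sensor(raw))

assert reading_acceptable(250)      # converts to 25.0 -- inside limits
assert not reading_acceptable(350)  # converts to 35.0 -- outside limits
print("module and integration tests passed")
```

Had several untested modules been linked at once and a failure observed, it would be far harder to say which module, or which interface, was at fault.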

Types of Tests

There are a large number of different tests which can be used for the challenge and evaluation of software. To list and discuss each is beyond the scope of this guide. For identity purposes, several of the more commonly encountered test types are briefly discussed in Appendix C.

In general, the test procedures used for the evaluation of software can be categorized into two types: functional or black-box tests and structural or white-box tests. The test plan should consist of tests of both types.

FUNCTIONAL TESTING (black-box testing) challenges the design specifications for the software. The emphasis is on input and output, not processing. It is the testing of the program without using knowledge of program design and implementation. Functional testing can be subdivided into two specific areas: equivalence class partitioning and boundary value analysis.

The theory behind EQUIVALENCE CLASS PARTITIONING is that input and output data can be partitioned into classes, with the assumption being that all data in one class should produce an identical result. The equivalence classes cover valid as well as invalid input/output data.

In addition to equivalence class testing, software errors can frequently be related to parameter limits or boundaries (BOUNDARY VALUE ANALYSIS). Therefore, test cases on and close to the class limits are included.
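The two functional techniques above can be sketched against a hypothetical specification, assumed here to read "accept temperatures from 100 to 130 inclusive." One representative value stands for each equivalence class, and additional cases sit on and immediately adjacent to the limits.

```python
# Sketch of equivalence class partitioning and boundary value analysis
# for an assumed specification: accept temperatures from 100 to 130.

def accept_temperature(t):
    return 100 <= t <= 130

# Equivalence class partitioning: one representative per class is assumed
# to behave like every other value in that class.
below, valid, above = 50, 115, 200          # three classes: low, valid, high
assert not accept_temperature(below)
assert accept_temperature(valid)
assert not accept_temperature(above)

# Boundary value analysis: cases on and just beyond the limits,
# where errors tend to cluster (e.g., < written instead of <=).
for t, expected in [(99, False), (100, True), (130, True), (131, False)]:
    assert accept_temperature(t) == expected
print("partition and boundary cases passed")
```

Note that both valid and invalid classes are represented, as the definition of equivalence partitioning requires.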

STRUCTURAL TESTING (white-box testing) tests the code. The emphasis is on testing the flow of control within the program. The criterion is to design test cases until each statement is executed at least once. Therefore, each true/false decision must be executed with both true and false outcomes.
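A minimal sketch of this structural criterion follows, using a hypothetical classification routine with two decisions. Three cases are enough to execute every statement and to drive each decision to both its true and false outcomes.

```python
# Sketch of the white-box criterion: every statement executed at least
# once, and each true/false decision taken both ways. The function and
# its limits are hypothetical.

def classify(pressure):
    if pressure < 0:            # decision 1
        return "invalid"
    if pressure > 15:           # decision 2
        return "over limit"
    return "normal"

assert classify(-1) == "invalid"      # decision 1 true
assert classify(20) == "over limit"   # decision 1 false, decision 2 true
assert classify(10) == "normal"       # both decisions false
print("every statement and both branch outcomes exercised")
```

Unlike the black-box cases above, these tests were chosen by reading the code itself, not the specification.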

Depending on the type of software being developed, the data used in the performance of these test procedures will normally include:

a. normal cases;

b. limits;

c. exceptions;

d. special values (e.g., "-", "0", "1", empty strings);

e. initialization;

f. missing paths;

g. wrong path selection; and

h. wrong action.
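Several of the categories above (normal cases, limits, exceptions, and special values) can be seen driving a single table of test data. The averaging routine below is hypothetical, invented only to give each category a concrete case.

```python
# Compact sketch: one table of test data spanning several of the
# categories listed above, against a hypothetical averaging routine.

def average(values):
    if not values:                      # exception: empty input rejected
        raise ValueError("no data")
    return sum(values) / len(values)

assert average([10, 20, 30]) == 20      # normal case
assert average([5]) == 5                # limit: a single reading
assert average([0, 0]) == 0             # special value: zero
assert average([-1, 1]) == 0            # special value: negative reading

try:                                     # exception case must raise,
    average([])                          # not silently return a number
    raise AssertionError("empty input should have been rejected")
except ValueError:
    pass
print("all categorized cases behaved as expected")
```

The remaining categories (missing paths, wrong path selection, wrong action) describe the faults such data is designed to expose rather than the data itself.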

Additionally, testing experience and intuition can be combined with knowledge of and curiosity about the system under test to add some categorized test cases to the design of the test case set. This may involve the use of special values or a particular combination of values which may be error prone.

It is customary to test various aspects of the software at different times throughout the software development process. As a result, the records of software verification will not necessarily be consolidated in one location or one document. What is of significance is that the software has in fact been tested following an approved test plan or protocol; that the test results have been appropriately reviewed and compared with the anticipated results; and that all test procedures, records, and documents, including the test results, have been reviewed and approved by the appropriate personnel.

The testing of the software in the manner described in this chapter is not limited to the initial release of the program. Rather, the software must be retested, as necessary, following each revision. The testing procedures which should be used following the revision of the software will be discussed in Chapter 8.

CHAPTER 7--INSTALLATION AND CHECKOUT PHASE

By now it should be apparent that far more goes into assuring the quality and reliability of a computerized system than simply validating its ability to function in an application environment. This premise does not mean, however, that systems validation is to be omitted. Rather, the validation of the computer system is an integral element, but only one element, in the criteria for system acceptance.

Until now, the tests and evaluations which have been conducted on the software have been performed in simulation and independent of the application environment. We are now ready to begin testing the software package in combination with the actual device or production hardware and within the environment in which it is designed to function.

The validation of a computer system is accomplished through the performance of either actual or simulated production runs or through the actual or simulated use of the finished medical device product. During the performance of system validation, the computer system must be properly monitored. The physical parameters which it is designed to measure, record, and/or control should be measured by an independent method until it is demonstrated that the computer system will function properly in the application environment.

In addition to evaluating the system's ability to properly perform its control functions, the validation of the system should also include an evaluation of the ability of the users or operators of the system to understand and correctly interface with it.

The length of time or the number of production runs necessary for the validation of the system will vary from application to application and program to program, depending upon the complexity of the software and the number of faults encountered. The validation efforts should continue for a sufficient period of time to allow the system to encounter a wide spectrum of processing conditions and events, in an effort to detect any latent faults which are not apparent during normal processing activities.

Records must be maintained during the performance of system validation of both the system's capability to properly perform and the system's failures, if any, which are encountered. The revision of the system to compensate for faults detected during the validation process should follow the same procedures and controls as any other software modification or change. These procedures and controls are discussed in Chapter 8.

CHAPTER 8--OPERATION AND MAINTENANCE PHASE

The software has been written, tested, installed, and is now operational. Unfortunately, this is the point where some developers close the files and move on to the next project. Actually, the current project is far from complete. By certifying the software as operational we have simply moved it from one phase in its life cycle to another. It has now entered "the period of time ... during which a software product is employed in its operational environment, monitored for satisfactory performance, and modified as necessary to correct problems or to respond to changing requirements" (see References, Item #1).

There are two critical elements expressed within this definition of this phase of the Software Life Cycle: software monitoring and modification. These two elements are the focus of this chapter.

Monitoring Software

As discussed in the previous chapters on software and systems testing, software contains faults. Although the system may have passed the testing and review regimen which we developed for it, there is still no assurance that it is totally fault free. More than likely, latent errors remain which have gone undetected. Because of this, it is important that we not place the new system into operation with no monitoring plans or procedures.

The degree to which a computerized system should be monitored is dependent upon both the functions which it performs and the length of time that it has been in operation. Consequently, to some extent the degree of confidence which we place in a computer system is proportional to the length of time that the revision level has been in use and whether the process being controlled has remained unchanged.

There are no set or established rules for how often a system should be monitored. It stands to reason, however, that the records generated by the system should be routinely reviewed for accuracy and for the presence of obvious discrepancies. This is especially true when new or unusual circumstances have occurred which affect either the system directly or the process which it is monitoring or controlling. In addition, the system should also be placed on a periodic maintenance schedule, with the maintenance activities including verification that both the hardware and the software remain as described in the design specification for the current revision level.

Modifying Software

One of the major advantages of using a software controlled system is the ease with which the characteristics of the system can be changed. As a result, very few programs stay unchanged for long. New capabilities can be added and existing ones can be revised or deleted and, once the change has been completed, there are no erasures or cross-outs reflecting that it occurred.

In the next chapter we will touch on several security methods which can be used in an effort to protect the system from unauthorized modification or revision. These protection methods, however, are fallible. If someone truly desires to circumvent the system, they will find a way to do so. As a result, it is important that everyone concerned be informed of the possible consequences of unapproved revisions and that proper controls be established detailing the procedures which must be followed in revising or modifying software. These procedures should include:

1. Controls which assure that the modification is reviewed and approved by those knowledgeable of the intended function of the system and the programming language being used.

2. Controls which assure that, with each modification or revision, the design specifications and support documentation are also revised to the highest level necessary to assure that all documentation accurately reflects the functions and operations of the system.

3. Procedures for determining the degree of testing necessary to assure the proper function of the system following any modification.

4. Procedures for distributing revised software to the operational level.

In addition to these controls and procedures, documents should also be prepared which:

1. Detail the specific changes made to the system. An appropriately identified printout or a copy maintained on magnetic storage media of each revision level of each application program should be retained within the firm's records. From these records it should be possible to reconstruct the entire history of the software from the initial Requirements Phase to the installation of the current revision level. Collectively, these records are referred to as an AUDIT TRAIL.

2. Record all tests performed following each revision. If the software and total system were not subjected to a complete retesting of all functions and operations following the revision, the test documentation should include a statement explaining the rationale for the performance of only limited testing. The decision to test only portions of the system should be reviewed and approved by the appropriate personnel.
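The two kinds of record above can be sketched as a single revision-log structure. The field names, dates, and entries are illustrative only; the guide prescribes no particular format, only that each revision's changes, tests, and any limited-testing rationale be retained so the full history can be reconstructed.

```python
# Illustrative sketch (not a prescribed format) of audit-trail records:
# each entry captures what changed, the tests run, and the rationale
# for any limited retesting. All data shown is hypothetical.

from dataclasses import dataclass, field

@dataclass
class RevisionRecord:
    revision: str
    date: str
    change_description: str
    tests_performed: list = field(default_factory=list)
    limited_test_rationale: str = ""     # required when retesting was partial
    approved_by: str = ""

audit_trail = [
    RevisionRecord("1.0", "1987-01-15", "Initial release",
                   ["full functional test", "full structural test"],
                   approved_by="QA supervisor"),
    RevisionRecord("1.1", "1987-06-02", "Corrected alarm threshold",
                   ["alarm module retest", "integration retest"],
                   limited_test_rationale="Change confined to alarm module",
                   approved_by="QA supervisor"),
]

# The ordered records let the entire history be reconstructed.
print([r.revision for r in audit_trail])
```

Each entry carries its own approval, matching the requirement that limited-testing decisions be reviewed by the appropriate personnel.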

When reviewing software modifications, do not overlook the possibility of revisions being made to the operating system software. Any revision to the operating system must be thoroughly reviewed and evaluated by the firm's systems personnel, as even the smallest change to an operating system could have a major impact on the established application software.

CHAPTER 9--QUALITY ASSURANCE AND CONTROLS

Throughout the various stages of the Software Life Cycle, documents are being prepared, various forms of data are generated, and control records are being maintained. As we attempt to represent these activities within this reference, all of these materials appear straightforward and manageable.

In reality, the pressures and frustrations of a software development project are such that often there doesn't appear to be time to stop and record an event or document what appears to be a simple and obvious procedure. Experience dictates that as deadlines pass, the amount of support documentation generated becomes less and less. For this reason it is important that QUALITY ASSURANCE reviews be performed of the development process to assure that the process remains in control.

It is recommended within ANSI/IEEE Standard 730-1984 (see References, Item #4) that the following reviews be made:

1. Software Requirements Review to ensure the adequacy of the requirements stated in the software requirements specification.

2. Design Review to evaluate the technical adequacy of the preliminary design of the software as depicted in the software design specifications.

3. Software Verification and Validation Review to evaluate the adequacy and completeness of the verification and validation methods.

4. Functional Audit prior to software delivery to verify that all requirements specified in the software requirements specification have been met.

5. Physical Audit to verify that the software and its documentation are internally consistent and are ready for delivery.

6. In-Process Audit of a sample of the design to verify consistency of the design, including:

a. code versus design document;

b. interface specifications (hardware and software);

c. design implementations vs. functional requirements; and

d. functional requirements vs. test descriptions.

7. Managerial Reviews to assess the execution of this plan. These reviews should be performed by an organizational element independent of the unit being audited or by a qualified third party.

It should be noted that these requirements are those of the ANSI/IEEE, not the Food and Drug Administration. This information is provided for reference only.

APPENDIX A--FIRMWARE

As briefly mentioned in the first chapter of this guide, firmware has been defined as "hardware that contains a computer program and data that cannot be changed in the user environment. The computer programs and data contained in firmware are classified as software; the circuitry containing the computer program and data is classified as hardware" (see References, Item #1). From this definition it should be apparent that firmware is a hybrid of both hardware and software.

The software prepared for use as firmware should be subjected to the same controls as any other software package. The procedures described throughout our discussion of the software life cycle still apply. Once the software is developed, however, it in itself is not the finished product. Rather, the development project continues with the integration of the hardware and the software. This integration process utilizes memory types, programming devices, and erasing equipment.

Memory Types

What does firmware look like? For the most part we have no problem visualizing the physical characteristics of hardware or software, but what about firmware?

In the finished state, firmware appears as an integrated circuit or silicon chip. The chip used may vary among several different types, with their classification being based on their memory capabilities. Those commonly found in use are:

1. READ-ONLY-MEMORY (ROM)--"a type of memory whose locations can be accessed directly and read, but cannot be written into" (see References, Item #2). The data content is determined by the structure of the memory and is unalterable.

2. PROGRAMMABLE READ-ONLY-MEMORY (PROM)--a field programmable read-only-memory that can have the data content of all memory cells altered once.

3. ERASABLE PROGRAMMABLE READ-ONLY-MEMORY (EPROM)--a reprogrammable read-only-memory in which all cells may be simultaneously erased using ultraviolet light and in which each cell can be reprogrammed electrically.

4. ELECTRICALLY ERASABLE PROGRAMMABLE READ-ONLY-MEMORY (EEPROM)--a reprogrammable read-only-memory in which cells may be erased electrically and in which each individual cell may be reprogrammed electrically.

5. VOLATILE MEMORY COMPONENTS--battery-backed random-access memory (RAM) is another type of memory component. This memory requires a power supply but lends itself to modification and reprogramming.

Programming Devices

The integration of our finished software product into one or more of these memory chips is accomplished using a device referred to as a PROM Programmer or PROM Burner. The procedures normally followed in the initial programming of the memory chip consist of connecting the PROM Burner to a computer system and then "downloading," or transferring through the computer, the software program from the conventional storage media on which it was developed to the memory component. (It is possible with some types of PROM Programmers to program the memory chip directly using a bit-by-bit programming method. This procedure, however, is not commonly used, as it is extremely time consuming and error prone.)

Once the first memory chip has been successfully programmed, this chip will normally be used as the master, with duplicate chips being programmed, or cloned, on the PROM Programmer directly from it. At this point there is no longer a need for the conventional computer system, and the duplication of chips can be accomplished quite rapidly.

Erasing Equipment

The erasing of programmed memory chips is a relatively straightforward process. The programmed chips are removed from the circuitry and are placed in an erasing device. The chip is then subjected to ultraviolet light or an electrical current which resets all of the circuits on the chip back to a non-programmed state. Upon completion of the erasing process, the chip is removed and is ready for reprogramming.

(It is important that the state of the memory chip be checked and verified at least prior to programming, and preferably also following erasure, to assure that all bits on the chip are in a non-programmed state. The failure to do so could result in the erroneous programming of the chip.)
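The blank check described above can be sketched in software. The conventional erased state for a UV-erased EPROM is all bits set (each byte reading 0xFF); the 2-kilobyte device size and the chip images below are assumptions for illustration.

```python
# Sketch of the pre-programming blank check: every location of the chip
# image must be in the non-programmed state (0xFF assumed here) before
# new data is written. Device size and images are hypothetical.

ERASED = 0xFF

def is_blank(chip_image):
    """Return True only if every byte reads as erased."""
    return all(byte == ERASED for byte in chip_image)

erased_chip = bytes([ERASED] * 2048)      # a fully erased 2 KB device
partly_programmed = bytearray(erased_chip)
partly_programmed[100] = 0x3C             # one residual programmed byte

assert is_blank(erased_chip)
assert not is_blank(partly_programmed)
print("blank check distinguishes an erased chip from residual data")
```

A chip failing this check would be re-erased (or rejected) rather than programmed, since residual bits would combine with the new data and corrupt the stored program.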

APPENDIX B--PROM PROGRAMMING

MEMORY PROGRAMMING DEVICES

These devices perform two key functions:

1. Transfer data or software programs from typical storage media (magnetic tape, disks, etc.) to a "master" read-only-memory component; and

2. Clone other memory components from the master component.

As with other electronic equipment, operation of these memory programming devices should be controlled and monitored. Documents pertaining to standard operating procedures (SOPs) should be available covering operation of the unit, preventive maintenance, and a schedule and documentation of calibration efforts.

MASTER MEMORY COMPONENTS

The master memory components are the components from which production memory components are cloned or copied. Procedures should be in place covering the following areas:

1. Security and Maintenance--There should be an SOP for the storage/maintenance of master components which outlines the procedures the firm has taken to protect the components against damage.

2. Change Control--During its life cycle, software is usually modified, and specific change control measures should be established. The intent here is not to repeat software change control, but rather to note that established procedures should be in place to remove old master components from production areas. Additionally, procedures should be established for the archiving of programs so that they can be reconstructed if the need arises.

COMPONENT CONTROL

Acceptance Criteria--There should be established procedures and acceptance criteria for memory components.

Component History--A component history record should be maintained which tracks the number of memory components from each lot that were either successfully or unsuccessfully programmed. This record serves a useful purpose in the evaluation of programming device performance and memory component failure trends.

Component Identification--A procedure should be established that clearly identifies the methods used to identify programmed components. The identification should include a method of identifying the revision level of the software. The following methods have been employed:

1. Stick-on labels--Stick-on labels have been used to identify memory components. These labels can be used to identify revision levels and also to cover the chip access window, which protects the component from UV light and unintentional changes in the program.

2. Ink printing--Ink printing involves two steps: first, the previous identification marking must be covered with black epoxy ink; and second, the new identification marking must be applied.

BENEFITS OF A FIRMWARE CONTROL SYSTEM

Destroyed or malfunctioning memory components can be quickly replaced with proper components.

Memory components impacted by changes in software can be immediately identified.

A clear audit trail (a history of the device) to "Burned In" software is established that assures proper marking, testing, and placement.

It is easy to verify that the proper software is on the circuit board.

AREAS OF SPECIAL CONCERN

INTERFACE CONNECTIONS

As with any other peripheral device, care must be taken that the memory programmer is compatible with the computer in which the software is stored. Additionally, some memory programmers have their own software.

GANG EXPANSIONS

Although some devices must be programmed individually, most of the commercially available programmers are marketed with the capability of expanding the number of components which can be programmed at one time. Expansion boards known as "Gang" expanders are used for this purpose.

CALIBRATION

Programming equipment requires periodic calibration checks. Specifically, the voltage used to program memory locations is important and is usually held within a narrow range (e.g., 21 V +/- 0.5 V).

APPENDIX C--SOFTWARE TEST METHODS

As discussed in Chapter 6, the ability of software to meet its functional requirements and design specifications can best be demonstrated through the execution of a proper test plan. This section describes techniques and methods which are being used to perform this testing function. The methods described in this section are not all-inclusive of the test methods available, nor are the test methods fully detailed. The intent of this section is to briefly overview some of the common test methods being used.

CODE, MODULE, AND INTEGRATION TESTS

DESK CHECKING

The manual simulation of program execution to detect faults through step-by-step examination of the source code for errors in logic or syntax. (ANSI/IEEE).

INSPECTION

Inspection is a formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, or other problems. (ANSI/IEEE).

WALK-THROUGH

A walk-through is a review process in which a designer or programmer leads one or more other members of the development team through a segment of design or code that he/she has written, while the other members ask questions and make comments about technique, style, possible errors, violation of development standards, and other problems. (ANSI/IEEE).

STATIC ANALYSIS

This technique is used to identify weaknesses in the source code by paper review, without execution. The intent is to find logical errors (program errors dealing with the logical sequence of events) and to pinpoint questionable coding practices which may deviate from established standards.

SYMBOLIC EXECUTION

A verification technique in which program execution is simulated using symbols rather than actual values for input data, and program outputs are expressed as logical or mathematical expressions involving these symbols. (ANSI/IEEE).
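As an illustration only, the technique can be sketched for straight-line code: each input is replaced by a symbolic name, and each output accumulates as an expression in those symbols. The three-address statement format and variable names below are hypothetical, invented for this example.

```python
# Minimal sketch of symbolic execution over straight-line code.
# Inputs are replaced by symbolic names, so each output becomes a
# mathematical expression in those symbols rather than a value.

def symbolic_execute(statements, inputs):
    # Each input variable initially maps to its own symbol.
    env = {name: name for name in inputs}
    for target, op, a, b in statements:
        lhs = env.get(a, str(a))   # operand may be a variable or a constant
        rhs = env.get(b, str(b))
        env[target] = f"({lhs} {op} {rhs})"
    return env

# y = x*x + 1, written as two three-address statements
program = [("t", "*", "x", "x"),
           ("y", "+", "t", "1")]
env = symbolic_execute(program, ["x"])
print(env["y"])   # ((x * x) + 1)
```

A real symbolic executor must also handle branches, tracking the path condition under which each expression holds; this sketch omits that.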

SOFTWARE AUTOMATED TESTING TOOLS

ACCURACY STUDY PROCESSOR

Used to perform calculations or determine accuracy of computer-manipulated program variables.

AUTOMATED TEST GENERATOR

A software tool that accepts as input a computer program and test criteria; generates test input data that meet these criteria; and, sometimes, determines the expected results. (ANSI/IEEE).
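A simple instance of such a tool is a boundary-value generator: given a numeric range criterion for each input, it emits the values just below, at, and just above each limit, plus a nominal value. The criteria format and input name below are hypothetical.

```python
# Sketch of an automated test input generator driven by range
# criteria. For each input it produces the classic boundary cases.

def boundary_values(lo, hi):
    # below, at, and above each limit, plus a mid-range nominal value
    return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

def generate_tests(criteria):
    # criteria: {input_name: (lo, hi)}
    return {name: boundary_values(lo, hi)
            for name, (lo, hi) in criteria.items()}

cases = generate_tests({"temperature": (0, 100)})
print(cases["temperature"])   # [-1, 0, 1, 50, 99, 100, 101]
```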

COMPARATOR

Used to compare two computer programs, files, or sets of data to identify commonalities or differences. Typical objects of comparison are similar versions of source code, object code, data base files, or test results. (ANSI/IEEE).
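For source code, a comparator can be as simple as a line-oriented difference report between two versions. The sketch below uses Python's standard difflib module; the two source fragments are invented for illustration.

```python
# Sketch of a comparator: report the differences between two
# versions of a source file, line by line.
import difflib

old = ["limit = 100", "alarm(limit)"]   # hypothetical prior version
new = ["limit = 120", "alarm(limit)"]   # hypothetical revised version

diff = list(difflib.unified_diff(old, new, lineterm=""))
for line in diff:
    print(line)
```

Lines prefixed with "-" appear only in the old version and lines prefixed with "+" only in the new, which is exactly the audit information a comparator provides.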

CONSISTENCY CHECKER

Used to check requirements and design specifications for both consistency and completeness.

INTERFACE TESTING

Testing conducted to ensure that program or system components pass information or control correctly to one another. (ANSI/IEEE).

INTERRUPT ANALYZER

Analyzes potential conflicts that may arise in a system as a result of the occurrence of an interrupt.

SIMULATOR

A device, data processing system, or computer program that represents certain features of the behavior of a physical or abstract system. (ANSI).

A simulator provides inputs or responses that resemble anticipated process parameters. Its function is to present data to the system at known speeds and in a proper format.

STATIC ANALYZER

A software tool that aids in the evaluation of a computer program without executing the program. Examples include syntax checkers, compilers, cross-reference generators, standards enforcers, and flowcharters. (ANSI/IEEE).
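As an illustration, the sketch below combines a syntax check with one simple standards rule (flagging bare "except:" clauses) using Python's standard ast module. The rule itself is a hypothetical example of a coding standard; note that no part of the program under review is executed.

```python
# Sketch of a static analyzer: parse the source, report syntax
# errors, and flag one questionable coding practice, all without
# running the target program.
import ast

def analyze(source):
    findings = []
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"syntax error at line {exc.lineno}"]
    for node in ast.walk(tree):
        # A bare "except:" silently swallows every error.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"line {node.lineno}: bare 'except:' clause")
    return findings

sample = "try:\n    run()\nexcept:\n    pass\n"
print(analyze(sample))   # ["line 3: bare 'except:' clause"]
```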

TEST RESULT ANALYZER

Used to perform test output data reduction, formatting, and printing.

CHANGE TRACKER

A program that documents all changes made to a program.

SYSTEMS TESTING

LOAD/STRESS

A method of testing real-time process control systems to determine their ability to cope with program interrupts while maintaining adequate process control.

VOLUME

Testing designed to challenge the system's ability to manage the maximum amount of data over a period of time. This type of testing also evaluates the system's ability to handle overload situations in an orderly fashion.

USABILITY

Tests designed to evaluate the machine-user interface. Are the communication device(s) designed in a manner such that information is displayed in an understandable fashion, enabling the operator to correctly interact with the system?

PERFORMANCE TESTING

A test designed to measure the ability of a computer system or subsystem to perform its functions; for example, response times, throughput, and number of transactions. (ANSI/IEEE).

STORAGE TESTING

This test determines whether or not certain processing conditions use more storage than estimated.

CONFIGURATION AUDIT

The process of verifying that: all required configuration items have been produced; that the current version agrees with the specified requirements; that the technical documentation completely and accurately describes the configuration items; and, that all change requests have been resolved. (ANSI/IEEE).

COMPATIBILITY TESTING

The process of determining the ability of two or more systems to exchange information.

In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.

RELIABILITY ASSESSMENT

The process of determining the achieved level of reliability for an existing system or system component. (ANSI/IEEE).

Theoretical models exist for estimating mean time between failures based on the number of errors found and the estimated number of errors remaining in the software. In general, reliability is also related to the documentation and quality assurance efforts performed throughout the software life cycle.

ROBUSTNESS ASSESSMENT

The evaluation of the software to determine the extent to which it can continue to operate correctly despite the introduction of invalid inputs. (ANSI/IEEE).

Test documents should be maintained which demonstrate the performance of the system when exposed to errors, including power failures and invalid inputs.

METRICS

In challenging software or a system with test cases, it would be helpful to have a means of evaluating the adequacy of the tests being used. It does little good to have a test result which indicates that the software will function according to the requirements and design specification without knowing the accuracy of the test methods used to arrive at this conclusion. As a result, a number of techniques, or METRICS, have been developed to measure the adequacy of the software test methods being used. Three of the more commonly used methods are:

COVERAGE ANALYSIS

A test procedure in which the program is instrumented with counters and flags in all of its branches. When a branch is executed, the corresponding flag is updated. After the test cases have been executed, the flag values are listed, accumulated, and compared to the optimum value for the coverage criteria in use.
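The procedure can be sketched as follows; the instrumented function and its test cases are hypothetical, and the flags here are simple counters updated by hand rather than inserted by an automated tool.

```python
# Sketch of branch-coverage instrumentation: one counter per
# branch is updated whenever that branch executes, and the totals
# are compared with the coverage criterion after the test run.
branch_hits = {"then": 0, "else": 0}

def classify(value, limit):
    if value > limit:
        branch_hits["then"] += 1   # instrumentation flag
        return "ALARM"
    else:
        branch_hits["else"] += 1   # instrumentation flag
        return "NORMAL"

test_cases = [(5, 10), (15, 10)]   # one case exercises each branch
for value, limit in test_cases:
    classify(value, limit)

covered = [b for b, hits in branch_hits.items() if hits > 0]
print(f"{len(covered)} of {len(branch_hits)} branches covered")
```

A branch whose counter remains zero after the full test run identifies a gap in the test set.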

FAULT SEEDING

The process of intentionally adding a known number of faults to those already in a computer program for the purpose of estimating the number of indigenous faults in the program. (ANSI/IEEE).

In performing this test, known errors are inserted in a copy of the program being evaluated and the copy is executed with selected test cases. If only some of the seeded errors are found, the test set is not adequate. However, the ratio of found seeded errors to the total number of seeded errors is an estimate of the ratio of found real errors to the total number of real errors. This result yields an estimate of the number of errors remaining and thus the amount of future testing required.
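The arithmetic of the estimate can be sketched directly; the counts used in the example are hypothetical.

```python
# Sketch of the fault-seeding estimate described above: the ratio
# of seeded errors found to seeded errors inserted is taken to
# equal the ratio of real errors found to total real errors.

def estimate_total_real_errors(seeded_total, seeded_found, real_found):
    if seeded_found == 0:
        return None   # no seeded errors found; no estimate possible
    return real_found * seeded_total / seeded_found

# Example: 20 errors seeded, 16 of them rediscovered by testing,
# and 4 genuine errors found; estimated total real errors = 5.
print(estimate_total_real_errors(20, 16, 4))   # 5.0
```

In the example, the estimated number of real errors remaining is 5.0 - 4 = 1.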

PROGRAM MUTATION

A program version purposely altered from the intended version to evaluate the ability of program test cases to detect the alteration. (ANSI/IEEE).

If the test set is adequate, it should be able to identify the mutants through errordetection.

The method of seeding is crucial to the success of the technique and consists of modifying single statements of the program in a finite number of "reasonable" ways. The developers of this method conjecture a coupling effect which implies that these "first order mutants" cover the deeper, more subtle errors which might be represented by higher order mutants.
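A minimal sketch of the technique follows; the program under test, its mutants, and the test cases are all hypothetical, and each mutant differs from the original by a single operator change (a first order mutant).

```python
# Sketch of program mutation: single-operator alterations of a
# reference function are run against the test set, which is judged
# adequate only if every mutant is detected (killed) by some case.
import operator

def make_program(op):
    # The program under test, parameterized by one arithmetic operator.
    return lambda a, b: op(a, b)

original = make_program(operator.add)
mutants = [make_program(op) for op in (operator.sub, operator.mul)]

test_cases = [((2, 3), 5), ((0, 4), 4)]   # (inputs, expected output)

def killed(program):
    # A mutant is killed if any test case exposes a wrong output.
    return any(program(*args) != expected for args, expected in test_cases)

print(all(killed(m) for m in mutants))   # True: this test set kills both
```

If some mutant survives every test case, the test set needs additional cases capable of distinguishing it from the original.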

APPENDIX D--REFERENCES

The following references were used in the preparation of this guide.

1. ANSI/IEEE Standard 729-1983. Glossary of Software Engineering Terminology.

2. Harry Helms, The McGraw-Hill Computer Handbook. New York: McGraw-Hill Book Company, 1983.

3. G.J. Myers, The Art of Software Testing. New York: Wiley-Interscience, 1979.

4. ANSI/IEEE Standard 730-1984. Software Quality Assurance Plans.

5. ANSI/IEEE Standard 828-1983. Software Configuration Management Plans.

6. ANSI/IEEE Standard 829-1983. Software Test Documentation.

7. ANSI/IEEE Standard 830-1984. Software Requirements Specifications.

8. G.J. Myers, Software Reliability: Principles and Practices. New York: Wiley-Interscience, 1976.

9. R. Dunn and R. Ullman, Quality Assurance for Computer Software. New York: McGraw-Hill, 1982.

10. Susanne Klim, Systematic Software Testing for Micro Computer System. International Planning Information, 1984.