

    Realistic Assumptions For Software Reliability Models

David Zeitler
Smiths Industries Aerospace & Defense Systems, Inc.

4141 Eastern Avenue, S.E., Grand Rapids, MI 49518-8727
PHONE: (616) 241-8168 / EMAIL: [email protected]

Abstract

A definition of reliability appropriate for systems containing significant software, one that includes trustworthiness and is independent of requirements, will be stated and argued for. The systems addressed will encompass the entire product development process as well as both the product and its documentation. Cost incurred as a result of faults will be shown to be appropriate as a performance measurement for this definition. This and more realistic assumptions will then be shown to lead to the use of auto-regressive integrated moving average (ARIMA) mathematical models for the modeling of reliability growth.

Key Words: Reliability definition, trustworthiness, growth models, software system reliability, model assumptions.

1 GROUNDWORK

Most authors find the assumptions made in modeling reliability questionable for application to the real world of software development. To more firmly anchor theoretical development to the real world, I will start by reviewing the reasons for reliability models in general, and specifically how considerations for the reliability of software based systems dictate that we change both the measurement of reliability considered and the assumptions upon which the model must be based.

1.1 Reliability Enhancement Framework

A good framework from which to start considering software reliability is presented in RADC-TR-87-171 [6]. It does a good job of identifying the tasks involved in statistical reliability improvement and relating them to the DOD-STD-2167A terminology. Too often, reliability discussion begins at the mathematical models for reliability growth and ignores the larger picture of the full reliability program. The RADC document is the first attempt I've seen to pull together a specification for implementation of a software reliability program. I have added relationships to the framework to show explicitly the presence of the three major types of reliability models.

As shown in figure 1, there are four tasks in the framework with associated outputs: Goal Specification, Prediction, Estimation and Assessment. Goal Specification is a nearly independent process that provides targets for the development process based on an application level model. Predictions take a second model based on development metrics and provide feedback into the design process. Estimations take test measurements and a third model to provide feedback into the test and burn-in processes. Finally, the assessments take operational data and feed it back into all the models to help tune their predictive and estimation capabilities. The three models identified roughly correspond to the component count predictions, stress analysis predictions and growth estimation models currently in use in hardware reliability work.

So we have three separate models, each with a different goal. The first two models are predictive in nature, while the third is estimating the reliability of an existing product. Most current work is aimed at the development of this third type of reliability model. Since the strongest inferences will be possible at this later stage of a program, this is also the most productive area to be addressing.

1.2 Software, Hardware & Systems Reliability

It is often stated that software reliability is very different from hardware reliability, but what exactly is this difference? Usually the statement is made in relation to the idea that software doesn't wear out, but

Reprinted from Proc. Int'l Symp. Software Reliability Eng., IEEE CS Press, Los Alamitos, Calif., 1991, pp. 67-74. Copyright 1991 by The Institute of Electrical and Electronics Engineers, Inc. All rights reserved.


the differences are more fundamental. Software is a set of instructions, like a blueprint or schematic; it cannot fail. It can, however, be wrong in a multitude of ways.

Analogous component levels for software and hardware are illustrated in table 1. The basis of this analogy is comparative equality in the level of functionality provided by the components on each side of the table. For example, both resistors and processor instructions provide a fundamental functionality for their respective disciplines. Note that on either side of this table the levels shown are hazy. The table's primary purpose is to illustrate relationships between software terminology and roughly equivalent hardware structures for comparison purposes.

Hardware reliability analysis almost exclusively addresses the first two levels of the hierarchy (discrete components), with more recent work attempting to extend the models to the higher levels. Since usage of parallel paths through the system is not uniform, this type of analysis quickly becomes extremely complex and highly dependent on the end user's input distribution.

From this relationship between hardware and software we can see first that there are equivalent ways of thinking of the two seemingly very different disciplines. Software is most comparable to the blueprints, schematics and production process specifications of hardware. At best, if we compare software programs with hardware components, we then must think of development methods, standards, etc. as the blueprints or production specifications. Thus each program developed is analogous to an individual hardware unit from production. We can now see from this analogy that statistical analysis of software must be carried out across many software units to achieve the same effects as applying statistical analysis of hardware components to many parts.

Carrying this analogy just a bit farther, to parallel hardware reliability we should be looking at the reliability of individual CPU instructions, much as hardware has determined the reliability of individual components. This is quite different from the current work in software reliability, which jumps in at the higher levels, treating the software as a black box, without attempting to address the low levels which hardware has built upon. Hardware has discrete components at these levels that can be analyzed independent of the particular use of the component. This examination of fundamental software components independent of their use will yield only that the components do not fail. Therefore, something different must be going on when we discuss reliability for software systems.

In hardware we are looking at reliability hazard functions as the probability that an individual component will fail. In software, we are looking at the probability that a given component will be used in an inappropriate manner. This implies that we are looking not at the probability of failure of a concrete replicate of a design, as in hardware, but rather at the probability that the designer incorrectly uses a component in the design. For example: choosing the correct packaging of a resistance component for the target environmental conditions and reliability requirements, but using the wrong resistance value in the circuit for the full range of the circuit's intended function (i.e., incorrect functional design as opposed to incorrect environmental design).

Reliability analysis now recommends reduced part counts and greater operational margins to improve reliability. In the context of the software system reliability improvement process I'm suggesting, reliability analysis might also suggest that the use of complex components, which have a high probability of misuse, be minimized. This would reduce the probability of design errors.

An example of the kind of design criteria that might come out of this type of analysis would be 'Reduce use of complex non-programmable components'. Or perhaps the development of software modules unique to the system should be minimized, since they probably would not have the maturity and/or level of testing of common software. As a confirmation that we're on the right track, note that this agrees with common practice for quality software development today. So we can see from this analogy that software reliability is significantly different from hardware reliability.

1.3 Separation of Software from System Reliability

We are at the point where the functionality of systems is primarily resident in the software. Special purpose or single purpose hardware is nearly a thing of the past. Since we have shown above that software reliability is more related to functionality than to physical components, addressing software reliability is equivalent to addressing system reliability. It is also equivalent to addressing the reliability of the software/system development process, since the reliability of the end product is as much a result of the process that developed it as of the product itself.

This then suggests that a system level approach to the overall reliability problem will be more effective than attempting to isolate either hardware or software.


Figure 1: Software Reliability Framework (tasks: Goal Specification, Prediction, Estimation, Assessment; metrics: application, requirements/design/implementation, test measurements, operational performance; models: application, component, growth; phases: concept development, software development, operation)

Table 1: Analogous Hardware/Software Structures

    Hardware                                Software
    Primitive Components                    Primitive Instructions
      (resistors, capacitors, etc.)           (move, shift, add, etc.)
    Integrated Chips                        Units
      (CPU, UART, MMU, etc.)                  (procedures, modules, etc.)
    Sub-circuits                            CPCs
      (RS422, ARINC, etc.)                    (packages, objects, etc.)
    Boards                                  TLCSCs
      (serial I/O, memory, etc.)              (Exec, I/O, etc.)
    Sub-systems (HWCIs)                     Programs (CSCIs)
      (display, keyboard, etc.)               (Operational Flight Program)
                                            Systems
                                              (navigation system, weapon system,
                                               fuel savings system, etc.)


What's more, if we solve the reliability problem as discussed here, the solution will be equally applicable to areas that hardware reliability has traditionally considered out of scope. A solution to software reliability then should cover not only the software aspects of the reliability question, but the wider system aspects simultaneously.

In [7] Fabio Schreiber concludes that the time is ripe for the unification of several research areas into a unified investigation of system reliability. I am taking his suggestion a step further here by including not only the system performance, but in effect the performance of the development process as well. This is not only desirable from the relationship of reliability to functionality, but is necessitated by the inadequacy of requirement specification techniques for complex systems.

1.4 Definition of Software Reliability

Most authors that address the definition of systems reliability address it in terms of adherence to requirements. With the present state of the art in requirements specification for complex systems, this hardly seems reasonable. We cannot yet determine if a system adheres to requirements when those requirements are incomplete and/or ambiguous, as they invariably are.

According to Webster's New Collegiate Dictionary, two possible meanings of reliability seem to apply. The first is consistency in response, in which case software (or any other deterministic process) cannot be unreliable. Another definition has reliability meaning 'to be dependable, or trustworthy'. The first definition doesn't seem to fit at all: we can be sure that software does not have perfect reliability. So it appears reliability means to perform with little probability of unexpected behavior (i.e., we can depend on or trust it). This is beyond just repeatable behavior and does seem to get at the issue here. So if the system always does what is expected of it, it is perfectly reliable. Thus we see that trustworthiness is a major component of reliability.

Reliability is then more closely associated with meeting user expectations, regardless of stated requirements. (Note: this is not in addition to meeting requirements. Any given requirement, regardless of source, may not meet user expectations, and meeting it will still be considered a failure!) Reliability is then related to perceived failures as opposed to what may be considered actual failures. It is well known that, although we're making progress, the precise specification of a complex system is not at this time possible. The result is that there is room for user interpretation, which can lead to perceived failures. These failures are as expensive (or more expensive) than 'actual' failures. In either case, the user is not satisfied with the operation of the system and it is considered unreliable.

So I am defining reliability as that which minimizes the user's perceived failures.

1.5 Measurement of Reliability

Perceived failures can be measured in terms of cost due to user complaints, so I'm suggesting we use cost as a measure of reliability. These costs consist of the lost productivity or production due to down time, management costs for handling the return of faulty product, costs associated with negotiation of variances against requirements, etc. The cost does not, however, stop at the customer. We also will need to consider the cost to the developer. This cost is in terms of lost prestige (and therefore presumably potential market share in the future) as well as the immediate cost to analyze and fix the product returned. This can be modeled as a constant times the cumulative cost incurred by the customer.

Measurement of perceived failures at the user is too late to help us estimate the growth curve of system reliability during development. It will provide adequate feedback into future reliability analyses, but a measure of perceived failures during the development process is needed. This can be obtained through the early application of operational testing using actual users. Early testing can be focused on the user interface, with gradual incorporation of functionality as the integration of the system progresses.

In [5] Ragnar Huslende lays out an extension of standard reliability concepts for degradable systems, or any partially operable system. As Huslende states, any system can be viewed as degradable. Using cost as a measure of the system performance, we can see that zero cost is equivalent to the no-fault state of Huslende's H(t). So I will be looking at cost as a measure of system performance.

    c(t) = c_u(t) + c_p * c_u(t) + c_f(t)

where:

    c_u = cost to the user with respect to time
    c_p = impact of the customer's costs on the developer
    c_f = developer's cost to fix the product
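The cost measure above is simple to compute once the per-period figures are available. A minimal sketch, assuming illustrative monthly figures and an assumed developer-impact multiplier c_p (all numbers here are hypothetical):

```python
def system_cost(c_user, c_fix, c_p=0.5):
    """Cost-based reliability measure per period:
    c(t) = c_u(t) + c_p * c_u(t) + c_f(t),
    where c_user and c_fix are per-period user cost and developer fix cost,
    and c_p scales the customer's cost into its impact on the developer."""
    return [cu + c_p * cu + cf for cu, cf in zip(c_user, c_fix)]

# Hypothetical monthly cost series (user cost, then fix cost).
c = system_cost([10.0, 4.0, 1.0], [5.0, 2.0, 0.5])
```

A falling c(t) series is then the reliability-growth signal this paper argues for, in place of a falling failure rate.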

    This gives us a value that can be measured tangibly(although not necessarily precisely) regardless of the


    specifications, development process, or other variables(controllable or not). Using the cost as our performancemeasure also gives us a handle on the varying degree ofseverity that failures tend to cause.

Measuring reliability performance as cost also allows us to focus on those problems that are important to us and the customer. We all know of little problems that are annoyances in software based systems, but are certainly not worth spending massive effort to correct. Likewise, we all would consider any effort expended to eliminate risk of life to be well spent. With cost as our measure of reliability, little problems that are soon worked around have a negligible impact on reliability, while loss of life has a correspondingly large impact.

These costs can be measured through existing accounting systems, or augmentation of existing accounting systems, for tracking product that has been fielded. For systems under development, we will need to make use of user interaction with early prototypes to measure relation to expectations. This also implies that we need to make the actual users an integral part of the standard testing process in order to improve reliability of systems.

2 MODELING

Essentially all proposed reliability models are growth models, an exception being the linear univariate predictive model(s) attempted by SAIC for RADC-TR-87-171. The intent of the reliability growth modeling process is to be able to predict, from actual failure data collected during early development or testing, both the expected end product reliability and the magnitude of effort necessary to achieve the target reliability, and hence the date when a system release can be achieved. Reliability growth models have been applied to several programs for these purposes. These applications have shown considerable promise. Many programs now also use an intuitive form of this by monitoring a problem reporting system during the late stages of a program with an eye toward determining when the software is ready for release.

Martin Trachtenberg in [8] provides a general formulation of software reliability with the relationship of his general model to major existing models. This work is a good foundation upon which to base further modeling work. In particular, my measure of reliability performance as cost can be easily incorporated into the model. Modification of the general model to incorporate dynamics is somewhat more complex, however.

In Trachtenberg's model, failure rate is a function of software errors encountered. He considers f = f(e) and e = e(x), where f is the number of failures, e is the number of errors encountered and x is the number of executed instructions. To determine a failure rate, he differentiates f(e(x(t))) with respect to time to obtain a form in terms of the current number of failures per encountered error (s), the apparent error density or number of encountered errors per executed instruction (d), and the software workload in terms of instructions per time unit (w), thus arriving at the failure rate λ(t) as below:

    λ(t) = df/dt = (df/de) * (de/dx) * (dx/dt) = s * d * w

This model is intuitively attractive and does provide a general structure from which other models can be viewed. In the following paragraphs I will consider each basic assumption and show the modification necessary for a realistic model. When possible, I will be using the notation from Trachtenberg's paper.
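The chain rule above reduces to a product of three factors, so a numeric sketch is trivial; the s, d and w values below are made-up illustrations, not data from any program:

```python
def failure_rate(s, d, w):
    """Trachtenberg's decomposition lambda(t) = s * d * w, where
    s = failures per encountered error,
    d = encountered errors per executed instruction (apparent error density),
    w = instructions executed per unit time (workload)."""
    return s * d * w

# Hypothetical figures: 0.8 failures/error, 1 error per million
# instructions, 2 million instructions per time unit -> ~1.6 failures/unit.
lam = failure_rate(s=0.8, d=1e-6, w=2e6)
```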

2.1 A Semi-structural Approach

As Schreiber stated in [7], cost based modeling can be extremely complex. This is due to the complexities of incorporating variable failure impact into structural equations for the process. In addition, adding the development process into the reliability equations adds human systems into the equations. I am suggesting that an intermediate approach be taken. From a few basic assumptions about the relationships expected, I will be proposing a high level empirical model capable of capturing the necessary characteristics of the process, thereby avoiding the complexities of attempting to specify the micro level structural model under which the system actually operates.

2.2 Assumptions

Recent work (one example is Ehrlich, et al. [3]) has shown that in some applications the current growth models proposed for software reliability provide reasonable estimates of test effort needed. The work discussed was specifically designed to meet the given assumptions of the models. Necessary characteristics for using the currently available models vary, but generally include independence in the interarrival times of failures, uniform magnitude of impact of failures (making failure rate a reasonable measure of reliability), and uniform system load during test and/or operation (i.e., random test execution). None of these assumptions hold for many real-time systems, and avionic systems in particular.

    A clear indication that we have autocorrelation evenin systems with uniformity in testing can be seen in the


telephony system test data analyzed by Ehrlich et al. The plots show a good match between the homogeneous Poisson model being fitted and the actual data, but there is visual evidence of positive autocorrelation in the data sets. This suggests the presence of autoregressive factors needed in the model, even for a process which fits the assumptions well.
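The visual evidence mentioned above can be made quantitative with a simple sample statistic. A minimal sketch of a lag-1 autocorrelation check on a failure-count series (the two series below are invented to show the sign of the statistic, not real test data):

```python
def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a series of per-interval failure
    counts; values well above zero suggest the i.i.d. interarrival
    assumption of the standard growth models does not hold."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    return cov / var

# A trending series shows positive autocorrelation; an alternating
# series shows negative autocorrelation.
r_trend = lag1_autocorr([1, 2, 3, 4, 5, 6])
r_flip = lag1_autocorr([1, 5, 1, 5, 1, 5])
```

In practice one would plot the full autocorrelation function, as is standard in Box-Jenkins model identification [2].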

In systems that cannot be easily tested in a random fashion, these autoregressive factors are likely to be sufficiently significant to trigger an early release of software, or an over-intensification of test effort, through excursions from the too-simplistic model that are merely based on random variation. At the very least, they will reinforce the positive feedback effect seen where increased failures during test cause increased test effort (lagged by management response time constants, of course).

2.2.1 Time base for real time systems

Real time systems have a more-or-less uniform instructions (or cycles) per unit time pattern. This makes instructions executed per unit time a poor measure for system load. Instead, we need to be measuring the number of instructions outside the operating system per time unit. This number will increase proportionately with system load.

No changes in the equations are necessary here, since we're just modifying the interpretation. This modification transfers us from the time domain to the computation domain, as discussed in [1]. More specifically, we are working in the application computation domain.

2.2.2 Cost of Failures

Including a non-constant cost for failures will remove the uniform failure impact assumption made in the standard models; this puts our emphasis on cost per unit time. The cost of a failure is both a function of the fault that causes it and of the situation in which the failure occurs.

    c(f, t) = Σ_b c(f, b) * P_b(t)

where:

    c(f, b) = cost of failure f occurring in situation b
    P_b(t) = probability of situation b at time t
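Weighting each situation's cost by its probability at time t is a plain expected-value calculation. A sketch with invented figures (the ground-test/in-flight split and all costs are hypothetical):

```python
def expected_failure_cost(costs, probs):
    """Expected cost of a given fault at time t: the sum over situations b
    of c(f, b) * P_b(t).  costs and probs are parallel lists over the
    situations b."""
    return sum(c * p for c, p in zip(costs, probs))

# Hypothetical avionics fault: cheap when hit on the bench (P = 0.9),
# costly when hit in flight (P = 0.1).
ec = expected_failure_cost(costs=[100.0, 50_000.0], probs=[0.9, 0.1])
```

As the operational profile shifts P_b(t) over time, the same fault carries a different expected cost, which is exactly the time-based behavior described next.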

Since situations (or test cases) cannot occur randomly in the types of systems we're looking at, but occur in the context of an operational profile, the cost due to the occurrence of any particular failure is a time based function related to the operational profiles. These operational profiles will produce a 'fine grain' correlation structure in the occurrences of failures.

Our early measurements for growth estimation are taken during the integration of the system. This integration process limits the nature of the profiles to functions currently integrated and operational. Thus we will also see a 'coarse grain' correlation structure in the occurrences of failures.

For our cost based reliability measurement we replace failure rate with cost rate (γ(t)) based measures. These cost based measures will be more directly usable by management and give planners a better handle on appropriate test effort feedback into the development process. We can also see that this process level feedback into the product will impact failure occurrences.

    γ(t) = dc/dt = (dc/df) * (df/de) * (de/dx) * (dx/dt) = c * s * d * w

2.2.3 Failure inter-arrival time

Software can be viewed as a transfer function. Its input must be taken as the combination of the values present at its inputs and the state of its memory at the beginning of execution. Output is the combination of the values presented to its outputs and its internal state upon completion of execution. In many software systems, this transfer function is applied to essentially independent inputs and the system state is reset before each execution. In real-time avionic software, and most other real-time software systems, the inputs are sequences of highly correlated values applied to a system that retains its internal state from the previous input set.

Clearly then, the assumption of random distribution of inputs is not feasible here. Therefore a primary assumption of our growth models cannot be applied. We must then prepare our models for the likely autocorrelation of failure occurrences. This will imply some form of time-series model for the occurrence of software failures.

It also can be argued that a piece of code with many faults being found is likely to have more faults remaining in it, since the same factors that produced the faulty code affect the remainder of the code as well. Combined with this is the operational profile nature necessary for most real time systems testing. The operational profile ensures that the conditions that caused a failure must persist for some period of time, thus increasing


the probability of recurrence or related failure detection.

So we see that software system failure occurrences are not independent, and in fact will exhibit correlation structure that is dependent on the correlation structure of the inputs coming from an operational profile, as well as on the system itself. This is not easily modelable in general.

2.2.4 Fault correction process

When a fault is corrected, the same process as that which created the fault is used to fix it. The fix then has a probability of introducing new faults (or not correcting the original fault) that is clearly not zero. If we look at the magnitude of the change in changed or added instructions, schematic symbols, drawing symbols, etc. (M_c), take the probability of introducing faults as the proportion of faults (e_0) to instructions at the beginning of the program (I), p_0 = e_0 / I, and let p_e(t) be the probability of fixing a fault at time t, we can view the remaining faults in the code at time t as

    e(t) = e(t-1) + p_e(t) * (M_c * p_0 - 1)

rather than

    e(t) = e(t-1) - p_e(t)

Using B as the linear lag operator, we have

    (1 - B) e(t) = p_e(t) * (M_c * p_0 - 1)

The right hand side of this equation represents the random process of finding and fixing faults in the system. This random process is the combination of the above processes of failure inter-arrivals, fault recurrence and fault removal. Since together these represent a dynamic process driven by random noise, we can let the right hand side be θ_0 + θ(B)ε_t, where ε(t) is a white noise driver for the process. We can see then that the fault content of the system is modelable as a standard autoregressive integrated moving average (ARIMA) [2] time-series model.

Note that this relationship implies that the fault content of the system is not strictly decreasing, and in fact may exceed the initial fault content.
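The recursion above is easy to simulate. A sketch under invented assumptions (initial fault count, program size, and the distribution of change sizes M_c are all illustrative): each attempted fix removes one fault, but a change touching M_c instructions reintroduces faults at the program's initial fault density p_0.

```python
import random

def fault_content(e0, instructions, steps, mean_change=40, seed=1):
    """Simulate e(t) = e(t-1) + p_e(t) * (M_c * p0 - 1), with p0 = e0 / I.
    p_e(t) is taken as 1 while faults remain; M_c is drawn uniformly with
    the given mean.  All parameter values are illustrative assumptions."""
    random.seed(seed)
    p0 = e0 / instructions
    trace = [float(e0)]
    for _ in range(steps):
        p_fix = 1.0 if trace[-1] > 0 else 0.0     # a fix is attempted while faults remain
        m_c = random.randint(1, 2 * mean_change)  # size of this change, in instructions
        trace.append(trace[-1] + p_fix * (m_c * p0 - 1.0))
    return trace

trace = fault_content(e0=200, instructions=10_000, steps=100)
# Large changes (m_c * p0 > 1) push the fault content up locally, so the
# series declines overall but is not strictly decreasing.
```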

2.2.5 Recurrent faults & fault removal

Faults are not removed immediately in the real world. Often minor faults are determined to be not worth the effort to fix at all. Thus a realistic model of software maintenance must be a cost prioritized queue. The result is a clear dependence of the number of failures per fault on the cost of removing the fault when compared with the cost of encountering the fault and the expected number of occurrences of the fault if it is not fixed.

For our context, the queueing process can be expressed as a finite difference equation using the sum of the expected cost of not fixing the faults as the state variable. State changes occur whenever there is a fault with an expected cost for not fixing greater than the cost of repairing. The fault fixed will be the one with the highest return (not necessarily the first or last).

Thus we can see that known faults may never reach zero (as one would expect) and that the expected number of recurrences of a failure due to a given fault is a function of both the cost to fix it and its expected impact on the lifetime cost of the system. Since the cost to fix is known to increase with time during development, and the expected impact is dependent on the operational use of the system, we will have the number of failures occurring for each fault being dynamic also.
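The selection rule of this cost prioritized queue can be sketched directly; the fault records and cost figures below are hypothetical:

```python
def next_fault_to_fix(faults):
    """Cost-prioritized fault queue: fix the known fault with the highest
    return (expected cost of NOT fixing minus cost of fixing).  Returns
    None when no fix pays for itself, which is why known faults need
    never reach zero."""
    best = max(faults, key=lambda f: f["expected_cost"] - f["fix_cost"])
    if best["expected_cost"] <= best["fix_cost"]:
        return None
    return best["name"]

# Hypothetical fault records with assumed cost figures.
queue = [
    {"name": "typo in menu", "expected_cost": 10.0, "fix_cost": 200.0},
    {"name": "nav drift", "expected_cost": 9000.0, "fix_cost": 1500.0},
    {"name": "rare hang", "expected_cost": 800.0, "fix_cost": 700.0},
]
chosen = next_fault_to_fix(queue)
```

Here the menu typo is never fixed (its fix costs more than living with it), matching the observation that minor faults remain in the queue indefinitely.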

    3 Summary

I have shown here that we can expect the occurrences of failures and the magnitude of their impact to vary dynamically in time, and that they are all interdependent. Thus our model is clearly dynamic, and an a priori determination of its order (let alone the exact structure) will be difficult. This is identically the case for work in social, economic and other human systems. Not really a surprise, since we can see from the necessary definition of reliability in terms of customer expectations that products containing significant software components are deeply embedded in human systems. The approach taken to modeling and forecasting in human based systems is fitting autoregressive integrated moving average models to the real world data. I'm suggesting here that we need to do the same for estimating reliability growth.

In the September '90 Quality Time section of IEEE Software [4], Richard Hamlet has an excellent discussion of quality and reliability. He clearly delineates for us the distinction between failures and faults. I suspect that Mr. Hamlet will disagree with my definition of reliability, as will Parnas. If, however, we separate trustworthiness from reliability as suggested, we end up with the conclusion that software systems are perfectly reliable. If this is the case, we then should drop this thrashing over software reliability and attack the problem of software trustworthiness. Webster, however, includes trustworthiness as a part of reliability. I will


then stand by my arguments. Reliability of software systems is a measure of how much trust the user can put in the software performing as expected.

More realistic assumptions are necessary for application of reliability growth modeling to software systems. The assumptions are:

Failures do not have equal impact.

Failure rates have dynamic structure.

The fault removal process is a cost prioritized queue depending on the ratio of expected cost to cost to fix.

Fault content has a dynamic structure and is not strictly decreasing.

I have shown that, because of my definition and these less restrictive assumptions on our model, software reliability must be modeled as an ARIMA time series. The intent here is not to replace existing models. They have already shown their value in some real world situations. My intent is to provide an avenue for further application of reliability analysis in software system development applications where the restrictive assumptions of current models will make them of little use.

3.1 Further research

Considerable empirical work is needed to validate my claims. Costs need to be collected to perform this empirical work. Initial efforts could, however, make use of estimates from existing information and/or apply simulation techniques to initially validate the approach by showing that ARIMA models can produce the kinds of behavior known to be present in the software system development process. Eventually, it is hoped that information gained from this more empirical approach will lead to greater understanding of the process and structural models.
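The simulation route suggested above is straightforward to begin. A minimal sketch of an ARIMA(0,1,1) process with drift, of the kind proposed in section 2.2.4 for fault content; the parameter values are illustrative assumptions, not fitted to any data:

```python
import random

def simulate_arima_011(n, theta0=-0.5, theta1=0.4, e0=100.0, seed=7):
    """Simulate (1 - B) e(t) = theta0 + eps(t) + theta1 * eps(t-1):
    an integrated moving-average process driven by white noise, with
    drift theta0.  Parameter values here are illustrative only."""
    random.seed(seed)
    series, eps_prev = [e0], 0.0
    for _ in range(n):
        eps = random.gauss(0.0, 1.0)
        series.append(series[-1] + theta0 + eps + theta1 * eps_prev)
        eps_prev = eps
    return series

series = simulate_arima_011(200)
# With negative drift the level falls overall, yet individual steps can
# rise: the non-monotone decline expected of real fault-content data.
```

Comparing such simulated traces against observed problem-report histories would be one way to check that the ARIMA family can reproduce the known behavior before fitting real models.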

    References

[1] Beaudry, M. Danielle, Performance-Related Reliability Measures for Computing Systems, IEEE Transactions on Computers, Vol. C-27, No. 6, June 1978, pp. 540-547.

[2] Box, George E. P. & Gwilym M. Jenkins, Time Series Analysis: Forecasting and Control, Holden-Day, 1976.

[3] Ehrlich, Willa K., S. Keith Lee, and Rex H. Molisani, Applying Reliability Measurement: A Case Study, IEEE Software, March 1990.

[4] Hamlet, Richard, New answers to old questions, IEEE Software, September 1990.

[5] Huslende, Ragnar, A Combined Evaluation of Performance and Reliability for Degradable Systems, ACM SIGMETRICS Conf. on Measurement and Modeling of Comput. Syst., 1981, pp. 157-163.

[6] Methodology for Software Reliability Prediction, RADC-TR-87-171, Science Applications International Corporation for Rome Air Development Center, 1987.

[7] Schreiber, Fabio A., Information Systems: A Challenge for Computers and Communications Reliability, IEEE Journal on Selected Areas in Communications, Vol. SAC-4, No. 6, October 1986, pp. 157-164.

[8] Trachtenberg, Martin, A General Theory of Software Reliability Modeling, IEEE Transactions on Reliability, Vol. 39, No. 1, April 1990, pp. 92-96.

    4 Acknowledgements

Thanks to Gerry Vossler and Derek Hatley for their review of this work and to Smiths Industries for their support. I would also like to thank the reviewers who pointed me toward related work in systems reliability, which was quite helpful.
