
On the Meaning of Safety and Security

A. BURNS¹, J. McDERMID¹ AND J. DOBSON²*
¹ University of York, ² Computing Laboratory, University of Newcastle upon Tyne, Newcastle upon Tyne NE1 7RU

The Computer Journal, Vol. 35, No. 1, 1992

We consider the distinction between the terms 'safety' and 'security' in terms of the differences in causal structure and in terms of the differences in the degree of harm caused. The discussion is illustrated by an analysis of a number of cases of system failure where the safety and security issues seem, at least at first sight, to be difficult to disentangle.

Received July 1991

1. INTRODUCTION

The terms 'safety' and 'security' are often used to characterise computer systems when used in a context where reliance is placed upon them. These terms are normally regarded as representing distinct properties, yet there is considerable ambiguity in the terms. At a linguistic level, the common phrase 'safe and secure' indicates a limited distinction and, in German†, no real distinction can be made as the term sicherheit means both safety and security (although some technical terminology has been introduced to make distinctions between the two notions). Further, in many dictionaries, safety is defined in terms of security - and vice versa.

There are also some difficulties in practice in deciding when to use which term. For example, the following problems might be treated as problems of safety or security (or both):

• unauthorised modification to the contents of an ROM in a car Automatic Braking System (ABS) leading to a fatal accident;
• a software design fault in a programmed trading system which causes a bank to buy many times its assets, hence bankrupting the company;
• a syringe pump whose setting was altered by an unauthorised (and untrained) individual to give a fatal dose of drugs.

All of these examples have characteristics with connotations of security (e.g. unauthorised access or modification) and characteristics with connotations of safety (e.g. causes of fatalities) and it is unclear whether these systems are concerned with safety, security or both. Perhaps the second example has the least obvious safety characteristics - but it certainly was not safe, in the sense of causing irremediable damage, for the bank to use the system.

A full analysis of the above examples would require us to consider more carefully system boundaries, and so on. However, we believe that there are some underlying principles which help us to carry out such an analysis, and here we are primarily concerned with those principles. Specifically we believe that safety and security can be distinguished in terms of the nature of the harm caused and the nature of the causal relationships between events leading to a safety or security 'incident'. Our aim here is to throw light on these ideas.

* To whom correspondence should be addressed.
† And also, according to informants, in French (sécurité), Spanish (seguridad) and Italian (sicurezza).

1.1 The value of the distinction

It is reasonable to enquire whether it is worthwhile making such distinctions, especially as there is now a growing trend to use the term 'dependability' and to treat safety and security simply as special cases of dependability. However, we believe there are a number of benefits, as we will endeavour to show.

In the abstract, it is much more satisfying to have clear distinctions than relying purely on intuition, so the analysis is of intellectual benefit. For example, some say that safety is a special case of security, and our analysis enables us to suggest a resolution of this issue. More pragmatically, safety-critical and (supposedly) secure systems are subject to different legislation and standards regarding their use, assessment and construction. It certainly is useful to know which legislation applies - assuming that the relevant bodies will accept the definitions we propose.

Secondly, there is, we believe, some importance in the concepts we use to make the distinction, namely the nature of the causal relationships involved and the degree of harm caused. Certainly our own understanding of the use of these concepts has been improved by our attempts to apply them to understanding the concepts of safety and security.

Finally, at an engineering level it is important to be able to determine what are safety- and security-relevant components to aid in designing a system so as to minimise the number of critical components. Similarly, different techniques are appropriate for analysing requirements and designs where safety is our concern, rather than security (typically the distinction is between failure mode behaviour and failure mode consequences, e.g. unauthorised information flow). Also we believe that, given clearer understanding of the terms, we can produce better analysis techniques for dealing with safety and security - indeed, this is the prime motivation behind our work.

1.2 Existing definitions

We must stress at the start that we are not trying so much to define the terms 'safety' and 'security' as to show the nature of the distinctions between them. The main problem with the definitional approach is that, as church history has all too often shown, it can give rise to religious wars. We do not wish to start another one. Rather we wish to argue that there are two separate discriminants that can be applied in separating security issues from safety issues, and we wish to suggest a way in which these discriminants can be combined.


Unfortunately, providing definitions seems to be the clearest way of showing the distinctions. We have tried, however, to be consistent in giving the definitions in pairs in order to emphasise, not the definitions themselves, but the differences between them.

Although we are concerned with drawing distinctions between safety and security as concepts we recognise that many systems have both safety and security facets, or aspects (see for example the system discussed in Ref. 1), and so for simplicity we shall talk about safety critical and secure systems in much of the following.

In the introduction we implicitly used intuitive, and normal dictionary, meanings of the terms safety and security. The problems and ambiguities we have identified arise, at least in part, from 'overlaying' technical meanings onto everyday words. Unsurprisingly therefore, in addition to the common definitions of the terms, a number of 'technical' definitions have been produced which are intended to clarify both the meanings of the terms and the distinctions between them. Our belief is that these definitions are still rather unsatisfactory. For example it is common to define security in terms of intentional (deliberate) faults, and safety in terms of damage to resources including people and the environment. However, these definitions do not provide clear distinctions since we can have intentional safety-related failures (e.g. through vandalism to protection systems by 'pressure groups'), and so these definitions do not really help in analysing the above examples.

Similarly it is common to say that security is concerned with logical resources, e.g. money, and safety is concerned with physical (including, but not limited to, human) resources, but how then do we classify the failure of an automatic teller machine (ATM) which emits large numbers of banknotes (physical entities) when it should not? Clearly the logical/physical distinction is limited in applicability as there is a difficulty in deciding what level of abstraction is appropriate for making the judgement, and physical entities can be carriers of information. So far as we are aware, other existing definitions suffer from similar problems - by which we mean that the definitions make distinctions between safety and security which contradict our intuitions, or do not enable us to make distinctions in all relevant circumstances. It is these problems that make us believe that it is appropriate to seek new definitions and analysis techniques.

1.3 Content of the paper

It seems rather unfruitful to debate at length existing definitions of safety and security (see, for example, Ref. 2 for safety and Ref. 3 for security). One problem with such definitions (and there are many of them) is that these definitions of safety and security have been developed separately, and hence do not exhibit any interesting parallels or similarities of concept. The fact that in some languages the same word is used for both suggests that there are some conceptual similarities and it is there that we wish to explore; hence we shall present and refine our understanding of safety and security in parallel.

The rest of the paper is structured as follows. We first present, in Section 2, an informal discussion of the concepts of safety and security by looking at two possible alternative approaches. In Section 3 we discuss rather more rigorous definitions for safety and security, refine our definitions and give a commentary on the definitions. In Section 4 we analyse a few simple examples so as to try to provide verisimilitude to our definitions. Finally, Section 5 draws some conclusions about the utility of the new definitions.

2. AN INFORMAL CHARACTERISATION OF SAFETY AND SECURITY

In presenting our understanding of safety and security we shall first give some intuitive, but not altogether satisfactory, definitions and then refine them to a state where we feel that they are 'satisfactory' informal definitions. By 'satisfactory' we mean that they fulfil the criterion of respecting the common intuitions regarding the terms and that they will suffice as a precursor to the more rigorous definitions in Section 3. As we are concerned with showing the distinction between the terms we develop definitions for each term in parallel. We suggested above that causal definitions and the consequences of failure are distinguishing characteristics of safety and security, so we treat these two notions in turn.

2.1 A causal basis for defining safety and security

Our first intuitions regarding the distinction between safety and security are most easily represented in causal terms. Specifically they relate to the extent to which a system failure* affects us immediately, or its effects are delayed (or only potential). Here we use immediacy in the sense of causal consequence, not temporal immediacy, but we note that causal immediacy often entails temporal immediacy. To avoid complexity in definition we proceed on the basis that either normal behaviour or failures can lead to harm - this obviates problematic discussions of what specification should be used as a basis for judging failure, where we use the word 'failure' to mean any behaviour that has an undesired effect (even if such behaviour was in accordance with, or did not contradict, a specification). We can thus define the terms as follows:

• a safety critical system is one whose failure could do us immediate, direct harm;
• a security critical system is one whose failure could enable, or increase the ability of, others to harm us.

From these two definitions we immediately derive two more definitions which we shall need:

• a safety critical system is one whose failure could do us immediate, direct harm;
• a security critical system is one whose failure could enable, or increase the ability of, others to harm us.

These definitions seem to reflect basic intuitions - a system is not safe if it can harm us; it is not secure if it gives others the means of harming us. Note that since real systems are never perfectly reliable (their failure rates are never nil), in the terms just introduced, a safety critical system is not safe. In practice we would say that a system was safe either if it was not safety critical (intrinsic safety), or it was safety critical but the likelihood that it would fail and harm us was acceptably low (engineered safety). Similar interpretations apply to the definition of security.

* Here, and elsewhere, we are not naively assuming that a system has only one failure mode. Read 'failure' as being short for 'any of whose conceivable possible failure modes'. By failure, incidentally, we mean 'any undesired behaviour'.


Our main interest is in articulating clearly the distinction between the concepts of safety and security, and not in formulating all of the related definitions, so we restrict ourselves to refining the notions of safety critical systems and security critical systems as this simplifies the exposition of our ideas.

Our next refinement is to note that the implicit distinction is not just one of immediacy of cause, but also of sufficiency - with safety we are concerned with sufficiency of causality whereas with security we are concerned with sufficiency for enabling causality. Thus an alternative way of stating our definitions is:

• a system is safety critical if failure could be sufficient to cause us harm;
• a system is security critical if failure could not be sufficient to cause us harm, but could increase the number of possibilities, or likelihood of existing possibilities, for others to intentionally cause us harm.

There are a few corollaries which we can usefully draw before we go into more detail. First, we can see that the definitions help us with regard to the classification of systems - this can be done through classification of the consequences of failure. Second, a system can have both security and safety implications through the causal consequences of different potential failures, or even through different causal consequences of a single failure. Third, the definition of security encompasses the notion of other agents and this has some overlap with the notion of intentional faults, although it places them in the context of the causal consequences of the failure, not the causal antecedents. This does not, of course, mean that the failures cannot be deliberately caused, rather that the distinguishing characteristic of security is what is done after, rather than before, the failure (a malicious agent could still exploit information which he gains through happenstance). Fourth, we must be concerned with specific failures, not all failures, as safety and security critical systems are not usually critical in all respects. We will take into account the above observations when refining our definitions later in the paper.
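As a reading aid, here is a minimal sketch, not taken from the paper itself, of how the sufficiency-based definitions above might be applied to individual failure modes at design time; the type name, field names and the second example are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class FailureMode:
        """Design-time judgements about one specific failure mode (illustrative names)."""
        description: str
        sufficient_for_harm: bool       # with the standing conditions, the failure alone could harm us
        enables_others_to_harm: bool    # the failure widens others' opportunities to harm us

    def classify(f: FailureMode) -> str:
        """Apply the sufficiency-based definitions: safety critical if sufficient, security critical if merely enabling."""
        if f.sufficient_for_harm:
            return "safety critical"
        if f.enables_others_to_harm:
            return "security critical"
        return "not critical"

    # The ABS example from the introduction, and an invented information-disclosure failure.
    abs_rom = FailureMode("unauthorised ABS ROM modification", True, False)
    leak = FailureMode("disclosure of a bank's trading positions", False, True)
    print(classify(abs_rom), "/", classify(leak))   # safety critical / security critical

Note that the classification is made per failure mode, so the same system can carry both tags through different failures, as the second corollary above observes.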

The main limitations of the above definitions relate to the nature of failure, the nature of the causal relations and the scope of the causal consequences. We have, as yet, been vague about what is meant by failure and, more importantly, our implicit definition is not obviously compatible with the standard definition set out in Ref. 4 (as that definition is based on the notion of a specification against which failure is judged,* and ours does not explicitly rest on such a concept). Further our notion of causality has not been explored and there is considerable subtlety in modelling causality for computerised safety and security critical systems. Finally, limiting the definitions to systems that harm us, whether interpreted individually or corporately, is over-restrictive as we may still deem a system safety critical if it can harm others. We resolve this latter problem below, and address each of the remaining issues in Section 3.

* Even though it accepts that specifications can be at fault.

2.2 Failure consequences as a basis for defining safety and security

Our definitions introduced the notion that safety and security critical systems are those whose failure can lead, directly or indirectly, to harm. However, this does not take into account the fact that there are different degrees of severity of failure. Even safety critical and secure systems can have relatively benign failures. Sometimes the term safety-related is used to refer to systems which have safety 'responsibilities' and where the consequences of failure are slight (e.g. small-scale injury rather than death) although the distinction between safety-related and safety-critical systems is inevitably difficult to draw precisely. We wish to draw out a rather more fundamental distinction, which is between what we term relative and absolute harm.

We shall use the terms 'absolute' and 'relative' to denote scales of value in the following way.

An absolute value is when the value ascribed to that which is to be protected is not quantified but considered to be either all or nothing, i.e. after system operation (including the possibility of system failure), the value is either what it was before (normal or intended operation) or else valueless (failure or unintended operation). Human life is like this. So are legal judgements - they are either correct or incorrect, hence the use of the term "unsafe" in English law to describe a verdict which is felt by competent judges to be incorrect (in a certain legally defined sense of running counter to the evidence).

A relative value is when the value can quantitatively degrade as a result of system operation, either continuously or on a stepwise basis. Money is like this; so is the value ascribed to information. Indeed most security critical systems are concerned with protecting money or information, but we believe that this is a consequence of the nature of the concept, not a defining property.

Absolute harm usually occurs when some service or resource is impaired. Clearly loss of life is such an impairment, but a major spill from an oil tanker would also be seen as an impairment to a variety of resources as there is irrecoverable loss of (wild)life, loss of natural habitat, and so on. Relative harm usually occurs in adversarial situations (e.g. when the objectives of two enterprises are in conflict) and relates to a gain or loss in competitive advantage. Here loss of the designs for a new product by one company and its acquisition by another causes relative harm - absolute harm would only entail if the competitor exploited this knowledge so as to impair the first company seen in its own right (e.g. by making it go out of business). Further, loss of the information would not be harmful if there were no competitors (and if it were impractical for any to be created to exploit the information gained).
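The two value scales can be pictured as two small types; the sketch below is illustrative only (the class names and the banking figures are assumed, not drawn from the paper): an absolute value is either intact or lost outright, while a relative value can degrade by some quantified amount.

    from dataclasses import dataclass

    @dataclass
    class AbsoluteValue:
        """All or nothing: e.g. a human life, or the correctness of a legal verdict."""
        intact: bool = True

        def harm(self) -> None:
            self.intact = False          # absolute harm: the value is simply gone

    @dataclass
    class RelativeValue:
        """Quantitatively degradable: e.g. money, or the value ascribed to information."""
        amount: float

        def harm(self, loss: float) -> None:
            self.amount = max(self.amount - loss, 0.0)   # relative harm: a partial, quantified loss

    account = RelativeValue(amount=10_000.0)
    account.harm(loss=2_500.0)           # from the bank's point of view: a quantified loss
    print(account.amount)                # 7500.0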

There is potentially a complication to the notion of 'degree of harm' as the judgement about degree will depend on context and point of view. For example, a syringe pump may be more critical when used on a ward for administering drugs than when used for administering anaesthetics in an operating theatre (because the patient's vital signs will be monitored continuously in the latter case but not the former). More significantly, what is deemed to be absolute harm from one point of view may be classified as relative from another point of view. To take a commercial example, a computer failure which caused loss of a single customer's account probably would be considered an issue of relative (and probably minor) harm by a bank, but an issue of absolute harm by the customer (unless there was some way for the customer to regain the funds).


This is a classical case where we must apply techniques of hazard and risk analysis,⁵ but we need to take into account the relevant point of view in assessing the results of the analysis. Note that 'point of view' is orthogonal to 'degree of harm'.

Having introduced the notions of absolute and relative harm, we now propose definitions (which for the time being we will treat as alternative to the previous ones) of the terms safety-critical and security-critical systems based on these notions:

• a safety-critical system is one whose failure could cause absolute harm;
• a security-critical system is one whose failure could only cause relative harm.

Given the above discussion on degrees of harm it should be clear that a fundamental point is being made, and we are not simply picking out extremes of a spectrum. The point is that the distinction between safety- and security-critical is being explained in terms of the nature of the harm caused, as seen from a particular point of view of course. It is perfectly reasonable for a system to be viewed as safety critical from one point of view and security critical from another, since the degree of harm may be absolute from one point of view and relative from another.

It might be argued that the definitions are difficult to apply in a number of contexts, e.g. military systems which are intended to do harm. However, we can always delimit the scope of the services and resources which we are considering depending on the objectives of the organisation using the system. Thus, clearly, protection of one's own troops, equipment and so on is the concern in military situations and failures which harmed them would rightly be considered safety issues, although harm to the opposition would not - in fact 'harm to the enemy' is likely to be taken to be equivalent to 'of benefit to us', and this is why the point of view represented by 'us' is an important indexical. Interestingly we can see both safety and security issues in the weapon system example. Imagine an artillery system that used ranging shells before firing the main weapon. Firing the ranging shells gives rise to relative harm - information is given to the enemy which may enable them to do absolute harm to the artillery piece or the troops. However, successful firing of the weapon may increase safety. Note that we have here a trade-off between security and safety and that security can be reduced by the normal behaviour (this is true of many 'risky' pursuits), whereas safety is such that normal behaviour should be safe. The relationship between safety and security is discussed more fully in Section 4.

As a further example, we briefly return to the banking system situation outlined above. Loss of some of a bank's funds is an example of relative harm (to the bank treated as a resource) but not absolute harm as the bank, its funds, its profits, and so on can all still be talked about meaningfully (this is a sufficient condition for harm not to be absolute). However, a competitor may be able to 'make capital' out of the incident and lure customers away from the troubled bank. From the point of view of a depositor whose funds were lost in their entirety (and where there is no recovery mechanism)* the issue is one of safety - the depositor may lose his house, be unable to buy food, and so on. It certainly was not safe for him to entrust his funds to the bank. Thus we can see that this concept enables us to take into account issues of context and point of view quite naturally. Also this simple example illustrates why we believe safety to be an appropriate attribute for the second example given in the introduction (the programmed trading system).

We have now proposed two apparently different views of the notions of safety and security (one causal, and the other based on the notion of classification of degrees of harm). It is now appropriate to consider the relationship between these two views of safety and security and to see whether or not they can be reconciled.

2.3 Relationship of the two views

The above two views of, or ways of defining, safety and security appear very different and there seems to be no obvious way of telling whether or not they are compatible. There is no a priori reason to suppose that a distinction in terms of the kind of damage caused (absolute or relative harm) is the same as the distinction in terms of the causal structure (direct or indirect harm). However, the fact that the two distinctions appear to be extensionally equivalent (i.e. to produce the same classification in practice even though on different logical bases) does perhaps explain some of the puzzlement and difficulty we experience when trying to separate the concepts.

At this stage there are two strategies open to us. We can either observe that there are two separate distinctions and introduce two separate pairs of words to classify situations according to which distinction is being invoked; or we can note the extensional equivalence and provide a definition which combines the distinctions. We choose, not entirely confidently, the second of these approaches. One reason for our choice is the observation that in an adversarial context, relative harm usually occurs when competitive advantage is given - and this is equivalent to increasing the number (or likelihood) of ways to give opportunities to others of causing harm. Not all situations fall into an adversarial context, of course, but it is difficult to use the word 'security' plausibly without having any connotations of 'the enemy' (who wishes to harm us). We also note that the relative/absolute distinction enables us to remove the restriction of harm as something that applies solely to us. If in addition we now explicitly introduce the notion of context and judgement,† we have:

• a system is judged to be safety-critical in a given context if its failure could be sufficient to cause absolute harm;
• a system is judged to be security-critical in a given context if its failure could be sufficient to cause relative harm, but never sufficient to cause absolute harm.

* As usual there is an issue of system boundary here. We are implicitly saying that there is no recovery mechanism within the system being considered (bank plus depositor). If there was a further system which provided means of recovery from the problem - say the government - then we would have been guilty of a failure in systematicity in analysis (we should have included the government), and hence in using the term 'absolute harm'.
† The problem of treating failure as a judgement is subtle and difficult, and is often ignored by computer scientists (perhaps because of its subtlety and difficulty). We are examining the issues involved and hope to report on our investigations in due course.
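A similar sketch, again illustrative rather than taken from the paper (the Harm enum and the viewpoint strings are assumptions), shows how the combined definitions just given turn classification into a judgement indexed by point of view, as in the banking example of Section 2.2.

    from enum import Enum

    class Harm(Enum):
        NONE = 0
        RELATIVE = 1    # quantitative degradation of value (money, information)
        ABSOLUTE = 2    # all-or-nothing loss (life, irrecoverable damage)

    def judge(failure_consequences: dict[str, Harm]) -> dict[str, str]:
        """Apply the combined definitions: the map from point of view to the worst harm
        the failure could be sufficient to cause is itself a design-time judgement."""
        verdicts = {}
        for viewpoint, harm in failure_consequences.items():
            if harm is Harm.ABSOLUTE:
                verdicts[viewpoint] = "safety-critical"
            elif harm is Harm.RELATIVE:
                verdicts[viewpoint] = "security-critical"
            else:
                verdicts[viewpoint] = "not critical"
        return verdicts

    # The banking example: the same failure judged from two points of view.
    print(judge({"bank": Harm.RELATIVE, "depositor": Harm.ABSOLUTE}))
    # {'bank': 'security-critical', 'depositor': 'safety-critical'}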


Again we must remember that a system can have both safety and security connotations, and we need to be more careful to distinguish different failures, and different causal consequences of failures, when formalising the ideas. We also need to take into account the fact that a system can support many services, so we shall make a final refinement of the definitions at a later stage. The definitions appear to make security 'subordinate' to safety in that security failure can clearly lead to (be a partial cause of) a safety incident in a wider system. The issue is slightly more subtle and we return to the notion when we have presented our formalisation of the concepts.

3. A NOTATION FOR REPRESENTING THE CONCEPTS

Our aim here is to put our definitions on a firmer footing from the point of view of a model of causality - which of course means that we can capture both aspects of the definitions.

We do not attempt to model the (quantitative) level of harm, as opposed to the (qualitative) degree of harm, as we believe this is inevitably a subjective issue, so that the best we could do would be to present a treatise on utility theory and leave the application of the theory up to the reader. However, as we will show, our causal definition captures one fundamental aspect of the notion of relative/absolute harm, by distinguishing between what we term agent and event (action) causality. We express the distinction in terms of intentionality: agent causality is where the causal relation is intended, and event causality is where there is no deliberate intention for the causal relation to hold. The validity of this way of describing the distinction depends on scope, i.e. the agents we consider, but it is acceptable if we restrict ourselves to the operational environment for the system.

We adopt here a simple formalisation for representing causal relations. We use simple predicates to represent conditions under which an event will occur, and defined dependencies between events which represent a stage in a 'causal chain'. It would be possible to develop and to use a causal logic (of which there are several in existence and under development). However, since we are concerned here with clarifying concepts, a simpler approach will suffice. First, however, we make a few observations on causality to clarify the nature and purpose of some of the causal structures, although we strive to avoid the philosophical quagmire of the nature of causality itself.

3.1 Basic causal notions

When we say, informally, that some event causes another we usually only give a partial description of the events, and of the conditions under which the causal relationships hold. There are two basic issues here: one relating to event descriptions and the other concerning necessity and sufficiency of causal relations. These two points are very strongly related but we discuss them in turn rather than together at the risk of slight repetition.

Consider lighting a match. We would usually describe the event that caused the match to light as 'striking a match', but there are other aspects of the 'event' of striking the match which we could have described, but did not, e.g. the speed of movement, the nature of the surface, and so on. Thus we can have different descriptions of the same event. In our definitions we will assume that we can adequately characterise distinct events, but we recognise that distinguishing and identifying events is difficult in practice.

Taking the second point, we can also use the match example as a basis of our discussion. Normally we would say that striking a match was sufficient cause for it to light - but actually this is only a partial cause and the presence of oxygen, the fact that the match and the box were not wet, and so on, all are contributory causes. Further, striking the match is not necessary - we could light it using a blowtorch. Similar considerations apply in safety and security - the failure of a computer system can only lead to harm if that system is connected to, and controlling, dangerous equipment, or the like. Thus such a failure is neither necessary nor sufficient. Determining a causal relationship seems to be a matter of judgement, but one for which we can give some structure. We will be concerned with describing events and causal relations in terms of necessary and sufficient conditions and showing the relationship between such conditions.

We now introduce some terminology and structure to our causal definitions which we will use extensively later.

The failure of a computer process control system which leads to a harmful incident in the controlled process will normally only be a partial cause of the incident. Typically we would refer to the failure as the cause as this is the aspect of the situation that changed and thus apparently caused the incident. The remaining causal factors which usually are taken for granted are sometimes called, for that reason, standing conditions.⁶

It is useful to investigate the relationship between the failure (partial cause) and the standing condition in more detail. Typically computer system failures will be an Insufficient but Necessary part of some larger causal condition which is, itself, Unnecessary but Sufficient to cause harm (known as an INUS condition). Put another way, there are many conditions which are sufficient to cause the overall system to fail causing harm, but none of them are necessary, i.e. there are a number of possible failure modes. More technically there are several minimal sufficient conditions (MSCs) which represent the separate failure modes for the controlling and controlled system. These MSCs incorporate the notion of 'standing condition' which we introduced earlier. For each (or perhaps only some) of these MSCs the failure of the computer system is necessary, but not sufficient - the connection of the computer equipment to the plant, the absence of suitable protection mechanisms, etc., are also needed before the event of concern can occur. Thus a failure of a safety critical (computer) system will, in general, either be an INUS condition for the controlled system or not contribute to a harmful situation.

It is useful to introduce some simple notation to deal with such causal structures. In general we will have a set of MSCs and at least one of these conditions must be true for the incident to arise. Further, for each MSC, some conditions must be present for a harmful failure to occur and some must be absent. We can therefore represent the possible causes of a harmful incident as:


MSC_1 ∨ MSC_2 ∨ MSC_3 ∨ ...

where we can regard each MSC as having three components:

MSC_N = ⟨C_N | F_N | D_N⟩

(In this notation, the bra ⟨ and ket ⟩ symbols and the vertical bars are to be taken as part of the syntax of the first and third components respectively.)

In this formula, component ⟨C| (for contributory) represents those conditions which are part of the normal operation of the system for the failure F to make the MSC true, and |D⟩ (for defensive) represents those things which are part of the abnormal operation, these being either ordinary components in an error state or protection mechanisms that are absent or inoperative.* For example ⟨C| might be the presence of flammable material, F a failure which generates a spark, and |D⟩ the absence, or inoperative state, of a suitable sprinkler system. It is indeed often the case that a partial condition for a failure to occur has to be expressed negatively, as the absence or inoperative state of a particular component. However, we shall assume that the implied negative is included in the statement of the |D⟩ condition in order to avoid a possibly confusing use of the negation symbol.
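The bra-ket structure lends itself to a small executable reading. The following sketch is illustrative only (the predicate strings and function names are assumptions, not part of the paper): each MSC is treated as a set of conditions that must all hold, and a harmful incident arises when the disjunction MSC_1 ∨ MSC_2 ∨ ... is true, using the sprinkler example from the text.

    from dataclasses import dataclass, field

    @dataclass
    class MSC:
        """One minimal sufficient condition, written <C | F | D> above.
        Defensive conditions are stated as absences (e.g. 'sprinkler inoperative'),
        following the convention in the text of including the implied negative."""
        contributory: set[str]              # part of normal operation, e.g. flammable material present
        failure: str                        # the INUS condition: the system failure itself
        defensive: set[str] = field(default_factory=set)

        def holds(self, state: set[str]) -> bool:
            """The MSC is true when all of its components hold in the current world state."""
            return (self.contributory | {self.failure} | self.defensive) <= state

    def harmful_incident(mscs: list[MSC], state: set[str]) -> bool:
        """Harm arises if at least one minimal sufficient condition is true: MSC_1 v MSC_2 v ..."""
        return any(m.holds(state) for m in mscs)

    # The sprinkler example from the text, with invented predicate strings.
    fire = MSC(contributory={"flammable material present"},
               failure="failure generating a spark",
               defensive={"sprinkler absent or inoperative"})

    print(harmful_incident([fire], {"flammable material present",
                                    "failure generating a spark",
                                    "sprinkler absent or inoperative"}))   # True
    print(harmful_incident([fire], {"flammable material present",
                                    "failure generating a spark"}))        # False: the defence is in place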

In the above terminology the failure, F, is the INUS condition. For the INUS condition to be a 'sufficient cause', or to have immediate effect in the causal sense, as it must do for our definitions to be appropriate, then both the contributory and defensive causes (⟨C| and |D⟩ respectively) must hold for the appropriate MSC.†

The above notation represents causal immediacy, but it says nothing about temporal issues. If we view F as the condition caused by the failure event then we can say that the harmful event actually occurs if the MSC is ever true - thus this deals with the situation where there are delayed hazards which nonetheless are caused (in the INUS sense) by a failure.‡ In principle with a delayed cause the failure might never lead to a hazard.

There are some modelling assumptions behind this discussion of causality. The most significant is that we assume, in effect, that ⟨C| represents those aspects of the 'real world' which are beyond our control, or at least outside the sphere of control of the system or enterprise we are considering. In contrast |D⟩ represents issues which we may be able to control, or influence, such as engineered protection systems, and the like. Thus |D⟩ need not be static as we may modify our defences as time passes (clearly the 'real world' changes too but many causal relations, e.g. to do with the flammability of gases, are invariant). However, it is typically the case that |D⟩ changes slowly by comparison with the operation of the system.

* The issue as to whether a particular condition is to be treated as contributory or defensive is, of course, ultimately a matter of judgement.
† Or possibly more than one MSC, as one failure could be involved in more than one MSC, or in MSCs for different safety hazards/security risks.
‡ Strictly speaking we should also note that F is a function of time, as a failure may start some continuous process which later causes an MSC to be true, e.g. a leak from a valve. However, this finer modelling does not affect our overall argument from a definitional point of view, but it would be a necessary property of an adequate formal model of safety.

Also we note that changes to |D⟩ may be planned, or unplanned. It is clear that, in many safety critical situations, operators find ways of managing the equipment which were not intended by the plant designers. Whilst, in some cases, this may contribute to the incident becoming an accident, often they will find ways of preventing an accident by going outside the operating procedures for the equipment. Our causal structures can cope with this creativity, but it gives us some difficulty in defining what the attribute 'safety critical' means in causal terms. However, since |D⟩ would normally change only slowly, we can say that F is a critical failure if it would lead to harm ceteris paribus - all other things being equal. Thus a system would not cease to be safety critical just because an operator had found a way of dealing with one particular incident if he had done so by modifying a |D⟩ condition (or equally, but less likely, a ⟨C| condition), in a way that had not previously been considered. Perhaps another, more constructive, way of summarising this would be to say that we make the judgements of criticality in terms of the causal dependencies that at design time are believed will later hold. To put it more strongly, the distinction between ⟨C| and |D⟩ is judgemental, but the judgements are of practical benefit as a way of structuring an analysis of critical systems during their design and analysis. These observations have ramifications for the way in which we build specific models of safety- or security-critical systems.

We have just introduced another interesting dimension to this discussion - the epochs at which the conditions and events are determined (and evaluated). All the events and conditions are, we assume, defined (identified) at design time. However, events in the class F arise at execution time for the computer system. Thus although the ⟨C| and |D⟩ conditions can be described at design time, the evaluation takes place at the time of event F. We need therefore to be able to describe the changes that take place to ⟨C| and |D⟩. The (values of) conditions ⟨C| are established by physics, the plant design and its operation, and |D⟩ is evaluated in a similar way to ⟨C|, except that unplanned events are more likely to modify |D⟩ than ⟨C| during plant operation. Also maintenance can change any of the event/condition definitions and change actual values. Thus maintenance activity can be analysed using the same type of causal structures, provided we can show how failures in maintaining F modify ⟨C| and |D⟩ conditions associated with other MSCs.

3.2 Types of entity in the model

It is sometimes convenient to think of objects in and surrounding a computer system as falling into three classes: agents, activities and resources.¹ For our purposes here we need to extend the model with the notions of:

• objective - a specific target of some enterprise; in this context we will be concerned with safety and security objectives for the enterprise using the system;
• service - a grouping of activities which are all intended to satisfy some objective; in this context we will be concerned with those services intended to achieve safety or security objectives;
• event - an action associated with a service or a resource; we are concerned with such events as safety and security relevant occurrences (including failures) and their causal consequences.


Further we need to classify the enterprises and systems involved in order to have a suitable basis for articulating the model. We distinguish the computer system (CS) from its operational environment (OE) which contains all the equipment which has direct interactions with the computer system. In our discussion we will assume that the computer system only provides one service so we will refer to computer failures, not service failures. The computer system and the operational environment are classes of resource, as is the critical resource (CR), the 'protection' of which is the concern of some objective (it may be a component of the computer system or environment). We use the term user enterprise (UE) for the organisation concerned with operating the computer system to achieve some specified objective or objectives. We use the term operators (O) for those agents who legitimately interact with the computer system or the equipment in its operational environment; operators are members of the user enterprise. We use the term free agents (FA) for other agents who may be involved in the causal chain resulting in harm to the critical resource. We will use the one or two letter mnemonics as subscripts to label events, and the like, in order to be clear about the interpretation of the causal definitions we will present.
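The classification of entities can be summarised as a small data model; the sketch below is illustrative only (the type and field names are assumptions, and it fixes one service and one objective per computer system, as the text does for simplicity).

    from dataclasses import dataclass
    from enum import Enum, auto

    class ResourceKind(Enum):
        COMPUTER_SYSTEM = auto()          # CS
        OPERATIONAL_ENVIRONMENT = auto()  # OE: equipment interacting directly with the CS
        CRITICAL_RESOURCE = auto()        # CR: that which some objective seeks to protect

    class AgentKind(Enum):
        OPERATOR = auto()     # O: legitimately interacts with the CS or its environment
        FREE_AGENT = auto()   # FA: other agents who may appear in the causal chain

    @dataclass
    class Agent:
        name: str
        kind: AgentKind
        role: str
        responsible_for: str   # responsibility is held by humans and cannot be delegated to machines

    @dataclass
    class Service:
        """A grouping of activities intended to satisfy one objective (one service per CS here)."""
        objective: str
        provided_by: ResourceKind = ResourceKind.COMPUTER_SYSTEM

    # A user enterprise (UE) is the organisation whose operators run the system for its objective.
    user_enterprise = [Agent("ward nurse", AgentKind.OPERATOR, "pump operator",
                             responsible_for="safe delivery of the prescribed dose")]
    service = Service(objective="deliver the prescribed dose and no more")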

Some comments are in order before we present our definitions. We note that agents are essentially human, and they are characterised by holding responsibility and authority. Agents will typically be identified by a combination of role and objective - an agent will hold some role in respect to the achievement of a defined objective. Authority may be delegated to machines, but responsibility may not - this is essentially a human (organisational) notion. Thus, even where automation is used for critical functions, e.g. automated stock trading, the machine has the authority to buy and sell, but the responsibility for the actions of the program lies elsewhere (with some agent). This notion will be central to some of our later analyses.

We could think of systems having multiple services and satisfying several objectives (with a many-to-many relationship between services and objectives). For simplicity we assume a single objective for the computer system and a single service intended to satisfy that objective, again without loss of generality. We could add a further agent, the judge, to cater for the fact that notions of safety and security (and the identification of events, for that matter) are judgemental. Again we make simplifying assumptions and the definition presented can be thought of as representing the judgement made by the designers with respect to events, causal chains, and the like. Thus we believe our definitions are simpler than is necessary for a full analysis of safety or security critical systems, but that the essence of the approach is fully covered by the definition, or definitions.

We present two distinct definitions and we treat safety before security.

3.3 A definition of safety

We can now give a definition of safety which, at the top level, is surprisingly simple:

H_CR ⇐ ⟨C_OE | F_CS | ⟩

A failure of the computer system F_CS combined with the standing conditions in the environment ⟨C_OE| (i.e. the plant design and physics) is sufficient to cause the harmful state for the critical resource H_CR (where we use the arrow


Thus the above causal structures are general enough to permit the modelling of Trojan Horses and other forms of deliberately introduced flaws so long as the model is extended to cover all the relevant enterprises, e.g. the developers.

This definition shows clearly the distinction between safety critical and security critical systems and hopefully clarifies the notion of causal sufficiency. There are, however, some interesting characteristics of our definitions which we have not yet explored and there are some improvements we can make to the definitions, for example, to reflect the fact that a computer system may support multiple services. We provide a brief commentary first, then present the final form of the definitions.

4. COMMENTARY

We discuss here a number of important characteristics and ramifications of the definitions. There is no notion of priority implied by the ordering.

4.1 Simplicity of the formal definitions

Our formal definitions are very compact and may appear trite or almost vacuous. Indeed the reader might be inclined to wonder what the fuss is about. However their simplicity is a great virtue. The distinction between safety and security and their relationship has been a troublesome matter for some time. We hope and believe that the very simplicity of the distinction speaks for its validity. In any case the simple and sharp distinction gives us a powerful analytical tool as we shall show later.

4.2 The scope of the harmful behaviour

The definitions consider harm without being specific about what set of resources, facilities, and the like should be considered (the military examples make it clear that we have to make some distinctions). We need to consider in respect of all resources, and so on, whether there is a legitimate reliance on protection or preservation of that resource by the computer system. Some examples should clarify the point.

Safety-critical and security-critical systems can harm 'individuals' other than the system owners or operators - for example the clients of a bank, or the environment of a chemical plant - and this is often the primary reason for being concerned about such systems. As before it is easiest to see the issues in the context of safety. Consider a chemical process plant. It is clear that the owners and operators of the plant are responsible for the resources directly controlled by the plant, e.g. the chemicals themselves and the pipes, valves, and so on which make up the plant. However, the owners and operators also have responsibilities for their workforce and the environment through legal and moral frameworks. In the safety case we often speak of a 'duty of care' and within the context of an enterprise model we would expect to see explicit responsibilities representing this duty.

It may seem simply that we have to take into account the set of authorities and responsibilities of the owners and operators, but we also need to take into account the consequences of failure from different points of view. Thus, for example, there will be responsibilities and authorities associated with a computerised banking system from the points of view both of the owners and operators, and from the point of view of the users. In setting the system to work the bank is (effectively) delegating authority (for certain aspects of security) to the computerised system for itself and for its clients. Thus in this case the computer system is acting as a surrogate for the bank. Perhaps more precisely, when an individual becomes a client of the bank he (or he and the bank jointly) are delegating authority to the computerised system. However, in delegating authority the bank is taking on a responsibility for the safe and secure operation of the system from the point of view of the customers. Thus a definition of safety and security needs to take into account responsibilities from wherever they can (legitimately) arise. Recall our earlier comments on agency. Responsibility cannot be delegated to a machine and the responsibility for system safety or security remains with the operators, e.g. the bank, or perhaps the developers depending on the nature of the development contract.

In analysing particular situations we need to introduce an enterprise model covering all those organisations and individuals who have legitimate dependence on the enterprise operating the system (and perhaps those the operators depend upon) in order to evaluate responsibilities and to delineate the resources which should be considered in a safety or security analysis.

4.3 The relationship between safety and security

We noted above that security problems (relative harm) can often lead to safety problems (absolute harm). We have already mentioned an obvious military example relating to information about a proposed offensive reaching the opposing force in time for improved defences to be prepared. In the commercial sphere we can imagine similar information regarding future products.

However, safety incidents can also lead to, or imply, security incidents when there is an adversarial situation. For example destruction of a physical protection mechanism, e.g. a fence, may well be viewed as a safety incident but it might also have security connotations (it increases the likelihood of certain sorts of 'attacks' succeeding). Perhaps more interestingly the death of a worker in a factory may adversely affect customer confidence and thus give other companies a competitive advantage (we note that, in practice, security is on occasion concerned with covering up safety problems).

In an adversarial situation, or where there are competitors, a safety problem almost invariably implies a security problem. Security problems are, by their very nature, in the causal chain of those classes of safety incident where the absolute harm arises from the action of some agent who gained useful knowledge from the 'failure' of some other system. In this sense security is 'subordinate' to safety, in a single context, but, in general, security incidents can lead to safety incidents, and vice versa, where there are causal dependencies between actions in different contexts. In this sense it is not meaningful to talk about one concept being subordinate to, or representing a subset of, the other.


4.4 Recovery and judgement

Our definitions are based on the notion of causal consequences. However, as we mentioned above, if some form of recovery action is taken then the anticipated causal consequences may not come to pass. This is the reason we introduced the notion of ceteris paribus into our earlier discussions (although the definitions do not explicitly introduce this notion). This raises the question of when the judgement about causal consequences is made.

The obvious answer is that the judgement is made at design time - when decisions are being made about how to protect against predicted failure modes. This is true in the sense that such judgements are made at this time, but it is not the only time they are made. They may also be made in accident investigations (when it may be realised that the causal chains are different from those which the designer anticipated), in maintenance, and even during operation (the operator taking some recovery action).

Thus the definitions have to be interpreted as judgements made on a ceteris paribus basis at a specific time, recognising that different judgements may be made at different times by the same individual (and perhaps by different judges). Consequently, recovery is viewed as that which the judge (perhaps the system designer) would regard as abnormal, i.e. falling outside normal procedures, at the time of making the judgement. We do not clutter the definitions with such qualifiers, but they should be interpreted in this light.

4.5 Safety, security and integrity

Given our domain of discourse there is inevitably a question of the level of abstraction at which our definitions apply - after all, systems are composed of other systems, and so on. We motivated the definitions by consideration of computerised systems, but clearly they could be applied to other classes of system (this is one of the appeals of the causal definition). The simple answer is that they can be tried at any level of abstraction which is of interest, but that they cannot be applied validly at all levels. The test we need to apply is whether or not the causal relations set out above apply to the system when modelled consistently at that level of abstraction. With this interpretation we see that it is not appropriate to define all subsystems of a security-critical or safety-critical system as being themselves security-critical or safety-critical. Of course this is nothing new - but it is a criterion for judging the validity of our definitions. There are also some interesting corollaries of this observation.

First, little of the software inside a safety-critical system can legitimately be called safety-critical. In many cases it will only be the device drivers controlling actuators which have this property. This is not to say that other software in the system does not contribute to safety, but that its role in the causal chain means that it does not deserve the attribute 'safety-critical' according to our definitions. When considering 'real world' issues we assumed that the conditions C and D in our causal definitions were largely static (although D is more likely to change than C). This is not the case within a safety-critical system. A failure of a software component could modify the C condition for a safety-critical software component, e.g. corrupt a table used by a device driver, and thus lead to a failure.

We would term the corrupting component integrity-critical. We will set out a definition for this concept when we give our final definitions of safety and security.

Second, when considering security-critical systems we observe that, in a typical situation, rather more of the software can be considered to be security-critical than in a similar safety-critical system. This is in part due to the indirect causal links between failure and harm, but also due to the fact that the resources of concern are typically 'inside' the computer system. However, in a similar manner, we may introduce the notion of integrity-critical software components (indeed the definition is identical).

Third, the application of the attributes security-critical or safety-critical can be erroneous. Sometimes this is just a simple error of judgement, but there are occasions of what we term failures of systematicity, where the error arises from considering an inappropriate system or system boundary. Note that this applies to complete systems, not just system components: consider, for example, a computerised information system which has no capability to control physical plant but which advises operators on what to do. The computer system plus its operators is safety-critical; the computer system alone is safety-critical only if the operators have no way of independently validating the computer's advice (there is no D condition).

We accept that, in common practice, the terms security-critical and safety-critical are used for components of larger systems which are legitimately viewed as security-critical or safety-critical, even though the components fail our definitions. Whilst many of these components will best be thought of as having integrity requirements, we believe that the terms safety-related and security-related are quite helpful as descriptive terms for those components which are potentially in a causal chain which can lead to a catastrophic failure - although at least one of these terms is already used with other connotations.

5. FINAL DEFINITIONS

The definitions we gave in Section 2 were rather limited in a number of respects. We have improved the definitions, but there are several issues that we need to address, including the fact that safety and security are more properly thought of as properties of services (of a computer system in a context) rather than of computer systems per se. Following Laprie⁴ we exploit the definitional convenience of treating services first, then considering how to classify systems based on the properties of the services which they provide.

Thus we first provide definitions for safety- and security-critical services:
• a service is judged to be safety-critical in a given context if its behaviour could be sufficient to cause absolute harm to resources for which the enterprise operating the service has responsibility;
• a service is judged to be security-critical in a given context if its behaviour could be sufficient to cause relative harm, but never sufficient to cause absolute harm, to resources for which the enterprise operating the service has responsibility.

We have used behaviour here, rather than failure, because, as we identified above, normal behaviour as well as failure behaviour may have safety or security connotations.
Now we can deal with systems:
• a computer system is judged to be safety-critical in a given context if it has at least one component or service which is judged to be safety-critical;
• a computer system is judged to be security-critical in a given context if it has at least one component or service which is judged to be security-critical.

This simple reworking enables us both to make the appropriate distinctions between systems and to recognise that a given system may be both safety- and security-critical (and the same applies to services). We can now complete our definitions by considering integrity:
• a service is judged to be integrity-critical in a given context if it is in the causal chain of service provision for a safety-critical or security-critical service, but its behaviour is not sufficient to cause relative or absolute harm with respect to resources for which the enterprise operating the service has responsibility;
• a computer system (component) is judged to be integrity-critical in a given context if it has at least one service which is judged to be integrity-critical.

These definitions summarise the results of much of the above discussion.
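Read mechanically, these definitions amount to a simple classification procedure. The following Python sketch is our own illustration rather than part of the original formulation: the type and function names are invented, and the judgements of 'worst harm' and causal-chain membership are assumed to have been made already by the kind of contextual analysis discussed above.

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import List

    class Harm(Enum):
        NONE = 0
        RELATIVE = 1   # harm relative to an adversary or competitor
        ABSOLUTE = 2   # catastrophic harm to the enterprise, its clients or its environment

    @dataclass
    class Service:
        name: str
        # worst harm the service's behaviour could be sufficient to cause, in this
        # context, to resources for which the operating enterprise is responsible
        worst_harm: Harm = Harm.NONE
        # True if the service lies in the causal chain of provision of a
        # safety- or security-critical service
        in_causal_chain: bool = False

    def is_safety_critical(s: Service) -> bool:
        return s.worst_harm is Harm.ABSOLUTE

    def is_security_critical(s: Service) -> bool:
        # sufficient for relative harm, but never for absolute harm
        return s.worst_harm is Harm.RELATIVE

    def is_integrity_critical(s: Service) -> bool:
        return s.in_causal_chain and s.worst_harm is Harm.NONE

    @dataclass
    class System:
        services: List[Service] = field(default_factory=list)

    def system_is_safety_critical(sys_: System) -> bool:
        return any(is_safety_critical(s) for s in sys_.services)

    def system_is_security_critical(sys_: System) -> bool:
        return any(is_security_critical(s) for s in sys_.services)

    # Hypothetical usage, anticipating the ABS example of Section 6.1:
    braking = Service("provision of optimal braking", worst_harm=Harm.ABSOLUTE)
    rom_store = Service("ROM parameter storage", in_causal_chain=True)
    abs_controller = System([braking, rom_store])

    assert is_safety_critical(braking)
    assert is_integrity_critical(rom_store)
    assert system_is_safety_critical(abs_controller)
    assert not system_is_security_critical(abs_controller)

Note that, in keeping with the definitions, the Harm value attached to a service is a judgement made for a particular context, not an intrinsic property of the service; the same service may be classified differently when the system boundary or the enterprise model changes.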

6. EXAMPLES

In order to illustrate the analytical utility of our definitions we briefly reconsider the three example situations outlined in the introduction. To present a full analysis of these situations we would need to build extensive models of the enterprises and systems involved. However, for our present purposes such detail is not necessary and we simply add enough information to our earlier descriptions to enable us to present an informal analysis of each situation.

6.1 Automatic braking system

Our first example was based on a hypothetical incident involving an automatic braking system (ABS), namely:
• unauthorised modification to the contents of a ROM in a car ABS leading to a fatal accident.

In order to analyse the possibilities we need to consider system boundaries. For our discussion we will assume that the fatality involved the driver.

Taking the system as the car and its occupant, the situation is quite simple:
• the service is the provision of optimal braking;
• the incident is an example of absolute harm;
• the failure of the service (computerised ABS) was the immediate cause of the harmful incident (this is clear in our example although the engineering detail is not supplied);
• the corruption of the ROM is an integrity flaw which led to an (unspecified) safety failure of the computer system and service.

The ABS service and the control computer are safety-critical, and the ROM is integrity-critical.

We can now consider how the fault arose.

Here we assume that the modification to the ABS system arose when a 'specialist' tried to optimise the braking performance and failed.* The system is now the car plus the technician providing the service:
• the service is optimisation of braking performance;
• the harmful incident was the corruption of the ROM.

In this case we would say that there was an integrity failure of the service, perhaps caused in turn by inadequate education of the technician. Note that if the technician had been in the pay of a rival manufacturer and had deliberately introduced the fault, then this would have been a security issue. Thus our definitions allow us to discriminate between different contexts. This matters when it comes to providing appropriate countermeasures - countermeasures against a well-meaning but incompetent mechanic include training, whereas countermeasures against wicked competitors infiltrating Trojan mechanics involve much harder issues of personnel selection.

6.2 Programmed trading system

The second example was of a programmed trading system which malfunctioned with disastrous results:
• a software design fault in a programmed trading system which causes a bank to buy many times its assets, hence bankrupting the company.

Again we make the analysis in stages. Here the system being considered is the bank, including its computer system. The situation is quite simple to analyse, although we need to add some detail to clarify the example:
• the service is buying and selling shares, exploiting the difference between the prices in two different markets, and the ability of computers to react faster than humans;
• the incident is an example of absolute harm;
• the failure of the service (programmed trading) was the immediate cause of the harmful incident (this is clear in our example although details of the financial collapse were not supplied);
• the software design fault is an integrity flaw which led to an (unspecified) safety failure of the computer system and service.

Perhaps the only slightly surprising aspect of the analysis is the treatment of the bank failure as absolute harm, but the reason for this assessment should be clear from the definitions and earlier discussion. Clearly the programmed trading service and computer must be considered safety-critical (although perhaps a bank would not realise this until after the system had failed).

If we consider the development environment for the software (where we presume that the software design fault arose) then, if the fault were introduced accidentally, we would have a case of an integrity failure, as above. However, if we consider rival banks and the possibility of a maliciously introduced flaw, then clearly we have a security problem - in the software development environment.

* We are aware of organisations which carry out such 'optimisation' on engine management systems; a similar service for brakes seems somewhat less likely, but not beyond the bounds of possibility.


Thus one fault could be caused in two different ways - but this should not be surprising, and it indicates that we may need to provide protection against more than one minimal sufficient condition for causing a harmful condition to arise.

6.3 Syringe pump

The third example relates to a commonly used medical instrument, a syringe pump, which can be set up to pump drugs or anaesthetics from a syringe into a patient at a predetermined rate. In our example the fault is only hypothetical, but we assume that the suggested incident actually occurred in order to lay out the analysis. The example is:
• a syringe pump whose setting could be altered by an unauthorised (and untrained) individual to give a fatal dose of drugs.

At first sight, it might appear that this is equivalent to the case of the ABS that we have already discussed. However, there are some interesting differences which we shall draw out when we consider the system boundaries, since there is no equivalent in the ABS case of the hospital organisation as a whole being considered as the system in which the fault or failure occurred.

To start with, we consider the system as being merely the syringe pump and the patient. The basic judgements are:
• the service is the provision of a particular drug at a predetermined rate;
• the incident is an example of absolute harm;
• the failure of the service (drug provision) was the immediate cause of the harmful incident (presumably too high a rate of delivery);
• the corruption of the delivery rate setting is an integrity flaw which led to the 'over-rate' safety failure of the computer system and service.

Clearly the service and the computer control system in the pump are safety-critical.

Analysis of the source of the failure is more interesting, and we consider two possibilities. First, we assume a situation where there is free access to the hospital and there are no mechanisms for challenging individuals and preventing them from entering wards. The system is the syringe pump. We assume that the individual who mis-set the rate had a grudge against the patient (to provide the opposing objectives). Here we have:
• the service is the provision of a facility for accurately setting a predetermined drug delivery rate;
• the incident is an example of an integrity failure;
• the failure of the service (setting the delivery rate) was the immediate cause of the harmful state (too high a rate of delivery).

The service and the computer system providing the service are integrity-critical. It might be expected that this would be viewed as a security problem. However, within the context discussed there is no prospect of competitive advantage and no notion of relative harm. This is still rather a surprising result and we return to the point below.

Secondly, we consider an alternative situation where the hospital authorities are responsible for access to the premises. Here we have to consider two levels of system - the pump, as before, as the 'inner' system, and the hospital and its staff, including the syringe pump, as the encompassing system. The analysis for the inner system is the same as before and we do not repeat it here. For the encompassing system we have:
• the service is the prevention of access by individuals who have no legitimate business on the premises (more specifically, ensuring that individuals with legitimate access only visit those areas compatible with their legitimate interests);
• the incident is an example of relative harm (the death is not entailed by the failure);
• the failure of the service (admitting an unauthorised individual) was only a partial cause of the harmful incident (setting too high a rate of delivery and hence causing the fatality).

This is clearly a security, not a safety, incident (although death ensued as the causal consequence of the security failure). Interestingly, this is a good example of a situation where an organisation (the hospital) has responsibility for 'resources' which it does not own, i.e. which do not constitute part of it as an enterprise.
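To make the dependence of this judgement on the chosen system boundary explicit, the fatal-overdose incident can be evaluated against the two boundaries considered above. The sketch below is again illustrative only; the context names and harm judgements simply restate the analysis given in the text.

    # The same incident judged against two system boundaries (a sketch; names invented).
    contexts = {
        "syringe pump + patient": {
            "behaviour": "delivery rate set too high",
            "sufficient_for": "absolute harm",   # the fatal dose follows directly
        },
        "hospital as encompassing system": {
            "behaviour": "unauthorised individual admitted to the ward",
            "sufficient_for": "relative harm",   # only a partial cause of the fatality
        },
    }

    def judgement(context_name: str) -> str:
        harm = contexts[context_name]["sufficient_for"]
        return "safety incident" if harm == "absolute harm" else "security incident"

    for name, details in contexts.items():
        print(f"{name}: {details['behaviour']} -> {judgement(name)}")
    # syringe pump + patient: delivery rate set too high -> safety incident
    # hospital as encompassing system: unauthorised individual admitted to the ward -> security incident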

A few further comments are in order. The two scenarios which we analysed above are really rather different. In the first case we (implicitly) assumed that the hospital had no responsibility with regard to the setting of the delivery rate on the pump, and clearly this is not true in the second case. It is interesting to assume instead that the hospital does have this responsibility (specifically, for the appropriate setting to maximise the chance of recovery), but we now need to design the mechanisms of the hospital and to consider the options.

First we assume that the 'hospital design' is intended to ensure that only qualified staff, e.g. nurses, can have access to modify the pump settings and that everyone else is to be prevented from doing so by physical means; this is then simply the second case we analysed above. Specifically, the hospital has the authority and responsibility with regard to the integrity of the pump setting, but it delegates the authority (though probably not the responsibility) to the nursing staff for actual changes, and to the security staff (and to a lesser extent the medical staff) for prevention of unauthorised (and unqualified) modification.

However, if we assume open access the situation is quite different. Wittingly or not, the hospital has effectively given away any notion of authority (specifically, access capability) over the pump settings to anyone who enters the hospital. This is actually a security fault in the design of the hospital - relative harm is done with regard to the future patients of the hospital, or at least those who have 'enemies'. Clearly the security fault could be remedied by adding access control mechanisms to the pump - and this would perhaps have satisfied our intuitive expectation that the incorrect setting of the pump would be viewed as a security failure. We believe that the above analysis is more realistic, but it does point out the need to consider the operational context when considering this type of requirement.

Indeed, it is often the case that introducing a new computerised system into an organisation should lead to a modification of that organisation in terms of responsibilities.


REFERENCES

1. A. Burns and A. M. Lister, A framework for building dependable systems. Computer Journal 34, 173-181 (1991).
2. N. Leveson, Software safety - why, what and how? ACM Computing Surveys 18, 125-163 (1986).
3. D. Longley and M. Shain, Data and Computer Security: Dictionary of Standards, Concepts and Terms. Macmillan, New York (1987).
4. J.-C. Laprie, Dependability: a unifying concept for reliable computing and fault tolerance. In Dependability of Critical Systems, edited by T. Anderson, pp. 1-28. Blackwell, Oxford (1988).
5. G. Singh and G. Kiangi, Risk and Reliability Appraisal on Microcomputers. Chartwell-Bratt, Bromley, Kent (1987).
6. J. L. Mackie, Causes and conditions. In Causation and Conditionals, edited by E. Sosa. Clarendon Press, Oxford (1975).

7. J. E. Dobson and J. A. McDermid, Security Models and Enterprise Models. In Database Security: Status and Prospects II, edited by C. E. Landwehr, pp. 1-39. Elsevier Science, Amsterdam (1989).
8. S. J. Clarke, A. C. Coombes and J. A. McDermid, Methods for developing safe software. Proceedings of the SafetyNet '90 (Royal Aeronautical Society, London, 1990), pp. 6:1-6:8.

9. J. A. McDermid, Safety arguments, software and system reliability. Proceedings of the 2nd International Symposium on Software Reliability Engineering (IEEE, Austin, Texas, 1991).

Book Review

K. V. RUSSELL (Ed.)
Yearbook of Law Computers and Technology, Volume 5
Butterworths, Sevenoaks, 1991. £29.50. ISBN 0-406-18704-5

This international yearbook's general aim is to publish work arising from a variety of disciplines and perspectives which reflect on the law's concern with new technologies. Has it succeeded? Each issue now concentrates on a major theme. Here it is 'Technology and the Courts'. Clearly this ties up the disciplines. In the two principal sections, the theme section 'Technology and the Courts' and 'Current Developments', certainly the perspectives of the judiciary, academics and I.T. specialists are well represented. The experience of practising solicitors, who, after all, are in direct touch with the courts' consumers, would also have been relevant.

The introduction reminds us that effective use of technology enhances the quality of justice, making the system more accessible and more accurate, and providing more benefits. Yet this is not the rationale employed in marketing the new technologies. Efficiency and economy, particularly the latter, are the keywords, so that the vested interests in the politics of achieving justice can safely hold their own.

The theme section provides interesting descriptions of what is happening with the use of computers in courts, both in the UK and abroad, and glimpses of the exciting developments to look forward to, heralding the transformation of our legal institutions in what Ronald Madden epitomizes as the 'post-Gutenberg' era. The content is rooted in the practicalities of what has already been achieved.

Mark Tantam's approach is thought-provoking, not only because of the analogy between international conflict and adversarial conduct at court, but also for its treatment of ways of presenting cases effectively to jurors who are not used to concentrating for long.

In a fascinating account of promoting the LEXIS service, Kyle Bosworth points out that, contrary to conventional wisdom, lawyers do not in fact typically spend their time looking things up. This has meant the consideration of new ways of encouraging the use of computer-assisted legal information retrieval.

In contrast to the first section, there is a lack of balance in the 'Current Developments' section.

Two articles about data protection, two on education, one straightforward analysis of the EC Telecommunications Services Directive, and one article which could have been included in the theme section comprise a miscellany without a coherent focus on selected topical trends and their implications. In this section, Roy Freed's controversial article, on teaching and practising computer law effectively, stresses the nature and scope of the subject in an idiosyncratic way. His extended definition of 'computer law', going far beyond his initial quotation of the 'substantive legal aspects of the availability of computer technology', is so wide as not to be entirely meaningful. The authoritative reviews of significant books published and the discussions of major cases during the year are useful.

It is unfortunate that a book with such informed content, which is well produced, on good-quality paper, backed by a distinguished Advisory Board, should suffer from so many misspellings, particularly such a book, where knowledge of electronic remedial techniques should have been taken for granted.

This book is refreshing in bringing innovative insights and ideas to the fore, as well as in the actual reports of what is taking place in the courts.

RACHEL BURNETT
London
