

Journal of Loss Prevention in the Process Industries 26 (2013) 1285–1292

An imprecise Fault Tree Analysis for the estimation of the Rate of OCcurrence Of Failure (ROCOF)

Giuseppe Curcurù, Giacomo Maria Galante, Concetta Manuela La Fata *

Dipartimento di Ingegneria Chimica, Gestionale, Informatica, Meccanica (DICGIM), Università degli Studi di Palermo, 90128 Palermo, Italy

Article info

Article history:
Received 11 March 2013
Received in revised form 9 July 2013
Accepted 9 July 2013

Keywords:
Rate of Occurrence of Failure
Fault Tree Analysis
Initiator events
Enabler events
Dempster–Shafer Theory

* Corresponding author. Tel.: +39 09123861842. E-mail addresses: [email protected] (G. Curcurù), [email protected] (G.M. Galante), [email protected], [email protected] (C.M. La Fata).

0950-4230/$ – see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.jlp.2013.07.006

Abstract

The paper proposes an imprecise Fault Tree Analysis in order to characterize systems affected by the lack of reliability data. Differently from other research works, the paper introduces a classification of basic events into two categories, namely Initiators and Enablers. Actually, in real industrial systems some events refer to component failures or process parameter deviations from normal operating conditions (Initiators), whereas others refer to the functioning of safety barriers to be activated on demand (Enablers). As a consequence, the output parameter of interest is not the classical probability of occurrence of the top event, but its Rate of OCcurrence (ROCOF) over a stated period of time. In order to characterize the basic events, interval-valued information supplied by experts is properly aggregated and propagated to the top. To this purpose, the Dempster–Shafer Theory of evidence is proposed as a more appropriate mathematical framework than the classical probabilistic one. The proposed methodology, applied to a real industrial scenario, can be considered a helpful tool to support risk managers working in industrial plants.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Risk Analysis (RA) is defined as the process of systematic use of available information in order to identify hazards and to estimate the risk (IEC 60300-3-9, 1999). It consists of four basic steps, namely hazard analysis, consequence analysis, likelihood assessment and risk estimation (AIChE, 2000). Each step makes use of different qualitative and quantitative techniques, which collectively guide toward estimating the risk and ensuring the system safety.

With relation to the likelihood assessment, Fault Tree Analysis (FTA) is the most popular and recommended technique. It makes possible the identification and analysis of conditions and factors that lead to the occurrence of a defined undesired event (i.e. top event) significantly affecting the system performance (IEC 61025, 2006).

After identifying all the possible dangerous top events, the risk analyst needs to identify all the possible causes (i.e. basic events) whose combination can generate the undesired event. Commonly, researchers do not distinguish among different types of basic events and are mainly interested in the characterization of the top event in terms of its probability of occurrence.

curù), [email protected], [email protected]

All rights reserved.

The latter two aspects constitute the core of the present paper. Actually, in high-risk plants the role played by the identified basic events is quite different: some of them are inherent to the process control, others refer to the functioning of safety barriers. Furthermore, it seems more realistic and appropriate to characterize the undesired event in terms of its rate of occurrence over a stated period of time rather than of its probability of occurrence. Therefore, the paper proposes an imprecise FTA in which two kinds of basic events are considered and whose output parameter is the top event rate of occurrence. In particular, basic events are classified into Initiators and Enablers. The first refer to component failures or process parameter deviations with respect to the standard conditions, whereas the latter represent the failure of the safety barriers. Therefore, the top event arises as a consequence of the occurrence of some initiators together with the failure of all the safety barriers. For the two aforementioned categories of basic events, two different imprecise input parameters are suggested, namely the Rate of OCcurrence Of Failure (ROCOF) for initiators, and the average Probability of Failure on Demand (PFD) or the classical steady-state unavailability (Q) for enablers.

In the context of the present paper, FTA is called imprecise because the input parameters are realistically assumed to be unlikely exactly known, i.e. assessed by single values. Actually, the uncertainty on their true value leads to an interval-valued characterization of them and to the use of a more suitable mathematical framework than the classical probabilistic one. Helton, Johnson,


Abbreviations

RA Risk Analysis
FTA Fault Tree Analysis
ROCOF Rate of OCcurrence of Failure
PFD Probability of Failure on Demand
DST Dempster–Shafer Theory
FOD Frame Of Discernment
bpa Basic Probability Assignment
Bel Belief
Pl Plausibility
ENF Expected Number of Failures
NHPP Non-Homogenous Poisson Process
HPP Homogenous Poisson Process
LE Level Element
LIC Level Indicator and Controller
LCV Level Control Valve
SIF Safety Instrumented Function
SIS Safety Instrumented System


& Oberkampf (2004) use several simple test problems to show different approaches (probability theory, evidence theory, possibility theory and interval analysis) for the representation of the uncertainty in model prediction that derives from the uncertainty in model inputs. The authors emphasize the care that must be used in interpreting the different results arising from the proposed uncertainty representations. In particular, when uncertainty on input parameters is modeled by means of a uniform probability distribution and no further information is available to distinguish between the potential values that the parameter can assume in the interval, the supplied results are exact only in appearance. Evidence theory overcomes this appearance of exactness by leading to a display of the lowest and the highest probability that are consistent with a probabilistic interpretation of the given information. Therefore, in the present paper the Dempster–Shafer Theory of evidence is proposed as the mathematical framework to deal with these imprecise data.

2. Literature review

Performing a quantitative FTA requires the characterization of basic events by means of specific reliability input data that are often difficult to come by, especially in process plants wherein the occurrence of severe accident scenarios is rare. As a consequence, the available reliability data are generally poor, so that the knowledge on processes is partial or incomplete. Furthermore, even if available, data have many inherent uncertainty issues, such as variant failure modes, design faults, poor understanding of failure mechanisms, as well as the vagueness of the system phenomena.

In the literature, uncertainty due to natural variation or randomized behavior of physical systems is called aleatory or objective uncertainty, whereas that due to the lack of knowledge or incompleteness about the parameters characterizing the physical systems is known as epistemic or subjective uncertainty (Ferson & Ginzburg, 1996; Hoffman & Hammonds, 1994). In the RA field, the probabilistic approach has been widely used to manage both these uncertainties. However, such an approach requires known probability density functions, generated from historical data, that are commonly not available in process plants. As a consequence, meaningful attention has been paid by researchers to theoretical approaches alternative to the probabilistic one. In particular, the possibility theory and the evidence theory (Shafer, 1976), also

known as Dempster–Shafer Theory (DST), have been considered as the most promising methodologies to deal with the epistemic uncertainty. In addition, the DST suggests several combination rules to aggregate evidence coming from different sources of information (Sentz & Ferson, 2002).

Generally, DST has been mainly applied to design problems (Bae, Grandhi, & Canfield, 2003, 2004), whereas its application in the reliability field has not yet been widely researched. Furthermore, the application of DST in the reliability field has been mainly focused on the use of the so-called three-valued logic, wherein a discrete Frame Of Discernment (FOD) is defined (Guth, 1991). The structure of the FOD within the three-valued logic approach is of the type {T, F}, where T and F stand for true and false respectively. Such a kind of FOD leads to four subsets in the power set, forcing the experts to express a judgment in terms of degree of belief for an event to be true (i.e. the event occurs) or false (i.e. the event does not occur). However, in real engineering applications, it is not reasonable that experts can supply a degree of belief about an event being both true and false. To overcome such a drawback, an innovative application of DST in the FTA is proposed in (Curcurù, Galante, & La Fata, 2012a, 2012b). Firstly, a continuous FOD, coincident with the real interval [0,1], is defined. Then, in order to supply the probability of occurrence of basic events with an associated degree of belief, the involvement of a team of experts is proposed. Through the Dempster combination rule, the supplied information is aggregated and then propagated to the top. Belief and Plausibility functions are employed to estimate the uncertainty about the top event probability of occurrence.

The application of the possibility theory to the FTA reduces to the employment of the classical fuzzy set theory (Zimmermann, 1991). A methodology for a fuzzy-based computer-aided FTA tool is presented in (Ferdous, Khan, Sadiq, & Amyotte, 2009), where the robustness of the fuzzy-based approach is compared with that of the conventional probabilistic technique. In (Markowski, Mannan, & Bigoszewska, 2009) a fuzzy-based bow-tie model, consisting of the combined representation of the fault and the event trees, is presented for the accident scenario risk assessment. Also in (Ferdous, Khan, Sadiq, Amyotte, & Veitch, 2012) a generic framework for a bow-tie analysis under uncertainty is developed. It proposes the use of appropriate techniques to handle data uncertainty and introduces the interdependence of input events. The fuzzy-based and the evidence theory-based approaches are developed to address the uncertainty.

Generally, FTA is proposed in order to calculate the top event probability of occurrence. The characterization of the top event in terms of its probability of occurrence is often questionable from the perspective of the risk management team. Actually, it seems more useful to know the rate of occurrence of the top event. Such a problem has not yet been explored in depth in the literature and constitutes the main focus of the present paper.

The remainder of the paper is organized as follows. A brief introduction to the Dempster–Shafer Theory is supplied in Section 3. The reliability parameter ROCOF is presented in Section 4, whereas Section 5 deals with the proposed imprecise FTA. A case study for an industrial system is presented in Section 6 and finally conclusions are drawn in Section 7.

3. The Dempster–Shafer theory of evidence

In 1967 Arthur P. Dempster and later Glenn Shafer introduced the theory of evidence, also known as Dempster–Shafer Theory (DST), as a mathematical framework for the representation of the epistemic uncertainty. It is based on three different measures, namely the Basic Probability Assignment (bpa), the Belief measure (Bel), and the Plausibility measure (Pl). Within the DST, the Frame Of Discernment (FOD) U is defined as a set of mutually exclusive and exhaustive elements, whereas the power set, P_U, comprises all the possible subsets of U (2^|U| of them), including the empty set. Here, |U| stands for the cardinality of the FOD.

Definition 1. The bpa is the amount of knowledge associated to every subset p_i of P_U and is denoted by m(p_i). The sum of all the bpas associated to each element in P_U is equal to 1.

Each element p_i having m(p_i) > 0 is called a focal element of P_U. On bpas the following assumptions hold:

$m(p_i): P_U \to [0,1]$   (1)

$m(\emptyset) = 0$   (2)

$\sum_{p_i \subseteq P_U} m(p_i) = 1$   (3)

With relation to Eq. (2), it means that in the evidence theory no possibility is given for an uncertain parameter to be located outside of the FOD.

Definition 2. The Belief is the sum of all the bpas of the proper subsets p_k of the element of interest p_i, namely:

$\mathrm{Bel}(p_i) = \sum_{p_k \subseteq p_i} m(p_k)$   (4)

Therefore, the Belief can be considered as a lower bound for the set p_i.

Definition 3. The Plausibility is the sum of all the bpas of the subsets p_k that intersect the set of interest p_i, namely:

$\mathrm{Pl}(p_i) = \sum_{p_k \cap p_i \neq \emptyset} m(p_k)$   (5)

The Plausibility can be considered as the upper bound of the set p_i.

In order to aggregate evidence coming from different and independent sources of information, the DST offers several combination rules. Among them, the first rule defined within the framework of the evidence theory is Dempster's. Assuming the independence of two generic sources of information, the combination of the corresponding bpas on p_i can be obtained as follows:

$[m_1 \oplus m_2](p_i) = \begin{cases} 0 & \text{for } p_i = \emptyset \\ \dfrac{\sum_{p_a \cap p_b = p_i} m_1(p_a) \cdot m_2(p_b)}{1 - K} & \text{for } p_i \neq \emptyset \end{cases}$   (6)

where m_1 and m_2 are the bpas expressed by the two sources with relation to the events p_a and p_b respectively. The parameter (1 − K) in Eq. (6) is a normalization factor that allows axiom (3) to be respected. The parameter K represents the amount of conflicting evidence between the two sources and is calculated as follows:

$K = \sum_{p_a \cap p_b = \emptyset} m_1(p_a) \cdot m_2(p_b)$   (7)

Dempster's rule verifies some interesting properties and its use has been justified theoretically by several authors (Dubois & Prade, 1986; Klawonn & Schwecke, 1992; Voorbraak, 1991). However, it ignores contradicting evidence among sources by means of the normalization factor and exhibits numerical instability if the conflict among sources is large. As a consequence, Yager (1987) proposed a combination rule in which all the conflicting mass is assigned to the ignorance rather than to the normalization factor. Thus, for a high-conflict case (i.e. a higher value of the parameter K), the Yager combination rule gives more stable and robust results than the Dempster one.
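As a concrete illustration of Eqs. (6) and (7), the Dempster combination of interval-valued expert judgments (the elicitation format adopted later in this paper) can be sketched as follows. This is a minimal sketch, not the authors' implementation: the expert intervals and masses are hypothetical, and intervals are represented as (lower, upper) tuples.

```python
from itertools import product

def intersect(a, b):
    """Intersection of two closed intervals, or None if they are disjoint."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def dempster(m1, m2):
    """Dempster's rule (Eq. (6)) for bpas given as {interval: mass} dicts.

    Returns the combined bpa and the conflict K (Eq. (7))."""
    combined, K = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        c = intersect(a, b)
        if c is None:
            K += ma * mb                      # conflicting mass, Eq. (7)
        else:
            combined[c] = combined.get(c, 0.0) + ma * mb
    if K >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {c: m / (1.0 - K) for c, m in combined.items()}, K

# Hypothetical judgments on a PFD over the FOD [0, 1]; each expert puts
# mass m on an interval and the ignorance mass (1 - m) on the whole FOD.
FOD = (0.0, 1.0)
expert1 = {(0.01, 0.05): 0.8, FOD: 0.2}
expert2 = {(0.03, 0.10): 0.7, FOD: 0.3}
m12, K = dempster(expert1, expert2)
# Here K = 0 because all the focal intervals overlap; the combined focal
# elements are (0.03, 0.05), (0.01, 0.05), (0.03, 0.10) and the FOD itself.
```

Under Yager's variant, the conflicting mass K would instead be added to the mass of the whole FOD (the ignorance) rather than used for renormalization.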

4. Rate of occurrence of failure – ROCOF

In a quantitative FTA, different reliability parameters can characterize the top event. Actually, for non-repairable systems, the system unreliability is the parameter of interest. Instead, for repairable systems, risk analysts can be focused on the estimation of the Expected Number of Failures over a time horizon (Rausand & Høyland, 2004).

Consider a time interval [0,t]. N(t) represents the number of failures over this time interval. If s and t are two different time instants with s < t, the difference [N(t) − N(s)] indicates the number of failures occurring in the interval [s,t].

By indicating with ENF(t) the Expected Number of Failures at t, i.e. E[N(t)] = ENF(t), the unconditional intensity of failure ω(t) is defined as follows:

$\omega(t) = \dfrac{d[ENF(t)]}{dt} = \lim_{\Delta t \to 0} \dfrac{E[N(t+\Delta t) - N(t)]}{\Delta t}$   (8)

Since the time interval Δt is supposed to be small, it is possible to approximate the previous equation to:

$\omega(t) \approx \dfrac{E[N(t+\Delta t) - N(t)]}{\Delta t}$   (9)

From Eq. (9), ω(t) can be interpreted as the ratio between the mean number of failures in the interval Δt and the interval itself. Considering the previous assumption on Δt, the probability of the number of failures being greater than one can be approximately set to 0. Therefore, E[N(t + Δt) − N(t)] can be 0 or 1, and it can be interpreted as the probability of occurrence of a failure event in the interval Δt. Then, the following equation holds:

$\omega(t) \cdot \Delta t = \Pr[t \leq T \leq t + \Delta t]$   (10)

where T is the stochastic variable time to failure. Therefore, the unconditional intensity of failure ω(t) of a generic repairable component, also known as Rate of OCcurrence Of Failure (ROCOF), multiplied by the interval Δt, represents the probability of a failure occurrence in that interval.

On the contrary, the conditional failure rate λ(t) is representative of the component reliability behavior and is defined as follows:

$\lambda(t) \cdot \Delta t = \Pr[t \leq T \leq t + \Delta t \mid F]$   (11)

where F is the event “the component is working at time t”. By indicating with A the event “the component fails within the interval [t, t + Δt]”, Eq. (11) can be written as follows:

$\lambda(t) \cdot \Delta t = \Pr[t \leq T \leq t + \Delta t \mid F] = \Pr[A \mid F]$   (12)

Therefore, Eq. (10) turns into the following one:

$\omega(t) \cdot \Delta t = \Pr[t \leq T \leq t + \Delta t] = \Pr(A)$   (13)

Furthermore, it is possible to state that:

$A = (A \cap F) \cup (A \cap \bar{F})$   (14)

However, the probability of the event $(A \cap \bar{F})$ is negligible because it would imply the occurrence, in the small interval Δt, of a repair followed by a failure. As a consequence, Eq. (14) can be written as follows:


Fig. 1. Example of a fault tree with initiator and enabler events (top event: “Explosion”; initiator event: “Fire starts” (I); enabler event: “Protection system unavailable” (E)).


$A = (A \cap F)$   (15)

Therefore, from Eq. (15) it follows that Eq. (12) can be written as:

$\lambda(t) \cdot \Delta t = \dfrac{\Pr(A \cap F)}{\Pr(F)} = \dfrac{\Pr(A)}{\Pr(F)} = \dfrac{\omega(t) \cdot \Delta t}{A(t)}$   (16)

where A(t) is the component availability at time t. With relation to a non-repairable component, it is functioning at time t on condition that it did not fail during the interval [0,t]. As a consequence, the availability A(t) in Eq. (16) turns into the component reliability R(t) and the unconditional intensity of failure reduces to f(t) (i.e. the probability density function of the variable T). Equation (16) allows the following relation between λ(t) and ω(t) to be formulated:

$\omega(t) = \lambda(t) \cdot A(t)$   (17)

The latter equation is time-dependent. However, the knowledge of the instantaneous value of the conditional failure rate λ(t) for a renewal process is almost impossible, as is that of the time-dependent availability A(t). As a consequence, Eq. (17) has a practical utility under the assumption of a constant conditional failure rate and a steady-state value of A(t).

Therefore, Eq. (17) turns into the following one:

$\omega = \lambda \cdot A$   (18)

In the present paper, as commonly assumed in real applications, the conditional failure rate is considered constant.
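Under these steady-state assumptions, Eq. (18) can be evaluated directly. A minimal sketch follows; the failure rate and repair time are illustrative values assumed here, not data from the paper:

```python
lam = 2.0e-4               # constant conditional failure rate λ [1/h] (assumed)
MTTF = 1.0 / lam           # mean time to failure [h]
MTTR = 24.0                # mean time to repair [h] (assumed)

# Steady-state availability of a repairable component.
A = MTTF / (MTTF + MTTR)

# Eq. (18): ROCOF ω = λ·A.
omega = lam * A

# For a highly available component, ω is only slightly below λ.
print(round(A, 4), omega)
```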

In the literature, different methodologies are proposed for the estimation of the ROCOF. For instance, in (Tan, Jiang, & Bai, 2008) the failure of complex repairable systems, generally consisting in a Non-Homogenous Poisson Process (NHPP), is modeled as a Homogenous Poisson Process (HPP), which represents a simplified approach for the reliability analysis. As a consequence of this assumption, times between successive failures are independent and identically distributed exponential random variables. Then, the ROCOF is constant and can be easily estimated. In (Phillips, 2001), a NHPP is used to model the occurrence of failure events over time. In this case, the intensity function is not constant. The paper presents a parametric estimate of the expected cumulative intensity function and a non-parametric estimate of the expected ROCOF. In both cases, confidence regions for the ROCOF are proposed. The focus of the present paper is not the ROCOF estimation by historical data. Actually, to achieve this purpose, reliability data should be available to the analysts. This is unfortunately the main drawback of high risk process plants, where rare failure events imply a poor availability of data. Therefore, in the present paper the ROCOF is supposed to be supplied by a team of experts. The imprecision of their available knowledge is also taken into account.

5. Imprecise FTA

As previously mentioned, the quantification of risk requires the implementation of a FTA in order to characterize the top events previously determined in the hazard analysis step. For the purpose of this work, the distinction of basic events into the two aforementioned categories (initiators and enablers) is essential. For instance, Fig. 1 shows the fault tree related to the top event “Explosion”. In such a case, the event “Fire starts” is classified as initiator, whereas the event “Protection system unavailable” is the enabler. It is obvious that the top event takes place when a fire starts and the protection system is unavailable.

5.1. Input data for initiators and enablers

Information related to basic events is here supposed to be supplied by a team of experts. In particular, each expert reports to the analyst an interval in which he/she believes the parameter of interest could lie. The imprecise information for the initiator events consists in a real positive interval in which the ROCOF lies, with an associated belief mass. Otherwise, for the enabler events the interval refers to the average PFD or the steady-state unavailability Q, with the corresponding belief mass. The two aforementioned parameters can be defined as follows.

The time-dependent PFD is defined as the probability that an undetected failure has occurred at or before the time t in which the intervention of the component is needed. In order to decrease the PFD, components are periodically tested at regular time intervals of length τ. If at the generic inspection instant nτ the component is found in a failed state, then it is replaced with another component assumed “as good as new”. In most applications it is sufficient to refer to the average value over a period of length τ (Rausand & Høyland, 2004).

The instantaneous unavailability Q(t) is defined as the probability that a repairable component is in a failed state at time t. In real applications, this parameter is approximated by the steady-state unavailability Q (Zio, 2007).

In order to acquire all the input information in the interval form with the related belief mass, different scenarios can be hypothesized. For each basic event, it is supposed that the analyst suggests a belief mass (i.e. m) and receives from the experts the corresponding intervals. The two kinds of basic events do not share the same FOD. Actually, the ROCOF is defined in [0, +∞), whereas the PFD and Q are defined in [0,1]. Therefore, these two intervals represent the two frames of discernment to be considered for the two parameters. According to the evidence theory, the quantity (1 − m) indicates the ignorance mass associated to the corresponding FOD.

The collection of the interval-valued information about each basic event is only the first step of the whole procedure described in Fig. 2.

Actually, for each basic event, the information coming from the different sources needs to be aggregated. In order to perform this step, the classical Dempster aggregation rule (6) is applied. However, the aggregation process generates more than one aggregated interval for each basic event. As a consequence, all their possible combinations must be considered during the propagation phase, as will be detailed in the next section.


Fig. 2. Flow-chart of the proposed procedure: information acquisition (ω and PFD/Q), judgments aggregation, propagation to the top event, and calculation of Belief and Plausibility.


5.2. Propagation to the top event

Within the classical FTA, the method commonly used to determine the top event probability of occurrence (P_top) is the minimal cut-set method. A minimal cut-set in a fault tree is the minimal set of independent basic events whose occurrence implies the top event. Therefore, once the minimal cut-sets have been determined, P_top can be calculated as follows (Modarres, Kaminsky, & Krivtsov, 2010):

$P_{top} = P(C_1 \cup C_2 \cup \ldots \cup C_m)$   (19)

where C_k is the kth minimal cut-set within the total number of minimal cut-sets m.

The occurrence probability, Q_Ck(t), and the unconditional rate of occurrence, ω_Ck(t), of a generic minimal cut-set C_k are determined by expressions (20) and (21):

$Q_{C_k}(t) = \prod_{i=1}^{n} Q_i(t)$   (20)

$\omega_{C_k}(t) = \sum_{j=1}^{n} \omega_j(t) \cdot \prod_{i=1,\, i \neq j}^{n} Q_i(t)$   (21)

where n is the total number of events in the minimal cut-set C_k, Q_i(t) is the unavailability or the Probability of Failure on Demand of the ith event in the minimal cut-set C_k, and ω_j(t) is the rate of occurrence of the jth event in the minimal cut-set C_k. Therefore, the top event rate of occurrence ω_top(t) is calculated by the following equation:

$\omega_{top}(t) = \sum_{k=1}^{m} \omega_{C_k}(t) \cdot \prod_{z=1,\, z \neq k}^{m} \left(1 - Q_{C_z}(t)\right)$   (22)

The latter allows an exact computation of ω_top(t) under the hypothesis of independence of the minimal cut-sets. However, in

Fig. 3. Case study process diagram (components: vessel V with gas inlet, gas and fluid outlets, compressor K with engine M, level element LE 1125, level indicator and controller LIC 1125, level control valve LCV 1125, high level SIF 1130, level alarm high LAH 1158, PSV, rupture disc RD, flare line).

the context of high risk process plants, the terms Q_Cz(t) in (22) are negligible because the Q_i(t) are very small. In any case, this approximation is widely used by risk analysts because it implies an overestimation of the parameter ω_top(t). Therefore, Eq. (22) turns into the following one:

$\omega_{top}(t) \approx \sum_{k=1}^{m} \omega_{C_k}(t)$   (23)

Eqs. (20)–(23), expressed as a function of time, can be written referring to constant values of the involved parameters, as previously specified.
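With crisp (single-valued) constant parameters, Eqs. (21) and (23) reduce to elementary products and sums. A minimal sketch follows, with a hypothetical two-cut-set tree and assumed ω and Q values:

```python
def cut_set_rocof(omegas, Qs):
    """Eq. (21): ω_Ck = Σ_j ω_j · Π_{i≠j} Q_i over the events of one
    minimal cut-set; omegas[j] is the ROCOF of the j-th event and
    Qs[j] its unavailability/PFD."""
    total = 0.0
    for j, wj in enumerate(omegas):
        prod = 1.0
        for i, qi in enumerate(Qs):
            if i != j:
                prod *= qi
        total += wj * prod
    return total

def top_rocof(cut_sets):
    """Eq. (23): rare-event approximation ω_top ≈ Σ_k ω_Ck."""
    return sum(cut_set_rocof(om, q) for om, q in cut_sets)

# Hypothetical tree with two minimal cut-sets C1 = {A, B}, C2 = {A, C}
# (ω in 1/h, Q dimensionless):
C1 = ([1.0e-4, 5.0e-5], [1.0e-2, 2.0e-2])
C2 = ([1.0e-4, 1.0e-5], [1.0e-2, 5.0e-2])
w_top = top_rocof([C1, C2])   # 2.5e-6 + 5.1e-6 = 7.6e-6 per hour
```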

Since two kinds of basic events are here supposed to take place, the order of event occurrence needs to be taken into account. Therefore, in the application of Eq. (21) one must consider that not all sequences are allowed. Suppose the minimal cut-set of a generic system comprises the basic events A, B, C and D. If all sequences are admitted, namely no distinction between initiators and enablers is considered, the minimal cut-set rate of occurrence is calculated by Eq. (21), which turns into the following expression:

$\omega_{cut} = \omega_A Q_B Q_C Q_D + \omega_B Q_A Q_C Q_D + \omega_C Q_A Q_B Q_D + \omega_D Q_A Q_B Q_C$   (24)

On the contrary, and this is the case, if A is an initiator event whereas B, C and D are enablers, the only allowed sequence is $A \rightarrow (B \cap C \cap D)$, where A precedes the enabler events B, C and D. As a consequence, (24) turns into:

ω_cut = ω_A·Q_B·Q_C·Q_D    (25)

Therefore, once all ω_cut have been calculated taking into account the admitted sequences, Eq. (23) can be applied to determine ω_top.
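For point-valued parameters, the computation in Eqs. (21), (23) and (25) can be sketched as follows. This is a minimal illustration with hypothetical event names and values, not the authors' implementation: each initiator j contributes its rate ω_j times the unavailabilities of all the other events of the cut-set.

```python
from math import prod

def cutset_rocof(omega, Q, initiators):
    """ROCOF of one minimal cut-set, Eq. (21) restricted to the admitted
    sequences: an initiator j occurs (rate omega[j]) while every other
    event of the cut-set is already failed (unavailability/PFD Q[i])."""
    return sum(omega[j] * prod(q for i, q in Q.items() if i != j)
               for j in initiators)

def top_rocof(cutset_rocofs):
    """Rare-event approximation of Eq. (23): sum of the cut-set ROCOFs."""
    return sum(cutset_rocofs)

# Eq. (25): a single initiator A with enablers B, C, D (hypothetical values)
omega = {"A": 1.0e-2}                          # occurrences per year
Q = {"A": 0.0, "B": 0.1, "C": 0.2, "D": 0.05}  # Q of the initiator itself is
                                               # never used in its own term
w_cut = cutset_rocof(omega, Q, ["A"])          # = 1e-2 * 0.1 * 0.2 * 0.05
```

With several initiators in the same cut-set, the same function reproduces the multi-term structure of Eq. (24), one term per admitted triggering event.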

In this particular context, Eqs. (20)–(23) previously introduced involve the interval-valued parameters arising from the aggregation phase. The computation of such equations is based on the ordinary arithmetic operations among intervals. Considering that generally different aggregated intervals can be associated to each basic event, one needs to consider all their possible combinations leading to the top. Then, for each combination z, an interval value of ω_top (i.e. I_ωtop,z) is computed with the related bpa. The latter is calculated by means of the Cartesian product of the masses

Table 1. List of acronyms.

Acronym   Component
V         Vessel
K         Compressor
M         Compressor engine
LE        Level element
LIC       Level indicator and controller
LCV       Level control valve
SIF       High level safety instrumented function
LAH       Level alarm high
PSV       Pressure safety valve
RD        Rupture disc


Fig. 4. Fault tree. Top event TOP1 "Liquid to the compressor K", with GATE1 "Process control system fails" (BE1 "LE 1125 fails: no signal to the LIC 1125", I; BE2 "LIC 1125 fails: no signal to the LCV 1125", I; BE3 "LCV 1125 fails to open", I) and GATE2 "Process safety system fails" (GATE3 "Operator fails": BE4 "Alarm LAH 1158 fails", E, and BE5 "Operator does not operate on alarm", E; BE6 "High level SIF 1130 fails", E).

Table 3. Basic events input data.

Basic event   Basic event type   Expert   Lower bound (LB)   Upper bound (UB)   m

G. Curcurù et al. / Journal of Loss Prevention in the Process Industries 26 (2013) 1285–1292

characterizing those intervals involved in the cut-sets (Ferson, Cooper, & Myers, 2000).
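The interval propagation with Cartesian-product masses described above can be sketched as follows. This is a simplified illustration, assuming (as holds for the sum-of-products expressions (21)–(23) with non-negative terms) that the top-event function is non-decreasing in each argument, so its interval image is obtained by evaluating it at the lower and at the upper bounds; names and values are hypothetical.

```python
from itertools import product
from math import prod

def propagate(focal_sets, top_fn):
    """Propagate interval-valued parameters to the top event.

    focal_sets: one list per basic-event parameter, each item a pair
                ((lb, ub), mass) coming from the aggregation phase.
    top_fn:     monotone non-decreasing function mapping one point value per
                parameter to omega_top; the interval image is then
                [top_fn(lower bounds), top_fn(upper bounds)].
    Returns the focal intervals I_omega_top,z with their bpa, one per
    combination z of input intervals (Cartesian product of the masses)."""
    results = []
    for combo in product(*focal_sets):
        lo = top_fn([interval[0] for interval, _ in combo])
        hi = top_fn([interval[1] for interval, _ in combo])
        bpa = prod(mass for _, mass in combo)
        results.append(((lo, hi), bpa))
    return results

# toy single cut-set omega_top = omega_A * Q_B, two focal intervals on A
focal_A = [((1.0e-2, 2.0e-2), 0.9), ((0.0, 5.0e-2), 0.1)]
focal_B = [((0.1, 0.2), 1.0)]
out = propagate([focal_A, focal_B], lambda v: v[0] * v[1])
# the masses of the output intervals still sum to 1
```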

However, the computed I_ωtop,z can be effectively used to determine the belief and the plausibility of the event ω_top ≤ ω_th, where ω_th is a target threshold desired inside the company or suggested by the standards. In order to calculate these two belief measures, the following steps have to be performed:

1) order the lower and upper bounds of all I_ωtop,z in increasing fashion;

2) computation of Belief by adding the belief masses of all those intervals I_ωtop,z whose upper bounds are lower than or equal to the threshold value ω_th, i.e. intervals completely included in [0, ω_th]:

Bel([0, ω_th]) = Σ_{I_ωtop,z ⊆ [0, ω_th]} m(I_ωtop,z)    (26)

Table 2. Minimal cut-sets.

MCS   Basic events
1     BE1, BE4, BE6
2     BE1, BE5, BE6
3     BE2, BE4, BE6
4     BE2, BE5, BE6
5     BE3, BE4, BE6
6     BE3, BE5, BE6

3) computation of Plausibility by adding the belief masses of those intervals I_ωtop,z that have a non-empty intersection with the interval [0, ω_th]:

Pl([0, ω_th]) = Σ_{I_ωtop,z ∩ [0, ω_th] ≠ ∅} m(I_ωtop,z)    (27)
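Steps 2) and 3) reduce to two simple filters over the focal intervals of ω_top. A minimal sketch, assuming all interval bounds are non-negative (so a non-empty intersection with [0, ω_th] reduces to lb ≤ ω_th); the focal intervals below are hypothetical:

```python
def belief_plausibility(focal, threshold):
    """Bel and Pl of the event omega_top <= threshold, Eqs. (26)-(27).
    focal: list of ((lb, ub), mass) pairs for omega_top."""
    # Eq. (26): intervals completely included in [0, threshold]
    bel = sum(m for (lb, ub), m in focal if ub <= threshold)
    # Eq. (27): intervals intersecting [0, threshold]; with lb >= 0
    # this reduces to lb <= threshold
    pl = sum(m for (lb, ub), m in focal if lb <= threshold)
    return bel, pl

# hypothetical focal intervals of omega_top with their bpa
focal = [((1.0e-4, 8.0e-4), 0.95), ((5.0e-4, 2.0e-3), 0.04),
         ((1.5e-3, 3.0e-3), 0.01)]
bel, pl = belief_plausibility(focal, 1.0e-3)   # bel = 0.95, pl ≈ 0.99
```

By construction Bel ≤ Pl, and the pair [Bel, Pl] is the interval estimate of the probability that the threshold is not overcome.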

6. Case study

The methodology described above has been applied to calculate the ROCOF of the top event "Liquid to the compressor" referring to a

Basic event   Type   Expert     LB         UB         m
BE1           I      Expert 1   3.00E-02   3.50E-02   0.90
BE1           I      Expert 2   3.00E-02   4.00E-02   0.90
BE2           I      Expert 1   1.00E-01   1.50E-01   0.90
BE2           I      Expert 2   1.00E-01   2.00E-01   0.90
BE3           I      Expert 1   1.50E-01   2.00E-01   0.90
BE3           I      Expert 2   1.00E-01   2.00E-01   0.90
BE4           E      Expert 1   2.00E-01   3.00E-01   0.90
BE4           E      Expert 2   3.50E-01   4.00E-01   0.90
BE5           E      Expert 1   1.00E-03   2.50E-03   0.90
BE5           E      Expert 2   2.00E-03   3.00E-03   0.90
BE6           E      Expert 1   1.50E-03   2.50E-03   0.90
BE6           E      Expert 2   1.00E-03   2.00E-03   0.90


Table 4. Aggregated opinions of initiator events.

BE1: AO1 = [3.000E-02, 3.500E-02], m = 9.000E-01; AO2 = [3.000E-02, 4.000E-02], m = 9.000E-02; AO3 = [0.000E+00, +∞), m = 1.000E-02
BE2: AO1 = [1.000E-01, 1.500E-01], m = 9.000E-01; AO2 = [1.000E-01, 2.000E-01], m = 9.000E-02; AO3 = [0.000E+00, +∞), m = 1.000E-02
BE3: AO1 = [1.500E-01, 2.000E-01], m = 9.000E-01; AO2 = [1.000E-01, 2.000E-01], m = 9.000E-02; AO3 = [0.000E+00, +∞), m = 1.000E-02

Table 5. Aggregated opinions of enabler events.

BE4: AO1 = [2.0E-01, 3.0E-01], m = 4.737E-01; AO2 = [3.5E-01, 4.0E-01], m = 4.737E-01; AO3 = [0.0E+00, 1.0E+00], m = 5.263E-02
BE5: AO1 = [2.0E-03, 2.5E-03], m = 8.1E-01; AO2 = [1.0E-03, 2.5E-03], m = 9.0E-02; AO3 = [2.0E-03, 3.0E-03], m = 9.0E-02; AO4 = [0.0E+00, 1.0E+00], m = 1.0E-02
BE6: AO1 = [1.5E-03, 2.0E-03], m = 8.1E-01; AO2 = [1.5E-03, 2.5E-03], m = 9.0E-02; AO3 = [1.0E-03, 2.0E-03], m = 9.0E-02; AO4 = [0.0E+00, 1.0E+00], m = 1.0E-02


period of time of one year. The case study process diagram is reported in Fig. 3. Acronyms used in such a diagram are synthesized in Table 1. The gas to be compressed is separated from the liquid in the vessel V. Then, the separated gas is led to the compressor K. The process is controlled by a process control system implemented by means of the loop 1125. The latter comprises three components, namely the level sensor (LE), the level indicator and controller (LIC), and the level control valve (LCV). If such a loop fails (initiator event), then an independent process safety system should function (enabler event) to prevent the top event. In particular, the process safety system consists of the two following protection layers:

1. the operator that is asked to intervene when the high level alarm is activated (LAH 1158);

2. the high level Safety Instrumented Function (SIF) (IEC 61508, 1999; IEC 61511, 2003) that stops the compressor engine. Such a SIF is supposed to be performed by a Safety Instrumented System (SIS) that is actually not illustrated in Fig. 3.

The top event fault tree is reported in Fig. 4, where BE1, BE2 and BE3 are the initiators (I) while BE4, BE5 and BE6 are the enablers (E).

By applying the minimal cut-set method, the following minimal cut-sets (MCS) are found (Table 2).

It is supposed that two experts supply the interval-valued input data for the parameters ω and PFD or Q with a belief mass, suggested by the analyst, here fixed to 0.9. The input data are here simulated so that they match with those reported in databases of similar industrial contexts, and are summarized in Table 3. Table 4

Fig. 5. Belief and plausibility of the event ω_top ≤ ω_th.

reports the aggregated intervals for the initiators, whereas in Table 5 the aggregated intervals for the enablers are shown. In both tables, the related belief masses are reported.

In order to characterize the top event, all combinations have been considered. They are 1296 and have been computed and propagated to the top event through a properly written Visual Basic macro. By applying the procedure described in Section 5, Belief (BEL) and Plausibility (PL) measures of the event ω_top ≤ ω_th are computed and shown in Fig. 5.
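The count of 1296 combinations follows directly from the number of aggregated intervals per basic event in Tables 4 and 5 (three each for BE1, BE2, BE3 and BE4; four each for BE5 and BE6). A quick check of this count:

```python
from itertools import product

# aggregated opinions available per basic event (Tables 4 and 5)
n_intervals = {"BE1": 3, "BE2": 3, "BE3": 3,   # initiators (Table 4)
               "BE4": 3, "BE5": 4, "BE6": 4}   # enablers (Table 5)

# every combination picks one aggregated interval per basic event
combinations = list(product(*(range(k) for k in n_intervals.values())))
n = len(combinations)   # 3 * 3 * 3 * 3 * 4 * 4 = 1296
```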

For instance, with a threshold value ω_th set to 1E-03, the belief interval of the event ω_top ≤ 1E-03 is [0.95, 1]. On the contrary, the analyst could be interested in fixing a belief value and in determining the corresponding threshold ω_th.

7. Conclusions

The paper proposes an imprecise Fault Tree Analysis (FTA) whose output parameter is the Rate of OCcurrence of Failure (ROCOF) of a top event. The proposed methodology aims at dealing with systems characterized by a poor availability of reliability data, as happens in high risk plants. Differently from the classical approach focused on the evaluation of the probability of occurrence of the top event, the here proposed FTA implies the differentiation of basic events into two categories, namely initiators and enablers. The first category refers to the component failures or process parameter deviations with respect to the normal operating conditions, whereas the second one represents the failure of the safety barriers to be activated in order to avoid the occurrence of the top event.

The imprecise characterization of the previous events consists in the application of a mathematical framework (i.e. the Dempster–Shafer Theory) based on imprecise probabilities. The consideration of two categories of basic events implies different kinds of input reliability data. Initiators have been characterized by the unconditional intensity of failure (ω), whereas enablers by the average probability of failure on demand or the steady-state unavailability. The different phases constituting the proposed procedure (information acquisition, judgment aggregation and propagation to the top event) have been applied to a real industrial scenario.

The methodology can be considered a very helpful tool for risk analysts. Actually, it permits characterizing the top event by a more informative parameter (the ROCOF) than its probability of occurrence. Furthermore, when reliability data are poor, it allows the characterization of the basic events by the only available source of information, that is, expert judgments, properly aggregated. In such a way, the uncertainty is directly allocated to the original



sources and properly propagated to the top by an easy resolution of the fault tree. The final result is an interval estimation of the belief that a fixed threshold value of the ROCOF is not overcome.

References

American Institute of Chemical Engineers (AIChE). (2000). Guidelines for chemical process quantitative risk analysis (2nd ed.). New York: Center for Chemical Process Safety of the AIChE.

Bae, H. R., Grandhi, R. V., & Canfield, R. A. (2003). Uncertainty quantification of structural response using evidence theory. American Institute of Aeronautics and Astronautics Journal, 41(10), 2062–2068.

Bae, H. R., Grandhi, R. V., & Canfield, R. A. (2004). An approximation approach for uncertainty quantification using evidence theory. Reliability Engineering and System Safety, 86, 215–225.

Curcurù, G., Galante, G. M., & La Fata, C. M. (2012a). Epistemic uncertainty in fault tree analysis approached by the evidence theory. Journal of Loss Prevention in the Process Industries, 25, 667–676.

Curcurù, G., Galante, G. M., & La Fata, C. M. (2012b). A bottom-up procedure to calculate the top event probability in presence of epistemic uncertainty. In Proceedings of the PSAM11 & ESREL 2012, Helsinki, Finland.

Dubois, D., & Prade, H. (1986). On the unicity of Dempster rule of combination. International Journal of Intelligent Systems, 1, 133–142.

Ferdous, R., Khan, F., Sadiq, R., & Amyotte, P. (2009). Methodology for computer aided fuzzy fault tree analysis. Process Safety and Environmental Protection, 87, 217–226.

Ferdous, R., Khan, F., Sadiq, R., Amyotte, P., & Veitch, B. (2012). Handling and updating uncertain information in bow-tie analysis. Journal of Loss Prevention in the Process Industries, 25, 8–19.

Ferson, S., Cooper, J. A., & Myers, D. (2000). Beyond point estimates: Risk assessment using interval, fuzzy and probabilistic arithmetic. In Workshop notes at Society of Risk Analysis annual meeting.

Ferson, S., & Ginzburg, L. R. (1996). Different methods are needed to propagate ignorance and variability. Reliability Engineering and System Safety, 54, 133–144.

Guth, M. A. S. (1991). A probabilistic foundation for vagueness & imprecision in fault-tree analysis. IEEE Transactions on Reliability, 40(5), 563–571.

Helton, J. C., Johnson, J. D., & Oberkampf, W. L. (2004). An exploration of alternative approaches to the representation of uncertainty in model predictions. Reliability Engineering and System Safety, 85, 39–71.

Hoffman, F. O., & Hammonds, J. S. (1994). Propagation of uncertainty in risk assessments: The need to distinguish between uncertainty due to lack of knowledge and uncertainty due to variability. Risk Analysis, 14(5), 707–712.

IEC 61025. (2006). Fault tree analysis (2nd ed.).

IEC 61508. (1999). Functional safety of electrical/electronic/programmable electronic safety-related systems. Geneva: IEC (International Electrotechnical Commission).

IEC 61511. (2003). Functional safety – Safety instrumented systems for the process industry sector. Geneva: IEC (International Electrotechnical Commission).

ISO/IEC Guide 51 IEC 60300-3-9. (1999). Risk analysis of technological systems.

Klawonn, F., & Schwecke, E. (1992). On the axiomatic justification of Dempster's rule of combination. International Journal of Intelligent Systems, 7, 469–478.

Markowski, A. S., Mannan, M. S., & Bigoszewska, A. (2009). Fuzzy logic for process safety analysis. Journal of Loss Prevention in the Process Industries, 22, 695–702.

Modarres, M., Kaminsky, M., & Krivtsov, V. (2010). Reliability engineering and risk analysis: A practical guide (2nd ed.). CRC Press, Taylor and Francis Group.

Phillips, M. J. (2001). Estimation of the expected ROCOF of a repairable system with bootstrap confidence region. Quality and Reliability Engineering International, 17, 159–162.

Rausand, M., & Høyland, A. (2004). System reliability theory: Models, statistical methods, and applications (2nd ed.). New Jersey: John Wiley & Sons, Inc.

Sentz, K., & Ferson, S. (2002). Combination of evidence in Dempster–Shafer theory. Technical Report SAND2002-0835. Albuquerque, New Mexico: Sandia National Laboratories.

Shafer, G. (1976). A mathematical theory of evidence. Princeton: Princeton University Press.

Tan, F. R., Jiang, Z. B., & Bai, T. S. (2008). Reliability analysis of repairable systems using stochastic point processes. Journal of Shanghai Jiaotong University (Science), 13(3), 366–369.

Voorbraak, F. (1991). On the justification of Dempster's rule of combinations. Artificial Intelligence, 48, 171–197.

Yager, R. R. (1987). On the Dempster–Shafer framework and new combination rules. Information Sciences, 41, 93–137.

Zimmermann, H. J. (1991). Fuzzy set theory and its application (2nd ed.). Kluwer Academic Publishers.

Zio, E. (2007). An introduction to the basics of reliability and risk analysis (Series on quality, reliability and engineering statistics). Singapore: World Scientific Publishing Co. Pte. Ltd.