Quantifying human reliability in risk assessments

Jamie Henderson and David Embrey, from Human Reliability Associates, provide an overview of the new EI Technical human factors publication Guidance on quantified human reliability analysis (QHRA).

Petroleum Review, November 2012


Following the Buncefield accident in 2005, operators of bulk petroleum storage facilities in the UK were requested to provide greater assurance of their overfill protection systems by risk assessing them using the layers of protection analysis (LOPA) technique. A subsequent review of LOPAs1 indicated a recurring problem: human error probabilities (HEPs) were being used without adequate consideration of the conditions that influence these probabilities in the scenario under consideration. The error probability will clearly be affected by a number of factors (eg time pressure, quality of procedures, equipment design and operating culture) that are likely to be specific to the situation being evaluated. Using an HEP from a database or table without considering the task context can therefore lead to inaccurate results in applications such as quantified risk assessment (QRA), LOPA and safety integrity level (SIL) determination studies.
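To see why this matters in a LOPA, consider a minimal sketch of the frequency calculation. The initiating frequency and PFD values below are hypothetical, not taken from the guidance; the point is only that the choice of HEP for the operator-response layer carries straight through to the assessed risk:

```python
# Minimal LOPA-style frequency sketch (hypothetical values, for
# illustration only; not taken from the EI guidance).

def mitigated_frequency(initiating_freq_per_yr, layer_pfds):
    """Initiating event frequency x product of layer PFDs."""
    freq = initiating_freq_per_yr
    for pfd in layer_pfds:
        freq *= pfd
    return freq

# Tank overfill scenario: the operator's response to a high-level alarm
# is one protection layer, so its PFD is a human error probability (HEP).
initiating_freq = 0.1           # overfill demands per year (assumed)
instrumented_pfd = 0.01         # independent high-high trip (assumed)

for hep in (0.001, 0.1):        # generic database HEP vs context-adjusted HEP
    f = mitigated_frequency(initiating_freq, [hep, instrumented_pfd])
    print(f"HEP = {hep}: mitigated frequency = {f:.1e} per year")
```

The two candidate HEPs differ by a factor of 100, and so does the assessed frequency; this is why an unjustified HEP can invalidate an otherwise careful study.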

Human reliability analysis (HRA) techniques are available to support the development of HEPs and, in some cases, their integration into QRAs. However, without a basic understanding of human factors issues, and of the strengths, weaknesses and limitations of the techniques, their use can lead to wildly pessimistic or optimistic results.

Using funding from its Technical Partners and other sponsors, the Energy Institute's (EI) SILs/LOPAs Working Group commissioned Human Reliability Associates to develop guidance in this area. The aim is to reduce instances of poorly conceived or executed analyses. The guidance provides an overview of important practical considerations, worked examples and supporting checklists, to assist with commissioning and reviewing HRAs.2

    HRA techniques

HRA techniques are designed to support the assessment and minimisation of risks associated with human failures. They have both qualitative (eg task analysis, failure identification) and quantitative (eg human error estimation) components. The guidance focuses primarily on quantification, but illustrates the importance of the associated qualitative analyses, which can have a significant impact on the numerical results. Further EI guidance on qualitative analysis is also available.3 There are a large number of available HRA techniques that address quantification – one review identified 72 different methods.4 The respective merits of HRA techniques are not addressed in the new guidance, since this information, and more detailed discussion of the concept of HRA, are available elsewhere.4,5,6

Attempts to quantify the probability of human failures have a long history. Early efforts treated people like any other component in a reliability assessment (eg what is the probability of an operator failing to respond to an alarm?). Because these assessments required probabilities as inputs, there was a requirement to develop HEP databases. However, very few industries were prepared to invest in the effort required to collect the data to develop HEPs, which led to the widespread use of generic data contained in tools such as THERP (technique for human error rate prediction). In fact, the data contained in THERP and other popular quantification techniques such as HEART (human error assessment and reduction technique) are actually derived primarily from laboratory-based studies on human performance.


    Table 1: Generic HRA process

1. Preparation and problem definition
2. Task analysis
3. Failure identification
4. Modelling
5. Quantification
6. Impact assessment
7. Failure reduction
8. Review

    Figure 1: Examples of the potential impact of human failures on an event sequence



As it became recognised that people, and consequently HEPs, are significantly influenced by a wide range of environmental factors, techniques were developed to modify baseline generic HEPs to take account of these contextual factors (eg time pressure, distractions, quality of training and quality of the human machine interface) – known as performance influencing factors (PIFs) or performance shaping factors (PSFs). A parallel strand of development was the use of expert judgement techniques, such as paired comparisons and absolute probability judgement, to derive HEPs. Other techniques, such as SLIM (success likelihood index method), used a combination of inputs from human factors specialists and subject matter experts to develop a context-specific set of PIFs/PSFs. These were then used to derive an index, which could be converted to an HEP, based on the quality of these factors in the situation being assessed.
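As an illustration of the SLIM idea, here is a minimal sketch. The PIF names, weights, ratings and calibration points are hypothetical, not taken from the guidance: weighted PIF ratings produce a success likelihood index (SLI), and a log-linear calibration against two tasks with known HEPs converts the index into an HEP.

```python
import math

# Minimal SLIM-style sketch (hypothetical PIFs, weights, ratings and
# calibration points; for illustration only).

def success_likelihood_index(weights, ratings):
    """Normalised weighted sum of PIF ratings (1 = worst, 9 = best)."""
    total_weight = sum(weights.values())
    return sum(weights[p] * ratings[p] for p in weights) / total_weight

# Calibrate log10(HEP) = a * SLI + b against two tasks with known HEPs.
sli_good, hep_good = 8.0, 1e-4   # well-conditioned reference task (assumed)
sli_poor, hep_poor = 2.0, 1e-1   # poorly-conditioned reference task (assumed)
a = (math.log10(hep_good) - math.log10(hep_poor)) / (sli_good - sli_poor)
b = math.log10(hep_good) - a * sli_good

weights = {"time pressure": 3, "procedures": 2, "distractions": 2, "training": 1}
ratings = {"time pressure": 4, "procedures": 7, "distractions": 3, "training": 6}

sli = success_likelihood_index(weights, ratings)
hep = 10 ** (a * sli + b)
print(f"SLI = {sli:.2f}, HEP ~ {hep:.1e}")
```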

Despite well-known issues with their application, and more recent attempts to develop new techniques that address these issues, THERP and HEART remain the most widely used.

Whilst the quantification of HEPs may be problematic, the importance of the human contribution to overall system risk cannot be overstated. For example, the 'bow-tie' diagram in Figure 1 shows how different human failures can affect the initiation (left hand side), and the mitigation and escalation (right hand side), of a hypothetical event.
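A minimal numerical sketch of this bow-tie structure (all values hypothetical, not from the guidance or the figure) shows how HEPs enter on both sides of the top event:

```python
# Bow-tie sketch (hypothetical values): human failures contribute to the
# initiation of the top event (left-hand side) and to the failure of
# mitigation, and hence escalation (right-hand side).

transfers_per_year = 50     # demands on the operation (assumed)
hep_initiation = 0.01       # eg wrong tank lined up per transfer (assumed)
hep_mitigation = 0.1        # eg failure to act on the alarm (assumed)

top_event_freq = transfers_per_year * hep_initiation    # events per year
escalated_freq = top_event_freq * hep_mitigation        # escalations per year

print(f"top event: {top_event_freq:.2f}/yr, escalated: {escalated_freq:.3f}/yr")
```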

    Practical issues

The EI guidance2 provides an overview of some of the most important factors that can undermine the validity of an HRA. These include:

Expert judgement – Every HRA technique requires some degree of expert judgement in deciding which factors influence the likelihood of error in the situation being assessed, and whether these are adequately addressed in the quantification technique. A well-developed understanding of the task and operating environment is therefore essential, and any HRA report must include a documented record of all assumptions made during the analysis. In particular, this must provide a justification for any HEPs that have been imported from an external source such as a database. It may also be useful, in interpreting the results, to demonstrate the potential impact of changes to these assumptions on the final outcome, as in the sensitivity sketch below.
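A minimal sensitivity sketch (hypothetical numbers, using the same simple frequency model as the earlier LOPA sketch) might bracket an imported HEP and check whether the conclusion changes:

```python
# Sensitivity sketch (hypothetical values): bracket an imported HEP and
# record the effect on the assessed scenario frequency.

def mitigated_frequency(initiating_freq_per_yr, layer_pfds):
    freq = initiating_freq_per_yr
    for pfd in layer_pfds:
        freq *= pfd
    return freq

initiating_freq = 0.1            # demands per year (assumed)
other_layer_pfds = [0.01]        # non-human protection layers (assumed)
tolerable_freq = 1e-5            # target frequency (assumed)

for hep in (0.001, 0.01, 0.1):   # range bracketing the imported HEP
    f = mitigated_frequency(initiating_freq, [hep] + other_layer_pfds)
    verdict = "meets target" if f <= tolerable_freq else "exceeds target"
    print(f"HEP = {hep}: {f:.1e}/yr ({verdict})")

# If the verdict flips within the bracketed range, the justification for
# the chosen HEP needs to be correspondingly strong.
```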

Impact of task context upon HEPs – As discussed previously, human performance is highly dependent upon prevailing task conditions. For example, a simple task, under optimal laboratory conditions, might have a failure probability of 0.0001 (ie once in 10,000 times). However, this probability could easily be degraded to 0.1 (ie once in 10 times) if the person is subject to PIFs such as high levels of stress or distractions. There are very few HEP databases that specify the contextual factors that were present when the data were collected. Instead, the usual approach has been to take data from other sources, such as laboratory experiments, and modify these HEPs to suit specific situations.
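The gap between 0.0001 and 0.1 is exactly what PIF-based adjustment tries to capture. As a sketch, HEART-style techniques multiply a nominal HEP by a factor for each applicable error-producing condition (EPC), weighted by its assessed proportion of affect (APOA); the specific multipliers and APOAs below are hypothetical:

```python
# HEART-style adjustment sketch (hypothetical EPC multipliers and APOAs;
# for illustration only). The assessed HEP is the nominal HEP multiplied,
# for each error-producing condition (EPC), by ((EPC - 1) * APOA + 1).

def assessed_hep(nominal_hep, epcs):
    hep = nominal_hep
    for multiplier, apoa in epcs:
        hep *= (multiplier - 1.0) * apoa + 1.0
    return min(hep, 1.0)   # a probability cannot exceed 1

nominal = 0.0001           # simple task under near-ideal conditions
epcs = [
    (11.0, 0.8),           # shortage of time, strongly present (assumed)
    (10.0, 0.5),           # distractions / poor signal-to-noise (assumed)
    (8.0, 0.6),            # unfamiliarity or poor procedures (assumed)
]

print(f"Assessed HEP ~ {assessed_hep(nominal, epcs):.3f}")
```

With these assumed conditions the nominal 0.0001 is degraded by a factor of roughly 250, illustrating how quickly adverse context erodes a laboratory-derived figure.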

Sources of data in HRA techniques – Depending on the HRA technique used, it can be difficult to establish the exact source of the base HEP data. It might be from operating experience, experimental research, simulator studies, expert judgement, or some combination of these sources. This has implications for the ability of the analyst to determine the relevance of the data source to the situation under consideration.

    Qualitative modelling

Some HRA techniques, in addition to HEP estimation, provide the opportunity to consider and model the impact of PIFs upon safety critical tasks. This means that, whilst the generated HEP may be

    Table 2: Example of commentary from the guidance (Step 3 – Failure identification)

    Commentary – Identifying failures

Using a set of guidewords to identify potential deviations is a common approach to this stage of the analysis. However, this can be a time-consuming and potentially complex process. There are some steps that can be taken to reduce this complexity, and simplify the subsequent modelling stage:

• Be clear about the consequences of identified failures. If the outcomes of concern are specified at the project preparation stage, then some failures will result in consequences that can be excluded (eg production and occupational safety issues).

• Document opportunities for recovery of the failure before consequences are realised (eg planned checks).

• Identify existing control measures designed to prevent or mitigate the consequences of the identified failures.

• Group failures with similar outcomes together. For example, not doing something at all may be equivalent, in terms of outcome, to doing it too late. Care should be taken here, however, as whilst the outcomes may be the same, the reasons for the failures may be different.
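For a sense of how the guideword approach generates candidate deviations, here is a minimal sketch; the task steps and guideword set are hypothetical, not taken from the guidance:

```python
# Guideword sketch (hypothetical task steps and guideword set): cross each
# task step with failure-mode guidewords to generate candidate deviations
# for screening.

GUIDEWORDS = ["not done", "done too early", "done too late",
              "done too little", "done too much", "done on wrong object"]

task_steps = [
    "open fill valve",
    "monitor tank level",
    "close fill valve at target level",
]

for step in task_steps:
    for guideword in GUIDEWORDS:
        print(f"{step} -- {guideword}")

# Each candidate is then screened: exclude out-of-scope consequences, note
# recovery opportunities, and group deviations with similar outcomes.
```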

    Table 3: Extract from Checklist 3 – Reviewing HRA outputs

Checklist 3: Reviewing HRA outputs (each item is answered Yes/No, with the relevant guidance section noted)

3.1 Was a task analysis developed? (Step 2 – Task analysis)

3.2 Did the development of the task analysis involve task experts (ie people with experience of the task)? (Step 2 – Task analysis)

3.3 Did the task analysis involve a walkthrough of the operating environment? (Step 2 – Task analysis)

3.4 Is there a record of the inputs to the task analysis (eg operator experience, operating instructions, piping and instrumentation diagrams, work control documentation)? (Step 2 – Task analysis)

3.5 Was a formal identification process used to identify all important failures? (Step 3 – Failure identification)

3.6 Does the analysis represent all obvious errors in the scenario, or explain why the analysis has been limited to a sub-set of possible failures? (Step 3 – Failure identification)

    continued on p34...
