
HSE
Health & Safety Executive

Probabilistic methods: Uses and abuses in structural integrity

Prepared by BOMEL Limited for the Health and Safety Executive

CONTRACT RESEARCH REPORT
398/2001

BOMEL Limited
Ledger House
Forest Green Road
Fifield, Maidenhead
Berkshire SL6 2NR
United Kingdom

This report concerns the uses and abuses of probabilistic methods in structural integrity, in particular the design and assessment of pressure systems. Probabilistic methods are now used widely in the assessment and design of structures in many industries. For a number of years these methods have been applied to offshore pipelines and, following work by a number of companies in this country and worldwide, they are being used in the design and reassessment/requalification of major gas trunklines onshore.

Although probabilistic methods have been available for a number of years and are widely used, there is still a great deal of confusion, which arises from vague language, ill-defined and inconsistent terminology, and misinterpretation often present in published material on the topic. This is perhaps the main reason for misuse in some applications of the methods. The report aims to define the terms more clearly, and to outline the basic principles of the approach.

The report reviews the development of structural design methods from the earliest building methods, through limit state and partial factor design methods, to probabilistic analysis, and considers their applicability to assessing the integrity of pressure systems. Methods of probabilistic analysis are discussed and key references are identified for further information. The uses of risk and reliability analysis are also discussed, and many of the concerns that are often expressed with the use of reliability and risk analysis methods are examined.

Guidelines are presented for regulators and industry to assist in assessing work that incorporates risk and reliability-based analysis and arguments. The guidelines may also be of use to consultants in how to undertake and present risk and reliability analysis.

This report and the work it describes were funded by the Health and Safety Executive (HSE). Its contents, including any opinions and/or conclusions expressed, are those of the author(s) alone and do not necessarily reflect HSE policy.

    HSE BOOKS


© Crown copyright 2001

Applications for reproduction should be made in writing to:
Copyright Unit, Her Majesty’s Stationery Office,
St Clements House, 2-16 Colegate, Norwich NR3 1BQ

    First published 2001

    ISBN 0 7176 2238 X

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording or otherwise) without the prior written permission of the copyright owner.


CONTENTS

EXECUTIVE SUMMARY

1. INTRODUCTION
1.1 INTRODUCTION
1.2 PROJECT BACKGROUND
1.3 SCOPE AND OBJECTIVES OF THE REPORT

2. HISTORICAL BACKGROUND TO STRUCTURAL DESIGN
2.1 SUMMARY
2.2 DESIGN METHODS
2.2.1 Design by Geometric Ratio
2.2.2 Load Factor Design
2.2.3 Allowable Stress Design
2.2.4 Limit State Design
2.3 CODES AND STANDARDS
2.3.1 The Historical Development of Codes and Standards
2.3.2 Limit State Design Codes
2.3.3 Reliability Methods in Codes
2.3.4 Model Code
2.4 RISK AND RELIABILITY METHODS

3. DETERMINISTIC AND RELIABILITY-BASED DESIGN AND ASSESSMENT PROCEDURES
3.1 SUMMARY
3.2 CAUSES OF STRUCTURAL FAILURE AND RISK REDUCTION MEASURES
3.3 FUNDAMENTAL DESIGN REQUIREMENTS
3.4 DETERMINISTIC DESIGN AND ASSESSMENT PROCEDURES
3.4.1 Allowable, Permissible or Working-Stress Design, and Load Factor Approaches
3.4.2 Partial Factor, Partial Coefficient or Load and Resistance Factor Design Approaches
3.4.3 Characteristic Values in Limit State Design
3.5 PROBABILISTIC DESIGN AND ASSESSMENT PROCEDURES
3.5.1 UK Safety Legislation
3.6 TREATMENT OF GROSS OR HUMAN ERROR
3.6.1 Control of Errors in Deterministic Design
3.6.2 Treatment of Errors in Probabilistic Assessment

4. STRUCTURAL RELIABILITY THEORY, UNCERTAINTY MODELLING AND THE INTERPRETATION OF PROBABILITY
4.1 SUMMARY
4.2 FREQUENTIST VERSUS BAYESIAN INTERPRETATION OF EVALUATED PROBABILITIES
4.2.1 Frequentist Interpretation
4.2.2 Bayesian or Degree-of-Belief Interpretation
4.3 RELATIONSHIP BETWEEN RISK ANALYSIS AND RELIABILITY ANALYSIS
4.4 OBJECTIVE OF STRUCTURAL RELIABILITY ANALYSIS
4.5 TYPES OF UNCERTAINTY
4.5.1 Aleatoric Uncertainties
4.5.2 Epistemic Uncertainties
4.6 STRUCTURAL RELIABILITY THEORY

5. METHODS OF PROBABILISTIC ANALYSIS
5.1 SUMMARY
5.2 STRUCTURAL RELIABILITY ANALYSIS PROCEDURE
5.3 HAZARD ANALYSIS/FAILURE MODE AND EFFECT ANALYSIS
5.4 FAULT/EVENT TREE ANALYSIS
5.4.1 Event Trees
5.4.2 Fault Trees
5.5 STRUCTURAL SYSTEM ANALYSIS
5.5.1 Series System
5.5.2 Parallel System
5.6 FAILURE FUNCTION MODELLING
5.6.1 The Time Element
5.7 BASIC VARIABLE MODELLING
5.8 METHODS OF COMPUTING COMPONENT RELIABILITIES
5.8.1 Mean Value Estimates
5.8.2 First-Order Second-Moment Methods - FORM
5.8.3 Second-Order Reliability Methods - SORM
5.8.4 Monte Carlo Simulation Methods
5.9 COMBINATION OF EVENTS
5.9.1 Component and System Reliability Analysis
5.10 TIME-DEPENDENT ANALYSIS
5.10.1 Annual Reliability
5.10.2 Lifetime Reliability
5.10.3 Conditional Reliability Given a Service History
5.11 ASSESSMENT OF TARGET RELIABILITY
5.11.1 Societal Values
5.11.2 Comparison with Existing Practice
5.11.3 Cost-Benefit Analysis
5.11.4 Targets for Pipelines
5.12 RISK ASSESSMENT

6. USES OF RELIABILITY ANALYSIS AND PROBABILISTIC METHODS
6.1 SUMMARY
6.2 SAFETY FACTOR CALIBRATION
6.3 PROBABILISTIC DESIGN
6.4 DECISION ANALYSIS
6.5 RISK AND RELIABILITY-BASED INSPECTION, REPAIR AND MAINTENANCE SCHEMES
6.5.1 Qualitative Indexing Systems
6.5.2 Quantitative Risk Systems

7. REQUIREMENTS FOR PROBABILISTIC ANALYSIS OF PRESSURE VESSELS AND PIPELINE SYSTEMS
7.1 SUMMARY
7.2 BASIC DESIGN DATA
7.3 DEFINITION OF FAILURE MODES, LIMIT STATES AND TARGET RELIABILITIES
7.4 PROBABILITY ANALYSIS
7.4.1 Assessment of Hazard Likelihood of Occurrence
7.4.2 Failure Models
7.4.3 Basic Variable Statistics
7.4.4 Component and System Reliability Analysis
7.5 CONSEQUENCE MODELS
7.5.1 Fire and Blast Analysis Results
7.5.2 Economic Considerations
7.5.3 Environmental Considerations
7.5.4 Life-Safety Considerations
7.6 INSPECTION METHODS, COSTS AND MEASUREMENT UNCERTAINTY
7.7 MAINTENANCE AND REPAIR METHODS, AND COSTS

8. CONCERNS WITH STRUCTURAL RELIABILITY AND RISK ANALYSIS
8.1 SUMMARY
8.2 CONCERNS WITH STRUCTURAL RELIABILITY ANALYSIS
8.2.1 Inclusion of Model Uncertainty
8.2.2 The ‘Tail’ Sensitivity Problem
8.2.3 Small Failure Probabilities
8.2.4 Validation
8.2.5 Notional Versus “True” Interpretation
8.3 CONCERNS WITH RISK ASSESSMENT
8.3.1 Generic Data
8.3.2 Risk Aversion
8.3.3 Numerical Uncertainty and Reproducibility
8.3.4 Deterministic Consequence Models
8.3.5 The ‘Pro Forma Approach’
8.3.6 Completeness

9. GUIDELINES FOR RELIABILITY AND RISK ANALYSIS
9.1 SUMMARY
9.2 GUIDELINES

10. GLOSSARY

11. REFERENCES

ANNEX A CASE STUDY 1: PIPELINE DESIGN PRESSURE UPGRADE

ANNEX B CASE STUDY 2: OVERVIEW OF DRAFT EUROCODE prEN 13445-3 - Unfired Pressure Vessels - Part 3: Design

Printed and published by the Health and Safety Executive
C30 1/98

HEALTH AND SAFETY EXECUTIVE

PROBABILISTIC METHODS: USES AND ABUSES IN STRUCTURAL INTEGRITY

Executive Summary

    This report concerns the uses and abuses of probabilistic methods in structural integrity, in particular the design and assessment of pressure systems. Probabilistic methods are now used widely in the assessment and design of structures in many industries. For a number of years these methods have been applied to offshore pipelines, and following work by BG Technology (now Advantica) and a number of other companies in this country and worldwide, they are being used in the design and reassessment/requalification of major gas trunklines onshore.

    Although probabilistic methods have been available for a number of years and are widely used, there is still a great deal of confusion which arises from vague language, ill-defined and inconsistent terminology, and misinterpretation often present in published material on the topic. This is perhaps the main reason for misuse in some applications of the methods. The report aims to define the terms more clearly, and to outline the basic principles of the approach.

The report reviews the development of structural design methods from the earliest building methods, through limit state and partial factor design methods, to probabilistic analysis, and considers their applicability to assessing the integrity of pressure systems. Methods of probabilistic analysis are discussed and key references are identified for further information. The uses of risk and reliability analysis are also discussed, and many of the concerns that are often expressed with the use of reliability and risk analysis methods are examined.

    Guidelines are presented for regulators and industry to assist in assessing work that incorporates risk and reliability-based analysis and arguments. The guidelines may also be of use to consultants in how to undertake and present risk and reliability analysis.

Two case studies concerning the application of probabilistic methods have been examined, and are presented in the Annexes; these are:

    • Probabilistic analysis similar to that used to justify an upgrade in the pipeline pressure design factor

    • A review of the Draft Eurocode prEN 13445-3 for pressure vessel design.

    The case studies present a trial application of aspects of the guidelines, and highlight some ‘abuses’ with the application of probabilistic methods.

    The first case study involved reliability analysis to assess the failure probability of a pipeline due to dents or gouges as a result of third party interference. The aim of the study was to show that an increase in pipeline operating pressure would not significantly affect the probability of pipeline failure. The study illustrates the use of a number of reliability analysis techniques. By looking at a range of pressures the case study uncovered unexpected differences between first- and second-order reliability results, and found sensitivity in the governing failure modes.


The second case study concerns the draft Eurocode, which introduces alternative methods for pressure vessel design. The traditional approach, known as design by formula (DBF), is based on a prescriptive approach using design formulae incorporating safety factors. The alternative techniques are design by analysis (DBA) and experimental techniques. The case study examines the different philosophies of the DBF and DBA approaches, and has identified possible changes in safety levels between different design methods. The study also found that safety factors in the draft code have not been calibrated on a consistent basis, but have been extracted from two existing codes – the Danish pressure vessel code and Eurocode 3 for general steel design.


    1. INTRODUCTION

    1.1 INTRODUCTION

    Probabilistic structural analysis may be defined as [1]:

    ‘the art of formulating a mathematical model within which one can ask and get an answer to the question “What is the probability that a structure behaves in a specified way, given that one or more of its material properties are of a random or incompletely known nature, and/or that the actions on the structure in some respects have random or incompletely known properties?”’
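The question in this definition can be made concrete with a minimal Monte Carlo sketch in Python. The distributions and parameters below (a lognormal resistance and a normal load effect) are illustrative assumptions chosen for the example, not values taken from this report:

```python
import math
import random

def failure_probability(n_samples: int = 200_000, seed: int = 1) -> float:
    """Monte Carlo estimate of P(resistance < load effect) for one component.

    Assumed, purely illustrative models: resistance R ~ lognormal
    (median 400 MPa, ~8% COV); load effect S ~ normal (mean 250 MPa,
    standard deviation 30 MPa).
    """
    rng = random.Random(seed)  # fixed seed so the sketch is repeatable
    failures = 0
    for _ in range(n_samples):
        r = rng.lognormvariate(math.log(400.0), 0.08)  # resistance sample
        s = rng.gauss(250.0, 30.0)                     # load-effect sample
        if r < s:
            failures += 1
    return failures / n_samples
```

For these assumed distributions the estimate comes out of the order of 10⁻⁴, i.e. the "probability that the structure behaves in a specified way" given incompletely known strength and loading.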

With advances in design methods and the advent of the goal-setting regime, probabilistic analysis has become more than a research topic. This report addresses some of the uses of probabilistic analysis. At the same time, the rapid advance of the methodology and its widespread use, often by inexperienced personnel working with poor or limited data, mean that the techniques can be, and are, stretched too far. This report also addresses some of the abuses of probabilistic methodology.

    1.2 PROJECT BACKGROUND

    This project, entitled ‘Probabilistic Methods – Uses and Abuses in Structural Integrity’, was initiated under the UK Health & Safety Executive’s (HSE) 1999 ‘Competition of Ideas’, and is being undertaken by BOMEL on behalf of the Hazardous Installations Directorate.

    This report is the final report of the project and covers the three main tasks. The first task is a review of theory and practice. The second task concerns the application of probabilistic design methods to two case studies. The final task is to prepare guidelines for the correct use of the application of probabilistic methods for design and assessment.

    1.3 SCOPE AND OBJECTIVES OF THE REPORT

    Disparate sources exist describing the basis of the different design methods in use and the approaches to probabilistic analysis. The objective of this report is to explain the development of the different methods of design and probabilistic analysis and the philosophy underpinning them, to summarise the relevant theory and identify key references where appropriate, and to present examples of in-service experience with the aid of two case studies. The main outcome of the report is a set of guidelines for use by regulators and industry to assist in assessing work that incorporates risk and reliability-based analysis and arguments.

    The historical background to structural design including different design methods, the development of codes and standards for design, and the development of risk and reliability methods is introduced in Section 2. The various causes of structural failure and measures to control them are considered in Section 3. The different procedures for deterministic and reliability-based design and assessment are explained in more detail. Gross and human errors are amongst the main causes of structural failure, and methods used to control them and to treat them in probabilistic analysis are also discussed.


    Probability, reliability, risk and uncertainty are often misused terms; these terms are explained in Section 4. The different types and sources of uncertainty that influence the probability of an event are discussed, and the objectives of structural reliability and the basic theory behind structural reliability are also presented.

    Some of the various methods of probabilistic analysis are outlined in Section 5, and Section 6 discusses some of the uses of reliability analysis and probabilistic methods.

    Whilst much of the material in this report is generic in nature, the main objective of the study is to consider probabilistic applications in the design and assessment of pressure systems. Therefore, in Section 7, specific factors have been highlighted relevant to pipelines and pressure systems containing hazardous substances.

    In this report, the term ‘structure’ is used to refer to the integrity aspects of such pressure systems.

    Some of the concerns with structural reliability analysis and risk assessment are discussed in Section 8.

    The guidelines for the assessment of reliability and risk analysis are presented in Section 9.

A Glossary of the main terms is presented in Section 10. The references are given in Section 11, and key references have been highlighted.

    Finally, the two case studies are presented in Annexes A and B.


    2. HISTORICAL BACKGROUND TO STRUCTURAL DESIGN

    2.1 SUMMARY

    This Chapter outlines the historical development of design methodology from the earliest methods based on geometric ratio, to load factor and allowable stress approaches, partial factor methods, limit state design, and finally probabilistic and risk-based design and assessment. The development of codes and standards, which strongly reflect the advances of these methodologies, is also discussed.

    Modern engineering design involves two steps, whether explicitly recognised or not; these are:

    • The Theory of Structures, in order to determine the way in which a structure actually carries its loads.

    • The Strength of Materials, in order to assess whether the structural response can safely be withstood by the material; e.g. this may involve comparing (elastic) stresses with material properties.

In practice, the two steps cannot usually be separated, and design must be iterative - section properties must be assigned before structural forces and stresses can be evaluated; once stresses are evaluated, the section properties can be revised. Of particular interest in this document is how safety concepts and the basis of design procedures have been developed.

    Design practice and methodology is evolving continuously. The historical development of modern design methods began in the ’50s, and can be briefly summarised as follows:

    1950-1970 Development of structural safety concepts (e.g. Pugsley et al [2] in the UK, Freudenthal et al [3] in the US, etc.)

    1970-1985 Development of reliability theory and computational methods

    1975- Reliability-based calibration of partial safety factors and application of limit state codes

    1985- Risk and reliability assessment, primarily of existing structures

Initially, structural design was based on experience and tradition, which for the most part relied on trial-and-error. With increased understanding of mathematics and physics, the effects of loads on structures could be calculated, and knowledge of material and component behaviour was developed through testing. A codified approach evolved, which restricted the working stresses in each component to prescribed limits. Although straightforward to implement, the prescribed or allowable stress design approach has a number of disadvantages and delivers inconsistent levels of safety. Further developments in understanding led to the specification of the performance of the structure through explicit limit states. The partial safety factors, often accompanying limit state design codes, could be calibrated using reliability methods to account for the statistical uncertainties in loading and resistance.
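The calibration idea just described can be sketched numerically. Assuming independent normal distributions for resistance and load effect (an assumption made purely for illustration), the Cornell reliability index and FORM-style design-point values yield the kind of partial factors a limit state code might adopt:

```python
import math

def cornell_beta(mu_r, sig_r, mu_s, sig_s):
    """Reliability index for the limit state g = R - S with independent
    normal resistance R and load effect S (a textbook result)."""
    return (mu_r - mu_s) / math.hypot(sig_r, sig_s)

def normal_pf(beta):
    """Failure probability P(g < 0) = Phi(-beta), via erfc."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

def derived_partial_factors(mu_r, sig_r, mu_s, sig_s, beta_target):
    """Back out partial factors from FORM-style design-point values
    x* = mu -/+ alpha*beta*sigma, expressed relative to the means.
    A calibration sketch valid only under the assumed normal statistics."""
    denom = math.hypot(sig_r, sig_s)
    alpha_r, alpha_s = sig_r / denom, sig_s / denom   # sensitivity factors
    r_star = mu_r - alpha_r * beta_target * sig_r     # design resistance
    s_star = mu_s + alpha_s * beta_target * sig_s     # design load effect
    return mu_r / r_star, s_star / mu_s               # (material, load) factors
```

With illustrative statistics of R ~ N(400, 32) MPa and S ~ N(250, 30) MPa, beta comes out near 3.4 and both derived factors fall in the 1.25-1.3 range, comparable in magnitude to partial factors familiar from limit state codes.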

    The direct use of probabilistic methods and structural reliability analysis techniques in design is the latest step in the evolutionary process. Although probabilistic design is at present primarily


    used for nationally important structures, or structures with high consequences of failure, e.g. nuclear power plants, dams etc., its use is growing. Many Codes of Practice now have provisions for reliability analysis; either in the calibration of project-specific partial safety factors or as an alternative design approach, and a draft model code has been prepared for the use of reliability methods in design and analysis [4].

Since the early 1990s, probabilistic and quantified risk assessment has been routinely applied to designs in many areas – structural failure represents one failure event, often the most important, in such risk assessments.

    2.2 DESIGN METHODS

    Deterministic design methods include the following:

    • Design by geometric ratio

    • Load factor design

    • Allowable stress design

    • Limit state design.

    Much of the discussion below is based on the development of design methods in building structures and bridges, since for much of the past these fields have led the advances in design methodology. The design of pressure vessels, and pipelines in particular, has lagged behind, and it is only recently that limit state methods and probabilistic methods are being applied to these types of structures.

2.2.1 Design by Geometric Ratio

Before mathematics and science were applied to building work, design rules were largely based on experience and tradition. Many of these rules were based on geometric ratios giving limits on what could safely be built. These rules were usually established by trial-and-error, and their development involved frequent collapses. However, the approach has produced some magnificent structures from classical times up to the gothic cathedrals and structures of the Renaissance; many of these structures survive today. Such rules work well with masonry, where stresses are generally low and where failure involves rigid body motion; i.e. where the strength of the structure depends on its geometry rather than the behaviour and strength of materials. Generally, if such a structure is satisfactory, it would be expected to be satisfactory if built at twice the scale.

    Geometric ratios are still used in the building trade and as rule-of-thumb methods by designers in a first attempt at section sizing.

Geometric ratios are also still used in many of today’s codes and standards, and they form the basis of the Classification Rules for the design of ships. In many steel design codes they are used to categorise bending or compression members by likely failure modes (plasticity, inelastic buckling, elastic buckling, etc), and to define limiting width-to-thickness ratios to determine an ‘effective width’, for example in stiffened plate design.

    These types of rules are effective methods for simplifying codified design, and are particularly useful where accurate answers could only be obtained by much more complex methods or finite element (FE) analysis.
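As a sketch of how such geometric-ratio rules appear in modern codes, the following function classifies an internal compression plate element by its width-to-thickness ratio. The limiting ratios follow the pattern of Eurocode 3 (33ε, 38ε, 42ε with ε = √(235/f_y)) but are quoted here for illustration only, and any real check should use the governing code:

```python
def classify_plate(width_mm: float, thickness_mm: float,
                   fy_mpa: float = 275.0) -> str:
    """Classify an internal compression plate element by width-to-thickness
    ratio. Limits (33, 38, 42, scaled by sqrt(235/fy)) follow the
    Eurocode 3 pattern, used here purely as an illustration."""
    eps = (235.0 / fy_mpa) ** 0.5          # yield-strength scaling factor
    ratio = width_mm / thickness_mm
    if ratio <= 33.0 * eps:
        return "Class 1 (plastic)"
    if ratio <= 38.0 * eps:
        return "Class 2 (compact)"
    if ratio <= 42.0 * eps:
        return "Class 3 (semi-compact)"
    return "Class 4 (slender - effective width required)"
```

The rule needs only geometry and a yield strength, which is exactly why such ratios remain a convenient way to avoid full buckling or finite element analysis in routine design.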


2.2.2 Load Factor Design

Whilst the stone in masonry structures is in one sense ‘brittle’, walls and arches seldom failed by sudden fracture of the material. However, with the development of cast iron and its use in building and bridge construction, first as columns and later beams, such fractures and sudden failures did occur.

Until at least the 1850s, most approaches in the UK to overcome this were based on large and full scale testing, and proof loading. Telford, in the construction of the Menai Bridge, which was opened in 1826, load tested each bar to twice its anticipated load. W Fairbairn was also noted for his experimental expertise; in the late 1840s he undertook an extensive experimental programme to investigate the compressive buckling of thin plates forming large tubular sections used in the Britannia and Conway bridges [5].

    As outlined by Pugsley [2], scientific tests were used to investigate the strength of cast iron columns in 1840 by Eaton Hodgkinson, and these were followed later by others (including L Tetmajer in 1896 and A Ewing in 1898). The results of the tests were used to develop mean strength formulae for columns relating slenderness ratio to axial stress at failure (first by W Rankine in 1866, then by A Ostenfeld using Tetmajer’s results).

    Pugsley [2] explains that at this time design was undertaken using a load factor whereby a safe working load was determined from the mean failure load; from the very first formula developed by W Rankine, the load factor was varied with the nature of the loading. The use of different load factors was investigated further by E Salmon, largely following on from his work on railway bridges. ‘Live’ train loads, which were applied very rapidly, could lead to double the stresses from permanent ‘dead’ loads. Thus it was recommended by Salmon in 1921 that the load factor for live loads should be double the dead load factor.

    However, further tests showed considerable scatter in column strengths, and it was noted that there was considerable difference between laboratory and practical test specimens. Some years earlier W Fairbairn (in 1864) had also recognised the effect of flaws in large castings on the strength of beams, and had noted the significant variation in strength between castings.

    In 1900 J Moncrieff identified that three margins or factors should be adopted for the design of columns to address the three main sources of uncertainty affecting column failure. These were:

    1. Accidental overload leading to elastic instability should be prevented; Moncrieff proposed that the working load should be restricted to one-third of the Euler critical load.

    2. Geometric imperfections in the column, due to out-of-straightness or load eccentricity, should be allowed for; an equivalent eccentricity was allowed for in the column formulae.

    3. Imperfections in the material, a significant source of weakness in large cast iron columns, should be allowed for. Moncrieff proposed reducing the average failure stress to one-third of its value – later a lower bound value came to be used.

    These three aspects of column safety are still relevant today.
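Moncrieff’s first margin can be illustrated directly. The sketch below computes the Euler critical load of a pin-ended column and restricts the working load to one-third of it, as the text describes; the section properties are invented for the example, and the geometric-imperfection and material margins would be applied separately:

```python
import math

def euler_allowable_load(e_mod_mpa: float, i_mm4: float,
                         length_mm: float, margin: float = 3.0) -> float:
    """Euler critical load P_cr = pi^2 * E * I / L^2 for a pin-ended
    column, with the working load limited to P_cr / 3 in the manner
    proposed by Moncrieff (factor of 3 taken from the text).
    Units: E in MPa (N/mm^2), I in mm^4, L in mm -> load in N."""
    p_cr = math.pi ** 2 * e_mod_mpa * i_mm4 / length_mm ** 2
    return p_cr / margin

# e.g. a 3 m pin-ended steel column with I = 5e6 mm^4 (illustrative section)
allowable_n = euler_allowable_load(210_000.0, 5.0e6, 3000.0)
```

For this hypothetical section the allowable working load comes out just under 400 kN, one-third of the elastic critical load.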

2.2.3 Allowable Stress Design

With the introduction of ductile materials - wrought iron in the nineteenth century and later mild steel - the allowable stress approach followed on from the development of linear elastic theories.


These theories accurately represented the behaviour of the new structural materials up to a yield stress, which was taken to be the onset of failure. With the application of science and mathematics to engineering, indeterminate structures could be analysed and the distribution of bending and shear stresses could be worked out in detail. Much of our present theory of elastic structures and material behaviour was developed in this period through the work of such leading scholars as Euler (1757), Coulomb (1773), Navier (1826), Saint-Venant (1855), Mohr (1874), Castigliano (1879), Timoshenko (1910), etc.

    Further development in structural safety occurred following an inquiry set up in 1849 concerning the failure of a number of major railway bridges, including the Dee Bridge. The inquiry heard evidence from I K Brunel, Robert Stephenson, Locke, Fairbairn, and many other eminent engineers of the time. As well as addressing failure, engineers also became aware of the need to consider, and commonly to prevent, the development of permanent set. The natural way to present such calculations was the allowable stress format, in which the stresses caused by the nominal or characteristic design loads should not exceed an allowable or limiting stress. The allowable stress, as it is now defined, is the yield stress or failure stress of the material divided by a safety factor. The factor was intended to cover the uncertainties in loading, material strength and structural behaviour with an adequate margin of safety. The safety factors were developed over time largely, if not exclusively, on the basis of experience, and were rarely, if ever, explicitly stated in design standards.
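The allowable stress format described above reduces to a one-line check. The safety factor of 1.67 below is a typical allowable-stress value chosen for illustration; as the text notes, the factors used in historic codes were developed from experience and rarely stated explicitly:

```python
def allowable_stress_check(applied_stress_mpa: float, yield_mpa: float,
                           safety_factor: float = 1.67):
    """Allowable (working) stress check: sigma_applied <= f_y / SF.
    Returns (passes, allowable_stress). The factor 1.67 is an
    illustrative allowable-stress value, not from any specific code."""
    allowable = yield_mpa / safety_factor
    return applied_stress_mpa <= allowable, allowable
```

The single factor has to absorb all uncertainty in loading, material strength and structural behaviour at once, which is precisely the limitation that partial factor and probabilistic methods later addressed.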

2.2.4 Limit State Design

Until about 1910, structural engineering was in a period of consolidation; this changed with the First World War. With the rapid development of military aircraft during the War, it again became common practice to demonstrate structural efficiency by testing to destruction. Biplane wings were fabricated from timber spars and struts braced by steel wires; failure generally occurred suddenly, and design came to be based on the specification of measured ultimate strengths.

After the War the ever present need for efficiency and profit meant that much research effort was devoted in many industries, in particular aeronautics, to methods of accurately predicting the strength of redundant frames (this is still an important research area). Pioneering work by Kazinczy and later Maier-Leibnitz, who carried out tests on clamped beams, showed that the yield load and plastic collapse load were distinct. This led to the development of plastic theory and methods in the 1940s, and an improved understanding of structural behaviour. J Heyman [6], amongst others, gives further details of the historical development.

    It became possible to define limits of structural performance, and in 1951 a committee under the chairmanship of Sir Alfred Pugsley was set up to consider ways of specifying design safety margins. Its report, published in 1955 [7], presented a tabular approach for the evaluation of load factors based on subjective ratings given to five effects. The effects were grouped into those influencing the probability of collapse (material, load, and accuracy of analysis), and those influencing the consequences or seriousness of the results of collapse (personnel, and economics). The load factor was derived from the multiple of the outcome of the assessment for probability of collapse and the outcome for the consequences. The basic elements form the rationale behind much of today’s safety factor calibration and design philosophy.

    The work of Pugsley’s committee was devoted largely to the prevention of collapse; later work addressed serviceability. It was suggested that two load factors could be considered, one related


    to proof-load, defined as the load just sufficient to start appreciable permanent set, and the other related to breaking load, or the load at which excessive permanent distortion arises.

    Freudenthal sat on a committee with similar aims in the USA, which reported in 1957. Their approach was more quantitative, and used reliability theory to address the uncertainty in the basic parameters influencing failure. They considered risk and acceptable accident rates, and the final results were embodied in tables that, according to Pugsley [2], showed ‘a remarkable degree in similarity in the overall results of the two processes’.

    During the 1960s and ’70s further developments, particularly in Europe, led to the establishment of limit state design methods.

    2.3 CODES AND STANDARDS

2.3.1 The Historical Development of Codes and Standards

The Ancients put the onus for structural safety onto the designer/builder, and had clear rules concerning the fate of a builder responsible for fatalities from a structural failure! The best known example is from King Hammurabi, who set down more than 280 rules or laws governing life in Babylon in 1780 BC. The code included a number of rules for builders, which very much followed the manner of the Code’s best-known law, “an eye for an eye”.

    However, by the Renaissance period, failures were considered the price of progress and were viewed as truly an ‘Act of God’.

    In the UK one of the earliest laws relating to structures was proclaimed by James I in 1620, which contained provisions relating to the thickness of walls, etc. This was followed by the first comprehensive Building Act in 1667 following the Great Fire of London.

    The Board of Trade (following recommendations of the 1849 Inquiry into rail bridge failures) set the limiting stress for iron to be 5 tons/in2 – for wrought iron this corresponded to a factor of safety of at least 4 [2].

    By the late 19th century, with the rapid advancement in understanding of scientific and mathematical knowledge, it was considered that engineers were, or should be, more in control of nature. This led to the formation of the Engineering Institutions in this country and later to codification and standardisation.

The standardisation of building materials was introduced in the early 1900s. The Engineering Standards Committee was formed in 1904 by the various engineering institutions; the committee’s publications came to be known as British Standards. The first standard involved the standardisation of section sizes; others covered specifications for steel, and standardisation for testing. The specification of a working stress limit for steel was introduced as early as 1909 by the London Building Byelaws. Similar regulations had been introduced in America around the same time.

    One of the first British Standards for structural design was published in 1922 for steel girder bridges; this was based on permissible stresses and was the forerunner of BS 153 (the former steel bridge code, published in 1958). BS 449, for structural steel design in buildings, was published in 1932.


    Many Building Regulations, and many of the Standards, of this period were very prescriptive in nature, for example the London Building Act of 1935 specified the thickness of an external or party house wall for a given height. The builder’s responsibility was only to see that the regulation was satisfied. If the building then collapsed, the builder would be exonerated.

    Regional Byelaws, and from 1965 the Building Regulations, frequently referred to British Standards, and this gave rise to the standpoint that compliance with the standard was satisfactory for design purposes. However, nowadays most British Standards for design state that: ‘Compliance with a British Standard does not of itself confer immunity from legal obligations’. Furthermore, most British Standards for design, in particular structural engineering design, are in fact Codes of Practice.

    There is some confusion in the industry between the terms code and standard.

    Compliance with standards tends to be mandatory, whereas codes tend to be merely advisory, offering guidance on what the code committees considered to be best practice at the time of drafting. Many engineers do not appreciate this distinction, and consider codes to have a higher standing than they actually have. This situation is not helped by the fact that in the UK both standards and codes of practice are published by the British Standards Institution and are referred to as British Standards. Understanding is not aided by terminology such as Eurocodes and International Standards.

    Standards tend to be used for materials and products where compliance with that standard must be achieved for a material or product to be acceptable. Codes of Practice tend to be used for design purposes where what is required is a set of principles with accompanying design rules that enable these principles to be achieved. Traditionally, UK Codes of Practice have acted as handbooks for designers. This was not always so in mainland Europe where codes could be relatively small, but large handbooks were developed to help with interpreting the codes.

    Typically, Standards tend to be expressed in a prescriptive way, indicating how things should be done without justifying the reasons for doing so or stating the aims to be attained. Many standards and early codes have been empirical, being based on a limited series of tests. This can make it difficult for designers looking to go beyond the bounds of the code, as the limitations of applicability and the underlying assumptions are not always stated. Similarly, when early codes were being developed, there would have been situations where test data were not available and assumptions regarding best practice would have had to be made by the code committee. In later years these assumptions may well have become ‘cast in stone’ and treated with the same standing as those clauses that were developed on the basis of relevant test data. Again, this leads to difficulties when going beyond the bounds of the code, for example in the reassessment of existing structures. Codes also take a very long time to develop and ratify, which means that they do not necessarily reflect the latest research and good practice.

    The drawbacks with this approach were first recognised in the aviation industry, which in 1943 recommended that codes should be stated in terms of objectives rather than specifications [8]. That is, a code should define what is to be achieved and leave the designer to choose how this will be achieved. This is the approach adopted by most modern codes.

    The 1970s were a time of marked activity in code development (that is still continuing). As discussed by Thoft-Christensen & Baker [9] the main features have been:


    • the replacement of many simple design rules by more scientifically-based calculations derived from experimental and theoretical research,

    • the move towards Limit State design …. ,

    • the replacement of single safety factors or load factors by sets of partial coefficients,

    • the improvement of rules for the treatment of combinations of loads and other actions,

    • the use of structural reliability theory in determining rational sets of partial coefficients, and

    • the preparation of model codes for different types of structural materials and forms of construction; and steps towards international code harmonisation ….

    Code drafting committees face a dilemma as, on one hand, they see the advantages and flexibility that is offered by a code based on objectives. On the other hand, many designers want prescriptive rules that enable them to produce designs quickly, safely and efficiently. This is an issue that persists to the present day, particularly in relation to the new Eurocodes.

    There were long delays in the development of early EC/EU product legislation, which was based on the practice of incorporating detailed technical specifications in directives. The “New Approach” policy for European product legislation (first adopted in 1990) has to some extent solved this problem by specifying essential safety requirements in directives, supported by technical detail in harmonised standards. This approach also enables the adoption of new technology, since the essential safety requirements, which are goal setting, can be complied with directly.

    The European Committee for Standardisation (CEN) is currently producing a suite of Eurocodes covering all of the major construction materials, structural types and loadings. The majority of these Eurocodes will become available for use between 2003 and 2005. They mark a departure from the traditional basis of preparing codes in that they represent the views of all 19 European countries represented by CEN, not just one nation.

    Eurocodes make a distinction between Principles and Application Rules. Principles are differentiated from Application Rules by applying the letter P following the number. The Principles comprise:

    • general statements and definitions for which there is no alternative, as well as:

    • requirements and analytical models for which no alternative is permitted unless specifically stated.

    The Application Rules are generally recognised rules that enable the designer to comply with the Principles. Application Rules other than those given in the Eurocodes may be used. However, the user has to demonstrate that the alternatives satisfy the relevant Principle and are at least equivalent with regard to resistance, serviceability and durability to the Application Rules contained in the Eurocode.

    With the current UK HSE goal-setting regime, it may be considered that safety regulations have gone full circle, and moved away from prescriptive requirements to put the onus for safety back on the designer/builder.


    2.3.2 Limit State Design Codes

    The concepts of limit state and of probabilistic safety were first presented by Max Mayer in a thesis published in 1926. Although the concepts were well expressed, it was not until the mid-1940s that limit state methods were first introduced into design codes in the USSR; this was the first codified attempt to link all aspects of structural analysis, including the specification of loads and the analysis of safety. The ideas and use of partial factors were subsequently adopted in 1963 by the Comité Européen du Béton (CEB) [10] for reinforced concrete design.

    In the late 1970s, the first attempt at unifying design rules for different types of structural materials was undertaken [11] by the Joint Committee on Structural Safety (JCSS). JCSS also prepared General Principles on Reliability for Structural Design (later used by the International Organisation for Standardisation (ISO) in the revision of ISO 2394 [12]). The work of the international JCSS formed the basis of the development of the Eurocodes. The Eurocodes contain a general principles section, Eurocode 0 (EN 1990 – Basis of Design), which gives guidelines on structural reliability relating to safety, serviceability and durability for general cases and those cases not covered by the other structural Eurocodes. As such, codes can be developed for other materials or structures not covered by the Eurocodes which will be compatible in concept and reliability with the main structural Eurocodes.

    Load models in the American ANSI Standard A58 ‘Building Code Requirements for Minimum Design Loads in Buildings and Other Structures’ [13] were developed using probabilistic criteria in the 1980s. In addition, a reliability-based code for the design of structural steel buildings was developed by the American Institute of Steel Construction (AISC) [14].

    For offshore structures, the first probability based limit state code was introduced in 1977 by the Norwegian Petroleum Directorate (NPD) [15]. The Canadian CSA code for the design, construction and installation of fixed offshore structures was introduced in 1989 (revised in 1992 [16]), and is believed to be the first code to use explicit target reliabilities. The American Petroleum Institute (API) also commissioned work (by Professor Moses [17]) in the late 1970s to develop an LRFD (Load and Resistance Factor Design) version of the popular and widely used WSD (Working Stress Design) version of RP2A, but it was not until 1993 that the 1st Edition of RP2A-LRFD was published [18] (although it has still not been accepted for use in the US). RP2A-LRFD forms the basis of the forthcoming ISO Standard for Fixed Steel Structures [19], but the safety factors, and in particular regional load factors, have not yet been calibrated (although work is underway).

    2.3.3 Reliability Methods in Codes

    Many current Codes of Practice allow for the explicit calibration of project-specific partial safety factors for unusual or novel structures, or structures subject to special circumstances - typically the safety factor calibration would be undertaken using reliability-based methods.

    Many current Codes also allow for the direct use of reliability methods in design. The DNV (Det Norske Veritas) Rules for the design of fixed offshore structures have for many years allowed three alternative design approaches: allowable stress, partial coefficient, and reliability design. DNV also have a useful Classification Note for the practical use of structural reliability methods [20] – this is primarily intended for marine structures, but much of the material is generic in nature. Since 1998 there has also been an ISO standard covering the general principles for the use of structural reliability [12].


    2.3.4 Model Code

    In 1989 the first steps towards a model code for the direct use of reliability in design were taken by Ditlevsen & Madsen [4]. This document is a proposal, or “working document”, developed under the auspices of the Joint Committee on Structural Safety (JCSS). Unfortunately, little further development has been published and the document, which is viewed by some to be rather academic, is not believed to be widely used.

    Renewed effort by the JCSS is currently underway to develop a new JCSS Probabilistic Model Code [21]. A draft was first published on the Internet in March 2001, and is intended to be adapted and extended a number of times to cover all aspects of structural engineering.

    2.4 RISK AND RELIABILITY METHODS

    The subject of structural reliability assessment has its origins in the work of Freudenthal [3], Pugsley [2], Torroja and others carried out in the 1940s and 1950s. From these early times when the basic philosophy and some simple calculation procedures were first conceived, there have been extensive and far-reaching developments, so that at the present time there are now well-developed theories and a number of basic methods which have very wide support on a global scale.

    Research into Probabilistic Risk Assessment (PRA; similar terms used in other industries include PSA – probabilistic safety assessment, and QRA – quantified risk assessment) in the offshore industry started in the late 1970s [22] based on experiences in the nuclear and aeronautical industries. The first guidelines on risk assessment for safety evaluation were published by NPD [23]; these required risk assessment studies to be carried out for all new offshore installations at the conceptual design stage.

    Probabilistic methods and risk assessment also started to be applied in the process industry in the late 1970s following a number of major disasters such as Flixborough. Risk assessment is now a well established tool for assessing most types of planned and existing chemical and hazardous materials installations, i.e. major accident hazard installations.

    Methodologies for both reliability and risk analysis advanced significantly in the 1980s and, with the advent of cheap computing, supporting software was developed. Techniques for accounting for the benefits of inspection on reliability using Bayesian updating were developed from research in the aeronautical and offshore industries.
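    The Bayesian updating idea mentioned above can be sketched very simply. The example below (with assumed, purely illustrative numbers for the prior and the probability of detection) updates the probability that a defect is present after an inspection that finds nothing; it is a sketch of the principle, not of any particular industry methodology.

```python
# Illustrative Bayesian updating after an inspection: a prior probability
# that a fatigue crack is present is revised downwards when an inspection
# with a known probability of detection (PoD) finds nothing.
# All numerical values are assumed for illustration only.

def update_after_no_find(p_defect: float, pod: float) -> float:
    """Posterior probability a defect is present, given a clean inspection."""
    # Total probability of a clean inspection: either the defect is present
    # but missed, or there is no defect at all.
    p_no_find = (1 - pod) * p_defect + (1 - p_defect)
    return (1 - pod) * p_defect / p_no_find

p_prior = 0.10   # assumed prior probability a crack is present
pod = 0.80       # assumed probability of detecting a crack if present

p_post = update_after_no_find(p_prior, pod)
print(f"prior {p_prior:.3f} -> posterior {p_post:.3f} after clean inspection")
```

    The posterior is always lower than the prior after a clean inspection, which is the formal basis for crediting inspection in reliability-based maintenance planning.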

    The Piper Alpha disaster in 1988, and the subsequent inquiry reported by Lord Cullen in 1990 [24], provided the impetus for fundamental changes in the way in which safety is managed and regulated in the UK, especially in the offshore industry. There has been a move away from prescriptive regulations towards those which set safety goals, allied to a greater emphasis on the explicit assessment of risks and their management.

    The three key features of the UK approach are:

    • hazard identification

    • risk analysis

    • formal demonstration that major risks have been reduced 'to as low as is reasonably practicable' (ALARP).


    QRA is used by regulators and industry for the assessment of risks from onshore hazardous installations, and is becoming an increasingly widely used tool within the major hazard and transportation industries in the UK. In particular, PRA and QRA methods are now being applied to pipelines; this was led by offshore pipelines, but the methods are now increasingly being applied to onshore lines as well.

    The quantitative risk assessment process is based on answering three questions:

    i. What can happen, i.e. scenarios?

    ii. How likely is a scenario to happen and to lead to failure, i.e. probabilities?

    iii. If a failure scenario does happen, what are the consequences?

    As explained later in this report, structural reliability analysis is primarily concerned with addressing the second question, and (usually) only those scenarios that involve structural failure.
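    The three questions above can be sketched as a risk summation over identified scenarios: each scenario carries a probability of occurrence and leading to failure, and a consequence measure, and their products are summed to give an overall risk figure. The scenario names, probabilities and consequence values below are purely illustrative assumptions.

```python
# A minimal sketch of the QRA triplet: scenarios, probabilities and
# consequences, combined as risk = sum(probability x consequence).
# All scenario names and numbers are illustrative assumptions only.

scenarios = [
    # (scenario, annual probability of failure, consequence in arbitrary cost units)
    ("corrosion leak",     1e-4, 2e6),
    ("overpressure burst", 1e-6, 5e7),
    ("third-party impact", 5e-5, 1e7),
]

total_risk = sum(p * c for _, p, c in scenarios)

for name, p, c in scenarios:
    print(f"{name:20s} p={p:.1e}  consequence={c:.1e}  risk={p * c:.1e}")
print(f"total annual risk = {total_risk:.1e}")
```

    In practice each of the three questions is answered by its own set of techniques (hazard identification, probabilistic analysis, consequence modelling); the summation simply shows how the answers are combined.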


    3. DETERMINISTIC AND RELIABILITY-BASED DESIGN AND ASSESSMENT PROCEDURES

    3.1 SUMMARY

    The fundamental aim of structural design procedures and maintenance decisions is to obtain a structure with an economical design and a sufficient degree of reliability.

    The reliability of structures is traditionally achieved by deterministic methods employing one or more explicit safety factors and a number of implicit measures. The explicit safety factors depend on the safety format adopted, which may be either allowable stress (also known as permissible stress or working stress design) or partial factor design. Safety margins are enhanced implicitly by a number of other factors, including the use of conservative estimates of parameters, and using methods of analysis that give lower bound solutions to collapse loads.

    Structural reliability analysis methods and probabilistic techniques have been used for a number of years to assess and calibrate the safety factors in many design codes throughout the world. For a number of years they have been used to set inspection and maintenance programmes, particularly for offshore structures. These methods are now being used explicitly to design structures.

    This Chapter introduces the basic causes of structural failure and the various risk reduction measures that are used to control them. The fundamental requirements of structural design are presented and the various design approaches are then discussed in detail, including the UK Safety Case Regulations. The main drawbacks with each approach, which provided motivation for further development, are also discussed.

    3.2 CAUSES OF STRUCTURAL FAILURE AND RISK REDUCTION MEASURES

    Before looking at methods and principles of design, it is necessary to consider the causes of structural failure. The basic causes of structural failure can be classified into four categories, as shown in Table 3.1. This classification only highlights the main causes, and is a gross simplification of reality because structures rarely fail solely due to one shortcoming, but rather due to a sequence of events.

    The table also shows some of the types of measures that may be taken to reduce the risk for each category of failure cause.

    Table 3.1 is based on a simple categorisation. Alternative categorisations include work by Blockley [25], who suggests eight categories for the causes of failure and proposes an approach to judging the likelihood of a structural failure based on a list of 25 questions.


    Cause of failure: Limit states
    • Overload: geophysical, dead, internal pressure, temperature, wind, etc
    • Under strength: materials, instability
    • Movement: settlement, creep, shrinkage, etc
    • Deterioration: fatigue, corrosion, hydrogen embrittlement, stress corrosion cracking, erosion, etc
    Risk reduction methods:
    • Increased safety factors
    • Testing (e.g. hydrotest)
    • In-service inspection

    Cause of failure: Accidental or random hazards
    • Fire
    • Explosion – accidental, sabotage
    • Third party activity – impact
    Risk reduction methods:
    • Design for damage tolerance (including selection of material)
    • Protective measures (e.g. pressure/explosion relief valves, fire protection)
    • Event control

    Cause of failure: Human errors or gross errors in design, fabrication or operation
    Risk reduction methods:
    • Quality assurance and quality control
    • Independent verification/assessment/peer review
    • Event control
    • Inspection/repair
    • Design for damage tolerance
    • Protective measures

    Cause of failure: Unknown phenomena
    Risk reduction method:
    • Research and development

    Table 3.1 Causes of structural failure and risk reduction methods

    This report primarily concerns the first two categories for causes of failure in Table 3.1.

    The methods used to reduce the risk of failure due to the first category are generally considered to be fundamental design requirements. Failure under a design limit state generally occurs because a less than adequate safety margin was provided to cover “normal” uncertainties in loads and resistances.

    Probably the most important step to reducing the risk of failure due to accidental or random hazards is to identify the hazards - techniques such as HAZID and HAZOP analysis are used. Once a hazard has been identified it becomes a foreseeable event, and limit states can be developed so that it can be considered part of the fundamental design requirements. System design methods to improve robustness and redundancy are also important steps for improving damage tolerance.

    The nature of human errors or gross errors differs from that of natural phenomena and “normal” man-made variability and uncertainty, and different safety measures are required to control error-induced risks; gross errors and their treatment are briefly discussed in Section 3.6.

    Unknown phenomena are not addressed; for all but exceptional or unique structures, failure due to totally unknown phenomena is now very rare. The prevention of failures due to unknown phenomena that arise from a lack of knowledge within the profession as a whole is clearly impossible. However, it is important to distinguish between unknown phenomena and unidentified phenomena due to a lack of awareness of the designer. Unidentified phenomena can be categorised as human or gross errors, and the risk may be reduced by independent checking or peer review.

    Typically, the risk reduction measures in Table 3.1 are aimed either at:

    • reducing the likelihood and/or probability of a failure event
      − increasing factors of safety
      − in-service inspection
      − assuring quality in design, fabrication and operation

    • or at reducing and/or mitigating the consequences of a failure event
      − increasing damage tolerance, through redundancy and material selection
      − event control measures, e.g. water curtain sprinklers, safe refuges, evacuation procedures.

    Where possible, the most effective measure is to seek to eliminate a risk through good design. This principle of prevention is also stated in the EU Framework Directive, which indicates that, for safety, risks should preferably be avoided. Clearly, it is not always possible to completely avoid risks, particularly in pressure systems. Engineering is about looking for and facing up to risks and minimising and dealing with them safely by adopting a balanced and informed response.

    3.3 FUNDAMENTAL DESIGN REQUIREMENTS

    The ISO (International Organisation for Standardisation) Standard [12] gives a definition of the fundamental requirements for structures which is as good as any:

    ‘Structures and structural elements shall be designed, constructed and maintained in such a way that they are suited for their use during the design working life and in an economical way.’

    The ISO standard for General Principles then identifies a number of requirements that should be fulfilled, with appropriate degrees of reliability. These requirements for structures are:

    - They shall perform adequately under all expected actions

    - They shall withstand extreme and/or frequently repeated actions occurring during their construction and anticipated use

    - They shall not be damaged by events like fire, explosions, impact or consequences of human errors, to an extent disproportionate to the original cause [a robustness requirement].

    The appropriate degree of reliability should be judged with due regard to the possible consequences of failure and the expense, level of effort and procedures necessary to reduce the risk of failure.


    These requirements are fundamental for structural design, and most design codes and standards adhere to them in some way or another. In traditional design codes the first requirement may be termed the Operating or Normal condition, the second requirement the Extreme or Abnormal condition, and the third a Robustness or Survivability requirement. In limit state codes the first is termed a Serviceability Limit State requirement, the second an Ultimate Limit State requirement, and the third a Progressive Collapse or Accidental Limit State requirement.

    Some codes may also explicitly specify additional requirements. For instance, the DNV Rules [26] for offshore structures aim for structures and structural elements to be designed to:

    - have adequate durability against deterioration during the design life of the structure.

    In the Norwegian offshore codes and standards (NPD, DNV and the recent NORSOK standards), this is termed a Fatigue Limit State requirement.

    3.4 DETERMINISTIC DESIGN AND ASSESSMENT PROCEDURES

    In traditional deterministic methods of design and analysis, required levels of structural safety (structural reliability) are achieved by the use of:

    • Conservatively assessed characteristic or representative values of the basic design variables; in conjunction with

    • A safety factor, or set of partial safety factors (partial coefficients), based on judgement and evidence of satisfactory performance over a period of time or, more recently, on a reliability-based calibration exercise; and using

    • An appropriate method of global structural analysis (e.g. linear static, linear dynamic, nonlinear static, nonlinear dynamic, etc); together with

    • A particular set of equations defining the capacity of individual structural components – usually contained in the relevant Code of Practice.

    Such deterministic methods of design or safety checking can be defined as Level 1 design methods [27]. Level 1 methods are deterministic reliability methods and are defined as:

    Design methods in which appropriate degrees of structural reliability are provided on a structural element basis (occasionally on a structural basis) by the use of a number of partial safety factors, or partial coefficients, related to pre-defined characteristic or nominal values of the major structural and loading variables.

    As discussed in Section 3.5, the partial factors may be calibrated using higher level reliability methods to achieve a specified target reliability.

    The main disadvantages of all deterministic design methods are that:

    (a) Properties and partial safety factors (where used) are often not given as best estimates or most likely values, with the result that it is not possible to estimate the most likely strength of the structure.


    (b) The risk of failure or collapse, or the overload necessary to cause failure or collapse, may vary widely for different structural members and components, and different types of structure.

    (c) The assumption that most design parameters are known constants rather than statistical variables is in most cases a gross simplification.

    (d) The safety factor approach is not so easy to apply in the assessment of existing structures and for making maintenance decisions.

    The different deterministic design approaches are discussed further in the following sections.

    3.4.1 Allowable, Permissible or Working-Stress Design, and Load Factor Approaches

    Allowable or permissible stress design, or working stress design as it is referred to in the US, is a traditional elastic design method that has been used extensively for the design of many types of structures worldwide.

    The design format in design codes or standards based on allowable stress principles is of the form:

    Sc ≤ Rc / SF    (3.1)

    where:
    Sc is the load effect or stress in the component due to the applied design loading,
    Rc is the specified resistance or design resistance of the component for the considered failure mode, a function of the specified yield strength of the material, and
    SF is the corresponding safety factor, accounting for all uncertainties in load, resistance, analysis methods, etc.

    By factoring the yield stress, the intention for linear elastic materials is that the stresses should remain within the elastic range; for this reason the approach is sometimes referred to as the elastic theory design method.

    • In this form, a load factor(s) is not applied and the design load equals the characteristic load.

    • The safety factor may be implicit in the design checking formulae, it may be explicitly stated, or it may be a combination of the two.

    • The basic philosophy is very simple, and this combined with its ease of application is the main advantage of the approach.
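    The simplicity of the allowable stress check of Eqn (3.1) can be seen from the following minimal sketch; the yield strength, applied stress and safety factor values are assumed for illustration and are not taken from any code.

```python
# A minimal sketch of an allowable stress (working stress) design check per
# Eqn (3.1): the elastic stress under the unfactored characteristic load
# must not exceed the component resistance divided by a single safety
# factor. All numerical values are illustrative assumptions.

def allowable_stress_check(s_c: float, r_c: float, sf: float) -> bool:
    """Return True if the component passes: S_c <= R_c / SF."""
    return s_c <= r_c / sf

yield_strength = 355.0   # N/mm^2, specified yield strength (assumed)
applied_stress = 160.0   # N/mm^2, elastic stress under characteristic load (assumed)
safety_factor = 1.5      # single global safety factor (assumed)

print(allowable_stress_check(applied_stress, yield_strength, safety_factor))
```

    Note that the single factor must cover every source of uncertainty at once, which is precisely the limitation discussed below.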

    However, as discussed below, complications arise for a number of reasons. These complications have led to a number of interpretations of the basic format in different codes and standards, and in different countries.

    A significant complication arises with the format because, in general, each component in a structure needs to be checked for a number of different combinations of loading. Many of the sources of loading vary with time, and it would lead to unconservative design if all the sources of load were considered to be acting with their full design value while maintaining the same safety factor.


    This is overcome in a variety of ways. In some codes, different characteristic loads or return periods are specified for different combinations of loads. In others, the safety factor may be reduced (or an increase in allowable stresses permitted) for combinations involving less frequent or very short duration loading events (e.g. extreme storm or rare intense earthquake). Some codes employ a mixture of the two methods.

    The main disadvantage with the allowable stress approach is that all of the associated uncertainties are incorporated into one safety factor. As a result, this approach may give inconsistent safety levels which in general are conservative, but which in some cases can lead to unconservative design.

    This is particularly serious where loads from different sources (and with different levels of uncertainty) act in opposition. One of the most notable examples where this was the main cause of failure is the Ferrybridge cooling towers in Yorkshire, which collapsed in 1965. The gravity force almost exactly opposed the design wind pressure, which led to the omission of vertical tensile reinforcement over much of the towers. Collapse occurred because of an underestimate of the design wind pressure. (The wind pressure was based on an isolated tower, with no allowance for the fact that there were eight towers closely grouped together, and was also affected by a misinterpretation of wind tunnel model test results.)

    A further difficulty arises in the assessment of buckling; although the material may behave elastically the member as a whole behaves nonlinearly. The allowable stress format must be modified to accommodate this, with the unfortunate consequence that either the calculated stress, the allowable stress, or both become rather artificial concepts and do not reflect the elastic stresses that actually occur at failure.

    There are also a number of shortcomings of elastic theory design methods when applied to reinforced concrete. The stress-strain behaviour of concrete is time-dependent, and creep strains can cause a substantial redistribution of stress in a reinforced concrete section which means that the stresses that actually exist at the service loads bear little relation to the calculated design stresses.

    The philosophy of the allowable stress approach is also stretched in a number of other areas, particularly bolted or welded connections. With the advent of computers and detailed analysis, local areas of connections can often be shown to exhibit high theoretical elastic stresses. In many circumstances for steel structures this is not a problem because of the ability of mild steel to yield locally and redistribute forces. The search for improved analysis tools to better predict the strength of connections, plates in bending, and redundant frames led to the development of yield-line analysis and plastic theory.

    A further criticism of the approach is that it does not provide a framework of logical reasoning through which all the limiting conditions on a structure can be examined, i.e. deflections, cracking, etc. It is often said that there is too much emphasis on elastic stresses and too little emphasis on the limiting conditions controlling the success of the structure in use.

    One modification is the load factor method, in which the safety factor is applied to the load and not the material. The load factor is the theoretical factor by which a set of loads acting on the structure must be multiplied to just cause structural or component failure (collapse).


    The load factor method was originally used for brittle materials, particularly cast iron. It was also popular with reinforced concrete design in the mid 1950s. With advances in knowledge of the actual behaviour of structural concrete, design could be based on ‘ultimate strength’ in which inelastic (plastic) strains are taken into account. The load factor concept was also used in the plastic theory of structures, and is used in BS 5950 for the design of steelwork [28] (although in this standard the approach is perhaps more correctly denoted a partial factor approach with the material factor set to unity).

    The load factor method overcomes the difficulty with buckling in the allowable stress method, but its disadvantage is that it becomes difficult to apply when the structure is composed of different materials that require different safety factors.
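    The load factor concept described above can be sketched as a simple ratio check; the collapse load, characteristic load and required load factor below are assumed, illustrative values.

```python
# Sketch of the load factor concept: the theoretical factor by which the
# applied loads must be multiplied to just cause collapse. The design check
# requires this factor to be at least the specified load factor.
# All numerical values are illustrative assumptions.

def load_factor(collapse_load: float, applied_load: float) -> float:
    """Theoretical load factor = collapse load / applied load."""
    return collapse_load / applied_load

plastic_collapse_load = 480.0  # kN, e.g. from a plastic mechanism analysis (assumed)
characteristic_load = 240.0    # kN, unfactored applied load (assumed)
required_factor = 1.7          # specified load factor (assumed)

lam = load_factor(plastic_collapse_load, characteristic_load)
print(f"load factor = {lam:.2f}, adequate: {lam >= required_factor}")
```

    Because the factor applies to the loads as a whole, the difficulty noted above arises as soon as different materials in one structure warrant different factors.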

    3.4.2 Partial Factor, Partial Coefficient or Load and Resistance Factor Design Approaches

    The partial factor or partial coefficient approach, or Load and Resistance Factor Design (LRFD) approach as it is referred to in the US, uses a number of partial factors that are applied to the resistance terms for different component types, and also to the basic load types prior to structural analysis. Basic load types depend on the type of structure, but include:

    • permanent loads, e.g. dead or gravity,

    • live loads, e.g. operational loads, pressure and temperature, etc.

    • dynamic loads, e.g. impact and shock loads, slugs in pipelines, etc.

    • environmental loads, e.g. wind, snow, etc.

    The partial factors reflect the level of uncertainty of the basic terms, and vary in magnitude according to the component and combinations to which they are applied.

    The partial factor format for a basic component design check is at its simplest:

    Sd = Σi γi Li ≤ Rd = Rk / γ (3.2)

    where:

    Rd is the design resistance of the component for the considered failure mode,
    Sd is the internal design load effect (or stress) on the component, evaluated from the most unfavourable combination of factored applied loads,
    γi is the load factor, or coefficient, for load type i (> 1.0 for detrimental effects, < 1.0 if beneficial),
    Li is load type i, based on the characteristic loading (e.g. dead or gravity, live or operational, environmental),
    γ is the component resistance factor, or sometimes material factor (in the US LRFD format this is replaced by a φ-factor = 1/γ),
    Rk is the nominal resistance derived from formulae evaluated with the specified characteristic values of material and geometry.

    The advantage of the partial factor approach is that the uncertainty is reflected in both the loading and the strength terms, rather than in a single safety factor as in WSD.
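As a minimal illustration, the component check of Eqn (3.2) can be sketched as follows; all numerical values (loads, factors, resistance) are hypothetical and chosen only for illustration, not taken from any code:

```python
# Sketch of the basic partial factor check of Eqn (3.2):
# Sd = sum(gamma_i * L_i) <= Rd = Rk / gamma.

def design_check(loads, load_factors, Rk, gamma_R):
    """Return (Sd, Rd, passes) for a single-component check."""
    Sd = sum(g * L for g, L in zip(load_factors, loads))  # factored load effect
    Rd = Rk / gamma_R                                     # design resistance
    return Sd, Rd, Sd <= Rd

# Hypothetical characteristic loads: dead, live, environmental (kN)
loads = [120.0, 80.0, 40.0]
factors = [1.35, 1.50, 1.30]   # illustrative load factors, not code values
Sd, Rd, ok = design_check(loads, factors, Rk=400.0, gamma_R=1.15)
print(f"Sd = {Sd:.1f} kN, Rd = {Rd:.1f} kN, passes: {ok}")
```

Note that the single resistance factor here corresponds to the simplest form of Eqn (3.2); real codes apply different factors per component type and load combination.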


    However, the partial factor format represented in Eqn (3.2), although simple, is often a source of confusion.

    The main misunderstanding arises from the determination of the design load effect in a component. This should be evaluated in practice by factoring the basic load cases to form a load combination of the design loads before undertaking a structural analysis to determine local member forces or stresses. Due to the large number of load combinations that have to be analysed in practice, a short-cut is to undertake the structural analyses for basic load cases, and then factor and combine the component load effects, forces or stresses, by superposition. For linear elastic structures the two approaches are of course equivalent, and many engineers (particularly those experienced in WSD methods, where the safety factor is applied at component level, see Eqn (3.1)) may be unaware of the distinction. However, for structures influenced by dynamic effects or nonlinear behaviour, even in part, superposition is no longer valid, and the distinction is important.
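The distinction can be shown with a toy sketch: for a linear response the two procedures agree, while for a nonlinear response they diverge. Both response functions below are invented purely for illustration:

```python
# Toy illustration: factoring loads before analysis vs. factoring and
# superposing basic-case results after analysis. Response functions invented.

def linear_response(load):
    return 0.5 * load                        # stress proportional to load

def nonlinear_response(load):
    return 0.5 * load + 0.002 * load ** 2    # e.g. geometric nonlinearity

L1, L2 = 100.0, 60.0    # two basic load cases (characteristic values)
g1, g2 = 1.35, 1.50     # illustrative load factors

for response in (linear_response, nonlinear_response):
    factor_first = response(g1 * L1 + g2 * L2)          # combine, then analyse
    superpose = g1 * response(L1) + g2 * response(L2)   # analyse, then combine
    print(response.__name__, factor_first, superpose)
```

For the linear case both routes give the same design load effect; for the nonlinear case factoring first gives a markedly larger (and correct) value than superposing factored results.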

    Unfortunately, for some types of structures and analyses the load effects have to be determined by factoring and combining the results of analysis for basic load cases. This situation arises when the structure is in equilibrium (e.g. in an installation phase), and in particular where all or part of the structure is buoyant (e.g. subsea pipeline spans).

    Whilst Eqn (3.2) illustrates the principles of partial factor design, it is not general enough for many situations, e.g. design checks involving stress interaction effects, composite materials, etc. Thus, the basic format can be expressed more generally as:

    function(Sd, Rd) ≥ 0 (3.3)

    Any number of partial factors can be introduced, and ideally each source of uncertainty should have its own associated partial factor, although in practice this may make a code very unwieldy. In general, partial factors are applied as follows:

    Ad = γA Ak or Ad = Ak / γA (3.4)

    where Ak is the characteristic or representative value of the variable, and Ad is the design value of the variable.

    The format was first internationalised by ISO in 1983 [29], which introduced partial factors for seven basic sources of uncertainty. This format formed the basis of BS 5400 [30], the Eurocodes [31], etc. However, for a number of reasons the format was made more general in a later revision of the ISO standard [12].

    The internationally approved format in the ISO standard [12], which is of course intended for use with limit state design (see Section 0), is:

    function(Fd, fd, ad, θd, C, γn) ≥ 0 (3.5)

    where:

    Fd are the design values of the actions (or loads), determined from Fd = γf Fr, where Fr are representative values of the actions,
    fd are the design values of material properties, determined from fd = fk / γm,
    ad are the design values of geometric quantities, determined from ad = ak ± Δa,
    θd are the design values of the model uncertainties not included in the load and resistance variables, determined from θd = γD or θd = 1/γD,
    C is a vector of serviceability constraints, e.g. acceptable deflection,
    γn is a coefficient reflecting the importance of the structure.

    Eqn (3.5) should be regarded only as a symbolic description of the principles, and each symbol in Eqn (3.5) may be regarded as a single variable or a vector containing several variables. In this generalised form the format is cumbersome, but in most cases many of the partial factors are set to unity.

    The internationally approved format uses partial factors as follows:

    γf for actions (or loads), which takes account of:
    − the possibility of unfavourable deviation of the action value (or load) from its representative value – separate factors may be defined for each type of loading
    − the uncertainty in the assessment of the effects of an action (or loading), i.e. unforeseen stress distribution in the structure, and variations in dimensional accuracy achieved during fabrication

    γm for materials, which takes account of:
    − the possibility of unfavourable deviations in the material properties from the characteristic value – separate factors may be defined for each type of material
    − uncertainties in the conversion of parameters derived from test results into design parameters

    ∆a are additive geometric quantities, i.e. ad = ak ± ∆a, which take account of:
    − the possibility of unfavourable deviations in the geometric properties from the characteristic value

    γD for model uncertainties, which takes account of:
    − uncertainties of models (i.e. the codified formulae used to predict capacity) as far as can be found from measurements or comparative calculations

    γn is a coefficient by which the importance of the structure and consequences of failure, including the significance of the type of failure, are taken into account.

    The representative value of an action (or load) is derived from the characteristic value and is factored by load combination factors to take into account the reduced probability that various loadings acting together will attain their nominal values simultaneously. A factor is also introduced to account for favourable or unfavourable contributions from an action.

    The values of the partial factors depend on the design situation and the limit state considered. The basic format is often simplified in practice by combining together many of these factors, or by taking some to be unity.

    The partial safety factors applied to both loads and strength can be calibrated using reliability methods; this is discussed further in Section 6.2. This permits the loading uncertainty to be


    accounted for in the load factors, and the uncertainty in yield stress and resistance modelling to be accounted for in the resistance and material partial factors. Whilst the partial factors may be derived using structural reliability methods, this is transparent to a designer using the code.

    A disadvantage of the partial factor approach, and a prime reason why it is not more widely and readily adopted, is that the increased likelihood of design error (because of the increased complexity) may outweigh the benefits of a theoretically better method.

    Limit State design philosophy

    A limit state is generally understood as a state of the structure, or part of the structure, that no longer meets the requirements laid down for its performance or operation. Thus, limit states can be defined as a specified set of states that separate a desired state from an undesirable state which fails to meet the design requirements. More generally, they may be considered without a specific physical interpretation, such that a Limit State is a mathematical criterion that categorises any set of values of the relevant structural variables (loads, material and geometrical variables) into one of two categories: the ‘desirable’ category (also known as the ‘safe’ set) and the ‘adverse’ category (often referred to as the ‘failure’ set). The word ‘failure’ then means ‘failure to satisfy the Limit State criterion’, rather than failure in the sense of some dramatic physical event.

    In Codes of Practice, Limit States are considered to represent the various conditions in which a structure would be considered to have failed to fulfil the purposes for which it was built. Normally, limit states relate to material strength, but they are affected by use, performance, environment, material behaviour, shape, quality, protective measures and maintenance.

    Limit States may be defined for components or parts of a structure such as stiffened panels, stiffeners, etc. or for the complete structural system i.e. pressure vessel, pipeline, etc.

    A component, or system, may fail a limit state in any (one) of a number of failure modes. Modes of failure (at both component and system levels) may include mechanisms such as:

    − yielding
    − denting
    − bursting
    − fatigue
    − ovality
    − fracture
    − bending
    − corrosion (internal and/or external)
    − buckling (local or large scale)
    − erosion
    − creep
    − environmental cracking
    − ratcheting
    − excessive displacement
    − de-lamination
    − excessive vibration

    which, in the extreme, lead to the loss of

    • structural integrity

    • containment

    The consequences of such failures can affect

    • safety of life

    • environment (e.g. pollution)


    • operations

    • economics

    A limit state code may be based either on an allowable stress format or a partial factor format, although most are based on the latter. In older, traditional allowable stress codes the limit states are implicit within the code; in a limit state code they are explicitly referenced.

    Alternatively, the code-checking equations for the various limit states can be used (without partial safety factors) in a reliability analysis to ensure that the failure probability of components or of the structural system does not exceed an acceptable target level.

    The internationally approved format in ISO 2394 [12] for general principles is to categorise Limit States as:

    • Ultimate limit states (ULS), which correspond to the maximum load carrying capacity, and include all types of collapse behaviour.

    • Serviceability limit states (SLS), which concern normal functional use and all aspects of the structure at working loads.

    Conditions exceeding some serviceability limit states may be reversible; conditions exceeding ultimate limit states are never reversible.

    This is also the format adopted in the Eurocodes [31].

    However for many codes, including other ISO standards, additional limit states are defined. For example, two further limit states are defined in the (Draft) ISO standard 13819-1 for fixed offshore structures [19], and some Norwegian standards (the DNV Rules for fixed offshore structures [26] and subsea pipelines [32], and the NORSOK Standards [33] which largely replace the DNV and NPD standards). These other limit states are as follows:

    • Fatigue Limit State (FLS) — A condition accounting for accumulated cyclic or repetitive load effects during the life span of the structure.

    • Accidental or Accidental damage Limit State (ALS) — A condition caused by accidental loads, which if exceeded implies loss of structural integrity. Two conditions may be defined:

    − Resistance to abnormal loads

    − Resistance in a damaged condition

    For some forms of structure, an ALS is sometimes referred to as a Progressive Limit State (PLS).

    Some examples of these Limit States for generic structures are listed here, followed by some specific examples for pressure vessels and pipeline systems.

    Ultimate Limit State (ULS)

    This corresponds to the maximum resistance to applied actions, which includes:


    • failure of critical components of the structure caused by exceeding the ultimate strength or the ultimate deformation of the components,

    • transformation of the structure or part of it into a mechanism (collapse or excessive deformation),

    • instability of the structure or part of it (buckling, etc.).

    Serviceability Limit State (SLS)

    This relates to limits of normal operations, which include:

    • deformations or movements that affect the efficient use of structural or non-structural components, e.g. as would prevent pipeline pigging,

    • excessive vibrations producing discomfort or affecting non-structural components or equipment (especially if resonance occurs),

    • local damage that affects the use of structural or non-structural components,

    • corrosion that affects the properties and geometrical parameters of the structural and non-structural components.

    Fatigue Limit State (FLS)

    This refers to cumulative damage due to repeated actions leading to fracture.

    Detail connections of structural components under repetitive loading are prone to fatigue, examples include:

    • tubular joints in offshore structures subject to large numbers of wave cycles

    • stiffeners and attachments to road and railway bridges

    • connections in radio masts and transmission towers subject to wind induced vibration

    • welds in pipelines subject to start-up and shutdown cycles

    • nozzles, brackets and longitudinal connections to pressure vessels subject to operational cycles.

    Inspection and any required maintenance must be carried out in the field and generally with as little interruption to operation and production as possible. Inspection is costly, can be hazardous, and often difficult because of access limitations. Thus, there is a very strong desire to prevent any fatigue failures initiating.

    Accidental Limit State (ALS)

    This Limit State is primarily used for offshore structures, where the intention is to ensure that the structure can tolerate the damage due to specified foreseeable accidental events and subsequently maintain structural integrity for a sufficient period under specified environmental conditions to enable evacuation to take place.

    For pipelines, impacts due to third party external interference, which may lead to pipeline rupture, may be considered in this category.

    These loads (actions) are often defined as events with return periods of 10,000 years or more, compared with those for Ultimate Limit States that are generally based on events with return periods that are a much smaller multiple of the design life, typically 50- or 100-year events.
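The relation between a return period and the probability of exceedance is a standard result, assuming independent annual maxima; a brief sketch:

```python
# Standard return-period relations, assuming independent annual maxima.

def annual_exceedance(T):
    """Annual probability of exceeding the T-year event."""
    return 1.0 / T

def exceedance_over_life(T, years):
    """P(at least one exceedance in 'years' years) = 1 - (1 - 1/T)**years."""
    return 1.0 - (1.0 - 1.0 / T) ** years

# A 100-year event has roughly a 40% chance of being exceeded at least once
# in a 50-year design life; a 10,000-year event, roughly 0.5%.
for T in (100, 10_000):
    print(T, annual_exceedance(T), round(exceedance_over_life(T, 50), 4))
```

This is why ULS design events (e.g. 50- or 100-year) are likely to be approached within the life of the structure, whereas ALS events are deliberately chosen to be very rare.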


    Limit States for pressure vessel design

    The (draft) Eurocode for pressure vessels prEN 13445-3 [34] is based on traditional permissible stress methods, but it does include an Annex covering alternative design methods. In the Design By Analysis method limit states are classified as either ultimate or serviceability:

    • an ultimate limit state is defined as a structural condition (of the component or vessel) beyond which the safety of personnel could be endangered

    • a serviceability limit state is defined as a structural condition (of the component or vessel) beyond which the service criteria specified for the component are no longer met.

    Limit States for pipeline design

    A number of definitions of limit states for operating pipelines have been proposed. Most use the concepts of Serviceability Limit State (SLS) and Ultimate Limit State (ULS), and many are confined to these two limit states only. Descriptions or definitions vary, as illustrated in the following, taken from References [32], [35] and [36]:

    Reference: DNV [32]
    SLS: A condition, which if exceeded, renders the pipeline unsuitable for normal operations
    ULS: A condition, which if exceeded, compromises the integrity of the pipeline

    Reference: Oude Hengel [35]
    SLS: [A limit state] that may lead to a restriction of the intended operation of the pipeline
    ULS: [A limit state] that could result in burst or collapse of the pipeline

    Reference: Zimmerman [36]
    SLS: [A limit state] related to functional requirements
    ULS: [A limit state] related to load carrying capacity

    Key words
    SLS: impediment to normal operations, intended operation, functional requirements
    ULS: loss of integrity or load carrying capacity; burst/collapse

    Examples of Ultimate Limit States include leaks and ruptures. Examples of Serviceability Limit States include permanent deformation due to yielding or denting.

    As discussed above, the DNV Rules also use limit states for:

    • Fatigue Limit State (FLS) — A ULS condition accounting for accumulated cyclic load effects.

    • Accidental Limit State (ALS) — A condition caused by accidental loads which, if exceeded, implies loss of structural integrity.

    Kaye [37], takes a different, more practical, approach to the definition of limit states. He defines four limit states, in descending order of severity, in the following way:

    i. Major System Failure: causing or leading to sudden failure (rupture), possibly resulting in fatalities, damage to installations and environmental damage.


    ii. Minor System Failure: causing or leading to loss of containment, possibly resulting in environmental damage.

    iii. Operability: causing loss of operability, without loss of safety. The transport of product is reduced or ceases. Pipeline operation may be recovered by repair or revision of operating procedures.

    iv. Serviceability: causing impairment and possible loss of serviceability. The pipeline is able to operate but integrity is impaired. Remedial action may be necessary to service or maintain the system.

    3.4.3 Characteristic Values in Limit State Design

    The term characteristic value for strength and load variables was originally introduced in the late 1950s by the Conseil International du Bâtiment pour la Recherche, l’Etude et la Documentation (CIB) and was first discussed in the UK by Thomas [38].

    Ideally in structural design, loading intensities and material strengths would be chosen on the assumption that they represent the maximum load intensity to which the structure will be subjected and the minimum strength that can occur. In reality few basic variables have clearly defined upper or lower limits that can be sensibly used in design. A more rational approach is to specify a characteristic value of load which has a stated small probability of being exceeded during the life of the structure, and for materials, a characteristic value of strength to be exceeded by all but some stated small proportion of test results.

    The characteristic value of a basic variable X is defined as the pth fractile of X (taken towards unfavourable values). Statistical uncertainty, which is often present due to small datasets in practice, is included by defining a confidence level in the value, e.g. the 5% fractile at the 75% confidence level. The basis of the selection of the probability p is to some extent arbitrary.

    In practice a specified characteristic value (specified value) is used for design, since the actual distribution of a particular material strength, for instance, will evolve with manufacturing processes etc., and will vary from producer to producer.
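As a sketch of the fractile calculation, the following estimates a characteristic yield strength as the 5% fractile under a normal assumption. The dataset is invented, and the choice k = 1.645 (which ignores the statistical uncertainty from a small sample) is illustrative only:

```python
# Sketch: characteristic strength as the 5% fractile of test data under a
# normal assumption. Dataset and k-value are illustrative only.
import statistics

def characteristic_value(samples, k=1.645):
    """xk = mean - k*s. A larger k (e.g. the kn values tabulated in
    EN 1990 Annex D) would also cover statistical uncertainty from
    small sample sizes; k = 1.645 is the large-sample 5% fractile."""
    return statistics.mean(samples) - k * statistics.stdev(samples)

yield_tests = [355.0, 362.0, 348.0, 371.0, 359.0, 366.0, 352.0, 358.0]  # MPa
print(round(characteristic_value(yield_tests), 1))   # characteristic yield
```

A specified value adopted by a code would then typically sit at or below this estimate, for the reasons given above.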

    3.5 PROBABILISTIC DESIGN AND ASSESSMENT PROCEDURES

    Probabilistic analysis, based on structural reliability analysis methods, is an extension of deterministic analysis, since deterministic quantities can be interpreted as trivial random variables whose density functions are contracted to spikes and whose standard deviations tend to zero.

    Variations in the values of the basic engineering parameters occur because of the natural physical variability, because of poor information, and because of accidental events involving human error. In the past, emphasis has been focused on the former categories, but the last is equally if not more important (see Section 3.6).

    In addition to the uncertainties associated with the individual load and strength parameters (basic variables) which are mentioned above, it is well known that both the methods of global analysis and the equations used for assessing the strength of individual components are not exact.

    In the case of global structural analysis, the true properties of the materials and components often deviate from the idealisations on which the methods are based. Without exception, all


    practical structural systems exhibit behaviour that (to a certain extent) is nonlinear and dynamic, and have properties that are time-dependent, strain-rate dependent and spatially non-uniform. Furthermore, most practical structures are statically indeterminate and contain high levels of residual forces (and hence stresses) resulting from the particular fabrication and installation sequence adopted; in addition they often contain so-called non-structural components which are normally ignored in the analysis, but which often contribute in a significant way, particularly to stiffness. These differences between real and predicted behaviour can be termed global analysis model uncertainty. In general, this is extremely difficult to quantify. Estimates of the magnitude of this type of model uncertainty can be obtained by comparisons using more refined analysis tools and sensitivity studies, or by full-scale physical testing.

    As far as individual components are concerned, the design equations given in Codes of Practice are generally chosen to be conservative, but there are often large variations in the ratio of real to predicted behaviour, even when the individual parameters in the equations are known precisely (e.g. Poisson’s ratio).

    The variability in load and strength parameters (including model uncertainty) arising from physical variability and inadequacies in modelling are allowed for in deterministic design and assessment procedures by an appropriate choice of safety factors and by an appropriate degree of bias in the Code design equations. In probabilistic methods the variability in the basic design variables, including model uncertainty, is taken into account directly in the probabilistic modelling of the quantities.
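As a minimal sketch of this direct treatment, the following Monte Carlo estimate models load and resistance as random variables and counts violations of the limit state g = R − S. The normal distributions and their parameters are invented for illustration:

```python
# Minimal Monte Carlo estimate of failure probability for the limit state
# g = R - S, with invented normal distributions for load and resistance.
import random

def failure_probability(n=200_000, seed=1):
    rng = random.Random(seed)                 # fixed seed for repeatability
    failures = 0
    for _ in range(n):
        R = rng.normalvariate(400.0, 30.0)    # resistance, e.g. kN
        S = rng.normalvariate(300.0, 25.0)    # load effect
        if R - S < 0.0:                       # limit state violated
            failures += 1
    return failures / n

print(failure_probability())
```

For these parameters the safety margin R − S is itself normal, so the estimate can be checked against the exact result; more elaborate limit states generally require simulation or the approximate Level 2 methods introduced below.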

    Following on from the definition of Level 1 methods in Section 3.4, methods of structural reliability can be divided into two broad classes. From [27] these are:

    Level 2: Methods involving certain approximate iterative calculation procedures to obtain an approximation to t