safety and security in distributed systems
Industries with the potential to injure or kill people or to do serious damage to the environment
hazardous industry
Require high-integrity systems and safety management processes to ensure safety
high integrity systems
Systems where failure could lead to an accident and for which high reliability is claimed
- Pressure boundaries
  - Oil & Gas wells
  - Boilers
- Instrumentation & Control Systems
  - Emergency shutdown
  - Fire and gas leak detection
- Life supporting devices
  - Pacemakers
  - Infusion pumps
system criticality
Non-critical
Useful system
- Low dependability
- System does not need to be trusted
Business-critical, Mission-critical, Safety-critical
High Availability
- Focus on costs of failure caused by system downtime, cost of spares, repair equipment and personnel, and warranty claims
High Reliability
- Increase the probability of failure-free system operation over a specified time in a given environment for a given purpose
High Safety & Integrity Level- High reliability
- High availability
- High security
- Focus is not on cost, but on preserving life and nature
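The reliability definition above ("probability of failure-free operation over a specified time") can be made concrete with the standard constant-failure-rate model, R(t) = e^(-λt). This sketch is illustrative and not from the talk; the failure rate used is a made-up example value.

```python
import math

def reliability(failure_rate_per_hour: float, hours: float) -> float:
    """Probability of failure-free operation over `hours`, assuming a
    constant failure rate (exponential model): R(t) = exp(-lambda * t)."""
    return math.exp(-failure_rate_per_hour * hours)

# A component with lambda = 1e-5 failures/hour, over one year (8760 h):
r = reliability(1e-5, 8760)  # ~0.916, i.e. ~8.4% chance of at least one failure
```

For safety-critical functions, such figures feed into the required Safety Integrity Level rather than being a cost trade-off.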
Troll A, 472 meters, the largest man-made “thing” ever moved
Software was an alien concept
things anno 1995
Fallacies of distributed computing:
1. The network is reliable
2. Latency is zero
3. Bandwidth is infinite
4. The network is secure
5. Topology doesn’t change
6. There is one administrator
7. Transport cost is zero
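The first fallacy is the one most directly visible in code: a remote call must be written to expect failure. A minimal defensive sketch (the function and parameter names are illustrative, not from the talk) with bounded retries, exponential backoff and jitter:

```python
import random
import time

def call_with_retries(operation, attempts=3, base_delay=0.1):
    """Invoke a remote operation defensively: the network is NOT reliable,
    so retry a bounded number of times before surfacing the failure."""
    for attempt in range(attempts):
        try:
            return operation()
        except (TimeoutError, ConnectionError):
            if attempt == attempts - 1:
                raise  # give up: let the caller handle the failure explicitly
            # Back off before retrying; jitter avoids synchronized retry storms.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
```

Wrapping network calls like this contains a transient failure instead of assuming it away; the remaining fallacies (latency, bandwidth, security, topology) need analogous explicit treatment.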
networked every-things
A distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable.
Leslie Lamport
software is ubiquitous
Defines the behaviour of
1. Mobile devices
2. Medical devices
3. Computer Networks
4. Industrial control systems
5. Supply chains and logistics
6. Robots, cars & aircraft
7. Human-Machine Interfaces
Institutionalizes our insights and knowledge
before software
Tangible control logic
• Design level
• Implementation level
• Verification & test level
No cyber threats
• Intrusion
• Viruses
• Theft
• Identity
two unique properties
Inspection & Test
• Software can’t be inspected and tested the way analogous physical components can
CPU – the single point of failure
• All signals are threaded through the one single element
• Execution sequence is unknown
• Same defect is systemized across multiple instances
Impacts how we must manage software for critical systems
some specific challenges
Common mode failure
Malware, Viruses and Hacking
Human Factors
Blurred boundaries
Identity management
common mode failure
“Results from an event which, because of dependencies, causes a coincidence of failure states of components in two or more separate channels of a redundancy system, leading to the defined system failing to perform its intended function.”
Ariane 5 test launch, 1996
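The definition above can be quantified with the beta-factor model commonly used in reliability engineering: a fraction β of each channel's failures is common-cause and defeats all redundant channels at once, so redundancy only helps against the independent part. The numbers below are illustrative, not from the talk:

```python
def dual_channel_failure_prob(p_channel: float, beta: float) -> float:
    """Failure probability of a 1-out-of-2 redundant system under the
    beta-factor model: fraction `beta` of channel failures are
    common-cause and take out both channels simultaneously."""
    p_independent = (1 - beta) * p_channel
    p_common = beta * p_channel
    # Both independent parts must fail, OR one common-cause event occurs.
    return p_independent ** 2 + p_common

# 1% per-channel failure probability, 10% common-cause fraction:
p = dual_channel_failure_prob(0.01, 0.10)  # ~1.1e-3, dominated by the common-cause term
```

The point the Ariane 5 loss illustrates: duplicating identical software duplicates the same defect, so β approaches 1 and redundancy buys almost nothing.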
malware, viruses and hacking
Motivated by financial, political, criminal or idealistic interests
Software created to cause harm
• Change of system behaviour
• Steal / destroy data or machines
Exploits weaknesses in
• Human character
• Technical designs
Horror stories:
• Stuxnet and the Iranian centrifuges (Siemens control system)
• Saudi Aramco hack of 35,000 computers (Windows back office)
human factors
How to minimize the probability?
Mistakes occur everywhere
• Specification
• Design
• Implementation
• Deployment
• Operations
Humans make mistakes
• By commission
• By omission
• By carelessness
blurred boundaries
Conflicting interests and divergent situational understanding across disciplines and roles.
Architects think and design in terms of hierarchy and layering
Programmers think and design in terms of threads of execution
Users need systems that work and solve real-world problems
Operations needs to get the job done
identity
How to ensure that a thing or person is the one they claim to be?
What are the impacts on
- Security
- Safety
- Integrity
- Availability
- Reliability
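One basic building block for machine identity is a message authentication code over a shared secret: the receiver can check that a message really came from a holder of the key and was not altered. This is a minimal sketch using the Python standard library; the secret and message are hypothetical, and real deployments use certificates/PKI rather than a single pre-shared key.

```python
import hashlib
import hmac

SECRET = b"device-provisioning-key"  # hypothetical pre-shared secret

def sign(message: bytes) -> bytes:
    """Tag a message so the receiver can verify authenticity and integrity."""
    return hmac.new(SECRET, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(sign(message), tag)
```

A forged or tampered command (say, to an emergency-shutdown valve) then fails verification, which ties identity directly to the safety, integrity and availability concerns listed above.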
systems engineering
Architecture centric
• Design
• Implementation
• Deployment
• Usage
Risk based
• Requirements
• Design
• Implementation
• Commissioning
• Usage
Holistic and remember higher order effects
Human brain – the planet’s most sophisticated and vulnerable decision maker
human factors
• Emotions trump facts (irrationality)
• Limited processing capacity
• Need to rest, easily bored
• Inconsistency across exemplars
• Creative, easily distracted
• Values (ethics and morals)
• Mental illness
Address our inherent weaknesses from day one
• I have to make frequent decisions and many of them depend upon readings from sensors that can be correct, noisy, random, unavailable, or in some other state.
• The decisions I have to make often have safety consequences, they certainly have economic consequences, and some are irreversible.
• At any point in time there may be three or four actions I could take based on my sense of what’s happening on the rig.
• I would like better support to determine how trustworthy my readings are, what the possible situations are and the consequences of each action.
What is the best action to take?
enhance human decision making
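One concrete way to support the operator quoted above is to fuse redundant sensor readings robustly before presenting them. A minimal sketch (illustrative, not the talk's design): take the median over the channels that actually delivered a value, since the median tolerates a minority of noisy or stuck sensors, and report unavailability honestly instead of guessing.

```python
import statistics

def robust_reading(samples):
    """Median over the readings that are actually available.

    `None` marks an unavailable channel. Returns None when no channel
    delivered a value -- the operator must be told, not handed a guess."""
    available = [s for s in samples if s is not None]
    if not available:
        return None
    return statistics.median(available)

# Three pressure sensors: one healthy pair, one stuck high, one offline.
value = robust_reading([101.2, 101.5, 999.9, None])  # -> 101.5
```

Presenting the fused value together with how many channels agreed is one way to answer the driller's question about how trustworthy the readings are.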
use and adhere to standards
IEC 61508 Functional safety of electrical/electronic/programmable electronic safety-related systems
IEC 61511 Safety instrumented systems for the process industry sector
DO-178C Software considerations in airborne systems and equipment certification
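IEC 61508 couples required risk reduction to Safety Integrity Levels: for a low-demand safety function, the average probability of failure on demand (PFDavg) must fall within a decade band per SIL. The sketch below encodes those bands as I understand them from the standard; it is an illustration, not a substitute for the standard itself.

```python
def sil_for_pfd(pfd_avg: float):
    """Map an average probability of failure on demand (low-demand mode)
    to a Safety Integrity Level per the IEC 61508 target bands:
    SIL 4: 1e-5 <= PFD < 1e-4   SIL 3: 1e-4 <= PFD < 1e-3
    SIL 2: 1e-3 <= PFD < 1e-2   SIL 1: 1e-2 <= PFD < 1e-1"""
    bands = [(4, 1e-5, 1e-4), (3, 1e-4, 1e-3), (2, 1e-3, 1e-2), (1, 1e-2, 1e-1)]
    for sil, lo, hi in bands:
        if lo <= pfd_avg < hi:
            return sil
    return None  # outside the defined SIL range

# An emergency shutdown function with PFDavg = 5e-4 meets SIL 3:
level = sil_for_pfd(5e-4)
```

This is why the talk stresses that the focus is not cost: the required SIL is set by the hazard, and the engineering must then demonstrate the corresponding failure probability.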
The good thing about standards is that there are so many to choose from.
Andrew S. Tanenbaum
Not sufficient on their own
Represents insights
Must be tailored to be useful
summary
Heading toward a world of interconnected every-things
Some of these things support hazardous industries and critical functions
Exposed to the inherent vulnerabilities in computers and software
Hazardous industries need high-integrity systems
Non-critical software practice fails for critical systems
- Rigorous Systems Engineering, Safety & Security Architecture and Standards
Human factors must be addressed from day one
- Through engineering, operations and use
Safety and security in distributed systems
Einar Landre, Leader
E-mail: [email protected]
Mobile: +4741470537
www.statoil.com
Thank you