Critical Systems
Loganathan R.
Prof. Loganathan R., CSE, HKBKCE
Objectives
• To explain what is meant by a critical system, where system failure can have severe human or economic consequences.
• To explain four dimensions of dependability: availability, reliability, safety and security.
• To explain that, to achieve dependability, you need to avoid mistakes, detect and remove errors, and limit damage caused by failure.
Critical Systems
• If a system failure results in significant economic losses, physical damage or threats to human life, the system is called a critical system. There are three types:
• Safety-critical systems
– Failure results in loss of life, injury or damage to the environment;
– Chemical plant protection system;
• Mission-critical systems
– Failure results in failure of some goal-directed activity;
– Spacecraft navigation system;
• Business-critical systems
– Failure results in high economic losses;
– Customer accounting system in a bank;
System dependability
• The most important emergent property of a critical system is its dependability. It covers the related system attributes of availability, reliability, safety and security.
• Importance of dependability
– Systems that are unreliable, unsafe or insecure are often rejected by their users, who may then refuse other products from the same company.
– System failure costs may be very high (e.g. a reactor or an aircraft navigation system).
– Untrustworthy systems may cause information loss with a high consequent recovery cost.
Development methods for critical systems
• Trusted methods and techniques must be used.
• These methods are not cost-effective for other types of system.
• The strengths and weaknesses of older methods are understood.
• Formal methods reduce the amount of testing required. Example:
– Formal mathematical methods
Socio-technical critical system failures
• Hardware failure
– Hardware fails because of design and manufacturing errors or because components have reached the end of their natural life.
• Software failure
– Software fails due to errors in its specification, design or implementation.
• Human operator failure
– Operators fail to use the system correctly.
– Now perhaps the largest single cause of system failures.
A simple safety-critical system
• Example: a software-controlled insulin pump.
• Used by diabetics to simulate the function of the pancreas, which manufactures insulin, an essential hormone that metabolises blood glucose.
• Measures blood glucose (sugar) using a micro-sensor and computes the insulin dose required to metabolise the glucose.
Insulin pump organisation
[Block diagram: a controller, driven by a clock and a power supply, reads a sensor and drives a pump that feeds insulin from an insulin reservoir through a needle assembly; two displays and an alarm report the system's state.]
Insulin pump data-flow
[Data-flow diagram: the blood sugar sensor produces blood parameters for blood sugar analysis, which yields the blood sugar level; insulin requirement computation turns this into an insulin requirement; the insulin delivery controller issues pump control commands to the insulin pump, which delivers the insulin.]
Dependability requirements
• The system shall be available to deliver insulin when required to do so.
• The system shall perform reliably and deliver the correct amount of insulin to counteract the current level of blood sugar.
• The essential safety requirement is that excessive doses of insulin should never be delivered, as this is potentially life-threatening.
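The safety requirement above can be sketched as a dose computation that is always clamped. This is a minimal illustrative sketch, not the real device's algorithm: the function names, the target blood-sugar band and the maximum dose are all assumed values.

```python
# Minimal sketch of the pump's safety requirement: however the dose is
# computed, it is clamped so an excessive dose can never be delivered.
# All names and thresholds here are illustrative assumptions.

SAFE_MIN, SAFE_MAX = 4.0, 7.0   # assumed target blood-sugar band (mmol/L)
MAX_SINGLE_DOSE = 4             # assumed maximum safe dose (units)

def compute_dose(current_level, previous_level):
    """Compute an insulin dose from two consecutive sugar readings."""
    if current_level <= SAFE_MIN:
        return 0                          # never dose when sugar is low
    rising = current_level > previous_level
    if current_level <= SAFE_MAX and not rising:
        return 0                          # in band and falling: no action
    computed = round(current_level - SAFE_MAX) + (1 if rising else 0)
    # Safety requirement: clamp so an excessive dose is never delivered.
    return max(0, min(computed, MAX_SINGLE_DOSE))

print(compute_dose(12.0, 9.0))   # high and rising: dose clamped to the maximum
print(compute_dose(5.0, 6.0))    # in band and falling: no dose
```

The point of the sketch is the final line of `compute_dose`: whatever the (possibly faulty) computation produces, the clamp enforces the safety requirement.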
System Dependability
• The dependability of a system equates to its trustworthiness.
• A dependable system is a system that is trusted by its users.
• Principal dimensions of dependability are:
– Availability: the probability that the system will be up and running and able to deliver services at any given time;
– Reliability: correct delivery of services as expected by the user over a given period of time;
– Safety: a judgment of how likely it is that the system will cause damage to people or its environment;
– Security: a judgment of how likely it is that the system can resist accidental or deliberate intrusions.
Dimensions of dependability
[Figure: dependability branches into availability, reliability, safety and security.]
– Availability: the ability of the system to deliver services when requested.
– Reliability: the ability of the system to deliver services as specified.
– Safety: the ability of the system to operate without catastrophic failure.
– Security: the ability of the system to protect itself against accidental or deliberate intrusion.
Other dependability properties
• Repairability
– Reflects the extent to which the system can be repaired in the event of a failure;
• Maintainability
– Reflects the extent to which the system can be adapted to new requirements;
• Survivability
– Reflects the extent to which the system can deliver services while it is under hostile attack;
• Error tolerance
– Reflects the extent to which user input errors can be avoided and tolerated.
Dependability vs performance
• It is very difficult to tune systems to make them more dependable.
• High dependability is achieved at the expense of performance, because it requires extra/redundant code to perform the necessary checking.
• It also increases the cost.
Dependability costs
• Dependability costs tend to increase exponentially as increasing levels of dependability are required.
• There are two reasons for this:
– The use of more expensive development techniques and hardware that are required to achieve the higher levels of dependability;
– The increased testing and system validation that is required to convince the system client that the required levels of dependability have been achieved.
Costs of increasing dependability
[Graph: cost rises ever more steeply as the required dependability increases from low through medium, high and very high to ultra-high.]
Availability and reliability
• Reliability
– The probability of failure-free system operation over a specified time in a given environment for a specific purpose.
• Availability
– The probability that a system, at a point in time, will be operational and able to deliver the requested services.
• Both of these attributes can be expressed quantitatively.
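As a sketch of how these attributes can be quantified: steady-state availability is commonly computed from mean time to failure (MTTF) and mean time to repair (MTTR), and reliability over a mission time can be estimated under an assumed constant failure rate (the exponential model and the example numbers below are assumptions, not from the slides).

```python
import math

def availability(mttf_hours, mttr_hours):
    """Steady-state availability: the fraction of time the system is up."""
    return mttf_hours / (mttf_hours + mttr_hours)

def reliability(mission_hours, mttf_hours):
    """Probability of failure-free operation for `mission_hours`,
    assuming a constant failure rate of 1/MTTF (exponential model)."""
    return math.exp(-mission_hours / mttf_hours)

# Assumed example: a system that fails every 500 h on average
# and takes 2 h to repair.
print(round(availability(500, 2), 4))   # fraction of time operational
print(round(reliability(24, 500), 4))   # chance of surviving one day
```

Note how the availability figure depends on repair time while the reliability figure does not, matching the distinction drawn on the next slide.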
Availability and reliability
• It is sometimes possible to include system availability under system reliability.
– Obviously, if a system is unavailable it is not delivering the specified system services.
• However, it is possible to have systems with low reliability that must be available. So long as system failures can be repaired quickly and do not damage data, low reliability may not be a problem.
• Availability takes repair time into account.
Faults, errors and failures
• Failures are usually a result of system errors that are derived from faults in the system.
• However, faults do not necessarily result in system errors.
– The faulty system state may be transient and ‘corrected’ before an error arises.
• Errors do not necessarily lead to system failures.
– The error can be corrected by built-in error detection and recovery.
– The failure can be protected against by built-in protection facilities. These may, for example, protect system resources from system errors.
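The fault → error → failure chain can be illustrated with a small sketch (the code is hypothetical, not from the slides): a fault only produces an erroneous state for some inputs, and a built-in check catches the error before it becomes a visible failure.

```python
def mean(values):
    """Faulty implementation: dividing by a hard-coded 10 is a FAULT.
    It only produces an ERROR (a wrong value) when len(values) != 10."""
    return sum(values) / 10

def safe_mean(values):
    """Built-in error detection: validate the result against a
    recomputed reference before it can cause a visible FAILURE."""
    result = mean(values)
    expected = sum(values) / len(values)
    if abs(result - expected) > 1e-9:
        return expected          # recovery: fall back to the correct value
    return result

print(mean([1] * 10))        # fault present, but no error: prints 1.0
print(mean([1] * 5))         # fault triggers an error: prints 0.5
print(safe_mean([1] * 5))    # error detected and corrected: prints 1.0
```

The first call shows a fault that never becomes an error; the second shows the error arising; the third shows detection and recovery stopping the error from becoming a failure.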
Reliability terminology
System failure: An event that occurs at some point in time when the system does not deliver a service as expected by its users.
System error: An erroneous system state that can lead to system behaviour that is unexpected by system users.
System fault: A characteristic of a software system that can lead to a system error. For example, failure to initialise a variable could lead to that variable having the wrong value when it is used.
Human error or mistake: Human behaviour that results in the introduction of faults into a system.
Reliability Improvement
• Three approaches to improving reliability:
• Fault avoidance
– Development techniques are used that either minimise the possibility of mistakes or trap mistakes before they result in the introduction of system faults.
• Fault detection and removal
– Verification and validation techniques are used that increase the probability of detecting and correcting errors before the system goes into service.
• Fault tolerance
– Run-time techniques are used to ensure that system faults do not result in system errors and/or that system errors do not lead to system failures.
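Fault tolerance is often sketched as redundancy with voting, e.g. triple modular redundancy. A minimal illustrative sketch, assuming three independently written versions of the same computation (the versions, the seeded fault and the voter are all hypothetical):

```python
from collections import Counter

def version_a(x): return x * x
def version_b(x): return x ** 2
def version_c(x): return x * x if x != 3 else -1   # deliberately seeded fault

def tmr(x):
    """Run three versions and return the majority answer, so a fault
    in any single version does not lead to a system failure."""
    results = [version_a(x), version_b(x), version_c(x)]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: cannot mask the fault")
    return value

print(tmr(3))   # version_c is faulty for x=3, but the vote masks it: 9
```

The seeded fault in `version_c` never becomes a system failure because the voter outvotes it, which is exactly the fault-tolerance goal stated above.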
Reliability modelling
• You can model a system as an input-output mapping where some inputs will result in erroneous outputs.
• The reliability of the system is the probability that a particular input will not lie in the set of inputs that cause erroneous outputs.
• Different people will use the system in different ways, so this probability is not a static system attribute but depends on the system’s environment.
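This model can be sketched by drawing inputs from a usage profile and counting how often they fall in the failure set. Everything here is an illustrative assumption: the failure set, the two user profiles and the sampling approach.

```python
import random

FAILURE_SET = set(range(30))   # assumed inputs that cause erroneous outputs

def estimate_reliability(input_profile, trials=100_000, seed=1):
    """Estimate reliability as the fraction of profile-drawn inputs
    that do NOT fall in the set causing erroneous outputs."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(trials)
                   if input_profile(rng) in FAILURE_SET)
    return 1 - failures / trials

# Two users with different usage profiles perceive different reliability:
user1 = lambda rng: rng.randrange(100)        # uses all inputs evenly
user2 = lambda rng: rng.randrange(50, 100)    # never hits the faulty inputs

print(estimate_reliability(user1))
print(estimate_reliability(user2))
```

With the same program and the same faults, `user2` sees perfect reliability while `user1` does not, which is the point of the slide: reliability depends on the system's environment of use.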
Input/output mapping
[Figure: a program maps its input set to an output set; the subset Ie of inputs causing erroneous outputs maps to the subset Oe of erroneous outputs.]
Reliability perception
[Figure: users 1, 2 and 3 each exercise a different subset of the possible inputs; each subset overlaps the erroneous inputs differently, so each user perceives a different reliability.]
Reliability improvement
• Removing X% of the faults in a system will not necessarily improve the reliability by X%. A study at IBM showed that removing 60% of product defects resulted in a 3% improvement in reliability.
• Program defects may be in rarely executed sections of the code and so may never be encountered by users. Removing these does not affect the perceived reliability.
• A program with known faults may therefore still be seen as reliable by its users.
Safety
• Safety is a property of a system reflecting that the system should never damage people or the system’s environment.
• For example, control and monitoring systems in aircraft.
• It is increasingly important to consider software safety as more and more devices incorporate software-based control systems.
• Safety requirements are exclusive requirements, i.e. they exclude undesirable situations rather than specify required system services.
• Safety-critical software is of two types:
Types of Safety-critical software
• Primary safety-critical systems
– Embedded software systems whose failure can cause a hardware malfunction which results in human injury or environmental damage.
• Secondary safety-critical systems
– Systems whose failure indirectly results in injury.
– E.g. a medical database holding details of drugs.
• Discussion here focuses on primary safety-critical systems.
Safety and reliability
• Safety and reliability are related but distinct.
– In general, reliability and availability are necessary but not sufficient conditions for system safety.
• Reliability is concerned with conformance to a given specification and delivery of service.
• Safety is concerned with ensuring the system cannot cause damage, irrespective of whether or not it conforms to its specification.
Unsafe reliable systems
• Reasons why reliable systems are not necessarily safe:
– Specification errors: the specification does not describe the required behaviour in some critical situations.
– Hardware failures generating spurious inputs: hard to anticipate in the specification.
– Operator error: context-sensitive commands, i.e. issuing the right command at the wrong time.
Safety terminology
Accident (or mishap): An unplanned event or sequence of events which results in human death or injury, or damage to property or the environment. A computer-controlled machine injuring its operator is an example of an accident.
Hazard: A condition with the potential for causing or contributing to an accident. A failure of the sensor that detects an obstacle in front of a machine is an example of a hazard.
Damage: A measure of the loss resulting from a mishap. Damage can range from many people killed as a result of an accident to minor injury or property damage.
Hazard severity: An assessment of the worst possible damage that could result from a particular hazard. Hazard severity can range from catastrophic, where many people are killed, to minor, where only minor damage results.
Hazard probability: The probability of the events occurring which create a hazard. Probability values tend to be arbitrary but range from probable (say a 1/100 chance of a hazard occurring) to implausible (no conceivable situations are likely in which the hazard could occur).
Risk: A measure of the probability that the system will cause an accident. The risk is assessed by considering the hazard probability, the hazard severity and the probability that a hazard will result in an accident.
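The risk assessment described in the Risk entry can be sketched numerically by multiplying the three factors together. The numeric scales and example values below are illustrative assumptions, not from the slides.

```python
def risk(hazard_probability, p_accident_given_hazard, severity):
    """Combine the three factors from the terminology above: hazard
    probability, the probability that the hazard leads to an accident,
    and a numeric severity weight for the resulting damage."""
    return hazard_probability * p_accident_given_hazard * severity

# Assumed example: a sensor failure (hazard) occurring with
# probability 0.01, leading to an accident 10% of the time,
# with a severity weight of 100 (catastrophic on our scale).
print(round(risk(0.01, 0.10, 100), 6))   # -> 0.1
```

Because the factors multiply, reducing any one of them (avoiding the hazard, stopping it causing an accident, or limiting the damage) reduces the overall risk, which is what the next slide's three strategies do.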
Ways to achieve safety
• Hazard avoidance
– The system is designed so that the hazard simply cannot arise.
– E.g. pressing two buttons at the same time to start a cutting machine.
• Hazard detection and removal
– The system is designed so that hazards are detected and removed before they result in an accident.
– E.g. opening a relief valve on detection of over-pressure in a chemical plant.
• Damage limitation
– The system includes protection features that minimise the damage that may result from an accident.
– E.g. an automatic fire-safety system in an aircraft.
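The two-button hazard-avoidance example can be sketched as a simple interlock (the function and parameter names are hypothetical): the cutter may only start while both buttons are held, so the hazard of a hand near the blade at start-up cannot arise.

```python
def may_start(left_button_held, right_button_held):
    """Hazard avoidance: the cutter starts only if BOTH buttons are
    held, so neither of the operator's hands can be near the blade."""
    return left_button_held and right_button_held

print(may_start(True, True))    # both hands on the buttons -> True
print(may_start(True, False))   # one hand free -> False
```

The design avoids the hazard rather than detecting it: there is no state in which the machine runs with a hand free.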
Security
• Security is a system property that reflects the system’s ability to protect itself from accidental or deliberate external attack.
• Security is becoming increasingly important as systems are networked, so that external access to the system through the Internet is possible.
• Security is an essential pre-requisite for availability, reliability and safety.
• Examples: viruses, unauthorised use of services, data modification.
Security terminology
Exposure: Possible loss or harm in a computing system. This can be loss of or damage to data, or can be a loss of time and effort if recovery is necessary after a security breach.
Vulnerability: A weakness in a computer-based system that may be exploited to cause loss or harm.
Attack: An exploitation of a system vulnerability. Generally, this is from outside the system and is a deliberate attempt to cause some damage.
Threats: Circumstances that have the potential to cause loss or harm. You can think of these as a system vulnerability that is subjected to an attack.
Control: A protective measure that reduces a system vulnerability. Encryption would be an example of a control that reduces a vulnerability of a weak access-control system.
Damage from insecurity
• Denial of service
– The system is forced into a state where normal services become unavailable.
• Corruption of programs or data
– Components of the system may be altered in an unauthorised way, which affects system behaviour and hence its reliability and safety.
• Disclosure of confidential information
– Information that is managed by the system may be exposed to people who are not authorised to read or use that information.
Security assurance
• Vulnerability avoidance
– The system is designed so that vulnerabilities do not occur. For example, if there is no external network connection then external attack is impossible.
• Attack detection and elimination
– The system is designed so that attacks on vulnerabilities are detected and neutralised before they result in an exposure. For example, virus checkers find and remove viruses before they infect a system.
• Exposure limitation
– The consequences of a successful attack are minimised. For example, a backup policy allows damaged information to be restored.