Evaluation of Safety Critical Software
841f05parnas11
David L. Parnas, CACM, June 1990



Page 1: Evaluation of Safety Critical Software

Evaluation of Safety Critical Software

David L. Parnas, CACM, June 1990

Page 2: Evaluation of Safety Critical Software

Overview of Parnas’s article

What was the main point?

What did you learn?

What did you find confusing?

Has anything changed since 1990?

Page 3: Evaluation of Safety Critical Software

Initial Faults

As a rule software systems do not work well until they have been used, and have failed repeatedly, in real applications. Generally, many uses and many failures are required before a product is considered reliable. Software products, including those that have become relatively reliable, behave like other products of evolution-like processes; they often fail, even years after they were built, when the operating conditions change.

Page 4: Evaluation of Safety Critical Software

Terms

Safety critical

Weak link behavior

Silver bullet

Clean room development

Trustworthiness

Page 5: Evaluation of Safety Critical Software

Software Controllers

It is important to recognize that, in theory, software implemented controllers can be described in exactly the same way as black box mathematical models. They can also be viewed as black boxes whose output is a mathematical function of the input. In practice, they are not viewed this way. One reason for the distinction is that their functions are more complex (i.e. harder to describe) than the functions that describe the behavior of conventional controllers. However, [4] and [17] provide ample evidence that requirements for real systems can be documented in this way.
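The black-box view can be made concrete. A software controller is simply a function from inputs to outputs; its behavior is fully specified once that function is documented. The thermostat below is a hypothetical illustration (my example, not from the article) of a controller whose entire observable behavior is one small mathematical function:

```python
def thermostat(setpoint, temperature, hysteresis=0.5):
    """A trivial controller viewed as a black box: the output is a
    mathematical function of the inputs, with no hidden state.
    A hysteresis band keeps the heater from chattering near the setpoint."""
    if temperature < setpoint - hysteresis:
        return "heat_on"
    if temperature > setpoint + hysteresis:
        return "heat_off"
    return "hold"
```

Real controllers differ only in degree: their input/output functions are far harder to write down, which is why they are rarely documented this way in practice.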

Page 6: Evaluation of Safety Critical Software

Difficulties

Why is software hard to test?

Software Testing Concerns

Software Reviewability Concerns

Page 7: Evaluation of Safety Critical Software

Necessary Reviews

Page 8: Evaluation of Safety Critical Software

Does OO change this?

Page 9: Evaluation of Safety Critical Software

Software Reliability

Nonetheless, our practical experience is that software appears to exhibit stochastic properties. It is quite useful to associate reliability figures such as MTBF (Mean Time Between Failures) with an operating system or other software product. Some software experts attribute the apparently random behavior to our ignorance. They believe that all software failures would be predictable if we fully understood the software, but our failure to understand our own creations justifies the treatment of software failures as random.
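As a sketch of how such a figure is computed (my example, not the article's), MTBF is just total observed operating time divided by the number of failures seen in that time:

```python
def mtbf(cumulative_failure_times):
    """Estimate Mean Time Between Failures from cumulative operating
    hours recorded at each failure. The total time observed is the
    time of the last failure; divide by the number of failures."""
    return cumulative_failure_times[-1] / len(cumulative_failure_times)

# Failures observed after 100, 250, and 400 hours of operation:
# MTBF estimate = 400 / 3, roughly 133 hours
```

Note that treating failures this way already assumes the stochastic view the slide describes: we model deterministic software as if its failures arrived randomly, because we do not understand it well enough to predict them.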

Page 10: Evaluation of Safety Critical Software

Operational Profile?

For systems that function correctly only in rare emergencies, we wish to measure the reliability in those situations where the system must take corrective action, and not include data from situations in which the system is not needed. The input sequence distributions used in reliability assessment should be those that one would encounter in emergency situations, and not those that characterize normal operation.
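In testing terms, this means drawing random test cases from the emergency input distribution rather than the everyday one. A minimal sketch, with entirely hypothetical input classes and weights:

```python
import random

# Hypothetical operational profiles: weights over classes of input
# sequences. Under normal operation the system is almost never needed;
# the emergency profile is dominated by the inputs that demand action.
normal_profile = {"nominal": 0.98, "sensor_glitch": 0.015, "alarm": 0.005}
emergency_profile = {"alarm": 0.70, "sensor_glitch": 0.25, "nominal": 0.05}

def draw_test_case(profile, rng):
    """Draw one input class at random according to the given profile."""
    classes, weights = zip(*profile.items())
    return rng.choices(classes, weights=weights, k=1)[0]

rng = random.Random(42)
cases = [draw_test_case(emergency_profile, rng) for _ in range(1000)]
```

Sampling from `normal_profile` instead would yield an impressive-looking reliability number that says almost nothing about behavior in the rare situations where the system actually matters.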

Page 11: Evaluation of Safety Critical Software

Error counts

In other words, even if we could count the number of errors, reliability is not a function of the error count. If asked to evaluate a safety-critical software product, there is no point in attempting to estimate or predict the number of errors remaining in a program.

Page 12: Evaluation of Safety Critical Software

Table I

Table I shows that, if our design target was to have the probability of failure be less than 1 in 1000, performing between 4500 and 5000 tests (randomly chosen from the appropriate test case distribution) without failure would mean that the probability of an unacceptable product passing the test was less than 1 in a hundred.

Page 13: Evaluation of Safety Critical Software

Table II

Page 14: Evaluation of Safety Critical Software

Parnas’ model

Is this the marble model?

We want 95% confidence that the reliability of the software running for 100 days is more than 80 percent.
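Under the same urn ("marble") model, this target can be checked directly: if every one of N independent 100-day runs succeeds, and the true 100-day reliability were only 0.80, the chance of observing that outcome is 0.8^N. Requiring 0.8^N to fall below 0.05 gives the number of failure-free runs needed for 95% confidence. A sketch, assuming independent trials:

```python
import math

def runs_needed(reliability_floor, confidence):
    """Number of failure-free trial runs needed so that, if the true
    reliability were only reliability_floor, passing every run would
    have probability below (1 - confidence)."""
    # Solve reliability_floor^N <= (1 - confidence) for N.
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability_floor))

n = runs_needed(0.80, 0.95)  # failure-free 100-day runs required
```

So the slide's target translates into 14 consecutive 100-day runs with no failure, i.e. nearly four years of failure-free operation before the claim is supported.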

Page 15: Evaluation of Safety Critical Software

1 minute paper

What issues/concerns/opinions/questions do you have about the Parnas paper?