
1

QUALITY MANAGEMENT (QM)

Fall 2008 /9

2

QUALITY MANAGEMENT

• Some Software developers still believe that Software Quality is something you begin to worry about after program code has been generated. “Nothing could be further from the truth.”

• Quality Management (QM), often called Quality Assurance (QA), is an umbrella activity that is applied throughout the Software process.

Quality Management encompasses:
1. A Software Quality Assurance (SQA) process
2. Specific Quality Assurance and Quality Control tasks
3. Effective Software Engineering practice
4. Control of all Software work products and the changes made to them
5. A procedure to ensure compliance with software development standards
6. Measurement and reporting mechanisms

3

QUALITY CONCEPTS

• Variation Control is the heart of Quality Control.

• We want to minimize the difference between the predicted resources needed to complete a project and the actual resources used, including staff, equipment, and calendar time.

In general, we would like our testing program to cover a known percentage of the software from one release to another.

We want not only to minimize the number of defects, but also to ensure that the variance in the number of defects is minimized from one release to another.
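The release-to-release variation described above can be quantified with a few lines of Python (a minimal sketch; the defect counts are purely illustrative):

```python
from statistics import mean, pstdev

# Hypothetical defect counts for the last five releases (illustrative only).
defects_per_release = [42, 38, 45, 40, 41]

avg = mean(defects_per_release)       # average defects per release
spread = pstdev(defects_per_release)  # population standard deviation

# A small standard deviation relative to the mean means the defect count
# varies little from one release to another, i.e. variation is under control.
print(f"mean: {avg:.1f}, std dev: {spread:.2f}")  # -> mean: 41.2, std dev: 2.32
```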

4

WHAT IS QUALITY

• Quality is a characteristic or attribute of something (i.e. measurable characteristics) that we can compare to known standards, such as the length or colour of a physical object.

• Software is largely an intellectual entity and is therefore more challenging to characterize than a physical object. Nevertheless, measures of a software program’s characteristics do exist.

• These Software properties include cyclomatic complexity, cohesion, number of Function Points (FP), number of Lines of Code (LOC), and many others.
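Of the properties listed, cyclomatic complexity is the simplest to compute from a program's control-flow graph; a minimal sketch of McCabe's formula V(G) = E - N + 2P (edges, nodes, connected components):

```python
def cyclomatic_complexity(edges: int, nodes: int, components: int = 1) -> int:
    """McCabe's cyclomatic complexity: V(G) = E - N + 2P."""
    return edges - nodes + 2 * components

# Example: a flow graph with 9 edges and 7 nodes (one connected component)
# has complexity 4, i.e. four linearly independent paths to test.
print(cyclomatic_complexity(9, 7))  # -> 4
```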

5

Two kinds of quality can be encountered when examining the measurable characteristics of an item.

a) Quality of Design – characteristics that designers specify for an item, including the Requirements, Specifications, and the Design of the system.

b) Quality of Conformance – the degree to which the Design Specifications are followed during Implementation.

If Implementation follows the Design Specifications and the resulting system meets its requirements and performance goals, then the Conformance quality of the software is high.

• Robert Glass argues that besides Quality of Design and Quality of Conformance, a more ‘Intuitive’ relationship is in order.

User Satisfaction = Compliant product + Good quality + Delivery within budget and schedule

6

• According to Glass, quality is important, but if the user is not satisfied, nothing else really matters.

• According to DeMarco, a product’s quality is a function of how much it changes the world for the better. “If a Software product provides substantial benefits to its end-users, they may be willing to tolerate occasional reliability or performance problems.”

7

QUALITY CONTROL

• Variation Control may be equated to Quality Control. But how do we achieve Control?

• Quality Control (QC) involves the series of Inspections, Reviews, and Tests used throughout the Software process to ensure each work product meets the requirements placed upon it.

• QC includes a feedback loop to the process that created the work product.

• The feedback loop is essential to minimizing the number of defects produced.

• The combination of Measurement and Feedback enables the tuning of the Software process when the work products created fail to meet their specifications.

8

QUALITY ASSURANCE (QA)

• QA consists of a set of Auditing and Reporting functions that assess the effectiveness and completeness of Quality Control activities.

• The goal of QA is to gain insight and confidence that product quality is meeting its goals.

• If the data provided through QA identify problems, it is management’s responsibility to address the problems and apply the necessary resources to resolve the quality issues.

9

COST OF QUALITY

• The Cost of Quality includes all costs incurred in the pursuit of Quality or in performing Quality-related activities.

• Quality cost studies are conducted to:

• Provide a baseline for the current Cost of Quality,
• Identify opportunities for reducing the Cost of Quality,
• Provide a normalized basis of comparison.

(The basis of normalization is almost always money.)

• Once Quality Costs have been normalized on a monetary basis, we have the necessary data to evaluate where the opportunities lie to improve our processes.

• QA Costs may be divided into costs associated with:
- Prevention Costs
- Appraisal Costs
- Failure Costs

10

COST OF QUALITY

Prevention Costs include:

• Quality Planning
• Formal Technical Reviews
• Test equipment
• Training

Appraisal Costs include:

• Activities to gain insight into product condition the “first time through” each process, e.g. in-process and inter-process inspection, equipment calibration and maintenance, and testing.

Failure Costs

• Costs that would disappear if no defects appeared before shipping a product to the customer. Failure costs may be divided into Internal Failure Costs and External Failure Costs.

11

COST OF QUALITY

• Internal Failure costs

Costs incurred when a defect is detected prior to shipment.

• Internal Failure Costs include: Rework, Repair, and Failure Mode Analysis.

• External Failure costs

Costs that are associated with defects found after the product has been shipped to the customer.

E.g. Complaint resolution, product return and replacement, help line support, and warranty work.

The relative costs to find and repair a defect increase dramatically as we go from Prevention to Detection to Internal Failure to External Failure costs.

12

SOFTWARE QUALITY ASSURANCE (SQA)

Software Quality is defined as conformance to:
• Explicitly stated Functional and Performance requirements,
• Explicitly documented Development Standards,
• Implicit characteristics that are expected of all professionally developed Software.

This definition serves to emphasize three important points:

1. Software Requirements are the foundation from which quality is measured. Lack of Conformance to Requirements is lack of Quality.

2. Specified Development Standards define a set of development criteria that guide the manner in which software is engineered. If the criteria are not followed, lack of quality will almost surely be the result.

3. A set of Implicit Requirements often goes unmentioned. If software conforms to its explicit requirements but fails to meet implicit requirements, software quality is suspect.

High-quality software is an important goal that lies at the heart of even the most jaded software developer.

13


• Software Quality Assurance (SQA) is a planned and systematic pattern of actions that are required to ensure high Quality in software.

• Many different constituencies have Software Quality Assurance (SQA) responsibilities.

• These include Project Managers, Software Engineers, Customers, Salespeople, and individuals who serve within the Software Quality Assurance (SQA) Group.

• The SQA Group serves as the customer’s in-house representative. They must look at software from the customer’s point of view.

14

SQA ACTIVITIES

SQA is composed of a variety of tasks associated with two different Constituencies:

a) The Software Engineers who do technical work.

• They apply solid technical methods and measures, conducting Formal Technical Reviews, and performing well-planned Software Testing.

b) An SQA Group that has responsibility for Quality Assurance Planning, Oversight (Controlling) , Record keeping, Analysis, and Reporting.

• Moreover, the SQA Group is responsible for the coordination, control, and management of changes, and helps to collect and analyze Software Metrics.

The SQA Plan is developed during Project planning and is reviewed by all stakeholders.

QA Activities performed by the Software Engineering team and SQA Group are governed by the plan.

The QA Plan identifies the evaluations to be performed, audits and reviews to be performed, standards that are applicable to the project, procedures for error reporting and tracking, documents to be produced by the SQA Group, and the amount of feedback provided to the software project team.

15

Quality Assurance Plan Identifies :

• Evaluations to be performed
• Audits and Reviews to be performed
• Standards that are applicable to the project
• Procedures for error reporting and tracking
• Documents to be produced by the SQA Group
• Amount of feedback provided to the software project team

16

SOFTWARE REVIEWS

• Software reviews are a filter for the software process. Reviews are applied at various points during Software engineering and serve to uncover errors and defects that can then be removed.

• Software Reviews purify the software engineering activities of Analysis, Design, and Coding.

• Technical work needs reviewing: although people are good at catching some of their own errors, large classes of errors escape the originator more easily than they escape anyone else.

• Many different types of reviews can be conducted as part of Software Engineering.

• Each review has its own place.

17

SOFTWARE REVIEWS

• An Informal meeting around the coffee machine is a form of review, if technical problems are discussed.

• A Formal presentation of a software design to an audience of customers, management, and technical staff is also a form of review, sometimes called a “Walkthrough” or an “Inspection”.

• A Formal Technical Review (FTR) is the most effective filter from a quality assurance standpoint.

• FTR is conducted by Software engineers for software engineers.

• FTR is an effective means for uncovering errors and improving software quality.

18

FORMAL TECHNICAL REVIEW

• The primary objective of Formal Technical Review is to find errors during the process so that they do not become defects after release of the software.

• The benefit of FTR is the early discovery of errors so that they do not propagate to the next step in the Software Process.

• A number of Software Studies indicate that design activities introduce between 50 and 65% of all errors during the Software Process.

• FTRs have been shown to be up to 75% effective in uncovering design flaws. By detecting and removing a large percentage of these errors, the review process substantially reduces the cost of subsequent activities in the software process.

e.g. An error uncovered during the Design process will cost 1 monetary unit to correct. Relative to this cost, the same error uncovered just before Testing commences will cost 6.5 monetary units; during Testing, 15 monetary units; and after release to the customer, between 60 and 100 monetary units.
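The escalation above can be captured in a small table; the stage names are hypothetical labels, and the figures are the slide's monetary units (60 is used for post-release, the lower bound of the quoted 60-100 range):

```python
# Relative cost units to repair the same error, by the stage at which it is
# found (figures from the slide; 60 is the lower bound of the 60-100 range).
REPAIR_COST = {
    "design": 1.0,
    "before_test": 6.5,
    "during_test": 15.0,
    "after_release": 60.0,
}

def escalation(stage: str) -> float:
    """How many times costlier a fix is at `stage` than during design."""
    return REPAIR_COST[stage] / REPAIR_COST["design"]

print(escalation("during_test"))    # -> 15.0
print(escalation("after_release"))  # -> 60.0
```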

19

FORMAL TECHNICAL REVIEW

DEFECT AMPLIFICATION AND REMOVAL

A Defect Amplification Model can be used to illustrate the generation and detection of errors during the Preliminary Design and Program Coding steps of a Software Project.

• During the Preliminary Design step and Coding step errors may be inadvertently generated.

• A review may fail to uncover newly generated errors as well as errors from previous steps, resulting in some number of errors that are passed through. In some cases, errors passed through from previous steps are amplified (by an amplification factor, x) by the current work.

• In the model, each step is drawn as a box that receives errors from the previous step (errors passed through, amplified errors 1 : x, and newly generated errors), applies a percentage (%) efficiency for detecting errors (a function of the thoroughness of the review), and passes the remaining errors to the next step.

20

FORMAL TECHNICAL REVIEW

Figure 1 below illustrates defect amplification when no reviews are conducted.

[Figure 1: Defect Amplification with no Reviews. Preliminary Design introduces 10 errors (0% detection efficiency). Detail Design amplifies passed-through errors by 1.5x and adds 25 new errors (0% detection). Code/Unit Test amplifies by 3x and adds 25 new errors (20% detection). Integration Test, Validation Test, and System Test each remove 50% of incoming errors: 94 errors enter Integration Testing, falling to 47, then 24, leaving 12 latent errors.]

21

Defect Amplification figure with no Reviews

• Referring to “Defect Amplification with no Reviews”, each test step is assumed to uncover and correct 50% of all incoming errors without introducing any new errors (an optimistic assumption).

• As you can see, the 10 Preliminary Design defects are amplified to 94 errors before Integration Testing commences, and at the end of System Testing 12 errors are released to the field.
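The propagation on these two slides can be sketched in Python. Note this is a deliberate simplification (every incoming error is amplified, and detection applies to the whole pool), so it does not reproduce the figure's exact arithmetic; it only shows why early reviews shrink the latent-error count:

```python
def propagate(steps, incoming=0.0):
    """Simplified defect-amplification model: at each step, incoming errors
    are amplified, new errors are added, then a review or test removes a
    fraction of the total equal to its detection efficiency."""
    errors = incoming
    for new, amplification, efficiency in steps:
        errors = (errors * amplification + new) * (1 - efficiency)
    return errors

# Each step is (new errors, amplification factor, detection efficiency):
# three development steps followed by three test steps at 50% each.
no_reviews   = [(10, 1.0, 0.0), (25, 1.5, 0.0), (25, 3.0, 0.2),
                (0, 1.0, 0.5), (0, 1.0, 0.5), (0, 1.0, 0.5)]
with_reviews = [(10, 1.0, 0.7), (25, 1.5, 0.5), (25, 3.0, 0.6),
                (0, 1.0, 0.5), (0, 1.0, 0.5), (0, 1.0, 0.5)]

latent_no_reviews = propagate(no_reviews)      # ~14.5 latent errors
latent_with_reviews = propagate(with_reviews)  # ~3.5 latent errors
assert latent_with_reviews < latent_no_reviews
```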

22

FORMAL TECHNICAL REVIEW

The figure below illustrates defect amplification when reviews are conducted.

[Figure 2: Defect Amplification with Reviews conducted. The amplification factors (1.5x at Detail Design, 3x at Code/Unit Test) and new-error counts are as in Figure 1, but reviews raise detection efficiency to 70% at Preliminary Design, 50% at Detail Design, and 60% at Code/Unit Test; each test step still removes 50% of incoming errors. Only 24 errors enter Integration Testing, falling to 12, then 6, leaving 3 latent errors.]

23

Defect Amplification with Reviews conducted

• Referring to Figure 2, “Defect Amplification with Reviews conducted”, the same conditions are assumed, except that reviews are conducted at the Design and Code steps of development. In this case, the 10 initial Preliminary Design errors are amplified to 24 errors before testing commences, and only 3 latent defects remain.

• CALCULATING THE COST IMPACT (RELATIVE COST) OF SOFTWARE DEFECTS, for Defect Amplification with no Reviews and Defect Amplification with Reviews.

• The Number of Errors uncovered during each of the Project steps is multiplied by the cost to remove an error.

• Cost units to remove an error: 1.5 cost units during the Design phase; 6.5 cost units before the Test stage; 15 cost units during the Test stage; 67 cost units after release to the field (customer).

• Using the cost-unit data, the total cost for Development and Maintenance of the software with no Reviews is 2177 units, and the cost for Development and Maintenance with Reviews is 783 units (nearly three times less costly).
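The cost calculation described above amounts to a weighted sum; a sketch using the slide's cost units, with hypothetical per-stage error counts (they illustrate the with/without-reviews effect but do not reproduce the 2177 and 783 figures):

```python
# Cost units to remove one error, by the stage at which it is found
# (figures from the slide).
COST_UNITS = {"design": 1.5, "before_test": 6.5, "during_test": 15, "after_release": 67}

def total_cost(errors_found: dict) -> float:
    """Total repair cost = sum over stages of (errors found * cost per error)."""
    return sum(COST_UNITS[stage] * n for stage, n in errors_found.items())

# Hypothetical distributions: reviews shift discovery to cheaper stages.
without_reviews = {"design": 0, "before_test": 0, "during_test": 100, "after_release": 10}
with_reviews    = {"design": 60, "before_test": 30, "during_test": 15, "after_release": 2}

print(total_cost(without_reviews))  # -> 2170.0
print(total_cost(with_reviews))     # -> 644.0
```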

24

FORMAL TECHNICAL REVIEW

A Formal Technical Review is a Software Quality Control activity performed by Software Engineers .

The Objectives of a Formal Technical Review are:

1. To uncover errors in function, logic, or implementation for any representation of the software;
2. To verify that the software under review meets its requirements;
3. To ensure that the software has been represented according to predefined standards;
4. To achieve software that is developed in a uniform manner;
5. To make projects more manageable.

25

FORMAL TECHNICAL REVIEW

• FTRs also serve as a training ground for junior engineers, enabling them to observe different approaches to Software Analysis, Design, and Construction (Build).

• The FTR also serves to promote backup and continuity, since a number of people become familiar with parts of the software that they might not otherwise have seen.

• The FTR is actually a class of reviews that includes “Walkthroughs”, Inspections, Round-robin Reviews, and other small-group technical assessments of software.

• An FTR is conducted as a meeting and will be successful only if it is properly planned, controlled, and attended.

26

FORMAL TECHNICAL REVIEW

• To conduct Reviews, a Software Engineer must expend time and effort, and the development organization must spend money.

• However, the results of FTR as illustrated with Defect amplification examples, leaves little doubt that we can pay now or pay much more later.

• Formal Technical Reviews provide a demonstrable Cost/ benefit Assessment.

27

FORMAL TECHNICAL REVIEW

Every Review Meeting should abide by the following Constraints regardless of the FTR format that is chosen.

• Between 3 – 5 people should be involved in the FTR

• Advance preparation should occur but should require no more than two hours of work for each person

• The duration of the review meeting should be no more than two hours.

Given these constraints, it should be obvious that an FTR focuses on a specific and small part of the overall software, such as a small part of the Requirements Specification, a detailed component design, or a source code listing.

• By narrowing the focus, an FTR has a higher likelihood of uncovering errors.

28

FORMAL TECHNICAL REVIEW

HOW IS FTR ORGANIZED?

• The “Producer” of the Software product informs the Project Manager that the work product is complete and that a review is required.

• The Project Manager contacts the Review Leader, who evaluates the product for readiness, generates copies of the product materials, and distributes them to two or three Reviewers for advance preparation.

• Each Reviewer is expected to spend no more than two hours reviewing the product, becoming familiar with it, and making notes.

• The Review Leader also reviews the product and establishes an agenda for the review meeting.

• The review meeting is attended by the Review Leader, all Reviewers, and the Producer of the software. One of the Reviewers acts as a Recorder, recording all important issues raised during the review.

29

FORMAL TECHNICAL REVIEW

HOW IS FTR ORGANIZED?

• The FTR begins with an introduction of the Agenda and a brief introduction by the producer.

• The Producer then proceeds to “Walkthrough” the work product explaining the material, while reviewers raise issues based on their advance preparation.

• When valid problems or errors are discovered, the Recorder records the error.

• At the end of the Review meeting, the attendees must decide whether to:

1. Accept the product without further modification;
2. Reject the product due to severe errors (once corrected, another review must be performed);
3. Accept the product provisionally (minor errors have been encountered and must be corrected, but no additional review will be required).

• Once the decision is made, all FTR Attendees complete a sign-off, indicating their participation in the Review Team and their findings.

30


STATISTICAL SOFTWARE QUALITY ASSURANCE

Statistical Quality Assurance reflects a growing trend throughout industry to become more quantitative about quality.

Statistical Quality Assurance techniques for software have been shown to provide substantial quality improvement.

Statistical Quality Assurance steps are:

1. Information about Software defects is collected and categorized.
2. An attempt is made to trace each defect to its underlying cause.
3. The Pareto principle (80% of all defects can be traced to 20% of all possible causes) is used to isolate the 20%, called the “Vital Few” causes.
4. Once the Vital Few causes have been identified, move to correct the problems that have caused the defects.

• It is important to note that corrective action focuses primarily on the Vital Few. As the vital few causes are corrected, new candidates pop to the top of the stack.
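The "Vital Few" selection in steps 1-3 can be sketched as a frequency count; the defect causes and counts below are hypothetical:

```python
from collections import Counter

# Hypothetical defect log: each entry names the defect's underlying cause.
defects = (["incomplete spec"] * 40 + ["misunderstood requirement"] * 25 +
           ["interface error"] * 15 + ["logic error"] * 10 +
           ["standards violation"] * 6 + ["other"] * 4)

def vital_few(causes, threshold=0.8):
    """Smallest set of causes accounting for at least `threshold` of defects."""
    total, running, few = len(causes), 0, []
    for cause, count in Counter(causes).most_common():
        few.append(cause)
        running += count
        if running / total >= threshold:
            break
    return few

# Three causes account for 80% of the 100 defects in this log.
print(vital_few(defects))
```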

31


THE SIX SIGMA STRATEGY FOR SOFTWARE ENGINEERING

• Six Sigma is the most widely used strategy for Statistical Quality Assurance in industry today.

• The Six Sigma strategy is a rigorous and disciplined methodology that uses data and Statistical Analysis to measure and improve a company’s operational performance by identifying and eliminating “Defects” in manufacturing and service related processes.

• The term Six Sigma is derived from Six Standard Deviations – (3.4 instances (defects) per million occurrences) – implying an extremely high quality standard.
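The 3.4-per-million figure is usually expressed as DPMO (defects per million opportunities); a minimal sketch, with illustrative numbers:

```python
def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# E.g. 17 defects across 1,000 units with 5,000 defect opportunities each:
# 3.4 DPMO, the level the Six Sigma standard refers to.
print(round(dpmo(17, 1000, 5000), 2))  # -> 3.4
```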

• The Six Sigma Methodology Defines three Core steps (i.e. Define, Measure, and Analyze)

1. DEFINE
• Define Customer Requirements,
• Define Deliverables,
• Define Project Goals

32


THE SIX SIGMA STRATEGY FOR SOFTWARE ENGINEERING

2. MEASURE – The existing process and its output are measured to determine current quality performance (Collect Defect Metrics).

3. ANALYZE – Defect Metrics are analyzed and the Vital Few causes are determined.

Six Sigma suggests two additional steps when an existing Software Process is in place and improvement is required:

• IMPROVE – The process, by eliminating the root causes of defects.

• CONTROL – The process, to ensure that future work does not reintroduce the causes of defects.

33


THE SIX SIGMA STRATEGY FOR SOFTWARE ENGINEERING

Six Sigma suggests two additional steps, in addition to the Core steps, for developing a Software process:

• Design - To avoid the root causes of defects, and also to meet customer requirements

• Verify - The process model is verified to avoid defects and meet customer requirements.

34

SOFTWARE RELIABILITY

Software Reliability can be measured directly and estimated using historical and developmental data.

Software Reliability is defined in statistical terms as:

“ The probability of Failure-Free Operation of a computer program in a specified environment for a specified time”.

E.g: Program X is estimated to have a reliability of 96% over 8 elapsed processing hours.

In other words, if Program X were to be executed 100 times, each run requiring 8 hours of elapsed execution time, it is likely to operate correctly, without failure, 96 times.
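The slide's example can be modelled numerically if we assume an exponential failure model, R(t) = exp(-lambda * t) (an assumption; the slide does not name a model):

```python
import math

# If R(8 h) = 0.96 under an exponential model R(t) = exp(-lam * t),
# the implied failure rate lam is:
lam = -math.log(0.96) / 8.0   # failures per hour

def reliability(t_hours: float) -> float:
    """Probability of failure-free operation for t_hours."""
    return math.exp(-lam * t_hours)

print(round(reliability(8), 2))   # -> 0.96 (recovers the slide's figure)
print(round(reliability(24), 3))  # -> 0.885 (longer window, lower reliability)
```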

35

SOFTWARE RELIABILITY

MEASURE OF RELIABILITY AND AVAILABILITY

Early work in Software Reliability attempted to extrapolate the mathematics of hardware reliability theory to the prediction of software reliability.

• Most hardware-related reliability models are predicated on failure due to Wear rather than failure due to Design defects. Hardware failures due to physical wear are more likely than design-related failures.

• However, Software failures can be traced to Design or Implementation problems; wear does not enter into the picture.

36

SOFTWARE RELIABILITY

IS THERE A LINK BETWEEN HARDWARE AND SOFTWARE RELIABILITY?

• There is a debate over the relationship between the key concepts in hardware reliability and their applicability to Software. An irrefutable link has yet to be established.

• However, it is worthwhile to consider a few simple concepts that apply to both the Hardware and Software elements of a Computer-Based System. A simple measure is Mean-Time-Between-Failure (MTBF):

MTBF = MTTF + MTTR

where MTTF is Mean-Time-To-Failure and MTTR is Mean-Time-To-Repair.

37

SOFTWARE RELIABILITY

IS THERE A LINK BETWEEN HARDWARE AND SOFTWARE RELIABILITY?

• Many researchers argue that MTBF is a far more useful measure than Defects/KLOC or Defects/FP measures.

• End-users are concerned with failures, not with the total Error Count. Because each Defect contained within a program does not have the same failure rate, the Total Defect Count provides little indication of the reliability of a system.

• In addition to a reliability measure, we must develop a Measure of Software Availability: the probability that a program is operating according to requirements at a given point in time.

• The MTBF Reliability Measure is equally sensitive to MTTF and MTTR.

• The Software Availability Measure is somewhat more sensitive to MTTR, an indirect measure of the Maintainability of Software.

SOFTWARE AVAILABILITY = [MTTF / (MTTF + MTTR)] * 100%
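The two measures above fit in a few lines of Python; the MTTF and MTTR values are illustrative:

```python
def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Software Availability = [MTTF / (MTTF + MTTR)] * 100%."""
    return mttf_hours / (mttf_hours + mttr_hours) * 100

mttf, mttr = 950.0, 50.0   # illustrative: fails every ~950 h, ~50 h to repair
mtbf = mttf + mttr         # MTBF = MTTF + MTTR
print(mtbf)                      # -> 1000.0
print(availability(mttf, mttr))  # -> 95.0 (available 95% of the time)
```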

38

SOFTWARE SAFETY

Software Safety is a Software Quality Assurance activity that focuses on the identification and assessment of Potential Hazards that may affect software negatively and cause an entire system to fail.

• If Hazards can be identified early in the Software process, Software Design features can be specified that will either eliminate or control potential hazards.

A Modelling and Analysis process is conducted for Software Safety as follows:

• Initially, Hazards are identified and categorized by Criticality and Risk.

• Once the system-level hazards are identified, Analysis techniques are used to assign Severity and Probability of Occurrence.

(To be effective, Software must be Analyzed in the context of the entire system)

Analysis Techniques such as Fault Tree Analysis , Real-Time Logic, or Petri Net Models can be used to predict the chain of events that can cause hazards and the Probability that each of the events will occur to create the chain.

• Once Hazards are Identified and Analysed, Safety-related requirements can be specified for the Software.

• The role of Software in managing undesirable events is then indicated.

39

SOFTWARE SAFETY

For example, a subtle user input error may be magnified by a software fault to produce control data that improperly positions a mechanical device. If a set of environmental conditions is met, the improper position of the device will cause a disastrous failure.

DIFFERENCES BETWEEN SOFTWARE RELIABILITY AND SAFETY

• Software Reliability and Software safety are closely related to one another. However, it is important to understand the subtle differences between them.

• Software Reliability uses Statistical Analysis to determine the Likelihood that a Software failure will occur. However, the occurrence of a failure does not necessarily result in a hazard or mishap.

• Software Safety examines the ways in which failures result in conditions that can lead to a mishap. That is, failures are not considered in a vacuum, but are evaluated in the context of an entire Computer-Based System and its environment.

40

THE ISO 9000 QUALITY STANDARDS

• A Quality Assurance System may be defined as the Organizational Structure, Responsibilities, Procedures, Processes, and Resources for implementing Quality Management.

• QA Systems are created to help organizations ensure their products and services satisfy customer expectations by meeting their specifications.

• ISO9000 describes a Quality Assurance System in generic terms that can be applied to any business regardless of the products or services offered.

• To become registered to one of the QA System models within ISO9000, a company’s Quality System and operations are scrutinized by third-party auditors for compliance with the standard and for effective operation.

• Upon Registration , a company is issued a certification from a registration body represented by the auditors.

• Semi-annual surveillance audits ensure continued compliance with the standard.

• ISO 9001:2000 is the Quality Assurance standard that applies to Software Engineering.

41

THE ISO 9000 QUALITY STANDARDS

The Standard contains 20 requirements that must be present for an effective Quality Assurance System.

• Because the ISO 9001:2000 standard is applicable to all Engineering disciplines, a special set of ISO guidelines (ISO 9000-3) has been developed to help interpret the Standard for use in the Software process.

The Requirements delineated by ISO 9001:2000 address the following topics:

• Management Responsibility
• Quality System
• Contract Review
• Design Control
• Document and Data Control
• Product Identification and Traceability
• Process Control
• Inspection and Testing
• Corrective and Preventive Action
• Control of Quality Records
• Internal Quality Audits
• Training
• Servicing
• Statistical Techniques

• An organization must address each of the 20 requirements and then be able to demonstrate that these policies and procedures are being followed.

42

THE SOFTWARE QUALITY ASSURANCE (SQA) PLAN

• The SQA plan provides a road map for instituting Software Quality Assurance.

• The SQA Plan serves as a template for SQA activities that are instituted for each Software Project.

• SQA Plan is developed by the SQA Group.

• A standard SQA Plan has been published by the IEEE (Institute of Electrical and Electronics Engineers).

The Standard Plan recommends a structure that identifies:

• The purpose and scope of the plan
• A description of all Software products
• Applicable standards and practices that are applied during the software process
• SQA actions and tasks and their placement throughout the Software process
• The tools and methods that support SQA actions and tasks
• Software Configuration Management procedures for managing changes
• Methods for assembling, safeguarding, and maintaining all SQA records
• Organizational roles and responsibilities relative to product quality