MTAT.03.159 / Lecture 04 / © Dietmar Pfahl 2015
MTAT.03.159: Software Testing
Lecture 04:
Static Testing (Inspection)
and Defect Estimation
(Textbook Ch. 10 & 12)
Dietmar Pfahl
email: [email protected]
Spring 2015
Structure of Lecture 04
• Static Analysis
• Defect Estimation
• Lab 4
Static Analysis
• Document Review (manual)
• Different types
• Static Code Analysis (automatic)
• Structural properties / metrics
Static Analysis
• Document Review (manual)
• Different types
• Static Code Analysis (automatic)
• Structural properties / metrics / etc.
Lab 4
Lab 5
Question
What is better?
Review (Inspection) or Test?
Review Metrics
Basic
• Size of review items (pages, LOC)
• Review time & effort
• Number of defects found
• Number of slipping defects found later
Derived
• Defects found per review time or effort
• Defects found per artifact (review item) size
• Size per time or effort
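The derived metrics are simple ratios of the basic ones. A minimal sketch in Python (the function name and the sample numbers are made up for illustration):

```python
# Derived review metrics as ratios of the basic metrics listed above.
# Function name and sample numbers are illustrative only.

def derived_review_metrics(pages, effort_hours, defects_found):
    """Compute the derived review metrics from the basic measurements."""
    return {
        "defects_per_hour": defects_found / effort_hours,  # defects found per effort
        "defects_per_page": defects_found / pages,         # defects found per artifact size
        "pages_per_hour": pages / effort_hours,            # size per effort (review speed)
    }

metrics = derived_review_metrics(pages=20, effort_hours=4.0, defects_found=10)
print(metrics)  # {'defects_per_hour': 2.5, 'defects_per_page': 0.5, 'pages_per_hour': 5.0}
```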
Empirical Results: Inspection & Test
Source: Runeson, P.; Andersson, C.; Thelin, T.; Andrews, A.; Berling, T.: "What do we know about defect detection methods?", IEEE Software, vol. 23, no. 3, pp. 82-90, May-June 2006
Inspections – Empirical Results
• Requirements defects – no data; but: reviews
good since finding defects early is cheaper
• Design defects – inspections are both more
efficient and more effective than testing
• Code defects - functional or structural testing is
more efficient and effective than inspection.
– May be complementary regarding types of faults
• Generally, reported effectiveness is low
– Inspections find 25-50% of an artifact’s defects
– Testing finds 30-60% of defects in the code
Reviews complement testing
Why Review?
• Main objective
• Detect faults
• Other objectives
• Inform
• Educate
• Learn from (others') mistakes → Improve!
• (Undetected) faults may affect software quality negatively – during all steps of the development process!
Defect Containment Measures
Relative Cost of Faults
[Figure: relative cost of fixing a fault grows with each development phase, rising to roughly 200× in maintenance]
Source: Davis, A.M., "Software Requirements: Analysis and Specification" (1990)
Reviews – Types
• Review – meeting to evaluate software artifact
• Walkthrough – author guided review
• Inspection – formally defined review
• Static testing – testing without software execution
Walkthrough
Author guides through artifact (’static simulation’)
Attendees scrutinize and question
If defects are detected it’s left to the author to correct them
Walkthrough
Objective
Detect faults
Become familiar with the product
Roles
Presenter (author)
Reviewers (Inspectors)
Elements
Planned meeting
Team (2 to 7 people)
Brainstorm
Disadvantage
Finds fewer faults than (formal) inspections
Inspections
• Objective:
• Detect faults
• Collect data
• Communicate information
• Roles
• Moderator
• Reviewers (Inspectors)
• Presenter
• Author
• Elements
• Formal process
• Planned, structured meeting
• Preparation important
• Team (3 to 6 people)
• Disadvantages
• Short-term cost
Origin: Fagan Inspections
• Who: Michael Fagan (IBM, early 1970s)
• Approach: Checklist-based
• Phases
• Overview, Preparation, Meeting, Rework, Follow-up
• Fault searching at meeting! – Synergy
• Roles
• Author (designer), reader (coder), tester, moderator
• Classification
• Major and minor
Inspection Process
[Diagram: Defect Causal Analysis (DCA). Organizational processes define the software construction activities (analyse / design / code / rework). During defect (fault) detection (review / test), defects are found and fixed, and recorded in a defect database. A sample of defects is extracted from the database for a causal analysis meeting, which proposes actions; an action team meeting prioritizes and implements these actions, feeding back into the organizational processes.]
Reading Techniques
• Ad hoc
• Checklist-based
• Defect-based
• Usage-based
• Scenario-based
• Perspective-based
Ad-hoc / Checklist-based / Defect-based Reading
Usage-Based Reading
Source: Thelin, T.; Runeson, P.; Wohlin, C.: "Prioritized Use Cases as a Vehicle for Software Inspections", IEEE Software, vol. 20, no. 4, pp. 30-33, July-August 2003
1. Prioritize Use Cases (UCs)
2. Select the UC with the highest priority
3. Track the UC's scenario through the document under review
4. Check whether the UC's goals are fulfilled, the needed functionality is provided, the interfaces are correct, and so on (report issues detected)
5. Select the next UC
Usage-Based Reading
Comparison of UBR with Checklist-Based Reading (CBR)
Perspective-based Reading
• Scenarios: each reviewer reads from one perspective (Designer, Tester, User)
• Purpose:
  • Decrease overlap (redundancy) between reviewers
  • Improve effectiveness
Structure of Lecture 04
• Static Analysis
• Defect Estimation
• Lab 4
Quality Prediction
• Based on product and process properties
• Quality = Function(Code Size | Complexity)
• Quality = Function(Code Changes)
• Quality = Function(Detected #Defects)
• Quality = Function(Test Effort)
• Based on detected defects
• Capture-Recapture Models
• Reliability Growth models
Quality defined here as: number of undetected defects
Capture-Recapture – Defect Estimation
• Situation: Two inspectors are assigned to inspect the same product
• d1: #defects detected by Inspector 1
• d2: #defects detected by Inspector 2
• d12: #defects detected by both inspectors
• Nt: total #defects (detected and undetected)
• Nr: remaining #defects (undetected)

Nt = (d1 × d2) / d12
Nr = Nt − (d1 + d2 − d12)
Capture-Recapture – Example
• Situation: Two inspectors are assigned to inspect the same product
• d1: 50 defects detected by Inspector 1
• d2: 40 defects detected by Inspector 2
• d12: 20 defects detected by both inspectors
• Nt: total #defects (detected and undetected)
• Nr: remaining #defects (undetected)

Nt = (50 × 40) / 20 = 100
Nr = 100 − (50 + 40 − 20) = 30
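The two-inspector estimate takes only a few lines of Python. A sketch reproducing the example above (the function name is my own):

```python
# Two-inspector capture-recapture estimate from the slides:
# Nt = (d1 * d2) / d12, Nr = Nt - (d1 + d2 - d12).

def capture_recapture(d1, d2, d12):
    """Estimate total (Nt) and remaining (Nr) defects from two inspectors."""
    nt = d1 * d2 / d12           # estimated total number of defects
    nr = nt - (d1 + d2 - d12)    # subtract the distinct defects actually found
    return nt, nr

nt, nr = capture_recapture(d1=50, d2=40, d12=20)
print(nt, nr)  # 100.0 30.0
```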
Advanced Capture-Recapture Models
• Four basic models used for inspections
• Degrees of freedom: whether the detection probability may vary across defects and/or across reviewers
• Prerequisites for all models
• All reviewers work independently of each other
• It is not allowed to inject or remove faults during inspection
Advanced Capture-Recapture Models
Model | Detection probability equal across defects? | ... across reviewers? | Estimator
M0    | Yes | Yes | Maximum-likelihood
Mt    | Yes | No  | Maximum-likelihood; Chao's estimator
Mh    | No  | Yes | Jackknife; Chao's estimator
Mth   | No  | No  | Chao's estimator
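The slides only name Chao's estimator; one common form is the lower-bound estimate N ≈ S + f1²/(2·f2), where S is the number of distinct defects found, f1 the number found by exactly one reviewer, and f2 the number found by exactly two. A sketch under that assumption (the reviewer detection sets are made up):

```python
from collections import Counter

def chao_estimator(detections):
    """detections: one set of defect IDs per reviewer.
    Returns the Chao lower-bound estimate of the total defect count,
    assuming the form N = S + f1^2 / (2 * f2)."""
    counts = Counter(d for s in detections for d in s)  # reviewers per defect
    s = len(counts)                                     # distinct defects found
    f1 = sum(1 for c in counts.values() if c == 1)      # found by exactly one reviewer
    f2 = sum(1 for c in counts.values() if c == 2)      # found by exactly two reviewers
    return s + f1 * f1 / (2 * f2) if f2 > 0 else float(s)

# Hypothetical detection sets for three reviewers:
est = chao_estimator([{1, 2, 3, 4}, {3, 4, 5}, {4, 5, 6}])
print(est)  # 8.25  (S=6, f1=3, f2=2)
```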
Mt Model
Maximum-likelihood estimation (animal capture-recapture analogy):
• Mt = total marked animals (= faults) at the start of the t'th sampling interval
• Ct = total number of animals (= faults) sampled during interval t
• Rt = number of recaptures in the sample Ct
• An approximation of the maximum-likelihood estimate of the population size N is:
  N ≈ SUM(Ct × Mt) / SUM(Rt)

First resampling: M1 = 50 (first inspector), C1 = 40 (second inspector), R1 = 20 (duplicates)
N = 40×50 / 20 = 100
Second resampling: M2 = 70 (first and second inspector), C2 = 40 (third inspector), R2 = 30 (duplicates)
N = (40×50 + 40×70) / (20+30) = 4800/50 = 96
Third resampling: M3 = 80 (first, second and third inspector), C3 = 30 (fourth inspector), R3 = 30 (duplicates)
N = (2000 + 2800 + 30×80) / (20+30+30) = 7200/80 = 90
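The running estimate above can be replayed as a short Python sketch (the function name is my own):

```python
# Running ML approximation from the slide: N ≈ SUM(Ct * Mt) / SUM(Rt),
# evaluated after each resampling step of the worked example.

def mt_estimate(samples):
    """samples: list of (Mt, Ct, Rt) tuples, one per resampling interval."""
    num = sum(c * m for m, c, r in samples)  # SUM(Ct * Mt)
    den = sum(r for m, c, r in samples)      # SUM(Rt)
    return num / den

steps = [(50, 40, 20), (70, 40, 30), (80, 30, 30)]
print(mt_estimate(steps[:1]))  # 100.0
print(mt_estimate(steps[:2]))  # 96.0
print(mt_estimate(steps))      # 90.0
```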
Reliability Growth Models
• To predict the probability of future failure occurrence based on past (observed) failure occurrence
• Can be used to estimate
• the number of residual (remaining) faults or
• the time until the next failure occurs
• the remaining test time until a reliability objective is achieved
• Application typically during system test
Reliability Growth Models (RGMs)
Purpose: stop testing when
• a certain percentage (90%, 95%, 99%, 99.9%, …) of the estimated total number of failures has been reached, or
• a certain failure rate has been reached

[Figure: cumulative #failures m(t) plotted against test intensity t (CPU time, test effort, test days, calendar days, …); the curve rises towards the estimated total n0 (100%), with the 95% level marked as a stopping threshold]

m(t) = n0 (1 − e^(−t/t0))
Failure Data Format /1
1) Time of failure
2) Time interval between failures
3) Cumulative failure up to a given time
4) Failures experienced in a time interval
Failure no. | Failure time (hours) | Failure interval (hours)
 1 |  10 | 10
 2 |  19 |  9
 3 |  32 | 13
 4 |  43 | 11
 5 |  58 | 15
 6 |  70 | 12
 7 |  88 | 18
 8 | 103 | 15
 9 | 125 | 22
10 | 150 | 25
11 | 169 | 19
12 | 199 | 30
13 | 231 | 32
14 | 256 | 25
15 | 296 | 40

Time-based failure specification
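The two time-based formats are interconvertible, since failure times are just the running sum of the intervals. A sketch using the interval column of the table above:

```python
from itertools import accumulate

# Convert between failure intervals (format 2) and cumulative
# failure times (format 1), using the data from the table above.

intervals = [10, 9, 13, 11, 15, 12, 18, 15, 22, 25, 19, 30, 32, 25, 40]

failure_times = list(accumulate(intervals))  # format 1: time of each failure

# Recover the intervals by differencing consecutive failure times:
back = [t - p for t, p in zip(failure_times, [0] + failure_times[:-1])]

print(failure_times[:5])  # [10, 19, 32, 43, 58] — matches the table
assert back == intervals
```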
Failure Data Format /2
1) Time of failure
2) Time interval between failures
3) Cumulative failure up to a given time
4) Failures experienced in a time interval
Time | Cumulative failures | Failures in interval
 30 |  2 | 2
 60 |  5 | 3
 90 |  7 | 2
120 |  8 | 1
150 | 10 | 2
180 | 11 | 1
210 | 12 | 1
240 | 13 | 1
270 | 14 | 1
300 | 15 | 1

Failure-count-based failure specification
Failure Data Format /3
Model Selection
Many different RGMs have been proposed (>100)
To choose a reliability model, perform the following steps:
1. Collect failure data
2. Examine data (failure data vs. test time/effort)
3. Select a set of candidate models
4. Estimate model parameters for each candidate model
   – Least-squares method
   – Maximum-likelihood method
5. Customize model using the estimated parameters
6. Compare models with goodness-of-fit test and select the best
7. Make reliability predictions with selected model(s)
Structure of Lecture 04
• Static Analysis
• Defect Estimation
• Lab 4
Lab 4 – Document Inspection & Defect Prediction
Lab 4 (week 31: Mar 30 – Apr 02) – Software Inspection (10%)
Materials: Lab 4 Instructions & Sample Documentation (Requirements List (User Stories); Specification: 2 screens, 1 text)
Submission deadlines:
• Monday Lab: Sunday, 05 Apr, 23:59
• Tuesday Labs: Monday, 06 Apr, 23:59
• Wednesday Labs: Tuesday, 07 Apr, 23:59
• Thursday Labs: Wednesday, 08 Apr, 23:59
Penalties apply for late delivery: 50% penalty if submitted up to 24 hours late; 100% penalty if submitted more than 24 hours late.
Lab 4 – Document Inspection & Defect Prediction (cont'd)
Phase A (individual student work): each student inspects the Specification excerpt (2 screens & text) against the Requirements (6 user stories) and produces an Issue List of at least 8 defects (table columns: ID, Description, Location, Type, Severity).
Phase B (pair work): the student pair consolidates the two individual issue lists (Student 1, Student 2) into a Consolidated Issue List and estimates the remaining defects.
Lab 4 – Document Inspection & Defect Prediction (cont’d)
Lab 4 (week 31: Mar 30 – Apr 02) – Defect Estimation (4% Bonus)
Phase C: using the provided SciLab scripts (Script 1: M0, Script 2: Mt, Script 3: Mh), build an input file of defects from the Consolidated Issue List (from Phase B), run the three scripts, and write a report containing the input data, the three estimates, and a discussion of the assumptions.
Next 2 Weeks
• Lab 4:
– Document Inspection and Defect Prediction
• Lecture 5:
– Test Lifecycle, Documentation and Organisation
• In addition:
– Read textbook chapters 10 and 12 (available via OIS)
Lab 4: Must work in pairs to get full marks!