Software Test Metrics QA


TRANSCRIPT

Slide 1

Test Metrics: A Practical Approach to Tracking & Interpretation

Presented by: Shaun Bradshaw, Director of Quality Solutions

    May 20, 2004

    Quality - Innovation - Vision


    Slide 2

Objectives

Why Measure?
Definition
Metrics Philosophy
Types of Metrics
Interpreting the Results
Metrics Case Study
Q & A


    Slide 3

Why Measure?

"Software bugs cost the U.S. economy an estimated $59.5 billion per year. An estimated $22.2 billion could be eliminated by improved testing that enables earlier and more effective identification and removal of defects."

- US Department of Commerce (NIST)


    Slide 4

Why Measure?

It is often said, "You cannot improve what you cannot measure."


    Slide 5

    Definition

Test Metrics:

Are a standard of measurement.
Gauge the effectiveness and efficiency of several software development activities.
Are gathered and interpreted throughout the test effort.
Provide an objective measurement of the success of a software project.


    Slide 6

    Metrics Philosophy

When tracked and used properly, test metrics can aid in software development process improvement by providing pragmatic & objective evidence of process change initiatives.

Keep It Simple
Make It Meaningful
Track It
Use It


    Slide 7

Metrics Philosophy: Keep It Simple

Measure the basics first
Clearly define each metric
Get the most bang for your buck


    Slide 8

Metrics Philosophy: Make It Meaningful

Metrics are useless if they are meaningless (use the GQM model)
Must be able to interpret the results
Metrics interpretation should be objective


    Slide 9

Metrics Philosophy: Track It

Incorporate metrics tracking into the Run Log or defect tracking system
Automate the tracking process to remove time burdens (see the sketch below)
Accumulate metrics throughout the test effort & across multiple projects
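As a hedged illustration of automating Run Log tracking, the sketch below shows one possible shape for an execution record appended by a test harness hook. The file name, field names, and the log_execution helper are assumptions made for illustration, not part of the original presentation.

```python
# Minimal sketch of automated Run Log tracking (assumed CSV layout): a test
# harness hook appends one record per execution so base metrics can be
# accumulated without manual bookkeeping.
import csv
import os
from datetime import date

RUN_LOG = "run_log.csv"  # hypothetical Run Log file used by the test team
FIELDS = ["test_case_id", "execution_date", "result", "defect_id"]

def log_execution(test_case_id: str, result: str, defect_id: str = "") -> None:
    """Append one execution record (result: 'Pass', 'Fail', or 'Blocked')."""
    new_file = not os.path.exists(RUN_LOG)
    with open(RUN_LOG, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()  # write the header row once, on first use
        writer.writerow({
            "test_case_id": test_case_id,
            "execution_date": date.today().isoformat(),
            "result": result,
            "defect_id": defect_id,
        })

# Example: called by the test harness after each run
# log_execution("TC-042", "Fail", defect_id="DEF-117")
```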


    Slide 10

Metrics Philosophy: Use It

Interpret the results
Provide feedback to the Project Team
Implement changes based on objective data


    Slide 11

Types of Metrics

Base Metrics: raw data gathered by Test Analysts, tracked throughout the test effort, and used to provide project status and evaluations/feedback (a tallying sketch follows the list below).

Examples:
# Test Cases
# Executed
# Passed
# Failed
# Under Investigation
# Blocked
# 1st Run Failures
# Re-Executed
Total Executions
Total Passes
Total Failures
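To make the difference between the distinct-test-case counts (# Executed, # Passed, # Failed) and the per-execution totals (Total Executions, Total Passes, Total Failures) concrete, here is a minimal, hedged tallying sketch over run-log records like those in the earlier sketch. The record layout and the exact counting rules are assumptions; # Under Investigation is omitted because it depends on defect-tracking status that is not in the run log.

```python
from collections import Counter

def base_metrics(records, total_test_cases):
    """Tally base metrics from run-log records.

    Each record is assumed to be a dict with 'test_case_id' and 'result'
    ('Pass', 'Fail', or 'Blocked'), listed in execution order.
    """
    first_result = {}          # outcome of each test case's first run
    last_result = {}           # most recent outcome per test case
    runs_per_case = Counter()  # number of executions per test case
    totals = Counter()         # outcomes counted per execution, not per case

    for rec in records:
        tc, result = rec["test_case_id"], rec["result"]
        if result == "Blocked":
            last_result[tc] = "Blocked"  # blocked run; not counted as an execution
            continue
        runs_per_case[tc] += 1
        totals[result] += 1
        first_result.setdefault(tc, result)
        last_result[tc] = result

    return {
        "# Test Cases": total_test_cases,
        "# Executed": len(runs_per_case),  # distinct test cases run at least once
        "# Passed": sum(r == "Pass" for r in last_result.values()),
        "# Failed": sum(r == "Fail" for r in last_result.values()),
        "# Blocked": sum(r == "Blocked" for r in last_result.values()),
        "# 1st Run Failures": sum(r == "Fail" for r in first_result.values()),
        "# Re-Executed": sum(n > 1 for n in runs_per_case.values()),
        "Total Executions": sum(runs_per_case.values()),  # re-runs included
        "Total Passes": totals["Pass"],
        "Total Failures": totals["Fail"],
    }
```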


    Slide 12

Types of Metrics

Base Metric Example: # Blocked

The number of distinct test cases that cannot be executed during the test effort due to an application or environmental constraint. Defines the impact of known system defects on the ability to execute specific test cases.


    Slide 13

Types of Metrics

Calculated Metrics: tracked by the Test Lead/Manager; convert base metrics into useful data. Combinations of metrics can be used to evaluate process changes (assumed formulas are sketched after the list).

Examples:
% Complete
% Test Coverage
% Test Cases Passed
% Test Cases Blocked
1st Run Fail Rate
Overall Fail Rate
% Defects Corrected
% Rework
% Test Effectiveness
Defect Discovery Rate
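The deck does not spell out the formulas behind these calculated metrics, so the sketch below uses plausible definitions chosen to be consistent with the Sample Run Log shown a few slides later; treat them as assumptions rather than the presenter's exact definitions. Metrics that also need defect-tracking data (% Defects Corrected, % Rework, % Test Effectiveness, Defect Discovery Rate) are left out.

```python
def calculated_metrics(b):
    """Derive calculated metrics from a dict of base metrics.

    Assumed formulas (not stated in the deck):
      % Complete        = # Passed           / # Test Cases
      % Test Coverage   = # Executed         / # Test Cases
      % TCs Passed      = # Passed           / # Executed
      % TCs Blocked     = # Blocked          / # Test Cases
      1st Run Fail Rate = # 1st Run Failures / # Executed
      Overall Fail Rate = Total Failures     / Total Executions
    """
    def pct(num, den):
        return round(100.0 * num / den, 1) if den else 0.0

    return {
        "% Complete": pct(b["# Passed"], b["# Test Cases"]),
        "% Test Coverage": pct(b["# Executed"], b["# Test Cases"]),
        "% TCs Passed": pct(b["# Passed"], b["# Executed"]),
        "% TCs Blocked": pct(b["# Blocked"], b["# Test Cases"]),
        "1st Run Fail Rate": pct(b["# 1st Run Failures"], b["# Executed"]),
        "Overall Fail Rate": pct(b["Total Failures"], b["Total Executions"]),
    }
```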


    Slide 14

Types of Metrics

Calculated Metric Example: 1st Run Fail Rate

The percentage of executed test cases that failed on their first execution. Used to determine the effectiveness of the analysis and development process. Comparing this metric across projects shows how process changes have impacted the quality of the product at the end of the development phase.
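For example, with the Sample Run Log figures on Slide 16, 2 of the 13 executed test cases failed on their first execution, giving a 1st Run Fail Rate of 2 / 13 ≈ 15.4%.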



    Slide 16

Sample Run Log

Base Metrics                 Value     Calculated Metrics      Value
Total # of TCs                 100     % Complete              11.0%
# Executed                      13     % Test Coverage         13.0%
# Passed                        11     % TCs Passed            84.6%
# Failed                         1     % TCs Blocked            2.0%
# UI (Under Investigation)       1     % 1st Run Failures      15.4%
# Blocked                        2     % Failures              20.0%
# Unexecuted                    87     % Defects Corrected     66.7%
# Re-executed                    1     % Rework               100.0%
Total Executions                15
Total Passes                    11
Total Failures                   3
1st Run Failures                 2
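As a quick, hedged cross-check, feeding the base-metric column above into the calculated_metrics sketch from Slide 13 reproduces the corresponding calculated figures (the defect-based columns, % Defects Corrected and % Rework, need defect-tracking data and are not reproduced):

```python
base = {
    "# Test Cases": 100, "# Executed": 13, "# Passed": 11, "# Blocked": 2,
    "# 1st Run Failures": 2, "Total Executions": 15, "Total Failures": 3,
}
print(calculated_metrics(base))
# {'% Complete': 11.0, '% Test Coverage': 13.0, '% TCs Passed': 84.6,
#  '% TCs Blocked': 2.0, '1st Run Fail Rate': 15.4, 'Overall Fail Rate': 20.0}
```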


    Slide 17

Metrics Program - No Analysis

Issue: The test team tracks and reports various test metrics, but there is no effort to analyze the data.

Result: Potential improvements are not implemented, leaving process gaps throughout the SDLC. This reduces the effectiveness of the project team and the quality of the applications.


    Slide 18

Metrics Analysis & Interpretation

Solution:
Closely examine all available data
Use the objective information to determine the root cause
Compare to other projects
Are the current metrics typical of software projects in your organization?
What effect do changes have on the software development process?

Result: Future projects benefit from a more effective and efficient application development process.


    Slide 19

Metrics Case Study

Volvo IT of North America had little or no testing involvement in its IT projects. The organization's projects were primarily maintenance related and operated in a COBOL/CICS/Mainframe environment. The organization had a desire to migrate to newer technologies and felt that testing involvement would assure and enhance this technological shift.

While establishing a test team, we also instituted a metrics program to track the benefits of having a QA group.


    Slide 20

Metrics Case Study: Project V

Introduced a test methodology and metrics program
Project was 75% complete (development was nearly finished)
Test team developed 355 test scenarios
30.7% - 1st Run Fail Rate
31.4% - Overall Fail Rate
Defect Repair Costs = $519,000


    Slide 21

Metrics Case Study: Project T

Instituted requirements walkthroughs and design reviews with test team input
Same resources comprised both project teams
Test team developed 345 test scenarios
17.9% - 1st Run Fail Rate
18.0% - Overall Fail Rate
Defect Repair Costs = $346,000


    Slide 22

Metrics Case Study

                        Project V      Project T      Reduction
1st Run Fail Rate           30.7%          17.9%          41.7%
Overall Failure Rate        31.4%          18.0%          42.7%
Cost of Defects       $519,000.00    $346,000.00    $173,000.00

Reduction of 33.3% in the cost of defect repairs.

Every project moving forward, using the same QA principles, can achieve the same type of savings.
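The reduction figures above follow directly from the Project V and Project T numbers; as a minimal arithmetic check (relative reduction = (V - T) / V), the dollar row in the table shows the absolute difference of $173,000, which corresponds to the 33.3% relative reduction quoted:

```python
def reduction(v, t):
    """Relative reduction from Project V to Project T, as a percentage."""
    return round(100.0 * (v - t) / v, 1)

print(reduction(30.7, 17.9))        # 41.7  -> 1st Run Fail Rate
print(reduction(31.4, 18.0))        # 42.7  -> Overall Failure Rate
print(reduction(519_000, 346_000))  # 33.3  -> cost of defect repairs
```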


    Slide 23

    Q & A