
Upload: catherine-campbell

Post on 17-Aug-2015



Weighted Risk Based Testing

Step-by-step guide

1. After creating the test case in Zephyr, determine the probability of failure. The probability should be determined by certain risk conditions:

   1. Frequent change requests
   2. Undefined scope
   3. New code vs. changed code
   4. The risk probability should grow with each known defect

2. Risk Impact is determined by how a failure will affect the outcome of the project:

   1. None – There is no impact because we will not support it any longer
   2. Minor – If this item fails, we will launch anyway
   3. Significant – This is a must-have but can be launched at a later date
   4. Severe – Failure will delay the launch of the project, but the site is still functional
   5. Catastrophic – The site will fail, or the launch would be cancelled until this item is addressed and fixed
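The two factors above can be combined into a single risk score. The sketch below is a minimal illustration, not the guide's prescribed formula: the 1–5 numeric impact scale and the per-condition probability increments are assumptions, and the function names are hypothetical.

```python
# Hypothetical sketch: combine failure probability and risk impact.
# The numeric scale and increments are assumptions, not from the guide.

IMPACT = {"None": 1, "Minor": 2, "Significant": 3, "Severe": 4, "Catastrophic": 5}

def failure_probability(frequent_change_requests, undefined_scope, new_code, defect_count):
    """Probability score grows with each risk condition and each known defect."""
    score = 1  # baseline
    if frequent_change_requests:
        score += 1
    if undefined_scope:
        score += 1
    if new_code:
        score += 1  # new code is treated as riskier than changed code
    return score + defect_count  # probability grows with each defect

def risk_score(probability, impact_label):
    """Overall risk is probability weighted by the impact level."""
    return probability * IMPACT[impact_label]

# A feature with frequent change requests, new code, and 2 known defects,
# whose failure would be Severe:
print(risk_score(failure_probability(True, False, True, 2), "Severe"))  # 20
```

A higher score would push the test case earlier in the execution order.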

Weighted Testing

The concept of weighted or risk-based testing has been in practice for decades; it is process-agnostic and lives comfortably within Agile, Waterfall, or the newest flavor of the month. This flexibility is why weighted testing is an ideal solution for shops that experience frequent time crunches and scope creep. Weighted testing is easy to implement, flexible enough for various platforms and testing types, and lends objectivity and a legitimacy of priority to test cases and their associated defects.

According to the National Institute of Standards and Technology, American companies lose an estimated $59.5 billion annually to inadequate software testing. Here are some statistics you can research within our own system, Jira. Across projects we currently have test cases and 715 open bugs in a status of open or defined. That is an average of 282 bugs (defects) per project, with roughly 10% of the total number of bugs being “Critical” or “High”. Time spent testing averages 2.25 hours per test case for the initial run, and around 3 hours for test cases that produce critical defects; about 1 in 14 test cases produces a critical defect, and those cases normally produce 3.25 total defects each. Those defects create the need to run the failed test cases again, which means those 140 critical or high test cases cause the tester to run them a combined total of 385 times, at 866.25 hours. With a typical sprint running at 80 hours, or two weeks, that re-testing effort amounts to roughly 11 sprints spent clearing defects. For 2014 the numbers are:

Test Cases Opened or Updated: 1,167

New Feature/User Story: 1,906

Critical Bugs: 2,015

Total Bugs: 3,808

Approximate hours spent: 11,639+ in 2014
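The rerun figures quoted above can be re-derived from the stated inputs (140 critical/high cases, 385 combined reruns, 2.25 hours per run); the variable names below are illustrative.

```python
# Re-deriving the rerun figures quoted above from the document's inputs.
critical_cases = 140            # test cases producing Critical/High defects
combined_reruns = 385           # total times those cases were re-executed
hours_per_run = 2.25            # average execution time per run

runs_per_case = combined_reruns / critical_cases      # reruns per case
total_rerun_hours = combined_reruns * hours_per_run   # total rerun effort
sprint_hours = 80                                     # one two-week sprint

print(runs_per_case)                       # 2.75
print(total_rerun_hours)                   # 866.25
print(total_rerun_hours / sprint_hours)    # ~10.8 sprints of pure re-testing
```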


For Company, 52% of all defects opened this year were in a Critical or High status. We can reduce this number with the introduction of Weighted Testing, where a test's weight is determined by how important the feature is to the company or business. NIST gives guidelines for good metrics:

Simple and computable

Persuasive: The metrics appear to be measuring the correct attribute

Objective and consistent

Collectable regardless of platform or computing language

Provides useful and actionable feedback.

To add risk-based testing into the mix, we look to the paper “Risk Based Testing and Metrics”, in which Amland determined the steps needed for risk-based testing:

Identify the risk and its classification. Is there a potential risk to this function or activity? How is it classified: Critical, High, Medium, Low, or non-existent?

What are the consequences if this particular function fails?

Determine the risk mitigation plan to avoid risks or minimize their impact through focused testing.

Risk Reporting – What is the probability of occurrence?

Risk Prediction – Determined through metrics: earlier defects, occurrence of defects, and predictive behaviors prior to the occurrence of a defect

To implement weighted testing with a risk-based approach, we need to complete upfront analysis, planning, and metrics collection prior to execution; this will be important to success. Another consideration to take into account as we develop weighted testing to fit our unique ways comes from Zhang and Zhou, who suggest that the quality of collected metrics is vital to successful testing. They introduce ratios into this equation, such as “the ratio of injecting faults to inherent faults found by testers…” Upfront diligence in the collection and utilization of metrics will significantly decrease cost and workload while improving software quality and the effectiveness of testing; this is backed by their paper “A Fuzzy Logic Based Approach For Software Testing” and deals specifically with the black-box testing we currently perform.

Our group is creating feature-based scripts using the User Story as the backbone and testable tasks as the detailed steps. For example, a feature gets rated 1 to 5: EXAMPLE = 5, EXAMPLE = 4, EXAMPLE = 2. Pages would be rated as such: Home Page = 4, Originals = 5, Movies = 4, etc. The new carousel we are giving EXAMPLE would rate a 5, with each of its features rated 1 to 5. The test case would have a combined weight of 25, with each step at 15+ being high priority. If the Carousel Video failed, then the following would be taken into consideration when deciding the priority of the defect:

Company – Weight = 5

EXAMPLE 1 – Weight= 5

EXAMPLE 2 - Weight= 5

EXAMPLE 3 – Weight = 5

*Risk-based attribute – Carousel has 3 defects that each failed 4 times = 12

Combined Weight is 20 for priority and 32 for Regression.
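The carousel arithmetic above can be sketched as follows. The function names are illustrative, not an existing API; the weights are the ones listed in the document.

```python
# Sketch of the carousel example: priority weight is the sum of feature
# weights, and regression weight adds the risk-based attribute.

def priority_weight(feature_weights):
    """Combined priority weight is the sum of the rated feature weights."""
    return sum(feature_weights)

def regression_weight(feature_weights, defects, failures_per_defect):
    """Regression weight adds the risk-based attribute: defects x failures."""
    return priority_weight(feature_weights) + defects * failures_per_defect

carousel = [5, 5, 5, 5]   # Company plus the three EXAMPLE features, each rated 5

print(priority_weight(carousel))            # 20
print(regression_weight(carousel, 3, 4))    # 32  (20 + 3 defects * 4 failures)
```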


Suppose I gave the EXAMPLE Series page the following weights: EXAMPLE – 2, Series – 4, “SHOW” – 4, “SHOW2” – 3. If I run the EXAMPLE regression script and the “Power” original and the “Gunsmoke” series both fail, we would determine that “Power” is a higher priority than “Gunsmoke”, but “Power” is still less than the Company Carousel. In our example, if 20+ is Critical and 15–19 is High, then the defect for the Company Carousel would be in “Critical” status, while Movieplex Series “Power” would carry a weight of 22 with the same number of defects and failures. When the numbers are set by collaboration between developers, users, and testers, the assigned priority is not arbitrary but decided by the numbers. Numbers give defects validity, with prioritization based on objective factors. Given the simplicity and flexibility of the weighted/risk-based approach to the creation and execution of test cases, it is the most doable solution. It will reduce the number of test cases created, create a foundation for strong metrics to be used in planning, and add a strong case for why there should be no open critical defects at the end of any cycle.
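Using the thresholds proposed above (20+ is Critical, 15–19 is High), the mapping from combined weight to defect status could look like the sketch below; the Medium and Low bands beneath the stated thresholds are assumptions, since the document does not define them.

```python
# Mapping a combined weight to a defect priority using the proposed
# thresholds. Bands below High are assumptions, not from the document.

def defect_priority(weight):
    if weight >= 20:
        return "Critical"
    if weight >= 15:
        return "High"
    if weight >= 10:   # assumed band; not defined in the text
        return "Medium"
    return "Low"

print(defect_priority(32))   # Critical -- Company Carousel regression weight
print(defect_priority(16))   # High
```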

Works Cited

Amland, Ståle. “Risk Based Testing and Metrics.” Stavanger, 1999. Print.

RTI –National Institute of Standards & Technology. “The Economic Impacts of Inadequate Infrastructure for Software Testing.” NIST Program Office. Print. May 2002.

Zhang, Zili, and Yanhui Zhou. "A Fuzzy Logic Based Approach For Software Testing." International Journal Of Pattern Recognition & Artificial Intelligence 21.4 (2007): 709-722. Academic Search Premier. Web. 29 Sept. 2014.