Testing Fundamentals

Topics: the need for testing, black box testing, white box testing, minimal output testing
Need for Testing

- Software systems are inherently complex
  - Large systems have 1 to 3 errors per 100 lines of code (LOC)
- Extensive verification and validation is required to build quality software
  - Verification: does the software meet its specifications?
  - Validation: does the software meet its requirements?
- Testing is used to determine whether there are faults in a software system
- The goal of testing is to find the minimum number of test cases that can produce the maximum number of failures, to achieve a desired level of confidence
- The cost of validation should not be underestimated
  - The cost of testing is extremely high; anything that we can do to reduce it is worthwhile
Testing Not Enough

- Exhaustive testing is not usually possible
- "Testing can determine the presence of faults, never their absence" (Dijkstra)
- Test to give us a high level of confidence
Test Strategy

- Identify test criteria
  - What are the goals for comparing the system against its specification?
  - Reliability, completeness, robustness
- Identify target components for testing
  - In an OO system, the classes and class hierarchies
- Generate test cases
  - Produce test cases that can identify faults in an implementation
- Execute test cases against target components
- Evaluation
  - If expected outputs are not produced, a bug report is issued
Test Plans

- Specify a set of test inputs
- For each input, give the expected output
- Run the program and document the actual output
- Example: a square root program
  - Input: 4.0; expected output: 2.0; actual output: 1.99999 (looks all right)
  - Input: -4; expected output: "Invalid input"; actual output: 1.99999 (uh oh, there is a problem here)
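The square root example above can be sketched as a small table-driven test plan. `sqrt_under_test` is a hypothetical faulty implementation, invented here so that the second case fails the same way the slide's does:

```python
import math

# Test plan for the square root example: (input, expected output) pairs.
# "Invalid input" stands for the error report the specification requires.
test_plan = [
    (4.0, 2.0),
    (-4.0, "Invalid input"),
]

def sqrt_under_test(x):
    # Hypothetical faulty implementation: silently ignores the sign of
    # its argument instead of rejecting negative input.
    return math.sqrt(abs(x))

def run_plan(plan, fn, tolerance=1e-4):
    # Run the program on each input and document mismatches with the plan.
    failures = []
    for inp, expected in plan:
        try:
            actual = fn(inp)
        except ValueError:
            actual = "Invalid input"
        numeric = isinstance(expected, float) and isinstance(actual, float)
        ok = abs(actual - expected) <= tolerance if numeric else actual == expected
        if not ok:
            failures.append((inp, expected, actual))
    return failures

for inp, expected, actual in run_plan(test_plan, sqrt_under_test):
    print(f"input {inp}: expected {expected}, actual {actual}")
```

Note the tolerance: as in the slide, a slightly imprecise 1.99999 for input 4.0 should still count as a pass, while the missing input validation is reported.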
Black Box Testing – Data Coverage

- Testing based on input and output alone
  - Do not consider the underlying implementation
- Test cases are generated from the specification
  - Pre- and postconditions
- Specific kinds of black box testing
  - Random testing
    - Generate random inputs
    - Easy to generate cases; good at detecting failures
    - Must be able to easily generate the expected output
  - Partition testing (see the next slides)
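Random testing can be sketched as follows. `sort_under_test` is a stand-in for whatever component is being tested; since computing a full expected output by hand is impractical for random inputs, the driver checks properties that the specification guarantees:

```python
import random

def sort_under_test(xs):
    # Stand-in for the component being tested (here just Python's sort).
    return sorted(xs)

def random_test(trials=1000, seed=0):
    # Random testing: generate random inputs. The expected behaviour must
    # be easy to check; here we verify postconditions of sorting rather
    # than compute a full expected value for each random input.
    rng = random.Random(seed)   # fixed seed so failures are reproducible
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        out = sort_under_test(xs)
        assert all(a <= b for a, b in zip(out, out[1:])), (xs, out)  # ordered
        assert sorted(out) == sorted(xs), (xs, out)                  # permutation
    return trials

print(random_test(), "random tests passed")
```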
Partition Testing

- Cannot try all possible inputs
- Partition the input into equivalence classes
  - Every value in a class should behave similarly
- Test cases
  - Just before a boundary, just after a boundary, on a boundary, and one from the middle of an equivalence class
- Loops
  - Zero times through the body, once through the body, many times through the body
Partition Testing Examples

- Example 1 – Absolute value – 2 equivalence classes
  - Values below zero and values above zero
  - 5 test cases: large negative, -1, 0, +1, large positive
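The five cases of Example 1 can be written directly as a data-coverage test:

```python
def absolute(x):
    # Implementation under test.
    return x if x >= 0 else -x

# Two equivalence classes (negative, non-negative) give five data-coverage
# cases: a large negative, -1, the boundary 0, +1, and a large positive.
cases = [(-10**6, 10**6), (-1, 1), (0, 0), (1, 1), (10**6, 10**6)]

for inp, expected in cases:
    actual = absolute(inp)
    assert actual == expected, f"input {inp}: expected {expected}, actual {actual}"
print("Tests passed")
```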
- Example 2 – Tax rates – 3 equivalence classes
  - $0 .. $29,000 at 17%
  - $29,001 .. $35,000 at 26%
  - $35,001 and above at 29%
  - 13 test cases: 0, 1, 15,000, 28,999, 29,000, 29,001, 29,002, 30,000, 34,999, 35,000, 35,001, 35,002, 50,000
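Example 2 can be sketched like this. The implementation assumes a single flat rate applies to the whole income (the slide does not say whether the rates are marginal), and the 13 cases exercise each boundary on, just before, and just after it, plus a mid-range value from each class:

```python
def tax_rate(income):
    # Hypothetical implementation; assumes one flat rate applies to the
    # whole income (the brackets might instead be marginal).
    if income <= 29_000:
        return 0.17
    elif income <= 35_000:
        return 0.26
    else:
        return 0.29

# The 13 cases put a value on, just before, and just after each boundary,
# plus one from the middle of every equivalence class.
cases = [0, 1, 15_000, 28_999, 29_000, 29_001, 29_002, 30_000,
         34_999, 35_000, 35_001, 35_002, 50_000]
expected = [0.17] * 5 + [0.26] * 5 + [0.29] * 3

for income, rate in zip(cases, expected):
    assert tax_rate(income) == rate, f"income {income}"
print("Tests passed")
```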
- Example 3 – Insert into a sorted list with at most 20 elements
  - About 25 test cases
  - Boundary conditions on the list
    - Empty boundary: length 0, 1, 2
    - Full boundary: length 19, 20, 21
    - Middle of the range: length 10
  - Boundary conditions on inserting
    - Just before and just after the first list element
    - Just before and just after the last list element
    - Into the middle of the list
  - Suppose an error occurs when adding to the upper end of a full list
    - Devise additional test cases to test a hypothesis as to why the error occurs
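A sketch of Example 3, crossing the length boundaries with the insertion-position boundaries. `insert_sorted` is a hypothetical routine under test; the length-20 case corresponds to the slide's "length 21" boundary, i.e. attempting to grow past a full list:

```python
import bisect

CAPACITY = 20

def insert_sorted(lst, x):
    # Hypothetical routine under test: insert into a sorted, bounded list;
    # returns False (and leaves the list alone) when the list is full.
    if len(lst) >= CAPACITY:
        return False
    bisect.insort(lst, x)
    return True

def check(length, x):
    lst = list(range(0, 2 * length, 2))   # sorted even values 0, 2, 4, ...
    before = list(lst)
    expect_ok = length < CAPACITY         # length 20 is the "over-full" case
    assert insert_sorted(lst, x) == expect_ok
    assert lst == (sorted(before + [x]) if expect_ok else before), (length, x)

# Boundary lengths (0, 1, 2, 10, 19, 20) crossed with boundary positions
# for the element: before the first element, after the last, and the middle.
for length in (0, 1, 2, 10, 19, 20):
    for x in (-1, 2 * length + 1, length):
        check(length, x)
print("Tests passed")
```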
White Box Testing

- Use internal properties of the system as test case generation criteria
- Generate cases based on
  - Statement blocks
  - Paths
- Identify unintentional
  - Infinite loops
  - Illegal paths
  - Unreachable program text (dead code)
- Need test cases for exceptions and interrupts
Statement Coverage

- Make sure every statement in the program is executed at least once
- Example:

      if a < b then c = a+b ; d = a*b   /* block 1 */
              else c = a*b ; d = a+b    /* block 2 */
      if c < d then x = a+c ; y = b+d   /* block 3 */
              else x = a*c ; y = b*d    /* block 4 */

- Statement coverage can be achieved with 2 tests
  - Execute blocks 1 and 3 with a < b and a+b < a*b, e.g. a = 2, b = 5
  - Execute blocks 2 and 4 with a >= b and a*b >= a+b, e.g. a = 5, b = 2

[Flow graph of blocks 1-4 with T/F branches omitted]
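The example translates directly, and the two covering tests can be run:

```python
def compute(a, b):
    # Direct translation of the example; comments mark the numbered blocks.
    if a < b:
        c, d = a + b, a * b      # block 1
    else:
        c, d = a * b, a + b      # block 2
    if c < d:
        x, y = a + c, b + d      # block 3
    else:
        x, y = a * c, b * d      # block 4
    return x, y

# Two tests give full statement coverage:
# a=2, b=5 takes blocks 1 and 3 (a < b and a+b < a*b),
# a=5, b=2 takes blocks 2 and 4 (a >= b and a*b >= a+b).
print(compute(2, 5))   # (9, 15)
print(compute(5, 2))   # (50, 14)
```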
- Loops: only 1 test is required for statement coverage
  - Execute the body at least once
  - If there are choices in the body, enough tests are needed to execute all the statements in the body, analogous to the example above
Top Down Testing

- Test the upper levels of the program first
- Use stubs for the lower levels
  - Stubs are dummy procedures that have "empty" implementations – they do nothing
  - For functions, return a constant value
- Test calling sequences for procedures and simple cases for the upper levels
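A minimal top-down sketch (all names here are illustrative): the upper-level report logic is tested before the lower levels exist, with a constant-returning stub for a function and a do-nothing stub for a procedure:

```python
def fetch_total_stub():
    # Stub for an unwritten lower-level function: returns a constant value.
    return 100.0

def log_stub(message):
    # Stub for an unwritten procedure: an "empty" implementation.
    pass

def report(fetch_total=fetch_total_stub, log=log_stub):
    # Upper level under test; the lower levels are passed in, so the stubs
    # can later be replaced by the real implementations.
    total = fetch_total()
    log(f"computed total {total}")
    return f"Total: {total:.2f}"

assert report() == "Total: 100.00"
print("Tests passed")
```

Passing the lower levels as parameters is one way to swap stubs for real implementations; hard-wired calls replaced at link time work just as well.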
Bottom Up Testing

- Use test drivers
  - Have a complete implementation of the subprograms
  - Create a special main program to call the subprograms with a sequence of test cases
  - Can be interactive, semi-interactive, or non-interactive
- See the minimal output testing slides
Mixed Top & Bottom Testing

- Use scaffolding
  - Have some stubs
  - Have some special test drivers
  - Like the scaffolding around a building while it is being built or repaired
  - Not a part of the final product, but necessary to complete the task
Regression Testing

- Rerun all previous tests whenever a change is made
  - Changes can impact previously working programs, so retest everything
  - Must also test the changes
Minimal Output Testing

- Reading and comparing expected with actual output is tedious and error-prone
- The task should be automated – especially for regression testing
- Build the expected output into the test driver and compare it with the actual output
- Report only if expected ≠ actual
  - The output states which test failed, along with the expected and the actual outputs
- Successful test runs only output the message "Tests passed"
Minimal Output Testing – 2

- Example: checking a stack implementation

      Stack list = new Stack();  verifyEmpty("1", list);
      list.add("a");             verifyL1("2", list, "[ a ]");
      list.remove();             verifyEmpty("3", list);
      list.add("a");
      list.add("b");             verifyL2("4", list, "[ b , a ]");
      if (list.contains("c"))  println("5 shouldn't contain c");
      if (!list.contains("a")) println("6 should contain a");
      if (!list.contains("b")) println("7 should contain b");
      list.add("c");             verifyL3("8", list, "[ c , b , a ]");
      list.remove();             verifyL2("9", list, "[ b , a ]");
      list.remove();             verifyL1("10", list, "[ a ]");
      list.remove();             verifyEmpty("11", list);

- The verify routines do the appropriate test and output messages
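The same driver can be rendered in Python. A single `verify` helper stands in for the slide's `verifyEmpty`/`verifyL1`/... family; it builds the expected output into the driver and prints only on a mismatch, so a clean run produces just "Tests passed":

```python
class Stack:
    # Small stack whose string form matches the slide's "[ b , a ]" layout
    # (newest element first).
    def __init__(self):
        self._items = []
    def add(self, x):
        self._items.append(x)
    def remove(self):
        return self._items.pop()
    def contains(self, x):
        return x in self._items
    def __str__(self):
        if not self._items:
            return "[ ]"
        return "[ " + " , ".join(reversed(self._items)) + " ]"

failures = []

def verify(test_id, stack, expected):
    # Minimal output: report only when expected != actual.
    actual = str(stack)
    if actual != expected:
        failures.append(test_id)
        print(f"test {test_id} failed: expected {expected}, actual {actual}")

stack = Stack()
verify("1", stack, "[ ]")
stack.add("a");  verify("2", stack, "[ a ]")
stack.remove();  verify("3", stack, "[ ]")
stack.add("a")
stack.add("b");  verify("4", stack, "[ b , a ]")
if stack.contains("c"):      print("5 shouldn't contain c")
if not stack.contains("a"):  print("6 should contain a")
if not stack.contains("b"):  print("7 should contain b")
stack.add("c");  verify("8", stack, "[ c , b , a ]")
stack.remove();  verify("9", stack, "[ b , a ]")
stack.remove();  verify("10", stack, "[ a ]")
stack.remove();  verify("11", stack, "[ ]")

if not failures:
    print("Tests passed")
```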
Assertions

- Running programs with a life jacket
- Assertions are tested whenever the program is executed, unlike testing, which is done off-line
- Assertions can be put into programs using if...then statements
  - Some languages, such as Eiffel, have them built in
  - C has the assert macro
- The condition compares expected with actual values and raises an exception if the assertion fails
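A small illustration: Python's `assert` statement plays the role of C's assert macro, and the same checks could be written as explicit if...then statements that raise. `withdraw` is a made-up example routine:

```python
def withdraw(balance, amount):
    # Precondition: checked on every execution, not just during testing.
    assert amount > 0, f"amount must be positive, got {amount}"
    new_balance = balance - amount
    # Postcondition: compares the expected relationship with actual values;
    # AssertionError (an exception) is raised if it fails.
    assert new_balance <= balance
    return new_balance

print(withdraw(100, 30))   # 70

try:
    withdraw(100, -5)      # violates the precondition
except AssertionError as e:
    print("assertion failed:", e)
```

Note that Python (like C with NDEBUG) can be run with assertions disabled, so they are a supplement to testing, not a replacement.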
Inspections & Walkthroughs

- Manual or computer-aided comparisons of software development products
  - Specifications, program text, analysis documents
- Documents are paraphrased by the authors
- Walkthroughs are done by going through the execution paths of a program, comparing its outputs with those of the paraphrased documents
- Walkthrough team
  - Usually 4-6 people
  - Elicit questions, facilitate discussion
  - Interactive process
  - Not done to evaluate people
- Psychological side effects
  - If a walkthrough is going to be performed, developers frequently write easy-to-read program text
  - This clarity can help make future maintenance easier
OO Testing

- Using OO technology affects three kinds of testing: intra-feature testing, inter-feature testing, and testing class hierarchies
- Intra-feature testing
  - Features can be tested much as procedures and functions are tested in imperative languages
- Inter-feature testing
  - Test an entire class against an abstract data type (specification)
  - Some inter-feature testing methods specify correct sequences of feature calls, and use contracts to derive test cases
Testing Class Hierarchies

- May have to re-test inherited features
- For testing a hierarchy, it is usually best to test from the top down
  - Start with the base classes
  - Test each feature in isolation, then test feature interactions
- Build test histories
  - Associate test cases with features
  - The history can be inherited with the class, allowing for reuse of test cases
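One way a test history might be sketched (all names illustrative): cases are associated with feature names, a subclass inherits the base class's history, and only the cases for redefined features change:

```python
class Counter:
    step = 1
    # Test history: feature name -> list of (args, expected-result) cases.
    test_history = {"increment": [((), 1)]}

    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += self.step
        return self.value

class DoubleCounter(Counter):
    step = 2   # redefines the behaviour of the inherited increment feature
    # Inherit the history, replacing only the affected feature's cases.
    test_history = {**Counter.test_history, "increment": [((), 2)]}

def run_history(cls):
    # Rerun every case in a class's (possibly inherited) test history.
    failures = 0
    for feature, cases in cls.test_history.items():
        for args, expected in cases:
            actual = getattr(cls(), feature)(*args)   # fresh object per case
            if actual != expected:
                failures += 1
                print(f"{cls.__name__}.{feature}: expected {expected}, actual {actual}")
    return failures

assert run_history(Counter) == 0 and run_history(DoubleCounter) == 0
print("Tests passed")
```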
Testing Classes

- Consider the class as the basic unit
- Like unit (module) testing
  - Usually tested in isolation
  - Surround the class with stubs and drivers
- Exercise each feature in turn
  - Create test cases for each feature
- Often a test driver is created
  - A simple menu that can be used to exercise each feature with inputs from a test file
  - Need to recompile when switching drivers
Integration Testing

- Testing with all the components assembled
- Focuses on testing the interfaces between components
- Usually want to do this in a piece-by-piece fashion
  - Avoid a "big bang" test
  - Incremental testing and incremental integration are preferable
  - Integrate after unit testing a component
- Bottom-up and top-down approaches may be used