Search-Based Testing (source: tarot2010.ist.tugraz.at/slides/mcminn.pdf)

Search-Based Testing

Phil McMinn

University of Sheffield, UK

Overview

How and why Search-Based Testing Works

Examples: Temporal, Functional, Structural

Search-Based Test Data Generation

Empirical and Theoretical Studies

Testability Transformation

Input Domain Reduction

Acknowledgements

The material in some of these slides has kindly been provided by:

Mark Harman (KCL)

Joachim Wegener (Berner & Mattner)

Conventional Testing - the hard part...

manual design of test cases / scenarios

laborious, time-consuming

tedious

difficult! (where are the faults?)

Search-Based Testing is an automated search of a

potentially large input space

the search is guided by a problem-specific ‘fitness function’

the fitness function guides the search to the test goal

Random Test Data Generation

[Figure: inputs sampled at random across the input domain]

Fitness-guided search

[Figure: the search moves through the input domain guided by a fitness value for each input]

Fitness Function

The fitness function scores different inputs to the system according to the test goal

which ones are ‘good’ (that we should develop/evolve further)

which ones are useless (that we can forget about)

Publications since 1976

[Chart: growth in search-based testing publications since 1976 - source: SEBASE publications repository, http://www.sebase.org]

Fitness Functions

Often easy

We often define metrics

Need not be complex

Conventional testing: manual design of test cases / scenarios - laborious, time-consuming, tedious, difficult! (where are the faults?)

Search-Based Testing: automatic - may sometimes be time-consuming, but it is not a human's time being consumed

Search-Based Testing: a good fitness function will lead the search to the faults

Generating vs Checking

Conventional Software Testing Research: write a method to construct test cases

Search-Based Testing: write a fitness function to determine how good a test case is

White-box testing

The fitness function analyses the outcome of decision statements and the values of variables in predicates

White-box + testing for assertion violations

Safety condition (desired state): speed < 150 mph

Fitness function: f = 150 - speed

The fitness is minimised; if f is zero or less, a fault has been found
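As a sketch, this safety check can be driven by a tiny fitness function. The `system_under_test` below is a hypothetical stand-in (not part of the slides); in practice the speed would come from executing the real system.

```python
# Hedged sketch of the safety-condition fitness: f = 150 - speed is minimised,
# and f <= 0 means speed >= 150, violating the condition speed < 150 mph.

def system_under_test(throttle):
    return 3 * throttle          # hypothetical stand-in: speed grows with throttle

def fitness(throttle):
    # distance from violating the safety condition speed < 150
    return 150 - system_under_test(throttle)

def is_fault(throttle):
    # zero or less: the assertion speed < 150 no longer holds
    return fitness(throttle) <= 0
```

A search that drives `fitness` towards zero is simultaneously driving the system towards the fault.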

Search Techniques

Hill Climbing

[Figure: the search climbs from its current input to better-scoring neighbours in the fitness landscape]

No better solution in the neighbourhood - stuck at a local optimum
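A minimal hill climb over a single integer input might look like this (illustrative only; the landscape and names are hypothetical):

```python
# Move to a better neighbour while one exists; stop when none improves,
# which may be only a local optimum of the fitness landscape.

def hill_climb(fitness, start, max_steps=10_000):
    current = start
    for _ in range(max_steps):
        neighbours = [current - 1, current + 1]
        best = max(neighbours, key=fitness)
        if fitness(best) <= fitness(current):
            return current          # no better neighbour: a (local) optimum
        current = best
    return current

# on a single-peaked landscape the climb reaches the global optimum at 42
peak = hill_climb(lambda x: -abs(x - 42), start=0)
```

On a multi-peaked landscape the same loop halts at whichever peak it climbs first, which motivates the restarts on the next slide.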

Hill Climbing - Restarts

[Figure: when the climb gets stuck, it restarts from a new random point in the input domain]

Simulated Annealing

[Figure: the search moves through the fitness landscape]

Worse solutions are temporarily accepted
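The temporary acceptance of worse solutions can be sketched as follows (a minimising variant; the cooling schedule and parameter values are illustrative, not from the slides):

```python
import math
import random

# Worse moves are accepted with probability exp(-delta / T); the temperature T
# cools over time, so early on the search can escape local optima and later it
# behaves almost greedily.

def simulated_annealing(cost, start, t0=100.0, cooling=0.95, steps=2000, seed=0):
    rng = random.Random(seed)        # seeded for reproducible runs
    current, t = start, t0
    best = current
    for _ in range(steps):
        candidate = current + rng.choice([-1, 1])
        delta = cost(candidate) - cost(current)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = candidate      # accept the (possibly worse) move
        if cost(current) < cost(best):
            best = current
        t = max(t * cooling, 1e-9)   # cool down
    return best

best = simulated_annealing(lambda x: abs(x - 5), start=0)
```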

Evolutionary Algorithm

[Figure: a population of inputs spread across the fitness landscape, converging over generations]

Evolutionary Testing

[Figure: the evolutionary testing cycle - selection, recombination, mutation and insertion produce test cases, which are executed and monitored; fitness evaluation feeds back into the next generation until the end criterion is met]

Which search method?

Depends on characteristics of the search landscape

Some landscapes are hard for some searches but easy for others... and vice versa...

more on this later...

Ingredients for an optimising search algorithm

Representation

Fitness Function

Neighbourhood

Ingredients for Search-Based Testing

Representation: a method of encoding all possible inputs - usually straightforward, as the inputs are already in data structures

Neighbourhood: part of our understanding of the problem - we need to know our near neighbours

Fitness Function: a transformation of the test goal into a numerical function, whose values indicate how 'good' an input is

More search algorithms

Tabu Search

Estimation of Distribution Algorithms

Particle Swarm Optimisation

Ant Colony Optimisation

Genetic Programming

Important Publications

Surveys:

Phil McMinn: Search-Based Software Test Data Generation: A Survey. Software Testing, Verification and Reliability, 14(2):105-156, 2004.

Wasif Afzal, Richard Torkar and Robert Feldt: A Systematic Review of Search-Based Testing for Non-Functional System Properties. Information and Software Technology, 51(6):957-976, 2009.

Shaukat Ali, Lionel Briand, Hadi Hemmati and Rajwinder Panesar-Walawege: A Systematic Review of the Application and Empirical Investigation of Search-Based Test-Case Generation. IEEE Transactions on Software Engineering, to appear, 2010.

Getting started in SBSE:

M. Harman and B. Jones: Search-Based Software Engineering. Information and Software Technology, 43(14):833-839, 2001.

M. Harman: The Current State and Future of Search Based Software Engineering. In Proceedings of the 29th International Conference on Software Engineering (ICSE 2007), 20-26 May, Minneapolis, USA, 2007.

D. Whitley: An Overview of Evolutionary Algorithms: Practical Issues and Common Pitfalls. Information and Software Technology, 43(14):817-831, 2001.

Applications:

O. Bühler and J. Wegener: Evolutionary Functional Testing. Computers & Operations Research, 2008.

J. Wegener and M. Grochtmann: Verifying Timing Constraints of Real-Time Systems by Means of Evolutionary Testing. Real-Time Systems, 15(3):275-298, 1998.

O. Bühler and J. Wegener: Evolutionary Functional Testing of an Automated Parking System. In International Conference on Computer, Communication and Control Technologies, Orlando, Florida, USA, 2003.


Search-Based Structural Test Data Generation

Covering a structure

[Figure: a control flow graph with a TARGET node to be covered]

Fitness evaluation

[Figure: the test data executes the 'wrong' path, missing the TARGET]

Analysing control flow

The outcomes at key decision statements matter: these are the decisions on which the TARGET is control dependent

Approach Level

[Figure: the approach level counts the control-dependent decisions remaining between the point of divergence and the TARGET - here 2, 1 and 0 - and is minimised]

Analysing predicates

Approach level alone gives us coarse values:

a = 50, b = 0
a = 45, b = 5
a = 40, b = 10
a = 35, b = 15
a = 30, b = 20
a = 25, b = 25 - getting 'closer' to being true!

Branch distance

Associate a distance formula with each kind of relational predicate:

a = 50, b = 0: branch distance = 50
a = 45, b = 5: branch distance = 40
a = 40, b = 10: branch distance = 30
a = 35, b = 15: branch distance = 20
a = 30, b = 20: branch distance = 10
a = 25, b = 25: branch distance = 0

getting 'closer' to being true

Branch distances for relational predicates
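The table itself has not survived the transcription; the formulas below are the standard ones from the search-based testing literature, restricted to the operators used in these slides. Some formulations add a small constant K when the predicate is false; it is omitted here to match the slide's numbers.

```python
# Branch distance: 0 when the predicate 'a <op> b' is already true, otherwise
# a measure of how far the operands are from making it true.

def branch_distance(op, a, b):
    if op == '==':
        return abs(a - b)
    if op in ('>=', '>'):
        return max(b - a, 0)
    if op in ('<=', '<'):
        return max(a - b, 0)
    raise ValueError(f"unsupported operator: {op}")

# the sequence from the previous slide, for a predicate a == b
trace = [branch_distance('==', a, b)
         for a, b in [(50, 0), (45, 5), (40, 10), (35, 15), (30, 20), (25, 25)]]
```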

Putting it all together

The target lies behind the nested decisions 'if a >= b', 'if b >= c' and 'if c >= d', each of which must take its true branch:

TARGET MISSED at 'if a >= b' (false): Approach Level = 2, Branch Distance = b - a
TARGET MISSED at 'if b >= c' (false): Approach Level = 1, Branch Distance = c - b
TARGET MISSED at 'if c >= d' (false): Approach Level = 0, Branch Distance = d - c

Fitness = approach level + normalised branch distance

The branch distance is normalised to between 0 and 1; it indicates how close the current approach level is to being penetrated

Normalisation Functions

Since the 'maximum' branch distance is generally unknown, we need a non-standard normalisation function:

Baresel (2000), alpha = 1.001

Arcuri (2010), beta = 1
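The formulas behind the two normalisations named above, as published: Baresel's is exponential in alpha, Arcuri's is rational in beta, and both map any branch distance d >= 0 into [0, 1) without needing a maximum distance.

```python
# Two normalisation functions for the branch distance d.

def normalise_baresel(d, alpha=1.001):
    # Baresel (2000): 1 - alpha^(-d)
    return 1 - alpha ** (-d)

def normalise_arcuri(d, beta=1.0):
    # Arcuri (2010): d / (d + beta)
    return d / (d + beta)
```

Both are 0 when the branch is taken (d = 0) and increase monotonically, so ranking between inputs at the same approach level is preserved.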

Alternating Variable Method

[Figure: each input of void fn( input1, input2, input3, ... ) is probed in turn by increasing and decreasing its value]

[Figure: fitness plotted against an input variable's value - an accelerated hill climb]

Alternating Variable Method

1. Randomly generate a start point: a=10, b=20, c=30

2. 'Probe' moves on a: a=9, b=20, c=30; a=11, b=20, c=30 (no effect)

3. 'Probe' moves on b: a=10, b=19, c=30 (improved branch distance)

4. Accelerated moves in the direction of improvement
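The steps above can be sketched as a minimising AVM over integer inputs (restarts and bounds handling omitted; the example objective is illustrative):

```python
# Alternating Variable Method: probe each variable with +/-1 moves; when a
# probe improves fitness, accelerate in that direction with doubling step
# sizes; cycle through the variables until no probe helps.

def avm(fitness, start):
    x = list(start)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            for direction in (-1, 1):                    # probe moves
                probe = list(x)
                probe[i] += direction
                if fitness(probe) < fitness(x):
                    step = 1
                    while fitness(probe) < fitness(x):   # accelerated moves
                        x = probe
                        step *= 2
                        probe = list(x)
                        probe[i] += direction * step
                    improved = True
                    break
    return x

# e.g. minimise the branch distance |a - b| + |b - c| for 'a == b and b == c'
solution = avm(lambda v: abs(v[0] - v[1]) + abs(v[1] - v[2]), [10, 20, 30])
```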

IGUANA - a search-based test data generator tool (Java)

[Figure: the search algorithm sends inputs via the Java Native Interface to the test object (C code compiled to a DLL); instrumentation feeds information from the test object back for fitness computation]

Test Object Preparation

1. Parse the code and extract the control dependency graph - 'which decisions are key for the execution of individual structural targets?'

[Figure: a function for testing - if (a == b) { if (b == c) { return 1; /* TARGET */ } }]

2. Instrument the code, for monitoring control flow and the values of variables in predicates

[Figure: the same function with instrumentation added at 'if a == b' and 'if b == c']

3. Map the inputs to a vector: <a, b, c>

Straightforward in many cases; inputs composed of dynamic data structures are harder to encode

Kiran Lakhotia, Mark Harman and Phil McMinn: Handling Dynamic Data Structures in Search-Based Testing. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2008), Atlanta, Georgia, USA, July 12-16 2008, pp. 1759-1766, ACM Press.

Instrumentation

Each branching condition is replaced by a call to the function node(...); the instrumentation should only observe the program, not alter its behaviour.

The first parameter is the control flow graph node ID of the decision statement.

The second parameter is a boolean condition that replicates the structure in the original program (i.e. including short-circuiting).

Relational predicates are replaced with functions that compute the branch distance.

The instrumentation tells us which decision nodes were executed and their outcomes (branch distances). Therefore, for a given input, we can find the decision at which control flow diverged from the target... compute the approach level from the control dependence graph... and look up the branch distance... giving a fitness value for the input.
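A sketch of what the instrumented program could look like. The `node(...)` helper here is illustrative only (tools such as IGUANA generate the real instrumentation automatically), and the not-taken branch is given a simple 0-or-1 distance for brevity:

```python
# Instrumentation sketch: node(...) records branch distances while returning
# the original boolean outcome, so the program's behaviour is unchanged.

trace = {}   # node id -> (distance to true branch, distance to false branch)

def distance_eq(a, b):
    # branch distance for a predicate 'a == b'
    return abs(a - b)

def node(node_id, true_distance):
    taken = true_distance == 0
    trace[node_id] = (true_distance, 1 if taken else 0)
    return taken   # replicate the original outcome

def function_under_test(a, b, c):
    # original: if (a == b) { if (b == c) { return 1; /* TARGET */ } }
    if node(1, distance_eq(a, b)):
        if node(2, distance_eq(b, c)):
            return 1
    return 0

result = function_under_test(20, 20, 30)
```

Running it on the input <20, 20, 30> reproduces the trace table on the next slide: node 1 is taken true with distance 0, node 2 diverges with a true-branch distance of 10.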

[Example] if (a == b) { if (b == c) { return 1; /* TARGET */ } }

Input: <20, 20, 30>

NODE   T    F
1      0    1
2      10   0
4      20   0
...

Diverged at node 2; approach level: 0, branch distance: 10

fitness = 0.009945219
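The quoted fitness can be reproduced with Baresel's normalisation (alpha = 1.001) from the earlier normalisation slide:

```python
# fitness = approach level + normalised branch distance,
# here with approach level 0 and branch distance 10 as on the slide.

def normalise(d, alpha=1.001):
    # Baresel's normalisation
    return 1 - alpha ** (-d)

def fitness(approach_level, branch_distance):
    return approach_level + normalise(branch_distance)

f = fitness(0, 10)   # ~0.009945219
```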

Which search algorithm?

Empirical Study

Case studies: Bibclean, Defroster, F2, Eurocheck, Gimp, Space, Spice, Tiff, Totinfo

Mark Harman and Phil McMinn: A Theoretical and Empirical Study of Search Based Testing: Local, Global and Hybrid Search. IEEE Transactions on Software Engineering, 36(2):226-247, 2010.

760 branches in ~5 kLOC, of which a subset of 'interesting' branches was studied

Wins for the AVM

[Chart: success rate (%) of Evolutionary Testing vs Hill Climbing on branches from f2 (11F), space space_seqrotrg (17T, 22T), spice clip_to_circle (49T, 62T, 68F), spice cliparc (22T, 24T, 63F) and totinfo InfoTbl (14F, 21F, 29T, 29F, 35T, 35F)]

[Chart: average number of fitness evaluations (log scale, 1 to 100,000) for Evolutionary Testing vs Hill Climbing on branches from gimp_rgb_to_hsl (4T), gimp_rgb_to_hsv (5F), gimp_rgb_to_hsv4 (11F), gimp_rgb_to_hsv_int (10T), gradient_calc_bilinear_factor (8T), gradient_calc_conical_asym_factor (3F), gradient_calc_conical_sym_factor (3F), gradient_calc_spiral_factor (3F), cliparc (13F, 15T, 15F) and TIFF_SetSample (5T)]

Wins for the AVM

When does the AVM win?

[Figure: a fitness landscape over the input domain]

[Chart: success rate of Evolutionary Testing vs Hill Climbing on the eurocheck branches check_ISBN (23F, 27T, 29T, 29F) and check_ISSN (23F, 27T, 29T, 29F)]

Wins for Evolutionary Testing

When does ET win?

The branches in question were part of a routine for validating ISBN/ISSN strings

When a valid character is found, a counter variable is incremented

When does ET win?

Evolutionary algorithms incorporate a population of candidate solutions, and crossover

Crossover enables valid characters to be crossed over into different ISBN/ISSN strings

Schemata

1010100011110000111010
1111101010000000101011
0001001010000111101011

The schema theory predicts that schemata of above-average fitness will proliferate in subsequent generations of the evolutionary search

e.g. the schemata 1**1 and *11* both match the string 1111

The Genetic Algorithm - Royal Road

S1:  1111****************************
S2:  ****1111************************
S3:  ********1111********************
S4:  ************1111****************
S5:  ****************1111************
S6:  ********************1111********
S7:  ************************1111****
S8:  ****************************1111
S9:  11111111************************
S10: ********11111111****************
S11: ****************11111111********
S12: ************************11111111
S13: 1111111111111111****************
S14: ****************1111111111111111
S15: 11111111111111111111111111111111
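A sketch of the Royal Road fitness function (after Mitchell, Forrest and Holland): a string scores, for each schema it matches, the number of defined bits in that schema. Here the schemata are the aligned all-ones blocks of sizes 4, 8, 16 and 32 listed above.

```python
# Royal Road fitness over a 32-bit string represented as a list of 0/1 ints.

def royal_road(bits, length=32):
    assert len(bits) == length
    score = 0
    size = 4
    while size <= length:
        for start in range(0, length, size):
            if all(b == 1 for b in bits[start:start + size]):
                score += size        # matched the all-ones schema of this size
        size *= 2
    return score
```

The all-ones string matches every schema (score 128); partial blocks contribute only at the sizes they complete, which is what rewards the recombination of building blocks.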

When Crossover Helps

[Figure: a recombination tree. Strings P1-P4 each contain 1 valid character; crossover combines them into Q1 and Q2, which each contain 2 valid characters; a further crossover of Q1 and Q2 yields R1 and R2, containing 4 valid characters, one of which executes the target]

[Chart: success rate of Evolutionary Testing vs the Headless Chicken Test on the eurocheck branches check_ISBN (23F, 27T, 29T, 29F) and check_ISSN (23F, 27T, 29T, 29F)]

Headless Chicken Test

T. Jones: Crossover, Macromutation and Population-Based Search. In Proceedings of ICGA '95, Morgan Kaufmann, 1995, pp. 73-80.

Investigations into Crossover

Royal Roads

HIFF

Real Royal Roads

Ignoble Trails

M. Mitchell, S. Forrest, and J. H. Holland, “The royal road for genetic algorithms: Fitness landscapes and GA performance,” Proc. 1st European Conference on Artificial Life. MIT Press, 1992, pp. 245–254.

R. A. Watson, G. S. Hornby, and J. B. Pollack, “Modeling building-block interdependency,” Proc. PPSN V. Springer, 1998, pp.97–106.

T. Jansen and I. Wegener, “Real royal road functions - where crossover provably is essential,” Discrete Applied Mathematics, vol. 149, pp. 111–125, 2005.

J. N. Richter, A. Wright, and J. Paxton, “Ignoble trails - where crossover is provably harmful,” Proc. PPSN X. Springer, 2008, pp. 92–101.

Evolutionary Testing Schemata

{(a, b, c) | a = b}: (50, 50, 25), (100, 100, 10), ...

{(a, b, c) | a > 0}: (50, 10, 25), (100, -50, 10), ...

Crossover of good schemata

{(a, b, c) | a = b} and {(a, b, c) | b ≥ 100} are subschemata ('building blocks') of the superschema {(a, b, c) | a = b ∧ b ≥ 100}

{(a, b, c) | a = b ∧ b ≥ 100} and {(a, b, c) | c ≤ 10} combine into the covering schema {(a, b, c) | a = b ∧ b ≥ 100 ∧ c ≤ 10}

What types of program and program structure enable Evolutionary Testing, through crossover, to perform well - and how?

P. McMinn, How Does Program Structure Impact the Effectiveness of the Crossover Operator in Evolutionary Testing? Proc. Symposium on Search-Based Software Engineering, 2010

1. Large numbers of conjuncts in the input condition

{(a, b, c, ...) | a = b ∧ b ≥ 100 ∧ c ≤ 10 ∧ ...}

Each conjunct represents a 'sub' test data generation problem that can be solved independently and combined with other partial solutions

2. Conjuncts should reference disjoint sets of variables

{(a, b, c, d, ...) | a = b ∧ b = c ∧ c = d ∧ ...}

(here the conjuncts share variables, so solving each conjunct independently does not necessarily yield an overall solution)

Progressive Landscape

[Figure: a fitness landscape over the input domain that improves progressively towards the solution]

Crossover - Conclusions

Crossover lends itself to programs/units that process large data structures (e.g. strings, arrays) resulting in input condition conjuncts with disjoint variables

1. Large numbers of conjuncts in the input condition 2. Conjuncts should reference disjoint sets of variables

... or units that require large sequences of method calls to move an object into a required state

e.g. testing for a full stack - push(...), push(...), push(...)

Other Theoretical Work

A. Arcuri, P. K. Lehre, and X. Yao, “Theoretical runtime analyses of search algorithms on the test data generation for the triangle classification problem,” SBST workshop 2008, Proc. ICST 2008. IEEE, 2008, pp. 161–169.

A. Arcuri, “Longer is better: On the role of test sequence length in software testing,” Proc. ICST 2010. IEEE, 2010.

Testability Transformation

The ‘Flag’ Problem

Program Transformation

Testability transformation: change the program to improve test data generation

... whilst preserving test adequacy

Programs will inevitably have features that heuristic searches handle less well

Mark Harman, Lin Hu, Rob Hierons, Joachim Wegener, Harmen Sthamer, Andre Baresel and Marc Roper: Testability Transformation. IEEE Transactions on Software Engineering, 30(1):3-16, 2004.

Nesting

Phil McMinn, David Binkley and Mark Harman: Empirical Evaluation of a Nesting Testability Transformation for Evolutionary Testing. ACM Transactions on Software Engineering and Methodology, 18(3), Article 11, May 2009.

Testability Transformation

Note that the programs are no longer equivalent

But we don't care, so long as the test data generated is still adequate

Nesting & Local Optima

[Chart: change in success rate after applying the transformation (%), from -100 to 100, for nested branches]

Results - Industrial & Open source code

Dependent & Independent Predicates

Independent: predicates influenced by disjoint sets of input variables can be optimised in parallel, e.g. 'a == 0' and 'b == 0'

Dependent: predicates influenced by non-disjoint sets of input variables; interactions between the predicates inhibit parallel optimisation, e.g. 'a == b' and 'b == c'

[Chart: change in success rate after applying the transformation (%), for nested branches with dependent predicates]

[Chart: change in success rate after applying the transformation (%), for nested branches, comparing dependent predicates against independent and some dependent predicates]

When not preserving program equivalence can go wrong

We need to be careful: are we still testing according to the same criterion?

We are testing to cover structure... but the structure is the problem. So we transform the program... but this alters the structure.

Input Domain Reduction

Mark Harman, Youssef Hassoun, Kiran Lakhotia, Phil McMinn and Joachim Wegener: The Impact of Input Domain Reduction on Search-Based Test Data Generation. In Proceedings of the 6th joint meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE 2007), Cavtat, Croatia, September 3-7 2007, pp. 155-164, ACM Press.

Effect of Reduction

Three input variables, each ranging over -100,000 ... 100,000: approximately 10^16 input combinations

After reduction to the single relevant variable: 200,001 values
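The slide's domain sizes check out with a little arithmetic (assuming integer inputs):

```python
# Each variable ranges over -100,000 ... 100,000 inclusive.
values_per_variable = 100_000 - (-100_000) + 1      # 200,001
full_domain = values_per_variable ** 3              # three variables: ~8.0e15, roughly 10^16
reduced_domain = values_per_variable                # one relevant variable after reduction
```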

Variable Dependency Analysis

Empirical Study

Random Search Alternating Variable Method Evolutionary Testing

Defroster F2

Gimp Spice

Tiff

Case studies:

Studied the effects of reduction with:

Effect on Random Testing

Three input variables, each ranging over -49 ... 50 (100 values each)

Probability of executing the target: the target region occupies 100 x 100 x 1 cells of the 100 x 100 x 100 domain, so removing the irrelevant variables leaves the probability of hitting it unchanged

Results with Random Testing

Results with AVM

Effect on AVM

Saves probe moves (and thus wasteful fitness evaluations) on irrelevant variables:

void fn( irrelevant1, irrelevant2, irrelevant3, required1, ... )

[Figure: before reduction, increase/decrease probes are applied to every parameter; after reduction, only required1 is probed]

Effect on ET

Saves mutations on irrelevant variables

Mutations concentrated on the variables that matter

Likely to speed up the search

Results with ET

Conclusions for Input Domain Reduction

Variable dependency analysis can be used to reduce input domains

This can reduce search effort for the AVM and ET

Perhaps surprisingly, there is no overall change for random search

Other applications of Search-Based Testing

Mutation testing

State machine testing

Y. Jia and M. Harman: Constructing Subtle Faults Using Higher Order Mutation Testing. In 8th International Working Conference on Source Code Analysis and Manipulation (SCAM 2008), Beijing, China, 2008, IEEE Computer Society, to appear.

K. Derderian, R. Hierons, M. Harman and Q. Guo: Automated Unique Input Output Sequence Generation for Conformance Testing of FSMs. The Computer Journal, 39:331-344, 2006.

Other applications of Search-Based Testing

Test suite reduction: Shin Yoo and Mark Harman: Pareto Efficient Multi-Objective Test Case Selection. In Proceedings of the ACM International Symposium on Software Testing and Analysis (ISSTA 2007), pp. 140-150.

(Time-aware) Test suite prioritisation: K. R. Walcott, M. L. Soffa, G. M. Kapfhammer and R. S. Roos: Time-Aware Test Suite Prioritization. In Proceedings of the 2006 International Symposium on Software Testing and Analysis (ISSTA 2006), New York, NY, USA, pp. 1-12, ACM Press.

Other applications of Search-Based Testing

Combinatorial Interaction Testing

GUI Testing

M.B. Cohen, M.B. Dwyer and J. Shi: Interaction Testing of Highly-Configurable Systems in the Presence of Constraints. International Symposium on Software Testing and Analysis (ISSTA), London, July 2007, pp. 129-139.

X. Yuan, M.B. Cohen and A.M. Memon, GUI Interaction Testing: Incorporating Event Context, IEEE Transactions on Software Engineering, to appear
