Fastest: Test Case Generation from Z Specifications

Maximiliano Cristiá
CIFASIS – Universidad Nacional de Rosario
Rosario – Argentina
[email protected]

Alpen-Adria-Universität Klagenfurt – March 2012



Software Testing Basics

Program Correctness

A program is correct if it behaves according to its specification.

Edsger Dijkstra

“Program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence.”

Donald Knuth

“Beware of bugs in the above code; I have only proved it correct, not tried it.”

2 / 34

Model-Based Testing (MBT)

MBT analyses a model of the program to derive test cases, which are later provided as input to the program.

At the beginning, the model is used to generate test cases.

At the end, the model becomes the oracle.

If a model is provided, the process is almost automatic.

MBT Process

[Diagram: requirements → model; generate: model → abstract tests; refine: abstract tests → tests; execute: tests → test results; abstract: test results → abstract results; verify: abstract results against the model → error?]

The Test Template Framework and Fastest

The Test Template Framework (TTF)

A particular method of MBT which uses Z specifications.

Unit testing.

Stocks and Carrington, A Framework for Specification-Based Testing, IEEE TSE 1996.

Fastest

It is the only existing implementation of the TTF.

Available at www.fceia.unr.edu.ar/~mcristia.

Cristiá and Rodríguez Monetti, Implementing and Applying the Stocks-Carrington Framework for MBT, ICFEM 2009.

4 / 34

TTF Automation

The TTF in 9 Steps

1 Partition the IS by applying testing tactics.
2 Generate a testing tree.
3 Prune the testing tree.
4 Generate abstract test cases.
  (Optionally) Translate test cases into natural language.
5 Refine abstract test cases.
6 Execute test cases.
7 Get the program output.
8 Abstract the program output.
9 Check for errors.


An Example

The Requirements

A bank has a certain number of savings accounts.

Customers can deposit and withdraw money.

Each account is identified with an account number.

An amount of money is considered to be a natural number.

Only the deposit operation is going to be shown.

6 / 34

The Z Specification: Types and State Space

[ACCNUM]       a type for account numbers
BALANCE == ℕ   a type for balances
AMOUNT == ℕ    a type for amounts of money

Bank
  sa : ACCNUM ⇸ BALANCE

7 / 34

The Z Specification: The Deposit Operation

DepositOk
  ΔBank
  n? : ACCNUM
  a? : AMOUNT
  --------------------
  n? ∈ dom sa
  a? > 0
  sa′ = sa ⊕ {n? ↦ sa n? + a?}

DepositE1 == [ΞBank; n? : ACCNUM | n? ∉ dom sa]

DepositE2 == [ΞBank; a? : ℤ | a? ≤ 0]

Deposit == DepositOk ∨ DepositE1 ∨ DepositE2

loadspec bank.tex

selop Deposit

8 / 34


Domain Partition

Given a Z operation, take its input space and get a partition of it.

[Diagram: applying a testing tactic splits the input space into blocks (e.g. P and Q); applying further tactics splits these blocks again (P1, ..., Q1, ...), each block characterized by a conjunction of atomic predicates]

Each member of a partition is given by a Z predicate (i.e., first-order logic and Zermelo-Fraenkel set theory).

Partitioning can be represented as a tree.

Test cases are derived only from the “leaves”.

Leaves are called test conditions.

10 / 34

Testing Tactics

The TTF performs domain partition by applying testing tactics.

Disjunctive Normal Form (DNF)

Write the operation predicate in DNF and take the precondition of each disjunct.

Standard Partition (SP) for R ⊕ G

R = {}, G = {}
R = {}, G ≠ {}
R ≠ {}, G = {}
R ≠ {}, G ≠ {}, dom R = dom G
R ≠ {}, G ≠ {}, dom G ⊂ dom R
R ≠ {}, G ≠ {}, (dom R ∩ dom G) = {}
R ≠ {}, G ≠ {}, dom R ⊂ dom G
R ≠ {}, G ≠ {}, (dom R ∩ dom G) ≠ {}, ¬ (dom G ⊆ dom R), ¬ (dom R ⊆ dom G)

11 / 34

Applying DNF and SP to Deposit

First, DNF

DepositDNF1 == [DepositVIS | n? ∈ dom sa ∧ a? > 0]

DepositDNF2 == [DepositVIS | n? /∈ dom sa]

DepositDNF3 == [DepositVIS | a? ≤ 0]

Second, SP, but applied only to DepositDNF1

DepositSP1 == [DepositDNF1 | sa = {} ∧ {n? ↦ (sa n? + a?)} = {}]

DepositSP2 == [DepositDNF1 | sa = {} ∧ {n? ↦ (sa n? + a?)} ≠ {}]

DepositSP3 == [DepositDNF1 | sa ≠ {} ∧ {n? ↦ (sa n? + a?)} = {}]

DepositSP4 == [DepositDNF1 | sa ≠ {} ∧ {n? ↦ (sa n? + a?)} ≠ {}
                           ∧ dom sa = dom{n? ↦ (sa n? + a?)}]

DepositSP5 == [DepositDNF1 | sa ≠ {} ∧ {n? ↦ (sa n? + a?)} ≠ {}
                           ∧ dom{n? ↦ (sa n? + a?)} ⊂ dom sa]

(three more test conditions not shown here)

genalltt

addtactic Deposit_DNF_1 SP \oplus sa \oplus \{(n? , sa~n? + a?)\}

12 / 34


Pruning the Testing Tree

Pruning involves eliminating unsatisfiable test conditions.

Before pruning:

Deposit_VIS
├── Deposit_DNF_1
│   ├── Deposit_SP_1
│   ├── Deposit_SP_2
│   ├── Deposit_SP_3
│   ├── Deposit_SP_4
│   ├── Deposit_SP_5
│   ├── Deposit_SP_6
│   ├── Deposit_SP_7
│   └── Deposit_SP_8
├── Deposit_DNF_2
└── Deposit_DNF_3

After pruning:

Deposit_VIS
├── Deposit_DNF_1
│   ├── Deposit_SP_4
│   └── Deposit_SP_5
├── Deposit_DNF_2
└── Deposit_DNF_3

prunett

14 / 34

Pruning in Detail

The problem – I

Many test conditions are unsatisfiable.

Most of them are mathematical contradictions.

Then, no test cases can be derived from them.

Therefore, unsatisfiable test conditions should be pruned from testing trees.

In general, this problem is undecidable.

The problem – II

Z includes many mathematical theories (sets, relations, partial functions, sequences, etc.), making the problem harder.

15 / 34

Detecting Contradictions

Detecting logical contradictions

Since every test condition is a conjunction of atomic predicates, the only possible logical contradiction is of the form:

· · · ∧ p ∧ · · · ∧ ¬ p ∧ . . .

This is the easy part.

Detecting mathematical contradictions

Most of the contradictions are of a mathematical nature.

Some depend on the particular semantics of the Z mathematical theories.

16 / 34

A Library of Mathematical Contradictions

The core of our method is a library of elimination theorems.

Each elimination theorem should represent a contradiction.

ETheorem ETheoremName [parameters]
  atomicPred
  . . .
  atomicPred

parameters is roughly any Z declaration.

atomicPred is any Z atomic predicate and some keywords.

Subtyping and equivalence rules.

User extensible (LaTeX).

No deduction, only pattern-matching.

17 / 34

Some Examples

Update_SP_3
  st : SYM ⇸ VAL; s? : SYM; v? : VAL
  st ≠ {}
  {s? ↦ v?} = {}

ETheorem T1 [x : X]
  setExtension(x) = {}

(setExtension(x) matches any set extension containing x)

Update_SP_7
  st : SYM ⇸ VAL; s? : SYM; v? : VAL
  st ≠ {}
  {s? ↦ v?} ≠ {}
  dom st ⊂ dom{s? ↦ v?}

ETheorem T2 [R : X ↔ Y; x : X; y : Y]
  R ≠ {}
  dom R ⊂ dom{x ↦ y}

(subtyping is applied to st)

18 / 34



Generating Abstract Test Cases

Find constants satisfying test conditions.

DepositTC1 == [DepositSP4 | n? = accnum2 ∧ sa = {(accnum2, 1)} ∧ a? = 1]

DepositTC2 == [DepositSP5 | n? = accnum2 ∧ sa = {(accnum2, 1), (accnum1, 1)} ∧ a? = 1]

DepositTC3 == [DepositDNF2 | n? = accnum2 ∧ sa = ∅ ∧ a? = 0]

DepositTC4 == [DepositDNF3 | n? = accnum2 ∧ sa = ∅ ∧ a? = 0]

genalltca

20 / 34

Test Case Generation in Detail

Currently, Fastest implements a rough satisfiability algorithm:

1 Finite sets are bound to the variables and types in a test template.

  ACCNUM, n? → {accnum0, accnum1, accnum2}
  AMOUNT, BALANCE, a? → {0, 1, 2}
  ACCNUM ⇸ BALANCE, sa → {∅, {accnum2 ↦ 1}, {accnum2 ↦ 1, accnum1 ↦ 1}, . . . }

2 The predicate is evaluated on some elements of the Cartesian product of those finite sets.

  ZLive (from CZT) is used as the evaluator.
  Some simple heuristics are applied.
  The mathematical structure of the predicate is not considered.

21 / 34

Is it good enough?

Fastest has been able to find test cases for an average of 80% of the satisfiable test conditions from eleven case studies.

For the remaining 20%, Fastest provides a command to help it find a test case.

setfinitemodel Deposit_SP_4 -fm "sa==\{\{accnum_2 \mapsto 1\}\}"

Nevertheless, Fastest’s algorithm needs to be improved or replaced by more sophisticated ones.

22 / 34

Difficulties with Test Case Generation in the TTF

“The ideas of test templates and of automatic partition of Z schema have been proposed several years ago. Their actual application to systems of industrial size was problematic due to the lack of efficient and adequate constraint solvers and theorem provers. Given the progresses recently achieved for these kinds of tools, these techniques are worth revisiting . . . ”

Ana Cavalcanti and Marie-Claude Gaudel (2011): Testing for Refinement in Circus. Acta Informatica, 48(2).

23 / 34

SMT Solvers for the TTF

Satisfiability Modulo Theory (SMT) solvers

SMT solvers are tools that solve the satisfiability problem for a range of mathematical and logical theories.

No SMT solver supports the Zermelo-Fraenkel set theory.

Must define a shallow embedding.

Many questions:

Is the language of SMT solvers expressive enough?
Is there just one shallow embedding?
Will the chosen embedding solve all the satisfiable test specifications appearing in the TTF and real Z specifications?
Which SMT solver and which embedding will be best at satisfying more test specifications in less time?
Will it be better than Fastest at finding test cases?
Or should the SMT solver complement Fastest in this task?
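One possible shape for such an embedding (an illustration only, not necessarily the embedding actually defined for Yices or CVC3): represent dom sa by its characteristic predicate and the function itself by an uninterpreted function, then assert the template's predicate. For Deposit_SP_4 this could look like:

```smt2
; Hypothetical SMT-LIB 2 encoding of the Deposit_SP_4 test template.
(set-logic UFLIA)
(declare-fun in-dom (Int) Bool)   ; characteristic predicate of dom sa
(declare-fun sa (Int) Int)        ; the balance of each account
(declare-const n Int)             ; n?
(declare-const a Int)             ; a?
(assert (in-dom n))               ; n? in dom sa
(assert (> a 0))                  ; a? > 0
; dom sa = dom {n? |-> sa n? + a?} = {n?}  (this also implies sa != {}):
(assert (forall ((k Int)) (= (in-dom k) (= k n))))
(check-sat)                       ; a model of this, if found, is a test case
```

A model returned by the solver can then be read back as the constants of an abstract test case.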

24 / 34

First Results

Two shallow embeddings

We have defined a shallow embedding of the Zermelo-Fraenkel set theory for two mainstream SMT solvers: Yices and CVC3.

Initial empirical assessment

We manually translated 69 satisfiable test templates, for which Fastest cannot find a test case, into these shallow embeddings and ran the SMT solvers over them.

25 / 34

Empirical assessment

Case study             Test templates
Savings accounts (3)         8
Savings accounts (1)         2
Launching vehicle            8
Plavis                      29
SWPDC                       13
Scheduler                    4
Security class               4
Pool of sensors              1

Results over the 69 templates (Sat / Unknown):

                     Yices     CVC3
Main embeddings      9 / 60    17 / 52
Variant embeddings   9 / 60    17 / 52

Time spent processing all 69 test templates:

Yices: 3 s

CVC3: 420 s

Fastest: 390 s (but no test case is discovered)

26 / 34


Refining Abstract Test Cases

The implementation (C programming language)

Assume sa is implemented as follows:

struct sanode {
    int an;
    float bal;
    struct sanode *n;
} *sa;

where sa is a global variable. Besides, Deposit is implemented bythe following function:

int deposit(int n, float a)

Refinement rules

We need to link each specification variable to its implementation counterpart.

This process is called abstract test case refinement.

Write refinement rules and then run the refine command.

28 / 34

Refining Abstract Test Cases (II)

A refinement rule for Deposit

@RRULE deposit

@PREAMBLE

#include <bank.h>

@LAWS

l1:n? ==> n

l2:a? ==> a

l3:sa ==> sa AS LIST[SLL,n] WITH[sa.@dom ==> sa.an, sa.@ran ==> sa.bal]

@UUT deposit(n, a)

loadrefrule deposit.txt

refine Deposit to test /bank/src implemented in C with deposit

29 / 34

Concrete Test Cases

Refinement of DepositTC2.

#include <bank.h>

int main() {
    int n = 345;                    /* n? = accnum2 */
    float a = 1;                    /* a? = 1 */

    /* sa = {(accnum2, 1), (accnum1, 1)} */
    struct sanode sanode0 = {876, 1, NULL};
    struct sanode sanode1 = {345, 1, NULL};
    sa = &sanode0;
    sanode0.n = &sanode1;

    deposit(n, a);
    return 1;
}

30 / 34

The Remaining Steps

Before executing a concrete test case it is necessary to insert code to get the program output.

Abstraction rules, similar to refinement rules.

Once a test case has been executed, its output must be abstracted.

Same abstraction rules, different command.

The final verification is performed with another command.

(Optional) Translating abstract test cases into naturallanguage involves writing translation rules.

Quite different from refinement rules.

Work in progress

Fastest is still a prototype: many of these features are roughly implemented and there may be bugs.

31 / 34

Our Experience with Fastest

We have applied Fastest in the following projects (only up to abstract test case generation):

On-board flight control software of a satellite launcher vehicle

Along with the Instituto de Aeronáutica e Espaço (IAE), Brazil.

139 lines of Z.

One operation, 36 abstract test cases.

Satellite communication protocols

Two protocols, along with the Instituto Nacional de Pesquisas Espaciais (INPE), Brazil.

Two specifications comprising a total of 1,800 lines of Z.

26 operations, 195 abstract test cases.

32 / 34

Our Experience with Fastest (II)

The ECSS-E-70-41A aerospace standard

Along with INVAP, Argentina.

A single specification of 2,011 lines of Z.

Abstract test case translation to English.

256 test conditions.

Many challenges to solve.

Tokeneer

First specification not written by us, 2,057 lines of Z.

139 schemas selected as operations.

More than 25,000 test conditions, only DNF was applied.

We had to modify the DNF algorithm due to “term explosion”.

33 / 34

Current and Future Work

Continue with the SMT solvers.

Z3.
More satisfiable test specifications.
A decision procedure for the Z notation?

Test case refinement: greatly improve the implementation.

Z specifications as program annotations.

Implement the remaining steps of the TTF.

Validate with more case studies.

34 / 34