TECHNISCHE UNIVERSITÄT MÜNCHEN
Learning meets Verification
Martin Leucker
Institut für Informatik
TU München
The idea
Learning meets Verification — 2
Learning
Learning here means:
Given exemplifying behavior of a system
in terms of words
Learn a model conforming to the given behavior
in terms of a deterministic finite automaton (DFA)
Examples?
All given behavior must be accepted by the model
Exactly the given behavior must be accepted by the model
Give positive and negative examples
All given positive behavior must be accepted by the model and
All given negative behavior must be rejected by the model
Occam’s razor: In case of different explanations, choose the simplest one
Here: Learn the minimal DFA conforming to the given examples
Plan
Biermann’s approach
Angluin’s learning algorithm
Angluin’s + Biermann’s
Extensions to regular representative systems
Application scenarios
Biermann’s algorithm
Setting
A sample is a partial function O : Σ* → {+, −, ?} defined on a prefix-closed domain.
Goal: Given a sample O, find an automaton conforming to O.
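As a minimal sketch (the Python representation is an assumption, not from the slides), a sample can be stored as a dictionary mapping words to '+', '−', or '?', with a helper that checks prefix-closedness of the domain:

```python
def is_prefix_closed(sample):
    """Check that every proper prefix of a word in the domain is in it too."""
    return all(w[:i] in sample for w in sample for i in range(len(w)))

# Sample over {a, b}: "ab" observed positive, "aa" negative, "a" unclassified.
O = {"": "+", "a": "?", "ab": "+", "aa": "-"}
```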
Learning as Constraint Satisfaction (CSP)
Let A = (Q, q0, δ, Q+) be a DFA conforming to O. For u ∈ D(O), let
Su = δ(q0, u). Obviously:
q0 = Sε,
O(u) = + implies Su ∈ Q+
O(u) = − implies Su ∉ Q+
Su = Sv implies ∀a ∈ Σ: Sua = Sva
We can understand the Su as variables.
Learning as Constraint Satisfaction (CSP) (2)
Let CSP(O) denote the following set of constraints.
When O(u) = +, O(v) = − or O(u) = −, O(v) = +, then
Su ≠ Sv
For a ∈ Σ with ua, va ∈ D(O), we get
Su = Sv ⇒ Sua = Sva
Lemma (Learning as CSP, [Biermann, 72]): For a sample O, a DFA with N states conforming to O exists iff CSP(O) is solvable over {1, . . . , N}.
Further constraints can be added to speed up CSP solution.
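A sketch of how CSP(O) could be generated from a dictionary-style sample (the representation is an assumption; Σ is passed as a list of letters):

```python
def csp_of_sample(O, alphabet):
    """Build Biermann's constraints from a sample O (dict: word -> '+'/'-'/'?').

    Returns (inequalities, implications):
    - (u, v) in inequalities stands for S_u != S_v,
    - (u, v, ua, va) in implications stands for S_u = S_v => S_ua = S_va.
    """
    words = sorted(O)
    inequalities, implications = set(), set()
    for i, u in enumerate(words):
        for v in words[i + 1:]:
            if {O[u], O[v]} == {"+", "-"}:
                inequalities.add((u, v))
            for a in alphabet:
                if u + a in O and v + a in O:
                    implications.add((u, v, u + a, v + a))
    return inequalities, implications
```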
Constraint Solving as SAT problem
Equations to translate
1. Su ∈ {1, . . . , N}
2. Su ≠ Su′
3. Su = Su′ ⇒ Sua = Su′a
4. Su = i for some i ∈ {1, . . . , N}.
Binary encoding
Su = Su^1 . . . Su^m for Su taking a value in {1, . . . , N}, m := ⌈log₂ N⌉
Su ≠ Su′ holds iff ⋁_{k ∈ {1,...,m}} Su^k ≠ Su′^k
. . .
Binary SAT encoding yields O(n²N log N) clauses over O(log N) variables.
Unary encoding
Su = Su^1 . . . Su^N, where exactly one bit is 1, for Su taking a value in {1, . . . , N}
Unary SAT encoding yields O(n²N²) clauses over O(N) variables.
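The unary encoding can be sketched as follows (hypothetical helpers; a CNF clause is a list of (variable, polarity) literals, and variable (u, i) means Su = i):

```python
from itertools import combinations

def domain_clauses(words, N):
    """Unary encoding of S_u in {1, ..., N}: variable (u, i) means S_u = i."""
    clauses = []
    for u in words:
        # at least one value i is chosen for S_u ...
        clauses.append([((u, i), True) for i in range(1, N + 1)])
        # ... and at most one (pairwise exclusion)
        for i, j in combinations(range(1, N + 1), 2):
            clauses.append([((u, i), False), ((u, j), False)])
    return clauses

def neq_clauses(u, v, N):
    """S_u != S_v: for no value i do S_u = i and S_v = i hold together."""
    return [[((u, i), False), ((v, i), False)] for i in range(1, N + 1)]
```

The implication constraints, unfolded over pairs of values (i, j), contribute the O(n²N²) clauses stated above.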
Learning with Biermann
Learner → Oracle: Is A equivalent to the system to learn?
Oracle → Learner: Yes / counterexample
Angluin’s learning algorithm
Algorithm - Overview
Learner → Teacher: Is “aaba” a member of the language?
Teacher → Learner: Yes / No
Learner → Oracle: Is A equivalent to the system to learn?
Oracle → Learner: Yes / counterexample
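A minimal sketch of this protocol (class and method names are illustrative, not from any real library); the teacher answers membership queries, and the equivalence query is approximated by searching all words up to a bounded length:

```python
from itertools import product

class Teacher:
    def __init__(self, language, alphabet, max_len=5):
        self.language = language    # the system to learn, as a predicate on words
        self.alphabet = alphabet
        self.max_len = max_len

    def member(self, word):
        """Membership query: is the word in the language?"""
        return self.language(word)

    def equivalent(self, hypothesis):
        """Bounded equivalence query: return a word on which the hypothesis
        and the language disagree, or None if no such word is found."""
        for n in range(self.max_len + 1):
            for letters in product(self.alphabet, repeat=n):
                w = "".join(letters)
                if self.language(w) != hypothesis(w):
                    return w
        return None
```

For the finite example language {ε, a, b, ba} used below, the hypothesis "accept everything" is answered with the counterexample aa.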
Algorithm (2)
The automaton to learn (states 0, 1, 2, error; states 0, 1, 2 accepting):
0 -a-> 1, 0 -b-> 2, 1 -a,b-> error, 2 -a-> 1, 2 -b-> error, error -a,b-> error
Each state with its access strings and accepted suffixes:
0: {ε} / {ε, a, b, ba}
1: {a, ba} / {ε}
2: {b} / {ε, a}
error: {bb(a + b)* + (ba + a)(a + b)+} / ∅
Observation table:
     ε  b  a
ε    T  T  T
a    T  F  F
b    T  F  T
aa   F  F  F
Algorithm - Example
The automaton to learn: 0 -a-> 1, 0 -b-> 2, 1 -a,b-> error, 2 -a-> 1, 2 -b-> error, error -a,b-> error (states 0, 1, 2 accepting).

Step 1: S = {ε}, E = {ε}:
     ε
ε    T
a    T
b    T
Hypothesis: a single accepting state ε with a, b self-loops, i.e., every word is accepted. Counterexample is bb.

Step 2: add bb and its prefixes to S:
     ε
ε    T
b    T
bb   F
a    T
ba   T
bba  F
bbb  F
Inconsistent since row(ε · b · ε) ≠ row(b · b · ε).

Step 3: add the distinguishing suffix b to E:
     ε  b
ε    T  T
b    T  F
bb   F  F
a    T  F
ba   T  F
bba  F  F
bbb  F  F
The table is closed and consistent. Hypothesis: states ε, b (accepting) and bb, with ε -a,b-> b, b -a-> b, b -b-> bb, bb -a,b-> bb. Counterexample is aa.
Angluin + Biermann
Overview
Learner → Teacher: Is “aaba” a member of the language?
Teacher → Learner: Yes / No / ?
Learner → Oracle: Is A equivalent to the system to learn?
Oracle → Learner: Yes / counterexample
Biermann + Angluin
Make Angluin’s table weakly closed and weakly consistent
Translate the table to a sample
Use Biermann’s approach
Angluin + Biermann
A different view:
Speed up Biermann’s approach by asking queries
queries may be answered by “don’t know”
Use Biermann’s approach
but enlarge the sample by using queries
following Angluin’s algorithm
Learning Regular Representative Objects
Overview
(Figure: the domain D of words over Σ, with equivalences u ≈ u′ and w ≈ w′ among words, is mapped by obj onto the set of objects modulo ∼.)
Learning Setup
Definition: Let O be a set of objects and ∼ ⊆ O × O be an equivalence relation. A learning setup for (O, ∼) is a quintuple (Σ, D, ≈, ⊢, obj) where
Σ is an alphabet,
D ⊆ Σ* is the domain,
≈ ⊆ D × D is an equivalence relation such that, for any w ∈ D, [w]≈ is finite,
⊢ ⊆ 2^D × 2^(Σ*) such that, for any (L1, L2) ∈ ⊢, L1 is both finite and ≈-closed, and L2 is a nonempty decidable language,
obj : RminDFA(Σ, D, ≈, ⊢) → [O]∼ is a bijective effective mapping in the sense that, for L ∈ RminDFA(Σ, D, ≈, ⊢), a representative of obj(L) can be computed.
Learning Setup (continued)
Furthermore, we require that the following hold for DFA A over Σ:
(D1) The problem whether L(A) ⊆ D is decidable. If, moreover, L(A) ⊄ D, one can compute w ∈ L(A) \ D. We then say that INCLUSION(Σ, D) is constructively decidable.
(D2) If L(A) ⊆ D, it is decidable whether L(A) is ≈-closed. If not, one can compute w, w′ ∈ D such that w ≈ w′, w ∈ L(A), and w′ ∉ L(A). We then say that the problem EQCLOSURE(Σ, D, ≈) is constructively decidable.
(D3) If L(A) ⊆ D is closed under ≈, it is decidable whether L(A) is ⊢-closed. If not, we can compute (L1, L2) ∈ ⊢ (hereby, L2 shall be given in terms of a decision algorithm that checks a word for membership) such that L1 ⊆ L(A) and L(A) ∩ L2 = ∅. We then say that INFCLOSURE(Σ, D, ≈, ⊢) is constructively decidable.
Applications
Used for learning
prefix-closed languages
languages closed under an independence relation
symmetry-closed languages
Used for learning message-passing automata accepting message sequence charts
Learning Timed Systems
Event Recording Automata as DFAs
A Deterministic Event-Recording Automaton (DERA) over Σ and guards G is a DFA over Σ × G.
(Example: a DERA with states 0–7 and guarded transitions such as b, a, c, c [xb ≥ 3], b [xc ≤ 3], a [xb ≥ 2], a [xb ≥ 1]; the clock xa records the time elapsed since the last a-event.)
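To illustrate the Σ × G view (a hypothetical encoding, not from the slides): a guarded word is a list of (action, guard) pairs, where each guard is a predicate over the event-recording clock values xa, i.e., the time since the last a-event:

```python
def satisfies(timed_word, guarded_word):
    """Does a timed word of (action, timestamp) pairs match a guarded word of
    (action, guard) pairs, with guards reading event-recording clock values?"""
    if len(timed_word) != len(guarded_word):
        return False
    last = {}  # timestamp of the most recent occurrence of each action
    for (a, t), (b, guard) in zip(timed_word, guarded_word):
        if a != b:
            return False
        clocks = {x: t - s for x, s in last.items()}  # x_a = time since last a
        if not guard(clocks):
            return False
        last[a] = t
    return True
```

For instance, the guarded word b · c [xb ≥ 3] is satisfied by the timed word (b, 0)(c, 4) but not by (b, 0)(c, 2).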
Learning DERAs
Main problem to solve:
queries on guarded words have to be answered by checking timed words
Applications
Minimizing of automata
Minimization of an incompletely specified finite-state machine:
Incremental learning (Angluin + Biermann) produces the current model; an equivalence check compares it against the big system. A counterexample is returned to the learner; if there is no counterexample, the minimal system is reported.
Black-box Checking
Does the black box satisfy ϕ, i.e., BlackBox |= ϕ?
Black-box Checking
Incremental learning (Angluin) produces the current model.
Model checking wrt. the current model:
- No counterexample: check equivalence of model and system (VC algorithm). If conformance is established, report “no error found”; if model and system do not conform, the discrepancy is returned to the learner.
- Counterexample found: compare the counterexample with the system. If the counterexample is confirmed, report it; if it is refuted, it is returned to the learner to refine the model.
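A sketch of this loop (all interfaces — learner, model_check, conformance_test, system — are assumed stubs for illustration, not a real API):

```python
def black_box_check(learner, model_check, conformance_test, system):
    """Black-box checking loop: learn a model, model check it, and confirm
    or refute counterexamples on the real system."""
    while True:
        model = learner.hypothesis()
        cex = model_check(model)  # word violating phi on the model, or None
        if cex is None:
            cex = conformance_test(model, system)  # VC algorithm
            if cex is None:
                return ("no error found", None)    # conformance established
            learner.refine(cex)                    # model and system differ
        elif system.accepts(cex):
            return ("counterexample", cex)         # confirmed on the system
        else:
            learner.refine(cex)                    # refuted: refine the model
```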
Testing
Model-based test generation means:
generate test cases based on a given model
Use learning to learn an approximation of the system
Conformance Testing versus Learning
Conformance Testing: a test suite, derived under certain assumptions, checks Specification = Implementation (conformance).
Regular Inference: observations and queries, under certain assumptions, check Hypothesis = Implementation.
Testing versus Learning II
Theorem
If O is a conformance test suite for M, then M is uniquely inferred from O, and
if A is uniquely inferred from O, then O is a conformance test suite for A.
Verification by Invariants
Question: for all n, does S[n] := P ‖ . . . ‖ P (n copies) satisfy ϕ, i.e., S[n] ⊑ ϕ?
Solution: Find a proper network invariant I such that
P ⊑ I
P ‖ I ⊑ I
I ⊑ ϕ
⇒ P ‖ . . . ‖ P ⊑ I ⊑ ϕ
Use learning to find a proper invariant I
Verification by Invariants II
Incremental learning (Angluin + Biermann), spawned from P and ϕ, produces the current model I.
Check current I: P ‖ I ⊑ I
- yes: report proper invariant
- no: a counterexample w ∈ I | w ∉ P ‖ I is returned to the learner
Further applications
Learning assumptions in assume-guarantee reasoning
Learning the transitive closure of transducers in regular model checking
Learning to verify branching-time properties
Learning message-passing automata based on message sequence charts to incrementally derive a specification (model)
Learning used for obtaining strategies in games
. . .
Conclusion – TO DOs
Thanks. . .
Bengt Jonsson
Agha, Alur, Angluin, Berg, Biermann, Bollig, Gold, Groce, Grinchtein, Katoen, Kern, Madhusudan, Maler, Peled, Piterman, Pnueli, Oliveira, Raffelt, Sen, Silva, Steffen, Vardhan, Vardi, Viswanathan, Yannakakis, . . .
TO DOs
efficient implementations of Angluin’s algorithm
efficient implementations of Biermann’s + Angluin’s algorithm
Learning of Büchi automata
Can different kinds of learning algorithms help for verification tasks?