Predicate Learning and Selective Theory Deduction for Solving Difference Logic


TRANSCRIPT

Page 1: Predicate Learning and Selective Theory Deduction for Solving Difference Logic

Predicate Learning and Selective Theory Deduction for Solving Difference Logic

Chao Wang, Aarti Gupta, Malay Ganai

NEC Laboratories America, Princeton, New Jersey, USA

August 21, 2006

Presentation only: for more information, please see [Wang et al. LPAR’05] and [Wang et al. DAC’06]

Page 2: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


Difference Logic

A logic to model systems at the “word level”

A subset of quantifier-free first-order logic: Boolean connectives plus predicates of the form (x – y ≤ c)

Formal verification applications: pipelined processors, timed systems, embedded software; e.g., the back end of the UCLID verifier

Existing solvers:
Eager approach: [Strichman et al. 02], [Talupur et al. 04], UCLID
Lazy approach: TSAT++, MathSAT, DPLL(T), Saten, SLICE, Yices, HTP, …
Hybrid approach: [Seshia et al. 03], UCLID, SD-SAT

Page 3: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


Our contribution

Lessons learned from previous work:

Incremental conflict detection and zero-cost theory backtracking [Wang et al. LPAR’05]

Exhaustive theory deduction [Nieuwenhuis & Oliveras CAV’05]

Eager chordal transitivity constraints [Strichman et al. FMCAD’02]

What’s new? Incremental conflict detection PLUS selective theory deduction, with little additional cost

Dynamic predicate learning to combat exponential blow-up

Page 4: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


Outline

Preliminaries
Selective theory (implication) deduction
Dynamic predicate learning
Experiments
Conclusions

Page 5: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


Preliminaries

Difference logic formula

Difference predicates

Boolean skeleton

Constraint graph for assignment (A,¬B,C,D)

[Figure: constraint graph over nodes x, y, z, w with edges A:2, ¬B:-7, C:3, D:10]

A: ( x – y ≤ 2 ), ¬B: ( z – x ≤ -7 ), C: ( y - z ≤ 3 ), D: ( w - y ≤ 10 )
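To make the graph construction concrete, here is a minimal sketch (not the SLICE implementation; the names and the edge-direction convention are our own): each assigned literal of the form (u – v ≤ c) contributes a directed edge from v to u with weight c, and the assignment is theory-inconsistent exactly when the graph has a negative-weight cycle.

    #include <map>
    #include <string>
    #include <vector>

    struct Edge {
        int from, to;   // node indices
        int weight;     // the constant c of (u - v <= c)
        int literal;    // predicate literal that produced this edge
    };

    struct ConstraintGraph {
        std::map<std::string, int> id;        // variable name -> node index
        std::vector<std::vector<Edge>> adj;   // adjacency lists

        int node(const std::string& name) {
            auto it = id.find(name);
            if (it != id.end()) return it->second;
            int n = (int)adj.size();
            id[name] = n;
            adj.emplace_back();
            return n;
        }

        // Assert a literal (u - v <= c): add edge v -> u with weight c.
        void assertLiteral(const std::string& u, const std::string& v,
                           int c, int lit) {
            int nu = node(u), nv = node(v);
            adj[nv].push_back({nv, nu, c, lit});
        }
    };

    // The running example (A, !B, C, D), encoding literals as +p / -p:
    //   g.assertLiteral("x", "y",  2,  1);   // A:  x - y <= 2
    //   g.assertLiteral("z", "x", -7, -2);   // !B: z - x <= -7
    //   g.assertLiteral("y", "z",  3,  3);   // C:  y - z <= 3
    //   g.assertLiteral("w", "y", 10,  4);   // D:  w - y <= 10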

Page 6: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


Theory conflict: infeasible Boolean assignment

Negative-weight cycle → theory conflict. Theory conflict → lemma (blocking clause) → Boolean conflict.

A: ( x – y ≤ 2 ), ¬B: ( z – x ≤ -7 ), C: ( y - z ≤ 3 ), D: ( w - y ≤ 10 )

[Figure: constraint graph as before; the cycle A:2, C:3, ¬B:-7 has total weight -2 < 0]

Lemma learned:

(¬A + B + ¬C)


Conflicting clause: (false + false + false); all three literals of the lemma are false under the current assignment
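A small sketch of the lemma construction (our own illustration, using the literal encoding above): once a negative cycle is found, negate every predicate literal labeling an edge on the cycle and emit the disjunction as the blocking clause.

    #include <vector>

    // literal encoding: +p for predicate p, -p for its negation
    std::vector<int> lemmaFromCycle(const std::vector<int>& cycleLiterals) {
        std::vector<int> clause;
        clause.reserve(cycleLiterals.size());
        for (int lit : cycleLiterals)
            clause.push_back(-lit);   // forbid this combination of literals
        return clause;
    }
    // For the cycle {A, !B, C}, i.e. {1, -2, 3}, this yields {-1, 2, -3},
    // the clause (!A + B + !C) from the slide.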

Page 7: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


Theory implication: implied Boolean assignment

If adding an edge would create a negative cycle, the negated edge is implied

Theory implication → variable assignment → Boolean implication (BCP)

A: ( x – y ≤ 2 ), ¬B: ( z – x ≤ -7 ), C: ( y - z ≤ 3 ), D: ( w - y ≤ 10 )

[Figure: constraint graph as before; adding edge C:3 would close a negative cycle with A:2 and ¬B:-7]

Theory implication: A ∧ ¬B → ¬C


The implied Boolean assignment triggers a series of BCP steps
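To see the implication arithmetically (a one-line check added here for clarity): A and ¬B give (x – y) + (z – x) ≤ 2 + (-7), i.e., z – y ≤ -5; together with C: (y – z ≤ 3) this would give 0 ≤ -2, a contradiction, so ¬C is implied.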

Page 8: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


Negative cycle detection

Called repeatedly to solve many similar subproblems:

For conflict detection (incremental, efficient)
For implication deduction (often expensive)

Incremental detection versus exhaustive deduction

SLICE: incremental cycle detection, O(n log n)
DPLL(T): exhaustive theory deduction, O(n · m)

                        SLICE [LPAR’05]    DPLL(T) – Barcelogic [CAV’05]
Conflict detection      Incremental        NO
Implication deduction   NO                 Exhaustive

Page 9: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


[Scatter plots (log–log CPU time, 1–10000 s): SLICE vs. UCLID, vs. ICS 2.0, vs. MathSAT, vs. DPLL(T) – Barcelogic (also shown on a linear scale, 1–3601 s), and vs. TSAT++]

Data from [LPAR’05]: comparing the SLICE solver (SMT benchmarks repository, as of 08-2005)

Points above the diagonals → wins for the SLICE solver

Page 10: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


From the previous results

We have learned that:

Incremental conflict detection is more scalable
Exhaustive theory deduction is also helpful
Can we combine their relative strengths?

Our new solution:
Incremental conflict detection (SLICE)
Zero-cost theory backtracking (SLICE)
PLUS selective theory deduction with O(n) cost

Page 11: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


Outline

Preliminaries
Selective theory (implication) deduction
Dynamic predicate learning
Experiments
Conclusions

Page 12: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


Deduce()
{
    // Boolean constraint propagation (BCP) loop
    while (!implications.empty()) {
        set_var_value(implications.pop());
        if (detect_conflict())               // Boolean conflict
            return CONFLICT;
        add_new_implications();
        // Theory constraint propagation, invoked selectively
        if (ready_for_theory_propagation()) {
            if (theory_detect_conflict())    // negative cycle in the graph
                return CONFLICT;
            theory_add_new_implications();   // theory implications feed BCP
        }
    }
    return NO_CONFLICT;
}

Constraint propagation = Boolean CP (BCP) + theory constraint propagation
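Note that the only new hook relative to a standard DPLL loop is ready_for_theory_propagation(): the experiments later vary exactly this invocation point, per predicate assignment, per BCP round, or per full assignment.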

Page 13: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


Incremental conflict detection

Relax edge (u,v):
    if (d[v] > d[u] + w[u,v]) {
        d[v] = d[u] + w[u,v];
        pi[v] = u;
    }

[Ramalingam 1999] [Bozzano et al. 2005] [Cotton 2005] [Wang et al. LPAR’05]

[Figure: relaxation trace on the example graph, starting from d = 0:
    relax (z,y): d[y] = -4, pi[y] = z
    relax (y,w): d[w] = 6,  pi[w] = y
    relax (y,x): d[x] = -2, pi[x] = y
    relax (x,z): d[z] = -9, pi[z] = x
    relax (z,y) again would lower d[y] → CONFLICT]

Add an edge → relax, relax, relax, …
Remove an edge → do nothing (zero-cost backtracking in SLICE)
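The following is a minimal sketch of this incremental scheme (our own simplification, not the SLICE code): when a new edge (u,v) with weight w is asserted, relax outward from v; because d was a valid potential beforehand, any negative cycle must pass through the new edge, so a conflict is detected exactly when the improvement wave reaches u again. On backtracking, the edge is simply dropped and d stays a valid potential for the remaining edges, which is why theory backtracking costs nothing.

    #include <queue>
    #include <vector>

    struct E { int to, w; };

    // Assert edge (u,v,w) and re-relax incrementally. Precondition: d[] is a
    // valid potential for the existing edges (d[b] <= d[a] + w for every edge
    // (a,b)). Returns false on a theory conflict (negative cycle through the
    // new edge); otherwise d[] is a valid potential again.
    bool addEdgeAndCheck(std::vector<std::vector<E>>& adj,
                         std::vector<long long>& d, std::vector<int>& pi,
                         int u, int v, int w) {
        adj[u].push_back({v, w});
        if (d[u] + w >= d[v]) return true;       // edge already satisfied
        d[v] = d[u] + w; pi[v] = u;
        std::queue<int> work;
        work.push(v);
        while (!work.empty()) {
            int a = work.front(); work.pop();
            for (const E& e : adj[a]) {
                if (d[a] + e.w < d[e.to]) {      // relax edge (a, e.to)
                    if (e.to == u)               // improvement reached the tail
                        return false;            // of the new edge: conflict
                    d[e.to] = d[a] + e.w; pi[e.to] = a;
                    work.push(e.to);
                }
            }
        }
        return true;
    }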

Page 14: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


Selective theory deduction

[Figure: constraint graph fragment over nodes x, y, z, w]

Post(x) = {x, z, …}, Pre(y) = {y, w, …}

If d[z] - d[y] ≤ w[y,z], then edge (y,z) is an implied assignment

FWD (forward only): Pre(y) = {y}, Post(x) = {x, z, …}
Both directions: Pre(y) = {y, w, …}, Post(x) = {x, z, …}
Significantly cheaper than exhaustive theory deduction

pi[y] = w (set through relax)
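As a rough illustration of the forward-only flavor (our own sketch; the actual condition in [Wang et al. DAC’06] is more refined), assume that after a relaxation round starting at node y, every updated node z carries an actual derived path of weight d[z] - d[y] from y. Any unassigned predicate edge (y,z) whose bound is at least that path weight is then entailed and can be implied immediately:

    #include <vector>

    struct Pred { int from, to, w, lit; bool assigned; };

    // updated: nodes whose d[] decreased in the last relaxation round, all of
    // them reached from node y. For each unassigned predicate (y -> z <= w):
    // if the derived path weight d[z] - d[y] already satisfies w, the
    // predicate is entailed, so its literal is implied rather than left
    // for a decision.
    void forwardDeduce(const std::vector<long long>& d,
                       const std::vector<int>& updated, int y,
                       std::vector<Pred>& preds, std::vector<int>& implied) {
        std::vector<char> inPost(d.size(), 0);
        for (int z : updated) inPost[z] = 1;
        for (Pred& p : preds) {
            if (p.assigned || p.from != y || !inPost[p.to]) continue;
            if (d[p.to] - d[y] <= p.w) {   // slide's check: d[z]-d[y] <= w[y,z]
                implied.push_back(p.lit);  // theory implication, fed back to BCP
                p.assigned = true;
            }
        }
    }

Scanning only the Post set of the freshly updated region (plus, in the both-directions variant, the Pre set on the source side) is what keeps the cost near O(n), as opposed to the O(n · m) of exhaustive deduction.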

Page 15: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


Outline

Preliminaries
Selective theory (implication) deduction
Dynamic predicate learning
Experiments
Conclusions

Page 16: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


Diamonds: formulae with O(2^n) negative cycles

[Figure: diamond chain with predicate edges e0, e1, e2, … and a back edge of weight -1]

Observations:

With existing predicates (e1, e2, …): an exponential number of lemmas

Add new predicates (E1, E2, E3) and dummy clauses (E1 + !E1) & (E2 + !E2) & …: an almost linear number of lemmas

Previously, eager chordal transitivity constraints were used by [Strichman et al. FMCAD’02]


Page 17: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


Add new predicates to reduce lemmas

[Figure: constraint graph fragment over nodes x, y, z, w]

Heuristics to choose good predicates (short cuts):

Nodes that show up frequently in negative cycles

Nodes that are re-convergence points of the graph

(Conceptually) adding a dummy constraint (E3 + !E3)

E3: x – y <= (d[x] - d[y])

Predicates: E1: (x – y < 5), E2: (y – x < 5). Lemma: (!E1 + !E2)
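A schematic sketch of the learning step (hypothetical interfaces, not the DAC’06 implementation): pick a node pair (x, y) that keeps appearing in negative cycles, register a fresh predicate E: (x – y ≤ d[x] – d[y]) with the theory solver, and hand the SAT core the dummy tautology (E + !E) so that it can branch on E and future lemmas can be expressed through the short cut.

    #include <vector>

    // Hypothetical, self-contained stand-ins: a predicate table shared with
    // the theory solver and a clause database for the SAT core.
    struct DiffPred { int u, v; long long c; };    // (u - v <= c)
    std::vector<DiffPred> predTable;
    std::vector<std::vector<int>> clauseDb;        // clauses over +/- pred ids

    int newPredicate(int u, int v, long long c) {
        predTable.push_back({u, v, c});
        return (int)predTable.size();              // 1-based Boolean variable
    }

    // Learn the short-cut predicate E: (x - y <= d[x] - d[y]) and introduce
    // it to the SAT solver via the dummy clause (E + !E). Lemmas can now
    // mention E instead of enumerating exponentially many diamond paths.
    int learnCutPredicate(int x, int y, const std::vector<long long>& d) {
        int e = newPredicate(x, y, d[x] - d[y]);
        clauseDb.push_back({e, -e});
        return e;
    }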

Page 18: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


Experiments with SLICE+

Implemented on top of SLICE, i.e., [Wang et al. LPAR’05]

Controlled experiments:
Flexible theory propagation invocation: per predicate assignment, per BCP, or per full assignment
Selective theory deduction: no deduction, forward-only, or both directions
Dynamic predicate learning: with, or without

Page 19: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


When to call the theory solver?

[Scatter plots (log–log CPU time, 1–1000 s): per BCP versus per predicate assignment, and per BCP versus per full assignment]

Points above the diagonals → wins for per BCP

On the DTP benchmark suite

Page 20: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


Comparing theory deduction schemes

[Scatter plots (log–log CPU time, 1–1000 s), no deduction on the x-axis:
Fwd-only deduction vs. no deduction: total 660 seconds
Both directions vs. no deduction: total 1138 seconds]

Points above the diagonals → wins for no deduction

On the DTP benchmark suite

Page 21: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


Comparing dynamic predicate learning

[Line chart: CPU time (s), 0–180, across 35 different diamonds formulae; two series: no new predicate vs. with new predicate]

On the diamonds benchmark suite

Page 22: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


Comparing dynamic predicate learning

[Scatter plot (log–log CPU time, 1–1000 s): dynamic predicate learning vs. no predicate learning]

Points above the diagonals → wins for no predicate learning

On the DTP benchmark suite

Page 23: Predicate Learning and Selective Theory Deduction for Solving Difference Logic


Lessons learned

Timing to invoke the theory solver: “after every BCP finishes” gives the best performance

Selective implication deduction: little added cost, but improves performance significantly

Dynamic predicate learning: reduces the exponential blow-up in certain examples; in the spirit of “predicate abstraction”

Questions?