Principles of Program Analysis: Algorithms

Transparencies based on Chapter 6 of the book: Flemming Nielson, Hanne Riis Nielson and Chris Hankin: Principles of Program Analysis. Springer Verlag 2005. © Flemming Nielson & Hanne Riis Nielson & Chris Hankin.


Worklist Algorithms

We abstract away from the details of a particular analysis:

We want to compute the solution to a set of equations

{x1 = t1, · · · , xN = tN}

or inequations

{x1 ⊒ t1, · · · , xN ⊒ tN}

defined in terms of a set of flow variables x1, · · · , xN; here t1, · · · , tN are terms using the flow variables.


Equations or inequations?

It does not really matter:

• A solution of the equation system {x1 = t1, · · · , xN = tN} is also a solution of the inequation system {x1 ⊒ t1, · · · , xN ⊒ tN}.

• The least solution to the inequation system {x1 ⊒ t1, · · · , xN ⊒ tN} is also a solution to the equation system {x1 = t1, · · · , xN = tN}:

  – The inequation system {x ⊒ t1, · · · , x ⊒ tn} (same left hand sides) and the equation {x = x ⊔ t1 ⊔ · · · ⊔ tn} have the same solutions.

  – The least solution to the equation {x = x ⊔ t1 ⊔ · · · ⊔ tn} is also the least solution of {x = t1 ⊔ · · · ⊔ tn} (where the x component has been removed on the right hand side).


Example While program

Reaching Definitions Analysis of

if [b1]¹ then (while [b2]² do [x := a1]³)
else (while [b3]⁴ do [x := a2]⁵);
[x := a3]⁶

gives equations of the form

RDentry(1) = X?                        RDexit(1) = RDentry(1)
RDentry(2) = RDexit(1) ∪ RDexit(3)     RDexit(2) = RDentry(2)
RDentry(3) = RDexit(2)                 RDexit(3) = (RDentry(3) \ X356?) ∪ X3
RDentry(4) = RDexit(1) ∪ RDexit(5)     RDexit(4) = RDentry(4)
RDentry(5) = RDexit(4)                 RDexit(5) = (RDentry(5) \ X356?) ∪ X5
RDentry(6) = RDexit(2) ∪ RDexit(4)     RDexit(6) = (RDentry(6) \ X356?) ∪ X6

where e.g. X356? denotes the definitions of x at labels 3, 5, 6 and ?


Example (cont.)

Focussing on RDentry and expressed as equations using the flow variables {x1, · · · , x6}:

x1 = X?
x2 = x1 ∪ (x3 \ X356?) ∪ X3
x3 = x2
x4 = x1 ∪ (x5 \ X356?) ∪ X5
x5 = x4
x6 = x2 ∪ x4

Alternatively we can use inequations:

x1 ⊇ X?
x2 ⊇ x1
x2 ⊇ x3 \ X356?
x2 ⊇ X3
x3 ⊇ x2
x4 ⊇ x1
x4 ⊇ x5 \ X356?
x4 ⊇ X5
x5 ⊇ x4
x6 ⊇ x2
x6 ⊇ x4


Assumptions

• There is a finite constraint system S of the form (xi ⊒ ti), 1 ≤ i ≤ N, for N ≥ 1, where the left hand sides xi are not necessarily distinct; the form of the terms ti on the right hand sides is left unspecified.

• The set FV(ti) of flow variables occurring in ti is a subset of the finite set X = {xi | 1 ≤ i ≤ N}.

• A solution is a total function, ψ : X → L, assigning to each flow variable a value in the complete lattice (L, ⊑) satisfying the Ascending Chain Condition.

• The terms are interpreted with respect to solutions, ψ : X → L, and we write [[t]]ψ ∈ L to represent the value of t relative to ψ.

• The interpretation [[t]]ψ of a term t is monotone in ψ and its value only depends on the values of the flow variables occurring in t.


Abstract Worklist Algorithm

INPUT: A system S of constraints: x1 ⊒ t1, · · · , xN ⊒ tN

OUTPUT: The least solution: Analysis

DATA STRUCTURES: W: worklist of constraints

Analysis: array indexed by flow variables, containing elements of the lattice L (the current value of the flow variable); abbreviated A in the examples and proof below

infl: array indexed by flow variables, containing the set of constraints influenced by the flow variable


Worklist Algorithm: initialisation

W := empty;
for all x ⊒ t in S do
  W := insert((x ⊒ t), W);                  initially all constraints in the worklist
  Analysis[x] := ⊥;                         initialised to the least element of L
  infl[x] := ∅;
for all x ⊒ t in S do
  for all x′ in FV(t) do
    infl[x′] := infl[x′] ∪ {x ⊒ t};         changes to x′ might influence x
                                            via the constraint x ⊒ t

OBS: After the initialisation we have infl[x′] = {(x ⊒ t) in S | x′ ∈ FV(t)}


Worklist Algorithm: iteration

while W ≠ empty do
  ((x ⊒ t), W) := extract(W);               consider the next constraint
  new := eval(t, Analysis);
  if Analysis[x] ⋣ new then                 any work to do?
    Analysis[x] := Analysis[x] ⊔ new;       update the analysis information
    for all x′ ⊒ t′ in infl[x] do
      W := insert((x′ ⊒ t′), W);            update the worklist
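As an illustration only, here is a minimal Python sketch of the same algorithm for a powerset lattice (so ⊔ is set union and ⊥ is the empty set). The representation of a constraint as a triple (x, fv, evalt), with fv = FV(t) and evalt computing [[t]] from the current Analysis, is an assumption of this sketch, not the book's notation.

    def solve(variables, constraints):
        # Abstract worklist algorithm specialised to a powerset lattice:
        # join is set union, bottom is the empty set.
        analysis = {x: set() for x in variables}        # Analysis[x] := bottom
        infl = {x: [] for x in variables}               # infl[x'] := empty
        worklist = list(constraints)                    # initially all constraints
        for (x, fv, evalt) in constraints:
            for xp in fv:
                infl[xp].append((x, fv, evalt))         # x' may influence x via this constraint
        while worklist:
            (x, fv, evalt) = worklist.pop()             # extract a constraint
            new = evalt(analysis)                       # eval(t, Analysis)
            if not new <= analysis[x]:                  # Analysis[x] does not already include new
                analysis[x] |= new                      # Analysis[x] := Analysis[x] join new
                worklist.extend(infl[x])                # re-schedule the influenced constraints
        return analysis

    # The Reaching Definitions constraints of the running example (x1, ..., x6).
    Xq, X3, X5 = {("x", "?")}, {("x", 3)}, {("x", 5)}
    X356q = {("x", 3), ("x", 5), ("x", 6), ("x", "?")}
    constraints = [
        ("x1", [],           lambda a: set(Xq)),
        ("x2", ["x1", "x3"], lambda a: a["x1"] | (a["x3"] - X356q) | X3),
        ("x3", ["x2"],       lambda a: set(a["x2"])),
        ("x4", ["x1", "x5"], lambda a: a["x1"] | (a["x5"] - X356q) | X5),
        ("x5", ["x4"],       lambda a: set(a["x4"])),
        ("x6", ["x2", "x4"], lambda a: a["x2"] | a["x4"]),
    ]
    print(solve(["x1", "x2", "x3", "x4", "x5", "x6"], constraints))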


Operations on worklists

• empty is the empty worklist;

• insert((x ⊒ t), W) returns a new worklist that is as W except that a new constraint x ⊒ t has been added; it is normally used as in

  W := insert((x ⊒ t), W)

  so as to update the worklist W to contain the new constraint x ⊒ t;

• extract(W) returns a pair whose first component is a constraint x ⊒ t in the worklist and whose second component is the smaller worklist obtained by removing an occurrence of x ⊒ t; it is used as in

  ((x ⊒ t), W) := extract(W)

  so as to select and remove a constraint from W.


Organising the worklist

In its most abstract form the worklist could be viewed as a set of constraints with the following operations:

empty = ∅

function insert((x ⊒ t), W)
  return W ∪ {x ⊒ t}

function extract(W)
  return ((x ⊒ t), W \ {x ⊒ t}) for some x ⊒ t in W


Extraction based on LIFO

The worklist is represented as a list of constraints with the following operations:

empty = nil

function insert((x ⊒ t), W)
  return cons((x ⊒ t), W)

function extract(W)
  return (head(W), tail(W))


Extraction based on FIFO

The worklist is represented as a list of constraints:

empty = nil

function insert((x ⊒ t), W)
  return append(W, [x ⊒ t])

function extract(W)
  return (head(W), tail(W))
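As a small illustration (not from the slides), the two disciplines differ only in where insert places a constraint; a Python sketch using a deque:

    from collections import deque

    def insert_lifo(c, W):          # LIFO: push on the front (a stack)
        W.appendleft(c)
        return W

    def insert_fifo(c, W):          # FIFO: append at the back (a queue)
        W.append(c)
        return W

    def extract(W):                 # both disciplines remove from the front
        return W.popleft(), W

    W = insert_lifo("c2", insert_lifo("c1", deque()))
    print(extract(W)[0])            # "c2" under LIFO; with insert_fifo it would be "c1"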


Example: initialisation

Equations:

x1 = X?
x2 = x1 ∪ (x3 \ X356?) ∪ X3
x3 = x2
x4 = x1 ∪ (x5 \ X356?) ∪ X5
x5 = x4
x6 = x2 ∪ x4

Initialised data structures:

        x1         x2         x3     x4         x5     x6
infl    {x2, x4}   {x3, x6}   {x2}   {x5, x6}   {x4}   ∅
A       ∅          ∅          ∅      ∅          ∅      ∅
W       [x1, x2, x3, x4, x5, x6]

OBS: in this example the left hand sides of the equations uniquely identify the equations.


Example: iteration

W                                  x1   x2    x3    x4    x5    x6
[x1, x2, x3, x4, x5, x6]           ∅    ∅     ∅     ∅     ∅     ∅
[x2, x4, x2, x3, x4, x5, x6]       X?   −     −     −     −     −
[x3, x6, x4, x2, x3, x4, x5, x6]   −    X3?   −     −     −     −
[x2, x6, x4, x2, x3, x4, x5, x6]   −    −     X3?   −     −     −
[x6, x4, x2, x3, x4, x5, x6]       −    −     −     −     −     −
[x4, x2, x3, x4, x5, x6]           −    −     −     −     −     X3?
[x5, x6, x2, x3, x4, x5, x6]       −    −     −     X5?   −     −
[x4, x6, x2, x3, x4, x5, x6]       −    −     −     −     X5?   −
[x6, x2, x3, x4, x5, x6]           −    −     −     −     −     −
[x2, x3, x4, x5, x6]               −    −     −     −     −     X35?
[x3, x4, x5, x6]                   −    −     −     −     −     −
[x4, x5, x6]                       −    −     −     −     −     −
[x5, x6]                           −    −     −     −     −     −
[x6]                               −    −     −     −     −     −
[ ]                                −    −     −     −     −     −


Correctness of the algorithm

Given a system of constraints, S = (xi ⊒ ti) for 1 ≤ i ≤ N, we define

  FS : (X → L) → (X → L)

by:

  FS(ψ)(x) = ⊔{ [[t]]ψ | x ⊒ t in S }

This is a monotone function over a complete lattice X → L.

It follows from Tarski's Fixed Point Theorem:

  If f : L → L is a monotone function on a complete lattice (L, ⊑)
  then it has a least fixed point lfp(f) = ⊓ Red(f) ∈ Fix(f)

that FS has a least fixed point, µS, which is the least solution to the constraints S.


Tarski’s Fixed Point Theorem (again)

Let L = (L, ⊑) be a complete lattice and let f : L → L be a monotone function.

The greatest fixed point gfp(f) satisfies:

  gfp(f) = ⊔{l | l ⊑ f(l)} ∈ {l | f(l) = l}

The least fixed point lfp(f) satisfies:

  lfp(f) = ⊓{l | f(l) ⊑ l} ∈ {l | f(l) = l}

[Figure: the lattice L divided into the regions l ⊑ f(l), l = f(l) and f(l) ⊑ l.]


Correctness of the algorithm (2)

Since L satisfies the Ascending Chain Condition and since X is finite, it follows that X → L also satisfies the Ascending Chain Condition; therefore µS is given by

  µS = lfp(FS) = ⊔{ FS^j(⊥) | j ≥ 0 }

and the chain (FS^n(⊥))n eventually stabilises.
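As an illustration, the stabilising chain can be computed directly by iterating FS from ⊥; a small Python sketch for a powerset lattice, with each constraint given as a pair (x, evalt) where evalt computes [[t]] from the current ψ (an assumed representation, not the book's):

    def FS(constraints, psi):
        # F_S(psi)(x) = join of [[t]]psi over all constraints x >= t in S
        out = {x: set() for x in psi}
        for (x, evalt) in constraints:
            out[x] |= evalt(psi)
        return out

    def lfp_by_iteration(variables, constraints):
        psi = {x: set() for x in variables}     # bottom of X -> L
        while True:
            nxt = FS(constraints, psi)          # the next element of the chain
            if nxt == psi:                      # the chain has stabilised: psi = lfp(FS)
                return psi
            psi = nxt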


Lemma

Given the assumptions, the abstract worklist algorithm computes the least solution of the given constraint system, S.

Proof

• termination – of initialisation and iteration loop

• correctness is established in three steps:

– A ⊑ µS – holds initially and is preserved by the loop

– FS(A) ⊑ A – proved by contradiction

– µS ⊑ A – follows from Tarski's Fixed Point Theorem

• complexity: O(h · M² · N) where h is the height of L, M is the maximal size of the right hand sides of the constraints and N is the number of constraints

Worklist & Reverse Postorder

• Changes should be propagated throughout the rest of the program before returning to re-evaluate a constraint.

• One way to ensure that every other constraint is evaluated before re-evaluating the constraint that caused the change is to impose a total order on the constraints.

• We shall impose a graph structure on the constraints and then use an iteration order based on reverse postorder.


Graph structure of constraint system

Given a constraint system S = (xi ⊒ ti) for 1 ≤ i ≤ N we can construct a graphical representation GS of the dependencies between the constraints in the following way:

• there is a node for each constraint xi ⊒ ti, and

• there is a directed edge from the node for xi ⊒ ti to the node for xj ⊒ tj if xi appears in tj (i.e. if xj ⊒ tj appears in infl[xi]).

This constructs a directed graph.
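A small sketch (with an illustrative representation, not the book's) of how the edges of GS can be read off from the FV sets:

    def dependence_graph(constraints):
        # constraints: list of (x, fv) pairs with fv = FV(t) for the constraint x >= t.
        # There is an edge i -> j whenever the left hand side of constraint i occurs in t_j.
        edges = []
        for i, (xi, _) in enumerate(constraints):
            for j, (_, fvj) in enumerate(constraints):
                if xi in fvj:
                    edges.append((i, j))
        return edges

    cs = [("x1", []), ("x2", ["x1", "x3"]), ("x3", ["x2"]),
          ("x4", ["x1", "x5"]), ("x5", ["x4"]), ("x6", ["x2", "x4"])]
    print(dependence_graph(cs))   # contains (0, 1) and (0, 3): x1 feeds x2 and x4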


Example: graph representation

x1 = X?
x2 = x1 ∪ (x3 \ X356?) ∪ X3
x3 = x2
x4 = x1 ∪ (x5 \ X356?) ∪ X5
x5 = x4
x6 = x2 ∪ x4

[Figure: the dependence graph GS with nodes x1, · · · , x6 and edges x1→x2, x1→x4, x2→x3, x3→x2, x2→x6, x4→x5, x5→x4, x4→x6.]


Handles and roots

Observations:

• Constraint systems corresponding to forward analyses of While programs will have a root

• Constraint systems corresponding to backward analyses of While programs will not have a single root

A handle is a set of nodes such that each node in the graph is reachable through a directed path starting from one of the nodes in the handle.

• A graph G has a root r if and only if G has {r} as a handle

• Minimal handles always exist (but they need not be unique)


Depth-First Spanning Forest

We can then construct a depth-first spanning forest (abbreviated DFSF) from the graph GS and handle HS:

INPUT: A directed graph (N,A) with k nodes and handle H

OUTPUT: (1) A DFSF T = (N, AT), and
        (2) a numbering rPostorder of the nodes indicating the reverse of the order in which each node was last visited, represented as an element of array [N] of int


Algorithm for DFSF

METHOD: i := k;
        mark all nodes of N as unvisited;
        let AT be empty;
        while unvisited nodes in H exist do
          choose a node h in H; DFS(h);

USING:  procedure DFS(n) is
          mark n as visited;
          while (n, n′) ∈ A and n′ has not been visited do
            add the edge (n, n′) to AT; DFS(n′);
          rPostorder[n] := i; i := i − 1;
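A Python sketch of the same procedure (recursive, and with an adjacency-list representation assumed for this sketch); note that the resulting numbering depends on the order in which successors are chosen:

    def dfs_forest(nodes, edges, handle):
        # Returns the tree edges of a depth-first spanning forest and the
        # rPostorder numbering (1..k): a node is numbered when it is last left.
        succ = {n: [m for (a, m) in edges if a == n] for n in nodes}
        visited, tree_edges, rpostorder = set(), [], {}
        counter = [len(nodes)]                      # i := k

        def dfs(n):
            visited.add(n)
            for m in succ[n]:
                if m not in visited:
                    tree_edges.append((n, m))
                    dfs(m)
            rpostorder[n] = counter[0]              # number n on the way back out
            counter[0] -= 1

        for h in handle:
            if h not in visited:
                dfs(h)
        return tree_edges, rpostorder

    nodes = ["x1", "x2", "x3", "x4", "x5", "x6"]
    edges = [("x1", "x2"), ("x1", "x4"), ("x2", "x3"), ("x3", "x2"),
             ("x2", "x6"), ("x4", "x5"), ("x5", "x4"), ("x4", "x6")]
    print(dfs_forest(nodes, edges, ["x1"])[1])      # the handle x1 always gets number 1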


Example: DFST

[Figure: the dependence graph GS and a depth-first spanning tree rooted at x1 with tree edges x1→x4, x1→x2, x4→x6, x4→x5 and x2→x3.]

reverse postorder:   x1, x2, x3, x4, x5, x6
pre-order:           x1, x4, x6, x5, x2, x3
breadth-first order: x1, x4, x2, x6, x5, x3


Categorisation of edges

Given a spanning forest one can categorise the edges in the original graph as follows:

• Tree edges: edges present in the spanning forest.

• Forward edges: edges that are not tree edges and that go from a node to a proper descendant in the tree.

• Back edges: edges that go from descendants to ancestors (including self-loops).

• Cross edges: edges that go between nodes that are unrelated by the ancestor and descendant relations.


Properties of Reverse Postorder

Let G = (N, A) be a directed graph, T a depth-first spanning forest of G and rPostorder the associated ordering computed by the algorithm.

• (n, n′) ∈ A is a back edge if and only if rPostorder[n] ≥ rPostorder[n′].

• (n, n′) ∈ A is a self-loop if and only if rPostorder[n] = rPostorder[n′].

• Any cycle of G contains at least one back edge.

• Reverse postorder (rPostorder) topologically sorts tree edges as well as the forward and cross edges.

• Preorder and breadth-first order also sort tree edges and forward edges but not necessarily cross edges.


Extraction based on Reverse Postorder

Idea: The iteration amounts to an outer iteration that contains an inner iteration that visits the nodes in reverse postorder.

We organise the worklist W as a pair (W.c, W.p) of two structures:

• W.c is a list of current nodes to be visited in the current inner iteration.

• W.p is a set of pending nodes to be visited in a later inner iteration.

Nodes are always inserted into W.p and always extracted from W.c.

When W.c is exhausted the current inner iteration has finished; in preparation for the next inner iteration we must sort W.p in the reverse postorder given by rPostorder and assign the result to W.c.


Iterating in Reverse Postorder

empty = (nil, ∅)

function insert((x ⊒ t), (W.c, W.p))
  return (W.c, W.p ∪ {x ⊒ t})                     insert into the pending set

function extract((W.c, W.p))
  if W.c = nil then                               no more constraints in the current list:
    W.c := sort_rPostorder(W.p);                  sort the pending set and update the
    W.p := ∅                                      current list and pending set
  return (head(W.c), (tail(W.c), W.p))            extract from the current round
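A Python sketch of this worklist organisation, assuming a dictionary rpostorder that maps each constraint to its reverse postorder number (an assumption of this sketch):

    def empty():
        return ([], set())

    def insert(c, W):
        Wc, Wp = W
        return (Wc, Wp | {c})                       # always insert into the pending set

    def extract(W, rpostorder):
        # Only called when the worklist is not empty (as in the main algorithm).
        Wc, Wp = W
        if not Wc:                                  # current list exhausted:
            Wc = sorted(Wp, key=lambda c: rpostorder[c])   # start a new inner iteration
            Wp = set()
        return Wc[0], (Wc[1:], Wp)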


Example: Reverse Postorder iteration

W.c                     W.p                  x1   x2    x3    x4    x5    x6
[ ]                     {x1, · · · , x6}     ∅    ∅     ∅     ∅     ∅     ∅
[x2, x3, x4, x5, x6]    {x2, x4}             X?   −     −     −     −     −
[x3, x4, x5, x6]        {x2, x3, x4, x6}     −    X3?   −     −     −     −
[x4, x5, x6]            {x2, x3, x4, x6}     −    −     X3?   −     −     −
[x5, x6]                {x2, · · · , x6}     −    −     −     X5?   −     −
[x6]                    {x2, · · · , x6}     −    −     −     −     X5?   −
[x2, x3, x4, x5, x6]    ∅                    −    −     −     −     −     X35?
[x3, x4, x5, x6]        ∅                    −    −     −     −     −     −
[x4, x5, x6]            ∅                    −    −     −     −     −     −
[x5, x6]                ∅                    −    −     −     −     −     −
[x6]                    ∅                    −    −     −     −     −     −
[ ]                     ∅                    −    −     −     −     −     −

x1 = X?
x2 = x1 ∪ (x3 \ X356?) ∪ X3
x3 = x2
x4 = x1 ∪ (x5 \ X356?) ∪ X5
x5 = x4
x6 = x2 ∪ x4


Complexity

• A list of N elements can be sorted in O(N · log2(N)) steps.

• If we use a linked list representation of lists then inserting an element at the front of a list and extracting the head of a list can be done in constant time.

• The overall complexity for processing N insertions and N extractions is O(N · log2(N)).


The Round Robin Algorithm

Assumption: the constraints are sorted in reverse postorder.

• each time W.c is exhausted we assign it the list [1, · · · , N ]

• W.p is replaced by a boolean, change, that is false whenever W.p is empty

• the iterations are split into an outer iteration with an explicit inner iteration; each inner iteration is a simple iteration through all constraints in reverse postorder.


Round Robin Iteration

empty = (nil, false)

function insert((x ⊒ t), (W.c, change))
  return (W.c, true)                              pending constraints

function extract((W.c, change))
  if W.c = nil then                               a new round is needed:
    W.c := [1, · · · , N];                        all constraints are re-considered
    change := false                               no pending constraints
  return (head(W.c), (tail(W.c), change))


The Round Robin Algorithm

INPUT:   A system S of constraints: x1 ⊒ t1, · · · , xN ⊒ tN, ordered 1 to N in reverse postorder

OUTPUT:  The least solution: Analysis

METHOD:  Initialisation
         for all x ∈ X do Analysis[x] := ⊥
         change := true;


The Round Robin Algorithm (cont.)

METHOD: Iteration (updating Analysis)

while change do
  change := false;
  for i := 1 to N do
    new := eval(ti, Analysis);
    if Analysis[xi] ⋣ new then
      change := true;
      Analysis[xi] := Analysis[xi] ⊔ new;

Lemma: The Round Robin algorithm computes the least solution of the given constraint system, S.
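A minimal Python sketch of the Round Robin iteration for a powerset lattice, with the constraints given as (x, evalt) pairs already listed in reverse postorder (an assumed representation, not the book's):

    def round_robin(variables, constraints):
        # constraints: list of (x, evalt) in reverse postorder; evalt computes [[t]]
        # from the current Analysis. Powerset lattice: join = union, bottom = empty set.
        analysis = {x: set() for x in variables}
        change = True
        while change:
            change = False
            for (x, evalt) in constraints:          # one full pass in reverse postorder
                new = evalt(analysis)
                if not new <= analysis[x]:
                    change = True
                    analysis[x] |= new
        return analysis

With the constraints of the running example listed as x1, · · · , x6 the first pass performs all the updates and the second pass merely confirms stability (cf. the iteration table below).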


Example: Round Robin iteration

change   x1   x2    x3    x4    x5    x6
true     ∅    ∅     ∅     ∅     ∅     ∅
∗ false
true     X?   −     −     −     −     −
true     −    X3?   −     −     −     −
true     −    −     X3?   −     −     −
true     −    −     −     X5?   −     −
true     −    −     −     −     X5?   −
true     −    −     −     −     −     X35?
∗ false
false    −    −     −     −     −     −
false    −    −     −     −     −     −
false    −    −     −     −     −     −
false    −    −     −     −     −     −
false    −    −     −     −     −     −
false    −    −     −     −     −     −

(∗ marks the start of a new round, where change is reset to false.)

x1 = X?
x2 = x1 ∪ (x3 \ X356?) ∪ X3
x3 = x2
x4 = x1 ∪ (x5 \ X356?) ∪ X5
x5 = x4
x6 = x2 ∪ x4


Loop connectedness parameter

Consider a depth-first spanning forest T and a reverse postorder rPostorder constructed for the graph G with handle H.

The loop connectedness parameter d(G, T) is defined as the largest number of back edges found on any cycle-free path of G.

For While programs the loop connectedness parameter equals the maximal nesting depth of while loops.

Empirical studies of Fortran programs show that the loop connectedness parameter seldom exceeds 3.


Complexity

The constraint system (xi ⊒ ti), 1 ≤ i ≤ N, is an instance of a Bit Vector Framework when L = P(D) for some finite set D and when each right hand side ti is of the form

  (xj ∩ Y1) ∪ Y2

for sets Y1, Y2 ⊆ D and a flow variable xj ∈ X (where j, Y1 and Y2 may depend on i).

Lemma: For Bit Vector Frameworks, the Round Robin Algorithm terminates after at most d(G, T) + 3 iterations. It performs at most O((d(G, T) + 1) · N) assignments.

For While programs the overall complexity is O((d + 1) · b) where d is the maximal nesting depth of while-loops and b is the number of elementary blocks.


Worklist & Strong Components

Two nodes n and n′ are said to be strongly connected whenever there is a (possibly trivial) directed path from n to n′ and a (possibly trivial) directed path from n′ to n. Defining

  SC = {(n, n′) | n and n′ are strongly connected}

we obtain a binary relation SC ⊆ N × N.

• SC is an equivalence relation.

• The equivalence classes of SC are called the strong components.

A graph is said to be strongly connected whenever it contains exactly one strongly connected component.


Example: Strong Components

[Figure: the dependence graph GS; its strong components are {x1}, {x2, x3}, {x4, x5} and {x6}.]


Reduced graph

The interconnections between strong components can be represented by the reduced graph:

• nodes: the strongly connected components

• edges: there is an edge from one node to another distinct node if and only if there is an edge from some node in the first strongly connected component to a node in the second in the original graph.

For any graph G the reduced graph is a DAG.

The strong components can be linearly ordered in topological order: SC1 ≤ SC2 whenever there is an edge from SC1 to SC2.
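The strong components and a topological order of the reduced graph can be computed by a two-pass depth-first search; a Python sketch in the style of Kosaraju's algorithm (an illustrative choice, not the method prescribed by the slides):

    def strong_components(nodes, edges):
        # First DFS: record a postorder of the original graph.
        # Second DFS on the reversed graph, started from the nodes in reverse
        # postorder: each tree is a strong component, and the components are
        # produced in a topological order of the reduced graph.
        succ = {n: [] for n in nodes}
        pred = {n: [] for n in nodes}
        for (a, b) in edges:
            succ[a].append(b)
            pred[b].append(a)

        order, visited = [], set()
        def dfs1(n):
            visited.add(n)
            for m in succ[n]:
                if m not in visited:
                    dfs1(m)
            order.append(n)
        for n in nodes:
            if n not in visited:
                dfs1(n)

        components, assigned = [], set()
        def dfs2(n, comp):
            assigned.add(n)
            comp.add(n)
            for m in pred[n]:
                if m not in assigned:
                    dfs2(m, comp)
        for n in reversed(order):
            if n not in assigned:
                comp = set()
                dfs2(n, comp)
                components.append(comp)
        return components

    nodes = ["x1", "x2", "x3", "x4", "x5", "x6"]
    edges = [("x1", "x2"), ("x1", "x4"), ("x2", "x3"), ("x3", "x2"),
             ("x2", "x6"), ("x4", "x5"), ("x5", "x4"), ("x4", "x6")]
    print(strong_components(nodes, edges))   # {x1} first, then {x4, x5} and {x2, x3}, then {x6}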


Example: Strong Components and reduced graph

[Figure: the strong components {x1}, {x2, x3}, {x4, x5} and {x6} of GS, and the reduced graph with edges from {x1} to {x2, x3} and to {x4, x5}, and from {x2, x3} and {x4, x5} to {x6}.]


The overall idea behind the algorithm

Idea: strong components are visited in topological order, with nodes being visited in reverse postorder within each strong component.

The iteration amounts to three levels of iteration:

• the outermost level deals with the strong components one by one;

• the intermediate level performs a number of passes over the constraints in the current strong component;

• the inner level performs one pass in reverse postorder over the appropriate constraints.

To make this work, for each constraint we record

• the strong component it occurs in and

• its number in the local reverse postorder for that strong component.


Pseudocode for constraint numbering

INPUT: A graph partitioned into strong components

OUTPUT: srPostorder

METHOD: scc := 1;
        for each scc in topological order do
          rp := 1;
          for each x ⊒ t in the strong component scc in local reverse postorder do
            srPostorder[x ⊒ t] := (scc, rp);
            rp := rp + 1
          scc := scc + 1;
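A Python sketch of the numbering, assuming the strong components are given as a list in topological order with each component already listed in its local reverse postorder (an assumed input format):

    def number_constraints(components):
        # components: list of strong components in topological order, each a list
        # of constraints in local reverse postorder. Returns srPostorder as a
        # dictionary mapping each constraint to the pair (scc, rp).
        srpostorder = {}
        for scc, comp in enumerate(components, start=1):
            for rp, c in enumerate(comp, start=1):
                srpostorder[c] = (scc, rp)
        return srpostorder

    print(number_constraints([["x1"], ["x2", "x3"], ["x4", "x5"], ["x6"]]))
    # {'x1': (1, 1), 'x2': (2, 1), 'x3': (2, 2), 'x4': (3, 1), 'x5': (3, 2), 'x6': (4, 1)}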


Organisation of the worklist

The worklist W is organised as a pair (W.c, W.p) of two structures:

• W.c is a list of current nodes to be visited in the current inner iteration.

• W.p is a set of pending nodes to be visited in a later intermediate or outer iteration.

Nodes are always inserted into W.p and always extracted from W.c.

When W.c is exhausted the current inner iteration has finished; in preparation for the next we must extract a strong component from W.p, sort it and assign the result to W.c.

An inner iteration ends when W.c is exhausted, an intermediate iteration ends when scc gets a higher value than last time it was computed, and the outer iteration ends when both W.c and W.p are exhausted.


Iterating through Strong Components

empty = (nil, ∅)

function insert((x ⊒ t), (W.c, W.p))
  return (W.c, W.p ∪ {x ⊒ t})

function extract((W.c, W.p))
  local variables: scc, Wscc
  if W.c = nil then
    scc := min {fst(srPostorder[x ⊒ t]) | (x ⊒ t) ∈ W.p};
    Wscc := {(x ⊒ t) ∈ W.p | fst(srPostorder[x ⊒ t]) = scc};
    W.c := sort_srPostorder(Wscc);
    W.p := W.p \ Wscc;
  return (head(W.c), (tail(W.c), W.p))
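A Python sketch of this extract operation, assuming srpostorder maps each constraint to its (scc, rp) pair (as in the numbering sketch above); as before, extract is only called on a non-empty worklist:

    def insert(c, W):
        Wc, Wp = W
        return (Wc, Wp | {c})

    def extract(W, srpostorder):
        Wc, Wp = W
        if not Wc:
            # Pick the pending strong component with the smallest number and
            # sort its constraints by their local reverse postorder.
            scc = min(srpostorder[c][0] for c in Wp)
            wscc = {c for c in Wp if srpostorder[c][0] == scc}
            Wc = sorted(wscc, key=lambda c: srpostorder[c])
            Wp = Wp - wscc
        return Wc[0], (Wc[1:], Wp)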


Example: Strong Component iteration

W.c     W.p                x1   x2    x3    x4    x5    x6
[ ]     {x1, · · · , x6}   ∅    ∅     ∅     ∅     ∅     ∅
[ ]     {x2, · · · , x6}   X?   −     −     −     −     −
[x3]    {x3, · · · , x6}   −    X3?   −     −     −     −
[ ]     {x2, · · · , x6}   −    −     X3?   −     −     −
[x3]    {x4, x5, x6}       −    −     −     −     −     −
[ ]     {x4, x5, x6}       −    −     −     −     −     −
[x5]    {x5, x6}           −    −     −     X5?   −     −
[ ]     {x4, x5, x6}       −    −     −     −     X5?   −
[x5]    {x6}               −    −     −     −     −     −
[ ]     {x6}               −    −     −     −     −     −
[ ]     ∅                  −    −     −     −     −     X35?

x1 = X?
x2 = x1 ∪ (x3 \ X356?) ∪ X3
x3 = x2
x4 = x1 ∪ (x5 \ X356?) ∪ X5
x5 = x4
x6 = x2 ∪ x4
