MBD and CSP
Meir Kalech
Partially based on slides of Jia You and Brian Williams
Outline
Last lecture:
1. Autonomous systems
2. Model-based programming
3. Livingstone
Today’s lecture:
1. Constraint satisfaction problems (CSP)
2. Optimal CSP
3. Conflict-directed A*
4. Expand best child
Intro Example: 8-Queens
Generate-and-test: 8^8 combinations
Constraint Satisfaction Problem
A set of variables {X1, X2, …, Xn}; each variable Xi has a domain Di of possible values. Usually Di is discrete and finite.
A set of constraints {C1, C2, …, Cp}; each constraint Ck involves a subset of the variables and specifies the allowable combinations of values of these variables.
Goal: assign a value to every variable such that all constraints are satisfied.
Example: 8-Queens Problem
8 variables Xi, i = 1 to 8. Domain of each variable: {1, 2, …, 8}. Constraints are of the form:
Xi = k ⇒ Xj ≠ k, for all j = 1 to 8, j ≠ i
(together with the analogous diagonal constraints |Xi − Xj| ≠ |i − j| for i ≠ j).
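The constraint check and the generate-and-test cost can be made concrete with a small sketch (function names are illustrative; a smaller board is used below so the enumeration stays fast, but the same code covers n = 8):

```python
from itertools import product

def consistent(assignment):
    """n-queens constraints: assignment[i] = column of the queen in row i.
    Queens conflict if they share a column or a diagonal (rows are distinct
    by construction)."""
    for i in range(len(assignment)):
        for j in range(i + 1, len(assignment)):
            if assignment[i] == assignment[j]:               # same column
                return False
            if abs(assignment[i] - assignment[j]) == j - i:  # same diagonal
                return False
    return True

def generate_and_test(n):
    # Enumerates all n^n complete assignments -- 8^8 = 16,777,216 for n = 8,
    # which is why generate-and-test does not scale.
    return [a for a in product(range(1, n + 1), repeat=n) if consistent(a)]

print(len(generate_and_test(6)))  # 6-queens has 4 solutions
```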
Example: Map Coloring
7 variables {WA, NT, SA, Q, NSW, V, T}; each variable has the same domain {red, green, blue}. No two adjacent variables have the same value: WA≠NT, WA≠SA, NT≠SA, NT≠Q, SA≠Q, SA≠NSW, SA≠V, Q≠NSW, NSW≠V.
[Figure: map of Australia with regions WA, NT, SA, Q, NSW, V, T, shown uncolored and then with a valid 3-coloring]
Example: Task Scheduling
T1 must be done during T3. T2 must be achieved before T1 starts. T2 must overlap with T3. T4 must start after T1 is complete.
Constraint Graph (binary constraints)
[Figure: constraint graphs for the map-coloring problem (nodes WA, NT, SA, Q, NSW, V, T) and the task-scheduling problem (nodes T1, T2, T3, T4)]
Two variables are adjacent, or neighbors, if they are connected by an edge or an arc.
CSP as a Search Problem
Initial state: the empty assignment. Successor function: a value is assigned to any unassigned variable, provided it does not conflict with the currently assigned variables. Goal test: the assignment is complete. Path cost: irrelevant.
Backtracking example
Backtracking Algorithm
CSP-BACKTRACKING(PartialAssignment a)
  If a is complete then return a
  X ← select an unassigned variable
  D ← select an ordering for the domain of X
  For each value v in D do
    If v is consistent with a then
      Add (X = v) to a
      result ← CSP-BACKTRACKING(a)
      If result ≠ failure then return result
      Remove (X = v) from a
  Return failure
Start with CSP-BACKTRACKING({})
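The pseudocode above can be transcribed almost line for line into Python; this sketch (names are mine) runs it on the map-coloring instance from the earlier slide:

```python
def csp_backtracking(assignment, variables, domains, constraints):
    """Transcription of CSP-BACKTRACKING. `assignment` is a dict;
    `constraints` is a list of (var_a, var_b, predicate) binary constraints."""
    if len(assignment) == len(variables):                  # a is complete
        return assignment
    x = next(v for v in variables if v not in assignment)  # variable ordering
    for value in domains[x]:                               # value ordering
        consistent = all(
            pred(value if a == x else assignment[a],
                 value if b == x else assignment[b])
            for a, b, pred in constraints
            if (a == x and b in assignment) or (b == x and a in assignment)
        )
        if consistent:
            assignment[x] = value                          # add (X = v) to a
            result = csp_backtracking(assignment, variables, domains, constraints)
            if result is not None:                         # result != failure
                return result
            del assignment[x]                              # remove (X = v) from a
    return None                                            # failure

# Map-coloring instance from the slides.
variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
domains = {v: ["red", "green", "blue"] for v in variables}
edges = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"), ("SA", "Q"),
         ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")]
constraints = [(a, b, lambda p, q: p != q) for a, b in edges]
solution = csp_backtracking({}, variables, domains, constraints)
print(solution)
```

Only constraints whose other endpoint is already assigned are checked, matching "v is consistent with a" in the pseudocode.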
Improving backtracking efficiency
Which variable should be assigned next?
In what order should its values be tried?
Can we detect obvious failure early?
Outline recap. Next: Optimal CSP.
What is an Optimal CSP (OCSP)? A set of decision variables y, a utility function g over y, and a set of constraints C that y must satisfy.
The solution is an assignment to y that optimizes g (maximizes utility, i.e., minimizes cost) while satisfying C.
What is an OCSP formally? An OCSP is a triple ⟨CSP, y, g⟩, where CSP = ⟨x, Dx, Cx⟩:
x: set of variables; Dx: domains of the variables; Cx: set of constraints over the variables;
y ⊆ x: set of decision variables; g: cost function over assignments to y.
We call the elements of Dy decision states. A solution y* to an OCSP is a minimum-cost decision state that is consistent with the CSP.
Methods for Solving OCSPs
Traditional method: A*. Good at finding optimal solutions, but it visits every state whose estimated cost is less than the true optimum*, which is computationally unacceptable for model-based executives.
* The A* heuristic is admissible, that is, it never overestimates cost.
Solution: conflict-directed A*. It also searches in best-first order, but it eliminates subspaces around inconsistent states.
Outline recap. Next: Conflict-directed A*.
Conflict-Directed A* (Williams & Ragno '03)
Step 1: the best candidate S is generated.
Step 2: S is tested for consistency.
Step 3: if S is inconsistent, the inconsistency is generalized to conflicts, removing a subspace of the solution space.
Step 4: the search jumps to the next-best candidate that resolves all conflicts.
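The four steps can be sketched as a loop (a toy version with hypothetical helper names; for illustration the next-best candidate is found by brute-force enumeration, whereas the real algorithm finds it with best-first search over kernel assignments):

```python
from itertools import product

def conflict_directed_search(variables, domains, cost, consistent_fn, extract_conflict):
    """Toy version of the four steps above."""
    conflicts = []  # each conflict: a partial assignment that cannot all hold

    def resolves(candidate):
        # A candidate resolves a conflict if it differs on at least one variable.
        return all(any(candidate[v] != val for v, val in c.items()) for c in conflicts)

    while True:
        # Step 1: generate the best candidate resolving all known conflicts
        pool = [dict(zip(variables, vals))
                for vals in product(*(domains[v] for v in variables))]
        pool = [c for c in pool if resolves(c)]
        if not pool:
            return None
        best = min(pool, key=cost)
        # Step 2: test the candidate for consistency
        if consistent_fn(best):
            return best
        # Steps 3-4: generalize the failure to a conflict; loop to next-best
        conflicts.append(extract_conflict(best))

# Hypothetical two-component example: the observation rules out "all good".
domains = {"c1": ["G", "U"], "c2": ["G", "U"]}
best = conflict_directed_search(
    variables=["c1", "c2"],
    domains=domains,
    cost=lambda c: sum(m == "U" for m in c.values()),
    consistent_fn=lambda c: not (c["c1"] == "G" and c["c2"] == "G"),
    extract_conflict=lambda c: {"c1": "G", "c2": "G"},
)
print(best)  # {'c1': 'G', 'c2': 'U'} -- a minimal single-fault candidate
```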
Enumerating Probable Candidates
[Flowchart: Generate Candidate (leading candidate based on priors) → Test Candidate → Consistent? Yes: Keep it, Compute Posterior p, Below Threshold? Yes: Done; No: generate the next candidate. Consistent? No: Extract Conflict and generate the next candidate.]
[Figure: candidate states ordered by increasing cost, each marked consistent or inconsistent]
A* must visit all states with cost lower than the optimum.
Conflict-directed A*: a discovered inconsistent state eliminates all states that entail the same symptom. Each extracted conflict (Conflict 1, Conflict 2, Conflict 3) removes a further subspace of candidates, so the search skips directly to the best remaining consistent state.
Feasible regions are described by the implicates of the known conflicts (kernel assignments). We want the kernel assignment containing the best-cost candidate.
Conflict-directed A*
Function Conflict-directed-A*(OCSP) returns the leading minimal-cost solutions.
  Conflicts[OCSP] ← {}
  OCSP ← Initialize-Best-Kernels(OCSP)
  Solutions[OCSP] ← {}
  loop do
    decision-state ← Next-Best-State-Resolving-Conflicts(OCSP)
    if no decision-state returned or Terminate?(OCSP)
      then return Solutions[OCSP]
    if Consistent?(CSP[OCSP], decision-state)
      then add decision-state to Solutions[OCSP]
    else
      new-conflicts ← Extract-Conflicts(CSP[OCSP], decision-state)
      Conflicts[OCSP] ← Eliminate-Redundant-Conflicts(Conflicts[OCSP] ∪ new-conflicts)
  end
Example: Diagnosis as an OCSP
[Figure: polybox circuit. Inputs A = 3, B = 2, C = 2, D = 3, E = 3; multipliers M1 (X = A·C), M2 (Y = B·D), M3 (Z = C·E); adders A1 (F = X + Y), A2 (G = Y + Z); observed outputs F = 10, G = 12]
Assume independent failures: PG(mi) ≫ PU(mi)  (G = good, U = unknown)
Psingle ≫ Pdouble
PU(M2) > PU(M1) > PU(M3) > PU(A1) > PU(A2)
OBS = the observation.
COMPONENTS = the decision variable set y.
System description = the constraints Cy, e.g. M1=G ⇒ X = A·C.
Candidate = a mode assignment to y.
Diagnosis = a candidate that is consistent with Cy and OBS.
Utility = candidate probability P(y); Cost = 1/P(y).
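The utility and cost of a candidate follow directly from the independence assumption. A sketch, with made-up prior numbers chosen only to respect the slides' ordering PU(M2) > PU(M1) > PU(M3) > PU(A1) > PU(A2):

```python
# Hypothetical mode priors; only the ordering matches the slides.
p_u = {"M1": 0.04, "M2": 0.05, "M3": 0.03, "A1": 0.02, "A2": 0.01}

def candidate_probability(candidate):
    """P(y) for a mode assignment, assuming independent failures:
    each component contributes PU if unknown, PG = 1 - PU if good."""
    p = 1.0
    for comp, mode in candidate.items():
        p *= p_u[comp] if mode == "U" else (1.0 - p_u[comp])
    return p

def cost(candidate):
    return 1.0 / candidate_probability(candidate)  # Cost = 1 / P(y)

all_good = {c: "G" for c in p_u}
single = {**all_good, "M2": "U"}
double = {**all_good, "M1": "U", "A1": "U"}
# Psingle >> Pdouble falls out of the priors:
assert candidate_probability(single) > candidate_probability(double)
print(cost(single))
```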
Example: Diagnosis
First Iteration
Conflicts / constituent diagnoses: none
Best kernel: {}
Best candidate: ?
M1=? M2=? M3=? A1=? A2=? Select the most likely value for the unassigned modes: M1=G M2=G M3=G A1=G A2=G.
Test: M1=G M2=G M3=G A1=G A2=G
[Figure: propagation through the circuit. X = 6, Y = 6, Z = 6, so the predicted output is F = X + Y = 12, but the observed output is F = 10: inconsistent]
Extract conflict and constituent diagnoses: ¬(M1=G ∧ M2=G ∧ A1=G)
Constituent diagnoses: M1=U ∨ M2=U ∨ A1=U
This process is carried out with a propositional satisfiability algorithm (DPLL).
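The consistency test can be sketched by simulating the circuit (structure and values reconstructed from the slides; this forward-only sketch does not do the backward inference that DPLL-style propagation performs, e.g. inferring Y from F when M2 is unknown):

```python
# Polybox circuit from the slides: inputs A=3, B=2, C=2, D=3, E=3;
# multipliers M1: X = A*C, M2: Y = B*D, M3: Z = C*E;
# adders A1: F = X + Y, A2: G = Y + Z; observed F = 10, G = 12.
A, B, C, D, E = 3, 2, 2, 3, 3
F_obs, G_obs = 10, 12

def predict(modes):
    """Forward-propagate assuming every component in mode 'G' works.
    Returns predicted (F, G); None where an unknown component blocks
    prediction."""
    X = A * C if modes["M1"] == "G" else None
    Y = B * D if modes["M2"] == "G" else None
    Z = C * E if modes["M3"] == "G" else None
    F = X + Y if modes["A1"] == "G" and X is not None and Y is not None else None
    G = Y + Z if modes["A2"] == "G" and Y is not None and Z is not None else None
    return F, G

all_good = {m: "G" for m in ["M1", "M2", "M3", "A1", "A2"]}
# First iteration: predicted F = 12 disagrees with observed F = 10.
print(predict(all_good), (F_obs, G_obs))  # (12, 12) (10, 12)
```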
Conflicts / constituent diagnoses: M1=U ∨ M2=U ∨ A1=U
Best kernel: M2=U
Best candidate: M1=G M2=U M3=G A1=G A2=G
Second Iteration
(Later we will describe how the best kernel is determined.)
Test: M1=G M2=U M3=G A1=G A2=G
[Figure: propagation. X = 6, Z = 6; since M2 is unknown, Y is inferred from the observation: Y = F − X = 10 − 6 = 4. The predicted output is then G = Y + Z = 4 + 6 = 10, but the observed output is G = 12: inconsistent]
Extract conflict: ¬(M1=G ∧ M3=G ∧ A1=G ∧ A2=G)
Constituent diagnoses: M1=U ∨ M3=U ∨ A1=U ∨ A2=U
Conflicts / constituent diagnoses: M1=U ∨ M2=U ∨ A1=U; M1=U ∨ M3=U ∨ A1=U ∨ A2=U
Best kernel: M1=U
Best candidate: M1=U M2=G M3=G A1=G A2=G
Third Iteration
Test: M1=U M2=G M3=G A1=G A2=G
[Figure: propagation. Y = 6, Z = 6; the predicted output G = Y + Z = 12 matches the observation, and X = F − Y = 10 − 6 = 4 is consistent with M1 being unknown]
Consistent!
Conflict-Directed A*: Generating Best Kernels
Constituent diagnoses of the two conflicts: {M1=U, M2=U, A1=U} and {M1=U, M3=U, A1=U, A2=U}.
Insight: kernels are found by minimal set covering (every conflict must be resolved by at least one of its constituent diagnoses), and minimal set covering is an instance of breadth-first search.
To find the best kernel, expand the tree in best-first order instead (PU(M2) > PU(M1), so M2=U is tried first), pruning any branch that is a superset of a kernel already found (e.g., a superset of M1=U).
Continue expansion to find the best candidate.
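Best-first kernel generation can be sketched with a priority queue (a simplification under the stated independence assumptions; the priors are made-up numbers matching the slides' ordering, and the kernel probability is approximated by the product of the chosen constituents' priors):

```python
import heapq

def best_kernels(conflicts, prob):
    """Best-first generation of kernels: minimal hitting sets of the
    conflicts' constituent diagnoses, enumerated most-probable-first.
    `conflicts` is a list of frozensets of constituents (e.g. 'M2=U');
    `prob` maps each constituent to its prior."""
    kernels = []
    heap = [(-1.0, 0, frozenset())]  # (-probability, tiebreak, chosen set)
    counter = 1
    while heap:
        negp, _, chosen = heapq.heappop(heap)
        if any(chosen >= k for k in kernels):  # superset of a found kernel
            continue
        unresolved = [c for c in conflicts if not (c & chosen)]
        if not unresolved:                     # all conflicts resolved
            kernels.append(chosen)
            continue
        for constituent in unresolved[0]:      # branch on the next conflict
            child = chosen | {constituent}
            child_p = -negp * prob[constituent]
            heapq.heappush(heap, (-child_p, counter, child))
            counter += 1
    return kernels

# The two conflicts from the example, priors matching the slides' ordering.
conflicts = [frozenset({"M1=U", "M2=U", "A1=U"}),
             frozenset({"M1=U", "M3=U", "A1=U", "A2=U"})]
prob = {"M1=U": 0.04, "M2=U": 0.05, "M3=U": 0.03, "A1=U": 0.02, "A2=U": 0.01}
print(best_kernels(conflicts, prob))  # best kernel first: {M1=U}, then {A1=U}, ...
```

With both conflicts known, the best kernel is M1=U, matching the third iteration above.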
Outline recap. Next: Expand best child.
OCSP Admissible Estimate
For the node M2=U, with unassigned modes M1=?, M3=?, A1=?, A2=?, select the most likely value for each unassigned mode:
estimate = PM2=U × PM1=G × PM3=G × PA1=G × PA2=G
f = g + h (h is admissible).
MPI (mutual preferential independence): to find the best candidate, we may assign each variable its best utility value independently of the values assigned to the other variables.
Conflict-directed A*: Expand Best Child & Sibling
Constituent kernels: M2=U, M1=U, A1=U; order the constituents by decreasing likelihood: M2=U > M1=U > A1=U.
For any node N: the child of N containing the best-cost candidate is the child with the best estimated cost f = g + h (by MPI). Only the best child need be expanded.
When a best child loses any candidate, expand the child's next-best sibling:
If there are unresolved conflicts: expand as soon as the next conflict of the child is resolved.
If all conflicts are resolved: expand as soon as the child is expanded to a full candidate (extending {} with M2=G, M3=G, A1=G, A2=G versus M2=U, M3=U, A1=U, A2=U).
Appendix: A* Search: Preliminaries
Problem: state-space search problem
Initial state
Expand(node): children of a search node = adjacent states
Goal-Test(node): true if the search node is at a goal state
Nodes: search nodes to be expanded
Expanded: search nodes already expanded
Initialize: search starts at the initial state, with no expanded nodes
h: admissible heuristic (optimistic cost-to-go)
Search node: node in the search tree
State: state the search is at
Parent: parent in the search tree
Nodes[Problem] operations:
Remove-Best(f): removes the best-cost node according to f
Enqueue(new-node, f): adds a search node to those to be expanded
A* Search
Function A*(problem, h) returns the best solution or failure. Problem pre-initialized.
  f(x) ← g[problem](x) + h(x)
  loop do
    if Nodes[problem] is empty then return failure
    node ← Remove-Best(Nodes[problem], f)            // expand best first
    state ← State(node)
    remove any n from Nodes[problem] such that State(n) = state
    Expanded[problem] ← Expanded[problem] ∪ {state}  // dynamic-programming principle
    new-nodes ← Expand(node, problem)
    for each new-node in new-nodes
      unless State(new-node) is in Expanded[problem] then
        Nodes[problem] ← Enqueue(Nodes[problem], new-node, f)
    if Goal-Test[problem] applied to State(node) succeeds
      then return node                               // terminates when the best node is a goal
  end
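A compact executable rendering of this pseudocode (names and the example graph are mine; h is a precomputed admissible table):

```python
import heapq

def a_star(start, expand, goal_test, h):
    """Sketch of the A* pseudocode above. expand(state) yields
    (child_state, step_cost) pairs; h must be admissible."""
    counter = 0  # tiebreak so heap entries never compare states directly
    nodes = [(h(start), counter, 0.0, start, None)]
    expanded, parents = set(), {}
    while nodes:
        f, _, g, state, parent = heapq.heappop(nodes)
        if state in expanded:            # dynamic-programming principle:
            continue                     # expand each state at most once
        expanded.add(state)
        parents[state] = parent
        if goal_test(state):             # goal test on expansion => optimal
            path = [state]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return list(reversed(path)), g
        for child, step in expand(state):
            if child not in expanded:
                counter += 1
                heapq.heappush(nodes, (g + step + h(child), counter,
                                       g + step, child, state))
    return None                          # Nodes empty: failure

# Tiny example graph; h never overestimates the true cost-to-go.
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)],
         "B": [("G", 1)], "G": []}
h = {"S": 2, "A": 2, "B": 1, "G": 0}
print(a_star("S", lambda s: graph[s], lambda s: s == "G", lambda s: h[s]))
# -> (['S', 'A', 'B', 'G'], 4)
```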
Next Best State Resolving Conflicts
Function Next-Best-State-Resolving-Conflicts(OCSP) returns the best-cost state consistent with Conflicts[OCSP].
  f(x) ← G[problem](g[problem](x), h(x))
  loop do
    if Nodes[OCSP] is empty then return failure
    node ← Remove-Best(Nodes[OCSP], f)
    state ← State[node]
    add state to Visited[OCSP]
    new-nodes ← Expand-State-Resolving-Conflicts(node, OCSP)
    for each new-node in new-nodes
      unless exists n in Nodes[OCSP] such that State[new-node] = State[n]
             or State[new-node] is in Visited[OCSP] then
        Nodes[OCSP] ← Enqueue(Nodes[OCSP], new-node, f)
    if Goal-Test-State-Resolving-Conflicts[OCSP] applied to state succeeds
      then return node
  end
An instance of A*.