Search by partial solutions

Upload: colin-gilmore

Post on 17-Dec-2015



Search by partial solutions

Where are we? Optimization methods:

Complete solutions:
Exhaustive search, Hill climbing, Random restart, General Model of Stochastic Local Search, Simulated Annealing, Tabu search

Partial solutions:
Exhaustive search, Branch and bound, Greedy, Best first, A*, Divide and Conquer, Dynamic programming, Constraint Propagation

Search by partial solutions

nodes are partial or complete states graphs are DAGs (may be trees)

source (root) is empty state sinks (leaves) are complete states

directed edges represent setting parameter values

4 queens: separate row and column

complete solutions

possible pruning

Implications of partial solutions

pruning of “impossible” partial solutions requires a partial evaluation function

Partial Solution Trees and DAGs

trees: search might use tree traversal methods based on BFS, DFS. Advantage: depth is limited! (contrast to complete solution space)

DAGs: spanning trees

Partial solution algorithms

greedy, divide and conquer, dynamic programming, branch and bound, A*, constraint propagation

Greedy algorithm

make best ‘local’ parameter selection at each step:

complete solutions

Greedy SAT: partial evaluation; order of setting propositions T/F

P = {P1, P2,…,Pn}

f(P) = D1 ∧ D2 ∧ ... ∧ Dk

e.g., Di = Pf ∨ ~Pg ∨ Ph

How much pre-processing?
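One greedy ordering can be sketched as follows, assuming the rule “pick the truth value that satisfies the most still-unsatisfied clauses”; the clause encoding (+i for Pi, -i for ~Pi) and the function name are illustrative, not from the slides:

```python
# Greedy SAT sketch: set P1..Pn one at a time, each time choosing the
# truth value that satisfies the most not-yet-satisfied clauses.
# A clause is a set of literals: +i means Pi, -i means ~Pi.

def greedy_sat(n, clauses):
    """Greedily assign P1..Pn; returns a dict {i: bool}."""
    assignment = {}
    remaining = [set(c) for c in clauses]   # clauses not yet satisfied
    for i in range(1, n + 1):
        # count how many remaining clauses each choice would satisfy
        sat_true = sum(1 for c in remaining if i in c)
        sat_false = sum(1 for c in remaining if -i in c)
        value = sat_true >= sat_false
        assignment[i] = value
        lit = i if value else -i
        # clauses containing the chosen literal are now satisfied
        remaining = [c for c in remaining if lit not in c]
    return assignment

# f(P) = (P1 v ~P2) ^ (~P1 v P3) ^ (P2 v P3)
a = greedy_sat(3, [{1, -2}, {-1, 3}, {2, 3}])
```

Like any greedy method, this commits to each choice without backtracking, so it can fail on satisfiable formulas; the order of setting propositions matters.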

Greedy TSP

partial evaluation; order of adding edges

Cities: C1, C2,…,Cn

symmetric distances

How much preprocessing?
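One common greedy order for adding edges is nearest-neighbour: always extend the tour with the shortest edge to an unvisited city. A minimal sketch on an illustrative 4-city symmetric matrix (not from the slides):

```python
# Nearest-neighbour greedy TSP: from the current city, always travel to
# the closest unvisited city, then close the tour back to the start.

def greedy_tsp(dist, start=0):
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        # greedy step: shortest edge to any unvisited city
        nxt = min(unvisited, key=lambda c: dist[last][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)  # return to the starting city
    length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    return tour, length

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
tour, length = greedy_tsp(dist)
```

Only O(n) distance lookups per step are needed once the matrix exists, so very little preprocessing is required; the price is that the final forced edge back to the start can be arbitrarily bad.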


Partial solution algorithms

greedy, branch and bound, A*, divide and conquer, dynamic programming, constraint propagation

4 queens: separate row and column

complete solutions

possible pruning

More in next slide set…

Branch and Bound

eliminate subtrees of possible solutions based on evaluation of partial solutions

complete solutions

e.g., branch and bound TSP

complete solutions

current best solution distance 498

distance so far 519

Branch and Bound

requirements:

a bound on the best solution possible from the current partial solution (e.g., if the distance so far is 519, the total distance is >519); as ‘tight’ as possible; quick to calculate

a current best solution (e.g., 498)

Tight bounds: value of partial solution plus an estimate of the value of the remaining steps; the estimate must be ‘optimistic’

example: path so far: 519; remainder of path: 0 (not tight); bounded estimate: 519

Optimistic but Tight Bound

[chart: optimistic vs. tight bound estimates, shown between the current best and the actual values if calculated]

Example - b&b for TSP

Estimate a lower bound for the path length: fast to calculate, easy to revise.

    A  B  C  D  E
A   -  7 12  8 11
B   7  - 10  7 13
C  12 10  -  9 12
D   8  7  9  - 10
E  11 13 12 10  -

Example - b&b for TSP

1. Assume shortest edge to each city: (7+7+9+7+10) = 40

2. Assume two shortest edges to each: ((7+8)+(7+7)+(9+10)+(7+8)+(10+11))/2 = 7.5 + 7 + 9.5 + 7.5 + 10.5 = 42

Which is better?

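The two estimates can be checked mechanically against the distance matrix; this sketch assumes cities A..E map to indices 0..4, with infinity on the diagonal so a city is never its own nearest neighbour:

```python
# Computing both lower bounds from the 5-city distance matrix.

INF = float("inf")
d = [[INF, 7, 12, 8, 11],
     [7, INF, 10, 7, 13],
     [12, 10, INF, 9, 12],
     [8, 7, 9, INF, 10],
     [11, 13, 12, 10, INF]]

# Bound 1: every city must be left by at least its shortest edge.
bound1 = sum(min(row) for row in d)                   # 7+7+9+7+10

# Bound 2: every city touches two tour edges, and each edge is shared
# by two cities, so sum the two shortest edges per city and halve.
bound2 = sum(sum(sorted(row)[:2]) for row in d) / 2   # 84/2
```

Which is better? Both are valid lower bounds on any complete tour, and the larger lower bound (42) is the tighter one, so bound 2 wins, at the cost of a little more calculation.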

Tight Bound

Start: sum of shortest edges (7+7+9+7+10) = 40; path so far 0, bound 40.

Add edge A-E (length 11): path so far 11, bound 44 (A's shortest edge replaced: -7+11).

Add edge E-B (length 13): path so far 11+13 = 24, bound 47 (E's shortest edge replaced: -10+13).

Best path found so far: 46, so this branch is pruned (47 > 46).

B&B algorithm

Depth first traversal of partial solution space - leaves are complete solutions

Subtrees are pruned below a partial solution that cannot be better than the current best solution
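The depth-first traversal with pruning can be sketched as follows, using the slides' 5-city matrix and the shortest-edge-per-city bound; the function and helper names are illustrative:

```python
# Depth-first branch and bound for TSP. The optimistic bound assumes
# every city still to be left (the current city plus all unvisited
# cities) can be left by its shortest edge.

def tsp_branch_and_bound(d):
    n = len(d)
    # shortest edge leaving each city (skip the 0 on the diagonal)
    min_edge = [min(x for j, x in enumerate(row) if j != i)
                for i, row in enumerate(d)]
    best = [float("inf")]  # length of best complete tour found so far

    def dfs(tour, length):
        if len(tour) == n:
            # leaf: close the cycle back to the start
            best[0] = min(best[0], length + d[tour[-1]][tour[0]])
            return
        remaining = [c for c in range(n) if c not in tour]
        bound = length + min_edge[tour[-1]] + sum(min_edge[c] for c in remaining)
        if bound >= best[0]:
            return  # prune: this subtree cannot beat the current best
        for c in remaining:
            dfs(tour + [c], length + d[tour[-1]][c])

    dfs([0], 0)
    return best[0]

d = [[0, 7, 12, 8, 11],
     [7, 0, 10, 7, 13],
     [12, 10, 0, 9, 12],
     [8, 7, 9, 0, 10],
     [11, 13, 12, 10, 0]]
```

On this matrix the search returns 46, matching the best path found on the slides; the bound is a valid lower bound on every completion, so pruning never discards the optimum.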

Partial solution algorithms

greedy, branch and bound, A*, divide and conquer, dynamic programming, constraint propagation

A* algorithm - improved b&b

The ultimate partial solution search. Based on tree searching algorithms you already know: BFS, DFS.

Two versions: simple (used on trees); advanced (used on general graphs).

general search algorithm for trees*

algorithm search(emptyState, fitnessFn()) returns bestState
    openList = new StateCollection()
    openList.insert(emptyState)
    bestState = null
    bestFitness = min                      // assuming maximum wanted
    while (notEmpty(openList) && resourcesAvailable)
        state = openList.get()
        fitness = fitnessFn(state)         // partial or complete fitness
        if (state is complete and fitness > bestFitness)
            bestFitness = fitness
            bestState = state
        for all values k in domain of next variable
            nextState = state.include(k)
            openList.insert(nextState)
    return bestState

*For graphs, cycles are a problem

Versions of general search

Based on the openList collection class: breadth first search; depth first search (-> branch and bound); best first search (informed search)
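A minimal sketch of this idea: the loop below is the general search algorithm stripped to its traversal, and it becomes BFS or DFS depending only on which end of the openList is popped. States are tuples of the values set so far; the function name is illustrative.

```python
from collections import deque

# The frontier ("openList") determines the traversal: popping the
# oldest state gives breadth-first order, popping the newest gives
# depth-first order.

def expand_all(domains, pop_left):
    frontier = deque([()])      # start from the empty partial state
    order = []                  # states in the order they are expanded
    while frontier:
        state = frontier.popleft() if pop_left else frontier.pop()
        order.append(state)
        if len(state) < len(domains):      # partial: set the next variable
            for k in domains[len(state)]:
                frontier.append(state + (k,))
    return order

bfs_order = expand_all([(0, 1), (0, 1)], pop_left=True)
dfs_order = expand_all([(0, 1), (0, 1)], pop_left=False)
```

Swapping the deque for a priority queue keyed on a fitness estimate turns the same loop into best first search.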

Best first A*

A*

Best first search with a heuristic fitness function:

Admissible (never pessimistic). Tradeoff: simplicity vs. accuracy/tightness.

The heuristic heuristic: “Reduced problem” strategy

e.g., min path length in TSP

Partial solution algorithms

greedy, branch and bound, A*, divide and conquer, dynamic programming, constraint propagation

Divide and Conquer

1. subdivide problem into parts
2. solve parts individually
3. reassemble parts

examples: sorting (quicksort, mergesort, radix sort); ordered trees
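Mergesort makes the three steps concrete, as a minimal sketch: split the list in half (subdivide), sort each half recursively (solve individually), then merge the sorted halves (reassemble).

```python
# Classic divide and conquer: mergesort.

def mergesort(xs):
    if len(xs) <= 1:               # base case: already sorted
        return xs
    mid = len(xs) // 2
    left = mergesort(xs[:mid])     # solve each half individually
    right = mergesort(xs[mid:])
    # reassemble: repeatedly take the smaller front element
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

Each level of recursion does O(n) merging work over O(log n) levels, giving the familiar O(n log n) total.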

Partial solution algorithms

greedy branch and bound A* divide and conquer dynamic programming constraint propegation

Dynamic programming

extreme divide-and-conquer: solve all possible subproblems

works on problems where a good solution to a small problem is always a good partial solution to a larger problem

Example – Knapsack problem

unlimited quantities of items 1, 2, 3, ..., N are available
each item has weight w1, w2, w3, ..., wN
each item has value v1, v2, v3, ..., vN

Knapsack has capacity M
most valuable combination of items to pack?

x1, x2, x3, ..., xN = quantity of each item packed

maximize ∑ xj·vj

constraint: ∑ xj·wj ≤ M

Knapsack Algorithm O(NM)

solve for all sizes up to M with the first item; repeat with the first two items, three, etc.

value[M]     // array of highest values by capacity
lastItem[M]  // last item added to achieve value
             // (needed to determine contents of knapsack)
for j = 1 to N
    for capacity = 1 to M
        if (capacity - wj) >= 0
            if (value[capacity] < value[capacity - wj] + vj)
                value[capacity] = value[capacity - wj] + vj
                lastItem[capacity] = j

Example

item j     1  2  3  4  5
weight wj  3  4  7  8  9
value vj   4  5 10 11 12

Knapsack capacity M = 17

try it...

Example

item j     1  2  3  4  5
weight wj  3  4  7  8  9
value vj   4  5 10 11 12

Knapsack capacity M = 17

M 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17

value 0 0 4 4 4 8 8 8 12 12 12 16 16 16 20 20 20

lastItem 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

M 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17

value 0 0 4 5 5 8 9 10 12 13 14 16 17 18 20 21 22

lastItem 0 0 1 2 2 1 2 2 1 2 2 1 2 2 1 2 2

M 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17

value 0 0 4 5 5 8 10 10 12 14 15 16 18 20 20 22 24

lastItem 0 0 1 2 2 1 3 2 1 3 3 1 3 3 1 3 3

M 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17

value 0 0 4 5 5 8 10 11 12 14 15 16 18 20 21 22 24

lastItem 0 0 1 2 2 1 3 4 1 3 3 1 3 3 4 3 3

M 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17

value 0 0 4 5 5 8 10 11 13 14 15 17 18 20 21 23 24

lastItem 0 0 1 2 2 1 3 4 5 3 3 5 3 3 4 5 3

(the five tables above correspond to N = 1, 2, 3, 4, 5 items)
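The pseudocode translates directly to Python and can be run on the example (0-based item loop; item numbers kept 1-based in lastItem, as on the slides):

```python
# Unbounded knapsack DP from the slides: value[c] is the best value
# achievable with capacity c; last_item[c] records the item added last,
# so the knapsack's contents can be read back by repeated subtraction.

def knapsack(weights, values, M):
    value = [0] * (M + 1)
    last_item = [0] * (M + 1)
    for j in range(len(weights)):            # items (slides' j = 1..N)
        for capacity in range(1, M + 1):
            if capacity - weights[j] >= 0:
                candidate = value[capacity - weights[j]] + values[j]
                if value[capacity] < candidate:
                    value[capacity] = candidate
                    last_item[capacity] = j + 1   # 1-based item number
    return value, last_item

value, last_item = knapsack([3, 4, 7, 8, 9], [4, 5, 10, 11, 12], 17)
```

For M = 17 this gives a best value of 24 (e.g., two of item 3 plus one of item 1: weights 7+7+3 = 17, values 10+10+4 = 24), and the double loop runs N·M times, giving the O(NM) cost in the slide title.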

Partial solution algorithms

greedy, branch and bound, A*, divide and conquer, dynamic programming, constraint propagation

next week