Search (cs.duke.edu)
George Konidaris, [email protected]
Spring 2016 (pictures: Wikipedia)

TRANSCRIPT

Page 1: Search

George Konidaris, [email protected]

Spring 2016 (pictures: Wikipedia)

Page 2: Search

Basic to problem solving:
• How to take action to reach a goal?

Page 3: Search

Specifically:
• The problem can be in various states.
• Start in an initial state.
• Have some actions available.
• Each action has a cost.
• Want to reach some goal, minimizing cost.

Happens in simulation. Not web search.

Page 4: Formal Definition

• Set of states: S
• Start state: s in S
• Set of actions and action rules: A, with a(s) -> s'
• Goal test: g(s) -> {0, 1}
• Cost function: C(s, a, s') -> R+

So a search problem is specified by a tuple (S, s, A, g, C).
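The five components of the tuple map naturally onto code. A minimal Python sketch; the four-state toy graph and all names below are invented for illustration, not taken from the slides:

```python
# A search problem as an (S, s, A, g, C) tuple, per the formal definition.
# The toy graph here is a made-up example.

states = {"A", "B", "C", "D"}                 # S: set of states
start = "A"                                   # s: start state
actions = {                                   # A and the action rules a(s) -> s'
    "A": {"go_B": "B", "go_C": "C"},
    "B": {"go_D": "D"},
    "C": {"go_D": "D"},
    "D": {},
}

def goal_test(state):                         # g(s) -> {0, 1}
    return 1 if state == "D" else 0

def cost(s, a, s2):                           # C(s, a, s') -> R+
    return 1.0                                # unit cost per move

problem = (states, start, actions, goal_test, cost)
```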

Pages 5-9: Problem Statement

Find a sequence of actions a_1, ..., a_n and corresponding states s_1, ..., s_n such that:

  s_0 = s                              (start state)
  s_i = a_i(s_{i-1}), i = 1, ..., n    (legal moves)
  g(s_n) = 1                           (end at the goal)

while minimizing:

  sum_{i=1}^{n} C(s_{i-1}, a_i, s_i)   (minimize sum of costs - rational agent)
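These constraints translate directly into a checker for a candidate plan: start at s, apply each action in turn, confirm the final state passes the goal test, and accumulate cost along the way. A hedged sketch; the function and argument names are mine:

```python
def evaluate_plan(start, plan, successor, goal_test, cost):
    """Check a1,...,an against the problem statement; return (valid, total_cost).

    successor(s, a) -> s' applies an action (a(s) -> s' in the slides, None if
    illegal); goal_test(s) -> {0, 1}; cost(s, a, s') -> R+.
    """
    s, total = start, 0.0
    for a in plan:                       # enforce s_i = a_i(s_{i-1})
        s2 = successor(s, a)
        if s2 is None:                   # illegal move
            return False, total
        total += cost(s, a, s2)          # accumulate sum_i C(s_{i-1}, a_i, s_i)
        s = s2
    return goal_test(s) == 1, total      # must end at the goal: g(s_n) = 1

# Toy illustration (this two-step chain is invented):
edges = {("A", "right"): "B", ("B", "right"): "C"}
ok, c = evaluate_plan("A", ["right", "right"],
                      successor=lambda s, a: edges.get((s, a)),
                      goal_test=lambda s: 1 if s == "C" else 0,
                      cost=lambda s, a, s2: 1.0)
# ok is True, c is 2.0
```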

Page 10: Example

Sudoku

States: all legal Sudoku boards.
Start state: a particular, partially filled-in board.
Actions: inserting a valid number into the board.
Goal test: all cells filled and no collisions.
Cost function: 1 per move.

Page 11: Example

Flights - e.g., ITA Software.

States: airports.
Start state: RDU.
Actions: available flights from each airport.
Goal test: reached Tokyo.
Cost function: time and/or money.

Page 12: The Search Tree

Classical conceptualization of search.

[Figure: a search tree rooted at s0, with children s1-s4 and deeper descendants s5-s10.]

Page 13: The Search Tree

[Figure: the same search tree, with a cost labeling each edge.]

Page 14: Important Quantities

Breadth (branching factor)

[Figure: the search tree, with the root's children illustrating its breadth.]

Page 15: The Search Tree

Depth
• min solution depth m

[Figure: the search tree, with depth measured from the root downward.]

O(b^d) leaves in a search tree of breadth b and depth d.

Page 16: The Search Tree

Expand the tree one node at a time.
Frontier: the set of nodes in the tree that have not yet been expanded.

[Figure: the search tree with the frontier nodes highlighted.]

Key to a search algorithm: which node to expand next?
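One way to see this question is that a generic tree search is fixed except for the frontier discipline. A sketch in Python (the names are mine): popping from the back of the frontier expands the deepest node, popping from the front expands the shallowest.

```python
from collections import deque

def tree_search(start, successors, goal_test, lifo):
    """Expand one node at a time; `lifo` chooses depth-first (True) or
    breadth-first (False) behavior via the frontier discipline."""
    frontier = deque([(start, [start])])          # nodes generated, not expanded
    while frontier:
        state, path = frontier.pop() if lifo else frontier.popleft()
        if goal_test(state):
            return path
        for s2 in successors(state):              # expanding the node
            frontier.append((s2, path + [s2]))
    return None                                   # finite tree exhausted
```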

Page 17: How to Expand?

Uninformed strategy:
• Nothing is known about where solutions are likely to lie in the tree.

What to do?
• Expand the deepest node (depth-first search)
• Expand the shallowest node (breadth-first search)

Properties:
• Completeness
• Optimality
• Time Complexity (total number of nodes visited)
• Space Complexity (size of the frontier)

Pages 18-21: Depth-First Search

Expand the deepest node.

[Figure: four animation frames. DFS expands s0, then s1, then s2; s2 is marked X (a dead end), so the search backtracks and expands s3.]

Page 22: DFS: Time

Worst case: the solution lies on the last branch explored.

O(b^{d+1} - b^{d-m}) = O(b^{d+1})

Page 23: DFS: Space

Worst case: search is at the bottom of the tree, with b - 1 nodes open at each of the d levels.

O((b - 1)d) = O(bd)


Page 28: Depth-First Search

Properties:
• Completeness: only for finite trees.
• Optimality: no.
• Time Complexity: O(b^{d+1})
• Space Complexity: O(bd)

Here m is the depth of the found solution (not necessarily the minimum solution depth).

The deepest node happens to be the one most recently visited - easy to implement recursively, OR manage the frontier using a LIFO queue.
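A sketch of the recursive formulation mentioned above, where the call stack plays the role of the LIFO frontier (the names and the callable-successor convention are mine):

```python
def dfs(state, successors, goal_test, path=None):
    """Recursive depth-first search: the call stack is the LIFO frontier,
    so the deepest node is always the one expanded next."""
    path = (path or []) + [state]
    if goal_test(state):
        return path
    for s2 in successors(state):
        result = dfs(s2, successors, goal_test, path)
        if result is not None:
            return result
    return None    # subtree exhausted; terminates only on finite trees
```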

Pages 29-33: Breadth-First Search

Expand the shallowest node.

[Figure: five animation frames. BFS expands s0, then s1 and s2 at the next level, then s3, s4, and s5 at the level below.]

Page 34: BFS: Time

O(b^{m+1})

Page 35: BFS: Space

O(b^{m+1})

Page 36: Breadth-First Search

Properties:
• Completeness: yes.
• Optimality: yes, for constant cost.
• Time Complexity: O(b^{m+1})
• Space Complexity: O(b^{m+1})

Better than depth-first search in all respects except memory cost - it must maintain a large frontier.

Manage the frontier using a FIFO queue.
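A matching sketch of BFS with the frontier managed as a FIFO queue, using Python's collections.deque (the names are mine):

```python
from collections import deque

def bfs(start, successors, goal_test):
    """Breadth-first search: a FIFO queue guarantees the shallowest node is
    expanded next, giving completeness and (for constant step cost) optimality.
    The queue itself is the large frontier that dominates memory cost."""
    frontier = deque([(start, [start])])
    while frontier:
        state, path = frontier.popleft()     # shallowest node = earliest queued
        if goal_test(state):
            return path
        for s2 in successors(state):
            frontier.append((s2, path + [s2]))
    return None
```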

Page 37: Iterative Deepening Search

Combine these two strengths.

The core problems with DFS are that it is a) not optimal, and b) not complete ... because it fails to explore other branches. Otherwise it's a very nice algorithm!

Iterative Deepening:
• Run DFS to a fixed depth z.
• Start at z = 1. If no solution is found, increment z and rerun.

Pages 38-41: IDS

[Figure: four animation frames, each showing DFS run to a successively deeper depth limit on the search tree.]

Page 42: IDS

Optimal for constant cost! Proof?

How can that be a good idea? It duplicates work.

Sure, but:
• Low memory requirement (equal to DFS).
• Not many more nodes expanded than BFS. (About twice as many for a binary tree.)

Page 43: IDS

[Figure: the search tree, annotated with visit counts: the root is visited m + 1 times, and nodes one level down are visited m times.]

Page 44: IDS (Reprise)

Total nodes visited by IDS:

  sum_{i=0}^{m} b^i (m - i + 1) = (b(b^{m+1} - m - 2) + m + 1) / (b - 1)^2

(b^i is the number of nodes at level i; (m - i + 1) is the number of visits to each.)

DFS worst case:

  (b^{m+1} - 1) / (b - 1)
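The identity can be sanity-checked numerically, and for b = 2 the ratio to the DFS worst case approaches 2, matching the "about twice as many for a binary tree" remark on Page 42. This script is illustrative, not from the slides:

```python
def ids_nodes(b, m):
    """Direct sum: sum_{i=0}^{m} b^i (m - i + 1)."""
    return sum(b**i * (m - i + 1) for i in range(m + 1))

def ids_closed_form(b, m):
    """(b(b^{m+1} - m - 2) + m + 1) / (b - 1)^2; divides exactly."""
    return (b * (b**(m + 1) - m - 2) + m + 1) // (b - 1)**2

def dfs_worst_case(b, m):
    """(b^{m+1} - 1) / (b - 1): every node in a tree of depth m."""
    return (b**(m + 1) - 1) // (b - 1)

# The sum and the closed form agree, and for a binary tree the overhead
# ratio of IDS over plain DFS tends to 2 as m grows.
for b, m in [(2, 5), (2, 20), (3, 10), (10, 6)]:
    assert ids_nodes(b, m) == ids_closed_form(b, m)
print(ids_nodes(2, 20) / dfs_worst_case(2, 20))   # close to 2
```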

Page 45: IDS

Key Insight:
• There are many more nodes at depth m + 1 than at depth m.

MAGIC.

"In general, iterative deepening search is the preferred uninformed search method when the state space is large and the depth of the solution is unknown." (R&N)

Page 46: Next Week

Informed searches ... what if you know something about the solution?

What form should such knowledge take?

How should you use it?