TRANSCRIPT
11. Shortest Paths
Motivation, General Algorithm, Dijkstra's algorithm on distance graphs [Ottman/Widmayer, Kap. 9.5.1-9.5.2; Cormen et al., Kap. 24.1-24.3]
River Crossing (Missionaries and Cannibals)
Problem: Three cannibals and three missionaries are standing at a riverbank. The available boat can carry two people. At no place (on either bank or in the boat) may there ever be more cannibals than missionaries. How can the missionaries and cannibals cross the river as fast as possible? 16
[Figure: three cannibals (K), three missionaries (M) and the boat (B) on one bank.]
16 There are slight variations of this problem. It is equivalent to the jealous husbands problem.
Problem as Graph
Enumerate permitted configurations as nodes and connect them with an edge when a crossing is allowed. The problem then becomes a shortest path problem.

Example:

              left  right          left  right
missionaries    3     0              2     1
cannibals       3     0              2     1
boat            x                          x

Possible crossing: 6 people on the left bank → 4 people on the left bank.
The whole problem as a graph
[Figure: the full state graph. Each node is a permitted configuration (missionaries and cannibals on each bank, boat position); edges are permitted crossings. The nodes carry the number of people on the left bank: 6, 5, 4, 3, 4, 2, 1, 2, 3, 0.]
Problem as Graph

[Figure: the 8-puzzle as a graph. Each node is an arrangement of the tiles 1-8; two nodes are connected when a single tile move transforms one arrangement into the other.]
Route Finding
Given: cities A-Z and distances between the cities.
[Figure: road network with cities A, B, C, D, E, F, G, H, I, Z and weighted edges.]

What is the shortest path from A to Z?
Weighted Graphs

Given: G = (V, E, c), c : E → R, s, t ∈ V.
Wanted: length (weight) of a shortest path from s to t.
Path: p = 〈s = v0, v1, . . . , vk = t〉, (vi, vi+1) ∈ E (0 ≤ i < k)
Weight: c(p) := ∑_{i=0}^{k−1} c((vi, vi+1)).
[Figure: example path from s to t with edge weights 2, 1, 3, 2, 1: a path with weight 9.]
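The weight of a path is just the sum over its edges; a minimal sketch (representing the weight function c as a dict keyed by edge pairs is an assumption):

```python
def path_weight(c, p):
    """Weight c(p) of a path p = [v0, v1, ..., vk]: the sum of the
    weights of its edges (v_i, v_{i+1})."""
    return sum(c[(p[i], p[i + 1])] for i in range(len(p) - 1))
```

For example, `path_weight({('s', 'a'): 2, ('a', 't'): 1}, ['s', 'a', 't'])` yields 3.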
Shortest Paths
Notation: we write u ⇝p v or p : u ⇝ v and mean a path p from u to v.
Notation: δ(u, v) = weight of a shortest path from u to v:

δ(u, v) = ∞ if there is no path from u to v,
δ(u, v) = min{c(p) : p : u ⇝ v} otherwise.
Observations (1)
It may happen that a shortest path does not exist: negative cycles can occur.
[Figure: graph with nodes s, u, v, w, t and edge weights 1, 1, −1, −1, 1, 1 containing a cycle of negative total weight; traversing the cycle repeatedly makes the walk from s to t arbitrarily short.]
Observations (2)
There can be exponentially many paths.
[Figure: layered graph with at least 2^(|V|/2) paths from s to t.]

⇒ Trying all paths is too inefficient.
Observations (3)
Triangle Inequality
For all s, u, v ∈ V :

δ(s, v) ≤ δ(s, u) + δ(u, v)
[Figure: triangle of nodes s, u, v.]

A shortest path from s to v cannot be longer than a shortest path from s to v that is forced to pass through u.
Observations (4)
Optimal Substructure
Sub-paths of shortest paths are shortest paths: let p = 〈v0, . . . , vk〉 be a shortest path from v0 to vk. Then each of the sub-paths pij = 〈vi, . . . , vj〉 (0 ≤ i < j ≤ k) is a shortest path from vi to vj.
[Figure: path p from u to v through intermediate nodes x and y, with an alternative sub-path q from x to y.]

If not, then one of the sub-paths could be shortened, which immediately leads to a contradiction.
Observations (5)
Shortest paths do not contain cycles
1. If the path contains a negative cycle, then no shortest path exists: contradiction.
2. If the path contains a positive cycle, then removing the cycle reduces the weight: contradiction.
3. If the path contains a cycle of weight 0, removing it does not change the weight; remove the cycle (convention).
Ingredients of an Algorithm
Wanted: shortest paths from a starting node s.

Weight of the shortest path found so far:

ds : V → R

At the beginning: ds[v] = ∞ for all v ∈ V. Goal: ds[v] = δ(s, v) for all v ∈ V.

Predecessor of a node:

πs : V → V

Initially πs[v] is undefined for each node v ∈ V.
General Algorithm
1. Initialise ds and πs: ds[v] = ∞, πs[v] = null for each v ∈ V
2. Set ds[s] ← 0
3. Choose an edge (u, v) ∈ E and relax it:
   if ds[v] > ds[u] + c(u, v) then
       ds[v] ← ds[u] + c(u, v)
       πs[v] ← u
4. Repeat 3 until nothing can be relaxed any more
   (until ds[v] ≤ ds[u] + c(u, v) ∀(u, v) ∈ E).
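The four steps can be sketched directly in Python; a minimal version (the edge-list representation is an assumption) that keeps relaxing until no edge can be relaxed:

```python
import math

def generic_shortest_paths(vertices, edges, s):
    """Steps 1-4 of the general algorithm: initialise, then relax
    edges until d[v] <= d[u] + c holds for every edge (u, v).
    Terminates if no negative cycle is reachable from s."""
    d = {v: math.inf for v in vertices}   # step 1
    pi = {v: None for v in vertices}
    d[s] = 0                              # step 2
    changed = True
    while changed:                        # step 4: repeat until stable
        changed = False
        for u, v, c in edges:             # step 3: choose an edge
            if d[u] + c < d[v]:           # relax (u, v)
                d[v] = d[u] + c
                pi[v] = u
                changed = True
    return d, pi
```

For example, on edges [('s','a',2), ('a','b',3), ('s','b',10)] it returns d['b'] = 5 with predecessor 'a'.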
It is Safe to Relax
At any time during the algorithm above it holds that

ds[v] ≥ δ(s, v) ∀v ∈ V

In the relaxation step:

δ(s, v) ≤ δ(s, u) + δ(u, v)  [triangle inequality]
δ(s, u) ≤ ds[u]  [induction hypothesis]
δ(u, v) ≤ c(u, v)  [minimality of δ]

⇒ ds[u] + c(u, v) ≥ δ(s, v)
⇒ min{ds[v], ds[u] + c(u, v)} ≥ δ(s, v)
Special Case: Directed Acyclic Graph (DAG)
DAG ⇒ topological sorting yields an optimal visiting order.
[Figure: DAG with nodes s, v1, …, v8 and edge weights including negative values; the nodes are annotated with their shortest-path distances from s.]

Topological sort ⇒ order s, v1, v2, v3, v4, v6, v5, v8, v7.
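In a DAG a single pass over the nodes in topological order suffices, even with negative edge weights; a sketch using the standard-library graphlib (the adjacency-dict format is an assumption):

```python
import math
from graphlib import TopologicalSorter

def dag_shortest_paths(adj, s):
    """Single-source shortest paths in a DAG: relax each node's outgoing
    edges once, visiting nodes in topological order.
    adj: dict u -> list of (v, weight); negative weights are allowed."""
    preds = {u: set() for u in adj}       # graphlib expects predecessors
    for u, nbrs in adj.items():
        for v, _ in nbrs:
            preds.setdefault(v, set()).add(u)
    d = {v: math.inf for v in preds}
    d[s] = 0
    for u in TopologicalSorter(preds).static_order():
        for v, c in adj.get(u, ()):
            if d[u] + c < d[v]:           # relax (u, v)
                d[v] = d[u] + c
    return d
```

On {'s': [('a', 2), ('b', 4)], 'a': [('b', -3)], 'b': []} this yields d['b'] = −1, using the negative edge.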
Observation (Dijkstra)
[Figure: node s (ds[s] = 0) with tentative distances (upper bounds) 4, 7 and 2 at its neighbours u, v, w. The smallest upper bound is a global minimum: it cannot be relaxed any further.]
Basic Idea
The set V of nodes is partitioned into
the set M of nodes for which a shortest path from s is already known,
the set R = ⋃_{v∈M} N⁺(v) \ M of nodes for which a shortest path is not yet known but that are directly reachable from M,
the set U = V \ (M ∪ R) of nodes that have not yet been considered.

[Figure: wavefront R around the settled set M containing s, with edge weights 2, 2, 5, 3, 5, 2, 1, 2.]
Induction
Induction over |M|: choose the node r ∈ R with the smallest upper bound, add r to M, and update R and U accordingly.

Correctness: once a node with minimal weight w has been found within the "wavefront", no path through later nodes (which provide weight ≥ w) can yield any improvement.

[Figure: the same wavefront around s as before.]
Algorithm Dijkstra(G, s)
Input: positively weighted graph G = (V, E, c), starting point s ∈ V
Output: minimal weights d of the shortest paths and the corresponding predecessor node for each node

foreach u ∈ V do
    ds[u] ← ∞; πs[u] ← null
ds[s] ← 0; R ← {s}
while R ≠ ∅ do
    u ← ExtractMin(R)
    foreach v ∈ N⁺(u) do
        if ds[u] + c(u, v) < ds[v] then
            ds[v] ← ds[u] + c(u, v)
            πs[v] ← u
            R ← R ∪ {v}
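A direct transcription into Python, with R as a plain set and a linear-scan ExtractMin (O(|V|²) overall; the data structure for R is discussed later). The adjacency-dict format is an assumption:

```python
import math

def dijkstra(adj, s):
    """Dijkstra's algorithm as in the pseudocode above.
    adj: dict u -> list of (v, weight) with non-negative weights."""
    d = {u: math.inf for u in adj}
    pi = {u: None for u in adj}
    d[s] = 0
    R = {s}
    while R:
        u = min(R, key=lambda x: d[x])   # ExtractMin(R), linear scan
        R.remove(u)
        for v, c in adj[u]:              # relax all outgoing edges of u
            if d[u] + c < d[v]:
                d[v] = d[u] + c
                pi[v] = u
                R.add(v)
    return d, pi
```

With non-negative weights a node extracted from R already carries its final distance, so it is never re-inserted.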
Example

[Figure: Dijkstra run step by step on a graph with nodes s, a, b, c, d, e and edge weights 2, 3, 2, 6, 1, 3, 1, 1. Starting from ds[s] = 0 and all other distances ∞, the tentative distances shrink as nodes are settled (e.g. a: 2, b: 3, d: 4, e: 5; c: 8, then 7, then 6).]

M = {s, a, b}, R = {c, d}, U = {e}
M = {s, a, b, d}, R = {c, e}, U = {}
M = {s, a, b, d, e}, R = {c}, U = {}
M = {s, a, b, d, e, c}, R = {}, U = {}
Implementation: Data Structure for R?

Required operations:
Insert (add to R)
ExtractMin (over R) and DecreaseKey (update in R)

foreach v ∈ N⁺(u) do
    if ds[u] + c(u, v) < ds[v] then
        ds[v] ← ds[u] + c(u, v)
        πs[v] ← u
        if v ∈ R then
            DecreaseKey(R, v)  // update of d(v) in the heap of R
        else
            R ← R ∪ {v}  // insert v with key d(v) into the heap of R

MinHeap!
DecreaseKey

DecreaseKey: climbing in the MinHeap in O(log |V|).
Problem: the node's position in the heap is unknown.
→ Store the position at the nodes (in-place) or in a hash table (external).
Complicated: the position in the heap changes frequently, so this information has to be kept consistent.
Avoid DecreaseKey!

Solution: avoid DecreaseKey → alternative (c): re-insert the node after a successful relax operation and mark it "deleted" once extracted (lazy deletion).
(Not a big) disadvantage: the memory consumption of the heap can grow to Θ(|E|). Because |E| ≤ |V|², the running times of the operations Insert and ExtractMin are still in O(log |V|²) = O(log |V|).
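Lazy deletion as described can be sketched with Python's heapq: relaxed nodes are simply pushed again, and stale copies are skipped on extraction (the adjacency-dict format is an assumption):

```python
import heapq
import math

def dijkstra_lazy(adj, s):
    """Dijkstra without DecreaseKey: re-insert on relax, mark a node as
    settled on its first extraction, and drop later (stale) heap entries.
    The heap may hold up to Theta(|E|) entries."""
    d = {u: math.inf for u in adj}
    d[s] = 0
    heap = [(0, s)]
    settled = set()
    while heap:
        du, u = heapq.heappop(heap)
        if u in settled:                 # lazy deletion of a stale copy
            continue
        settled.add(u)
        for v, c in adj[u]:
            if du + c < d[v]:            # relax (u, v) and re-insert v
                d[v] = du + c
                heapq.heappush(heap, (d[v], v))
    return d
```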
Runtime of Dijkstra
|V| × ExtractMin: O(|V| log |V|)
|E| × Insert or DecreaseKey: O(|E| log |V|)
1 × Init: O(|V|)
Overall: 17 18

O((|V| + |E|) log |V|)

17 For connected graphs: O(|E| log |V|).
18 Can be improved when a data structure optimized for ExtractMin and DecreaseKey is used (Fibonacci heap); the runtime is then O(|E| + |V| log |V|).
Motivation A*

[Figure: grid graph; Dijkstra expands nodes in rings of increasing distance 0, 1, 2, … around s until t, at distance 8, is reached.]

The Dijkstra algorithm searches for all shortest paths, in all directions. This is correct, because the algorithm knows nothing about the graph's structure.
A* in Action
f(u) = ds[u] + h(u)  (h = δx + δy, the Manhattan distance)

[Figure: grid with s and t. Each cell is labelled with its heuristic value h; visited cells additionally show ds and f. The search is biased towards t.]

Idea: equip the algorithm with a preferred direction by means of a distance heuristic h. The value of this heuristic must underestimate the distance to t; it is added to the distance ds found from s.
Keep backward path
f(u) = ds[u] + h(u)  (h = δx + δy)

[Figure: grid with s and t; the explored cells are annotated with their ds and f values, so the backward path can be recovered.]

The algorithm works like the Dijkstra algorithm; only for finding the next candidate of R the value f = ds + h is used instead of ds.
A*-Algorithm

Prerequisites:
positively weighted, finite graph G = (V, E, c)
s ∈ V, t ∈ V
distance estimate h(v) ≤ h*(v) := δ(v, t) ∀ v ∈ V
Wanted: shortest path p : s ⇝ t
A*-Algorithm(G, s, t, h)
Input: positively weighted graph G = (V, E, c), starting point s ∈ V, end point t ∈ V, estimate h(v) ≤ δ(v, t)
Output: existence and value of a shortest path from s to t

foreach u ∈ V do
    d[u] ← ∞; f[u] ← ∞; π[u] ← null
d[s] ← 0; f[s] ← h(s); R ← {s}; M ← {}
while R ≠ ∅ do
    u ← ExtractMinf(R); M ← M ∪ {u}
    if u = t then return success
    foreach v ∈ N⁺(u) with d[v] > d[u] + c(u, v) do
        d[v] ← d[u] + c(u, v); f[v] ← d[v] + h(v); π[v] ← u
        R ← R ∪ {v}; M ← M − {v}
return failure
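A compact Python sketch of the algorithm above, with the frontier R as a heap keyed by f = d + h (the adjacency-dict format and the callable h are assumptions):

```python
import heapq
import math

def a_star(adj, s, t, h):
    """A* search: ExtractMin over f[u] = d[u] + h(u). Returns the value of
    a shortest path from s to t, or None if t is unreachable.
    h must underestimate the remaining distance (h(v) <= delta(v, t))."""
    d = {u: math.inf for u in adj}
    d[s] = 0
    heap = [(h(s), s)]
    while heap:
        fu, u = heapq.heappop(heap)
        if u == t:
            return d[t]                  # success
        if fu > d[u] + h(u):
            continue                     # stale entry: u was relaxed again
        for v, c in adj[u]:
            if d[u] + c < d[v]:
                d[v] = d[u] + c          # relax and (re-)insert v into R
                heapq.heappush(heap, (d[v] + h(v), v))
    return None                          # failure
```

Because relaxed nodes are re-inserted, the result stays correct for any admissible h, monotone or not.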
What if h does not underestimate
f(u) = ds[u] + h(u)  (h = δx² + δy²)
[Figure: grid with s and t; the heuristic values are the squared coordinate distances (0, 1, 4, 9, 16, …), which overestimate the true distance, and the search settles t via a suboptimal path.]

The algorithm can terminate with a wrong result when h does not underestimate the distance to t, although the heuristic otherwise looks reasonable (it is monotonic, for instance).
Revisiting nodes
The A*-algorithm can re-insert nodes that had already been extracted from R. This can lead to suboptimal behavior with respect to the running time of the algorithm.
If h, in addition to being admissible (h(v) ≤ h*(v) for all v ∈ V), fulfils monotonicity, i.e. if for all (u, u′) ∈ E

h(u) ≤ c(u, u′) + h(u′),

then the A*-algorithm is equivalent to the Dijkstra algorithm with edge weights c̃(u, v) = c(u, v) + h(v) − h(u), and no node is re-inserted into R. It is not always possible to find monotone heuristics.
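The connection can be checked numerically: h is monotone exactly when every reduced edge weight is nonnegative, so that Dijkstra on the reduced weights is valid. A small sketch (the edge list, weights and heuristic values are made-up illustrations):

```python
def is_monotone(edges, c, h):
    """h is monotone (consistent) iff every reduced weight
    c~(u, v) = c(u, v) + h[v] - h[u] is nonnegative."""
    return all(c[(u, v)] + h[v] - h[u] >= 0 for u, v in edges)

# Unit-weight path graph 0 - 1 - 2 with target t = 2.
edges = [(0, 1), (1, 2)]
c = {e: 1 for e in edges}
h_exact = {0: 2, 1: 1, 2: 0}   # exact remaining distance: monotone
h_crazy = {0: 2, 1: 0, 2: 0}   # admissible, but not monotone
```

`is_monotone(edges, c, h_exact)` holds, while h_crazy fails on the edge (0, 1), whose reduced weight is 1 + 0 - 2 = -1.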
A crazy h

f(u) = ds[u] + h(u)

[Figure: grid with s and t and an admissible but non-monotonic heuristic h (values 8 7 6 5 4 / 7 0 0 0 3 / 6 5 1 0 2 / 5 0 0 0 1 / 4 0 2 1 0); several cells are extracted from R and later re-inserted.]

The algorithm terminates correctly even if the distance heuristic is not monotonic. It is then possible that nodes are removed from R and re-inserted into R multiple times.