Data Structures and Algorithms
Xiaoqing Zheng
[email protected]
Representations of undirected graph
(Figure: a 5-vertex undirected graph and, next to it, one adjacency list Adj[v] per vertex.)
Adjacency-list representation of graph G = (V, E).
Representations of undirected graph
(Figure: the same 5-vertex undirected graph and its adjacency matrix.)

      1  2  3  4  5
  1   0  1  0  0  1
  2   1  0  1  1  0
  3   0  1  0  1  0
  4   0  1  1  0  1
  5   1  0  0  1  0

Adjacency-matrix representation of graph G = (V, E).
Representations of directed graph
(Figure: a 6-vertex directed graph and, next to it, one adjacency list Adj[v] per vertex.)
Adjacency-list representation of graph G = (V, E).
Representations of directed graph
(Figure: the same 6-vertex directed graph and its adjacency matrix.)

      1  2  3  4  5  6
  1   0  1  0  1  0  0
  2   0  0  0  0  1  0
  3   0  0  0  0  1  1
  4   0  1  0  0  0  0
  5   0  0  0  1  0  0
  6   0  0  0  0  0  1

Adjacency-matrix representation of graph G = (V, E).
Graphs
Definition. A directed graph (digraph) G = (V, E) is an ordered pair consisting of
a set V of vertices (singular: vertex), and
a set E ⊆ V × V of edges.
In an undirected graph G = (V, E), the edge set E consists of unordered pairs of vertices.
In either case, we have |E| = O(V²). Moreover, if G is connected, then |E| ≥ |V| − 1.
Representations of graph
Adjacency-list representation
An adjacency list of a vertex v ∈ V is the list Adj[v] of vertices adjacent to v.
For undirected graphs, |Adj[v]| = degree(v).
For directed graphs, |Adj[v]| = out-degree(v).
Adjacency-matrix representation
The adjacency matrix of a graph G = (V, E), where V = {1, 2, …, n}, is the matrix A[1 … n, 1 … n] given by
A[i, j] = 1 if (i, j) ∈ E,
          0 if (i, j) ∉ E.
Breadth-first search
Given a graph G = (V, E) and a distinguished source vertex s, breadth-first search systematically explores the edges of G to "discover" every vertex that is reachable from s.
(Figure: vertices r, s, t, u in the top row and v, w, x, y in the bottom row; initially d[s] = 0 and every other distance is ∞.)
Breadth-first search example
(Figure sequence: BFS from s; the gray vertices are those currently on the queue Q. The queue evolves
Q = ⟨s⟩ → ⟨w, r⟩ → ⟨r, t, x⟩ → ⟨t, x, v⟩ → ⟨x, v, u⟩ → ⟨v, u, y⟩ → ⟨u, y⟩ → ⟨y⟩ → ∅,
and the final distances are d[s] = 0; d[r] = d[w] = 1; d[t] = d[x] = d[v] = 2; d[u] = d[y] = 3.
Why not x? When t is dequeued, its neighbor x is already gray, so x is not enqueued again.)
Breadth-first search algorithm
BFS(G, s)
1.  for each vertex u ∈ V[G] − {s}
2.      do color[u] ← WHITE
3.         d[u] ← ∞
4.         π[u] ← NIL
5.  color[s] ← GRAY
6.  d[s] ← 0
7.  π[s] ← NIL
8.  Q ← ∅
9.  ENQUEUE(Q, s)                       ⊳ lines 1–9: O(V) time
10. while Q ≠ ∅
11.     do u ← DEQUEUE(Q)
12.        for each v ∈ Adj[u]
13.            do if color[v] = WHITE
14.               then color[v] ← GRAY
15.                    d[v] ← d[u] + 1
16.                    π[v] ← u
17.                    ENQUEUE(Q, v)
18.        color[u] ← BLACK             ⊳ lines 10–18: O(E) time
Running time is O(V + E)
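The pseudocode above can be sketched in Python using a FIFO deque; the distance test `d[v] == inf` plays the role of the WHITE color. The adjacency lists below are an assumed reading of the slides' r/s/t/u/v/w/x/y figure.

```python
from collections import deque

def bfs(adj, s):
    """Breadth-first search: return (d, pi), the BFS distances and predecessors."""
    d = {u: float('inf') for u in adj}
    pi = {u: None for u in adj}
    d[s] = 0
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if d[v] == float('inf'):   # WHITE vertex: first discovery
                d[v] = d[u] + 1
                pi[v] = u
                q.append(v)
    return d, pi

# Undirected example graph (assumed from the figure).
adj = {
    'r': ['s', 'v'], 's': ['r', 'w'], 't': ['u', 'w', 'x'],
    'u': ['t', 'x', 'y'], 'v': ['r'], 'w': ['s', 't', 'x'],
    'x': ['t', 'u', 'w', 'y'], 'y': ['u', 'x'],
}
d, pi = bfs(adj, 's')
```

Each vertex is enqueued at most once and each adjacency list is scanned once, giving the O(V + E) bound from the slide.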
Shortest paths
PRINT-PATH(G, s, v)
1. if v = s
2.    then print s
3.    else if π[v] = NIL
4.            then print "no path from" s "to" v "exists."
5.            else PRINT-PATH(G, s, π[v])
6.                 print v
Predecessor subgraph of BFS
(Figure: the BFS tree rooted at s over the vertices r, s, t, u, v, w, x, y.)
Predecessor subgraph of BFS is Gπ = (Vπ, Eπ), where
Vπ = { v ∈ V : π[v] ≠ NIL } ∪ {s} and
Eπ = { (π[v], v) : v ∈ Vπ − {s} }.
Depth-first search
Given a graph G = (V, E), depth-first search is to search deeper in the graph whenever possible. Edges are explored out of the most recently discovered vertex v that still has unexplored edges leaving it.
(Figure: vertices u, v, w in the top row and x, y, z in the bottom row.)
Depth-first search example
(Figure sequence: DFS on the 6-vertex digraph with u, v, w on top and x, y, z below. The stack S holds the gray vertices, with TOP marking the vertex currently being expanded; labels B, F, and C mark back, forward, and cross edges as they are found. The discovery/finishing times come out as u 1/8, v 2/7, y 3/6, x 4/5, w 9/12, z 10/11.)
Depth-first search algorithm
DFS(G)
1. for each vertex u ∈ V[G]
2.     do color[u] ← WHITE
3.        π[u] ← NIL
4. time ← 0
5. for each vertex u ∈ V[G]          ⊳ O(V) time
6.     do if color[u] = WHITE
7.        then DFS-VISIT(u)

DFS-VISIT(u)
1. color[u] ← GRAY
2. time ← time + 1
3. d[u] ← time
4. for each vertex v ∈ Adj[u]        ⊳ O(E) time over all calls
5.     do if color[v] = WHITE
6.        then π[v] ← u
7.             DFS-VISIT(v)
8. color[u] ← BLACK
9. f[u] ← time ← time + 1
Running time is O(V + E)
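DFS and DFS-VISIT translate directly to a recursive Python sketch; the digraph below is an assumed reading of the u/v/w/x/y/z example figure.

```python
def dfs(adj):
    """Depth-first search; returns discovery times d, finishing times f, predecessors pi."""
    color = {u: 'WHITE' for u in adj}
    d, f = {}, {}
    pi = {u: None for u in adj}
    time = [0]                        # boxed counter shared by all visits

    def visit(u):
        color[u] = 'GRAY'
        time[0] += 1
        d[u] = time[0]
        for v in adj[u]:
            if color[v] == 'WHITE':
                pi[v] = u
                visit(v)
        color[u] = 'BLACK'
        time[0] += 1
        f[u] = time[0]

    for u in adj:                     # restart from every undiscovered vertex
        if color[u] == 'WHITE':
            visit(u)
    return d, f, pi

# Directed example graph (assumed from the figure).
adj = {'u': ['v', 'x'], 'v': ['y'], 'w': ['y', 'z'],
       'x': ['v'], 'y': ['x'], 'z': ['z']}
d, f, pi = dfs(adj)
```

Note that discovery/finishing intervals [d[u], f[u]] are properly nested or disjoint, which is what makes the edge classification on the following slides work.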
Predecessor subgraph of DFS
(Figure: the DFS forest over u, v, w, x, y, z, with back, forward, and cross edges marked B, F, C.)
Predecessor subgraph of DFS is Gπ = (V, Eπ), where
Eπ = { (π[v], v) : v ∈ V and π[v] ≠ NIL }.
Predecessor subgraph of DFS
Each edge (u, v) can be classified by the color of the vertex v that is reached when the edge is first explored:
WHITE indicates a tree edge;
GRAY indicates a back edge;
BLACK indicates a forward edge (if d[u] < d[v]) or a cross edge (if d[u] > d[v]).
Precedence among events
(Figure: a dag of dressing constraints — Underwear, Socks, Watch, Pants, Shoes, Shirt, Belt, Tie, Jacket — with an edge from x to y when x must be put on before y.)
Call depth-first search algorithm.
(Figure: the same dag annotated with DFS discovery/finishing times.)
Sort by finishing time.
Curriculum
(Figure: a prerequisite dag over courses — Calculus, Discrete Mathematics, Probability and Statistics, Java or C++ Programming, Object-Oriented Programming, Data Structure and Algorithm, Database, Computer Systems, Computer Architecture, Computer Network, Software Engineering, Project Management, Intelligent Systems, Web Application, Internship, Thesis — leading to ALL COURSES.)
Topological sort
TOPOLOGICAL-SORT(G)
1. call DFS(G) to compute finishing times f[v] for each vertex v.
2. as each vertex is finished, insert it onto the front of a linked list.
3. return the linked list of vertices.
Topological sort
A topological sort of a directed acyclic graph or "dag" G = (V, E) is a linear ordering of all its vertices such that if G contains an edge (u, v), then u appears before v in the ordering.
Topological sort of a graph can be viewed as a linear ordering of its vertices.
Topological sorting is different from the usual kind of "sorting" studied before.
If the graph is not acyclic, then no linear ordering is possible.
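TOPOLOGICAL-SORT can be sketched as a DFS that prepends each vertex to the output list the moment it finishes. The dressing dag below is an assumed, simplified version of the slides' example.

```python
def topological_sort(adj):
    """Topological sort via DFS: prepend each vertex as it finishes."""
    color = {u: 'WHITE' for u in adj}
    order = []

    def visit(u):
        color[u] = 'GRAY'
        for v in adj[u]:
            if color[v] == 'WHITE':
                visit(v)
        color[u] = 'BLACK'
        order.insert(0, u)   # insert onto the front of the list when finished

    for u in adj:
        if color[u] == 'WHITE':
            visit(u)
    return order

# Dressing dag (assumed from the figure).
adj = {
    'underwear': ['pants', 'shoes'], 'pants': ['belt', 'shoes'],
    'belt': ['jacket'], 'shirt': ['belt', 'tie'], 'tie': ['jacket'],
    'jacket': [], 'socks': ['shoes'], 'shoes': [], 'watch': [],
}
order = topological_sort(adj)
```

Any returned ordering satisfies the defining property: for every edge (u, v), u precedes v.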
Disjoint sets
(Figure: an undirected graph on vertices a–j whose connected components are {a, b, c, d}, {e, f, g}, {h, i}, and {j}.)

Edge processed   Collection of disjoint sets
initial sets     {a} {b} {c} {d} {e} {f} {g} {h} {i} {j}
(b, d)           {a} {b, d} {c} {e} {f} {g} {h} {i} {j}
(e, g)           {a} {b, d} {c} {e, g} {f} {h} {i} {j}
(a, c)           {a, c} {b, d} {e, g} {f} {h} {i} {j}
(h, i)           {a, c} {b, d} {e, g} {f} {h, i} {j}
(a, b)           {a, b, c, d} {e, g} {f} {h, i} {j}
(e, f)           {a, b, c, d} {e, f, g} {h, i} {j}
(b, c)           {a, b, c, d} {e, f, g} {h, i} {j}
Disjoint set operations
MAKE-SET(x) creates a new set whose only member is x.
UNION(x, y) unites the dynamic sets that contain x and y, say Sx and Sy, into a new set that is the union of these two sets. The two sets are assumed to be disjoint prior to the operation.
FIND-SET(x) returns a pointer to the representative of the unique set containing x.
Linked list representation of disjoint sets
(Figure: each set is a linked list — a → b → c → d, e → f → g, h → i, and j. UNION(a, e) appends one list onto the other, giving a → b → c → d → e → f → g.)
Analysis of linked list representation
Linked list representation of the UNION operation requires an average of Θ(n) time per call, because we may be appending a longer list onto a shorter list.
Weighted-union heuristic: Suppose that each list also includes the length of the list and that we always append the smaller list onto the longer.
A sequence of m MAKE-SET, UNION, and FIND-SET operations, n of which are MAKE-SET operations, takes O(m + n lg n) time.
Disjoint set forests
(Figure: each set is a rooted tree; UNION(e, h) makes one root a child of the other.)
Union by rank
(Figure: UNION(e, h) makes the root of the shorter tree a child of the root of the taller tree.)
Path compression
(Figure: FIND-SET(b) makes every node on the find path a direct child of the root.)
Disjoint set forests
MAKE-SET(x)
1. p[x] ← x
2. rank[x] ← 0

FIND-SET(x)
1. if x ≠ p[x]
2.    then p[x] ← FIND-SET(p[x])
3. return p[x]
Disjoint set forests
UNION(x, y)
1. LINK(FIND-SET(x), FIND-SET(y))

LINK(x, y)
1. if rank[x] > rank[y]
2.    then p[y] ← x
3.    else p[x] ← y
4.         if rank[x] = rank[y]
5.            then rank[y] ← rank[y] + 1
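The four operations above can be bundled into one small Python class; the driver replays the edge sequence from the connected-components example.

```python
class DisjointSets:
    """Disjoint-set forest with union by rank and path compression."""
    def __init__(self):
        self.p = {}
        self.rank = {}

    def make_set(self, x):
        self.p[x] = x
        self.rank[x] = 0

    def find_set(self, x):
        if self.p[x] != x:
            self.p[x] = self.find_set(self.p[x])   # path compression
        return self.p[x]

    def union(self, x, y):
        self.link(self.find_set(x), self.find_set(y))

    def link(self, x, y):                          # union by rank
        if self.rank[x] > self.rank[y]:
            self.p[y] = x
        else:
            self.p[x] = y
            if self.rank[x] == self.rank[y]:
                self.rank[y] += 1

ds = DisjointSets()
for v in 'abcdefghij':
    ds.make_set(v)
# Edge sequence from the connected-components example.
for u, v in [('b','d'), ('e','g'), ('a','c'), ('h','i'),
             ('a','b'), ('e','f'), ('b','c')]:
    if ds.find_set(u) != ds.find_set(v):
        ds.union(u, v)
```

After processing all edges, two vertices share a representative exactly when they lie in the same connected component.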
Union by rank and path compression
When we use both union by rank and path compression, the worst-case running time is O(m α(n)), where α(n) is a very slowly growing function:

α(n) = 0 for 0 ≤ n ≤ 2,
       1 for n = 3,
       2 for 4 ≤ n ≤ 7,
       3 for 8 ≤ n ≤ 2047,
       4 for 2048 ≤ n ≤ A₄(1) >> 10⁸⁰,

where A_k(j) = j + 1 if k = 0, and A_k(j) = A_{k−1}^{(j+1)}(j) (the function A_{k−1} iterated j + 1 times) if k ≥ 1.
Minimum spanning tree
(Figure: a weighted undirected graph on vertices a–i with edge weights 1, 2, 2, 4, 4, 6, 7, 7, 8, 8, 9, 10, 11, 14.)
Minimum spanning tree
(Figure: a minimum spanning tree of the same graph, highlighted.)
Total weight = 1 + 2 + 2 + 4 + 4 + 7 + 8 + 9 = 37
Minimum spanning tree
Input: A connected, undirected graph G = (V, E) with weight function w: E → ℝ.
Output: A spanning tree T — a tree that connects all vertices — of minimum weight:
w(T) = Σ_{(u,v)∈T} w(u, v).
Destroy cycles
(Figure sequence: repeatedly find a cycle and delete its heaviest edge until no cycle remains; the surviving edges form a spanning tree.)
Total weight = 1 + 2 + 2 + 4 + 4 + 7 + 8 + 9 = 37
Avoid cycles
(Figure: grow the tree edge by edge, skipping any edge that would create a cycle.)
Total weight = 1 + 2 + 2 + 4 + 4 + 7 + 8 + 9 = 37
Kruskal's algorithm
MST-KRUSKAL(G, w)
1. A ← ∅
2. for each vertex v ∈ V[G]
3.     do MAKE-SET(v)
4. sort the edges of E into nondecreasing order by weight w.
5. for each edge (u, v) ∈ E, taken in nondecreasing order by weight
6.     do if FIND-SET(u) ≠ FIND-SET(v)
7.        then A ← A ∪ {(u, v)}
8.             UNION(u, v)
9. return A
Running time is O(E lg E)
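MST-KRUSKAL can be sketched with an inline union-find (here with path halving rather than full path compression, which changes nothing asymptotically). The edge list is an assumed reading of the slides' a–i figure.

```python
def mst_kruskal(vertices, edges):
    """Kruskal's algorithm; edges is a list of (weight, u, v) tuples."""
    parent = {v: v for v in vertices}
    rank = {v: 0 for v in vertices}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(x, y):
        x, y = find(x), find(y)
        if rank[x] > rank[y]:
            parent[y] = x
        else:
            parent[x] = y
            if rank[x] == rank[y]:
                rank[y] += 1

    A = []
    for w, u, v in sorted(edges):           # nondecreasing order by weight
        if find(u) != find(v):
            A.append((u, v, w))
            union(u, v)
    return A

# Weighted graph (assumed from the figure).
vertices = 'abcdefghi'
edges = [(4,'a','b'), (8,'a','h'), (8,'b','c'), (11,'b','h'), (7,'c','d'),
         (4,'c','f'), (2,'c','i'), (9,'d','e'), (14,'d','f'), (10,'e','f'),
         (2,'f','g'), (1,'g','h'), (6,'g','i'), (7,'h','i')]
mst = mst_kruskal(vertices, edges)
total = sum(w for _, _, w in mst)
```

The sort dominates, giving the O(E lg E) bound; the union-find work is nearly linear.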
Optimal substructure
Minimum spanning tree T of G = (V, E).
Remove any edge (u, v) ∈ T. Then, T is partitioned into two subtrees T1 and T2.
Theorem. The subtree T1 is an MST of G1 = (V1, E1), the subgraph of G induced by the vertices of T1:
V1 = vertices of T1,
E1 = { (x, y) ∈ E : x, y ∈ V1 }.
Similarly for T2.
Proof of optimal substructure
Proof. Cut and paste:
w(T) = w(u, v) + w(T1) + w(T2).
If T1′ were a lower-weight spanning tree than T1 for G1, then T′ = {(u, v)} ∪ T1′ ∪ T2 would be a lower-weight spanning tree than T for G.
Do we also have overlapping subproblems? Yes!
Great, then dynamic programming may work!
But minimum spanning tree exhibits another powerful property which leads to an even more efficient algorithm.
Hallmark for "greedy" algorithms
Greedy-choice property: a locally optimal choice is globally optimal.
Theorem. Let T be the minimum spanning tree of G = (V, E), and let A ⊆ V. Suppose that (u, v) ∈ E is the least-weight edge connecting A to V − A. Then, (u, v) ∈ T.
Proof of theorem
Proof. Suppose (u, v) ∉ T. Cut and paste.
(Figure: tree T with u ∈ A and v ∈ V − A; (u, v) is the least-weight edge connecting A to V − A.)
Consider the unique simple path from u to v in T.
Swap (u, v) with the first edge on this path that connects a vertex in A to a vertex in V − A.
A lighter-weight spanning tree than T results. Contradiction.
Example of Prim's algorithm
(Figure sequence: starting from a single vertex of the a–i graph, Prim's algorithm repeatedly adds the least-weight edge connecting the current tree to a vertex outside it, ending with the minimum spanning tree of total weight 37.)
Prim's algorithm
IDEA: Maintain V − A as a priority queue Q. Key each vertex in Q with the weight of the least-weight edge connecting it to a vertex in A. The minimum spanning tree A for G is thus A = { (v, π[v]) : v ∈ V − {r} }.
MST-PRIM(G, w, r)
1. for each u ∈ V[G]
2.     do key[u] ← ∞
3.        π[u] ← NIL
4. key[r] ← 0
5. Q ← V[G]
6. while Q ≠ ∅
7.     do u ← EXTRACT-MIN(Q)
8.        for each v ∈ Adj[u]
9.            do if v ∈ Q and w(u, v) < key[v]
10.              then π[v] ← u
11.                   key[v] ← w(u, v)
Analysis of Prim algorithm
Lines 1–5 take Θ(V) time in total. The while loop executes |V| times; each iteration performs one EXTRACT-MIN, and the inner for loop runs degree(u) times, giving Θ(E) implicit DECREASE-KEYs.
Time = Θ(V) · Time_EXTRACT-MIN + Θ(E) · Time_DECREASE-KEY

Q            Time_EXTRACT-MIN   Time_DECREASE-KEY   Total
Array        O(V)               O(1)                O(V²)
Binary heap  O(lg V)            O(lg V)             O(E lg V)

Running time of Prim's algorithm is O(E lg V)
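A Python sketch of MST-PRIM with a binary heap: since `heapq` has no DECREASE-KEY, stale heap entries are simply skipped on pop (lazy deletion), which keeps the O(E lg V) bound. The weighted graph is an assumed reading of the a–i figure.

```python
import heapq

def mst_prim(adj, r):
    """Prim's algorithm; adj maps each vertex to a list of (neighbor, weight)."""
    key = {u: float('inf') for u in adj}
    pi = {u: None for u in adj}
    key[r] = 0
    pq = [(0, r)]
    in_tree = set()
    while pq:
        _, u = heapq.heappop(pq)
        if u in in_tree:
            continue                       # stale entry; skip it
        in_tree.add(u)
        for v, w in adj[u]:
            if v not in in_tree and w < key[v]:
                key[v] = w                 # implicit DECREASE-KEY via reinsertion
                pi[v] = u
                heapq.heappush(pq, (w, v))
    return pi, key

# Weighted graph (assumed from the figure).
adj = {
    'a': [('b',4), ('h',8)],
    'b': [('a',4), ('c',8), ('h',11)],
    'c': [('b',8), ('d',7), ('f',4), ('i',2)],
    'd': [('c',7), ('e',9), ('f',14)],
    'e': [('d',9), ('f',10)],
    'f': [('c',4), ('d',14), ('e',10), ('g',2)],
    'g': [('f',2), ('h',1), ('i',6)],
    'h': [('a',8), ('b',11), ('g',1), ('i',7)],
    'i': [('c',2), ('g',6), ('h',7)],
}
pi, key = mst_prim(adj, 'a')
total = sum(key[v] for v in adj if v != 'a')
```

When a vertex leaves the queue, key[v] is exactly the weight of its tree edge (π[v], v), so summing the keys gives the MST weight.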
Pudong, Shanghai
(Figure: a road map, motivating the shortest-paths problem.)
Shortest paths
(Figure: a weighted digraph on vertices s, t, x, y, z with edge weights 10, 5, 1, 2, 3, 9, 4, 6, 7, 2.)
Shortest paths problem
Consider a directed graph G = (V, E), with weight function w: E → ℝ mapping edges to real-valued weights. The weight of path p = v1 → v2 → … → vk is defined to be
w(p) = Σ_{i=1}^{k−1} w(v_i, v_{i+1}).
We define the shortest-path weight from u to v by
δ(u, v) = min{ w(p) : u ⇝ v }  if there is a path from u to v,
          ∞                     otherwise.
Optimal substructure
Theorem. A subpath of a shortest path is a shortest path.
Proof. Cut and paste:
(Figure: a shortest path from u to v containing a subpath from x to y; if that subpath were not shortest, splicing in a shorter x-to-y path would shorten the whole path.)
Single-source shortest paths
Problem. From a given source vertex s ∈ V, find the shortest-path weights δ(s, v) for all v ∈ V. If all edge weights w(u, v) are nonnegative, all shortest-path weights must exist.
IDEA: Greedy.
1. Maintain a set S of vertices whose shortest-path distances from s are known.
2. At each step add to S the vertex v ∈ V − S whose distance estimate from s is minimal.
3. Update the distance estimates of vertices adjacent to v.
Example of Dijkstra's algorithm
(Figure sequence: Dijkstra's algorithm on the s, t, x, y, z digraph with source s. Vertices are extracted in the order s, y, z, t, x, and the final distances are d[s] = 0, d[y] = 5, d[z] = 7, d[t] = 8, d[x] = 9.)
Dijkstra's algorithm
DIJKSTRA(G, w, s)
1. for each vertex v ∈ V[G]
2.     do d[v] ← ∞
3.        π[v] ← NIL
4. d[s] ← 0
5. S ← ∅
6. Q ← V[G]
7. while Q ≠ ∅
8.     do u ← EXTRACT-MIN(Q)
9.        S ← S ∪ {u}
10.       for each vertex v ∈ Adj[u]
11.           do if d[v] > d[u] + w(u, v)      ⊳ relaxation step (implicit DECREASE-KEY)
12.              then d[v] ← d[u] + w(u, v)
13.                   π[v] ← u
Correctness of Dijkstra's algorithm
Lemma. Initializing d[s] ← 0 and d[v] ← ∞ for all v ∈ V − {s} establishes d[v] ≥ δ(s, v) for all v ∈ V, and this invariant is maintained over any sequence of relaxation steps.
Proof. Suppose not. Let v be the first vertex for which d[v] < δ(s, v), and let u be the vertex that caused d[v] to change: d[v] = d[u] + w(u, v). Then,
d[v] < δ(s, v)             supposition
     ≤ δ(s, u) + δ(u, v)   triangle inequality
     ≤ δ(s, u) + w(u, v)   sh. path ≤ specific path
     ≤ d[u] + w(u, v)      v is first violation
Contradiction.
Correctness of Dijkstra's algorithm
Lemma. Let u be v's predecessor on a shortest path from s to v. Then, if d[u] = δ(s, u) and edge (u, v) is relaxed, we have d[v] = δ(s, v) after the relaxation.
Proof. Observe that δ(s, v) = δ(s, u) + w(u, v). Suppose that d[v] > δ(s, v) before the relaxation. (Otherwise, we're done.) Then, the test d[v] > d[u] + w(u, v) succeeds, because d[v] > δ(s, v) = δ(s, u) + w(u, v) = d[u] + w(u, v), and the algorithm sets d[v] = d[u] + w(u, v) = δ(s, v).
Correctness of Dijkstra's algorithm
Theorem. Dijkstra's algorithm terminates with d[v] = δ(s, v) for all v ∈ V.
Proof. It suffices to show that d[v] = δ(s, v) for every v ∈ V when v is added to S. Suppose u is the first vertex added to S for which d[u] > δ(s, u). Let y be the first vertex in V − S along a shortest path from s to u, and let x be its predecessor:
(Figure: just before adding u, a shortest path s ⇝ x → y ⇝ u with subpaths p1 and p2, where x ∈ S and y ∈ V − S.)
Correctness of Dijkstra's algorithm
(Figure: the same shortest path s ⇝ x → y ⇝ u.)
Since u is the first vertex violating the claimed invariant, we have d[x] = δ(s, x). When x was added to S, the edge (x, y) was relaxed, which implies that d[y] = δ(s, y) ≤ δ(s, u) < d[u]. But, d[u] ≤ d[y] by our choice of u. Contradiction.
Analysis of Dijkstra's algorithm
The while loop executes |V| times; each iteration performs one EXTRACT-MIN, and the inner for loop runs degree(u) times, for Θ(E) implicit DECREASE-KEYs over the whole run.
Time = Θ(V) · Time_EXTRACT-MIN + Θ(E) · Time_DECREASE-KEY
Analysis of Dijkstra's algorithm
Time = Θ(V) · Time_EXTRACT-MIN + Θ(E) · Time_DECREASE-KEY

Q            Time_EXTRACT-MIN   Time_DECREASE-KEY   Total
Array        O(V)               O(1)                O(V²)
Binary heap  O(lg V)            O(lg V)             O(E lg V)

Running time of Dijkstra's algorithm is O(E lg V)
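A Python sketch of DIJKSTRA with a binary heap, again using reinsertion in place of DECREASE-KEY. The weighted digraph below is an assumed reading of the s, t, x, y, z example.

```python
import heapq

def dijkstra(adj, s):
    """Dijkstra's algorithm; adj maps each vertex to a list of (neighbor, weight)."""
    d = {u: float('inf') for u in adj}
    pi = {u: None for u in adj}
    d[s] = 0
    pq = [(0, s)]
    done = set()                           # the set S of finished vertices
    while pq:
        _, u = heapq.heappop(pq)
        if u in done:
            continue                       # stale entry; skip it
        done.add(u)
        for v, w in adj[u]:
            if d[v] > d[u] + w:            # relaxation step
                d[v] = d[u] + w
                pi[v] = u
                heapq.heappush(pq, (d[v], v))
    return d, pi

# Weighted digraph (assumed from the figure).
adj = {
    's': [('t',10), ('y',5)],
    't': [('x',1), ('y',2)],
    'x': [('z',4)],
    'y': [('t',3), ('x',9), ('z',2)],
    'z': [('s',7), ('x',6)],
}
d, pi = dijkstra(adj, 's')
```

The results match the worked example: vertices leave the queue in the order s, y, z, t, x.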
Unweighted graphs
Suppose that w(u, v) = 1 for all (u, v) ∈ E. Can Dijkstra's algorithm be improved?
Breadth-first search.
Use a simple FIFO queue instead of a priority queue.
1. while Q ≠ ∅
2.     do u ← DEQUEUE(Q)
3.        for each vertex v ∈ Adj[u]
4.            do if d[v] = ∞
5.               then d[v] ← d[u] + 1
6.                    ENQUEUE(Q, v)
Running time is O(V + E).
Example of breadth-first search
(Figure sequence: BFS from s on the unweighted version of the s, t, x, y, z graph. The queue evolves ⟨s⟩ → ⟨t, y⟩ → ⟨y, x⟩ → ⟨x, z⟩ → ⟨z⟩ → ∅, giving d[s] = 0, d[t] = d[y] = 1, d[x] = d[z] = 2.)
Shortest paths in weighted dag
What is a dag?
A dag is a directed acyclic graph.
DAG-SHORTEST-PATH(G, w, s)
1. topologically sort the vertices of G
2. for each vertex v ∈ V[G]
3.     do d[v] ← ∞
4.        π[v] ← NIL
5. d[s] ← 0
6. for each vertex u, taken in topologically sorted order
7.     do for each vertex v ∈ Adj[u]
8.        do if d[v] > d[u] + w(u, v)
9.           then d[v] ← d[u] + w(u, v)
10.               π[v] ← u
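Relaxing out-edges in topological order handles negative weights on a dag in a single pass. A Python sketch (the dag and its weights are an assumed reading of the r, s, t, x, y, z figure; the topological order is supplied explicitly for brevity):

```python
def dag_shortest_paths(adj, order, s):
    """Relax each vertex's out-edges in topologically sorted order."""
    d = {u: float('inf') for u in adj}
    pi = {u: None for u in adj}
    d[s] = 0
    for u in order:                        # topologically sorted vertices
        for v, w in adj[u]:
            if d[v] > d[u] + w:            # relaxation step
                d[v] = d[u] + w
                pi[v] = u
    return d, pi

# Weighted dag (assumed from the figure); 'rstxyz' is a topological order.
adj = {
    'r': [('s',5), ('t',3)],
    's': [('t',2), ('x',6)],
    't': [('x',7), ('y',4), ('z',2)],
    'x': [('y',-1), ('z',1)],
    'y': [('z',-2)],
    'z': [],
}
d, pi = dag_shortest_paths(adj, 'rstxyz', 's')
```

Every edge is relaxed exactly once, so the running time is O(V + E) even with negative edge weights.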
Example of dag shortest paths
(Figure sequence: vertices r, s, t, x, y, z in topological order, with source s. Relaxing each vertex's out-edges in that order yields the final distances d[r] = ∞, d[s] = 0, d[t] = 2, d[x] = 6, d[y] = 5, d[z] = 3.)
Positive-weight cycle
(Figure: a cycle of total weight > 0 between u and v.)
Shortest path cannot contain a positive-weight cycle.

0-weight cycle
(Figure: a cycle of total weight = 0 between u and v.)
A 0-weight cycle can be removed from the shortest path.

Negative-weight cycle
(Figure: a cycle of total weight < 0 between u and v.)
If a graph contains a negative-weight cycle, then some shortest paths may not exist.
Algorithm for negative-weight cycle
Bellman-Ford algorithm: Finds all shortest-path lengths from a source s ∈ V to all v ∈ V, or determines that a negative-weight cycle exists.
Example of Bellman-Ford algorithm
(Figure sequence: the digraph on s, t, x, y, z with edge weights, as read from the figure, w(s,t) = 6, w(s,y) = 7, w(t,x) = 5, w(t,y) = 8, w(t,z) = −4, w(x,t) = −2, w(y,x) = −3, w(y,z) = 9, w(z,s) = 2, w(z,x) = 7. After initialization, the edges are relaxed in a fixed order (circled ① through ⑩) once per pass; successive passes improve the estimates until, at the end of pass 4, d[s] = 0, d[t] = 2, d[x] = 4, d[y] = 7, d[z] = −2.)
Bellman-Ford algorithm
BELLMAN-FORD(G, w, s)
1. for each vertex v ∈ V[G]
2.     do d[v] ← ∞
3.        π[v] ← NIL
4. d[s] ← 0
5. for i ← 1 to |V[G]| − 1
6.     do for each edge (u, v) ∈ E[G]
7.        do if d[v] > d[u] + w(u, v)
8.           then d[v] ← d[u] + w(u, v)
9.                π[v] ← u
10. for each edge (u, v) ∈ E[G]
11.     do if d[v] > d[u] + w(u, v)
12.        then return FALSE
13. return TRUE
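BELLMAN-FORD in Python: |V| − 1 full relaxation passes followed by one checking pass. The edge list is an assumed reading of the worked example's figure.

```python
def bellman_ford(vertices, edges, s):
    """Bellman-Ford; returns the distance map, or None if a reachable
    negative-weight cycle exists."""
    d = {v: float('inf') for v in vertices}
    pi = {v: None for v in vertices}
    d[s] = 0
    for _ in range(len(vertices) - 1):     # |V| - 1 passes
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
                pi[v] = u
    for u, v, w in edges:                  # checking pass
        if d[u] + w < d[v]:
            return None                    # negative-weight cycle detected
    return d

# Weighted digraph (assumed from the figure).
vertices = 'stxyz'
edges = [('s','t',6), ('s','y',7), ('t','x',5), ('t','y',8), ('t','z',-4),
         ('x','t',-2), ('y','x',-3), ('y','z',9), ('z','s',2), ('z','x',7)]
d = bellman_ford(vertices, edges, 's')
```

A two-vertex graph with a negative cycle (weights 1 and −3) makes the checking pass fail, so the function returns None.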
Correctness of Bellman-Ford algorithm
Theorem. If G = (V, E) contains no negative-weight cycles, then after the Bellman-Ford algorithm executes, d[v] = δ(s, v) for all v ∈ V.
Proof. Let v ∈ V be any vertex, and consider a shortest path p from s to v with the minimum number of edges.
p: s = v0 → v1 → v2 → v3 → … → vk = v
Since p is a shortest path, we have
δ(s, v_i) = δ(s, v_{i−1}) + w(v_{i−1}, v_i).
Correctness of Bellman-Ford algorithm
p: s = v0 → v1 → v2 → v3 → … → vk = v
Initially, d[v0] = 0 = δ(s, v0).
After 1 pass through E, we have d[v1] = δ(s, v1).
After 2 passes through E, we have d[v2] = δ(s, v2).
...
After k passes through E, we have d[vk] = δ(s, vk).
Since G contains no negative-weight cycles, p is simple, and the longest simple path has ≤ |V| − 1 edges.
If a value d[v] fails to converge after |V| − 1 passes, there exists a negative-weight cycle in G reachable from s.
Correctness of Bellman-Ford algorithm
Conversely, suppose that graph G contains a negative-weight cycle that is reachable from the source s; let this cycle be c = v0 → v1 → … → vk, where v0 = vk. Then,
Σ_{i=1}^{k} w(v_{i−1}, v_i) < 0.
If the Bellman-Ford algorithm returns TRUE, then d[v_i] ≤ d[v_{i−1}] + w(v_{i−1}, v_i) for i = 1, 2, …, k, and summing around the cycle gives
Σ_{i=1}^{k} d[v_i] ≤ Σ_{i=1}^{k} (d[v_{i−1}] + w(v_{i−1}, v_i))
                  = Σ_{i=1}^{k} d[v_{i−1}] + Σ_{i=1}^{k} w(v_{i−1}, v_i).
Correctness of Bellman-Ford algorithm
Σ_{i=1}^{k} d[v_i] ≤ Σ_{i=1}^{k} d[v_{i−1}] + Σ_{i=1}^{k} w(v_{i−1}, v_i).
Since v0 = vk, we have Σ_{i=1}^{k} d[v_i] = Σ_{i=1}^{k} d[v_{i−1}], and thus
0 ≤ Σ_{i=1}^{k} w(v_{i−1}, v_i),
which contradicts Σ_{i=1}^{k} w(v_{i−1}, v_i) < 0.
Example of negative-weight cycle
(Figure sequence: a four-vertex graph on s, t, x, y containing a negative-weight cycle (weights 2 and −8 around it); the relaxation order is circled ① through ④. The distance estimates keep decreasing pass after pass, and the final check — 5 > 2 + 1 — fails, so Bellman-Ford returns FALSE.)
Single-source shortest-paths algorithms

Algorithm            | Unweighted | Positive | Negative | Cycle | Time
Breadth-first search | ○          | ×        | ×        | ○     | O(V + E)
Dag shortest paths   | ○          | ○        | ○        | ×     | O(V + E)
Dijkstra             | ○          | ○        | ×        | ○     | O(E lg V)
Bellman-Ford         | ○          | ○        | ○        | ○     | O(VE)
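The Bellman-Ford behavior proved above — convergence within |V| − 1 passes, plus one extra pass that detects a reachable negative-weight cycle — can be sketched in Python. The edge-list encoding and the function name are my assumptions, not part of the slides:

```python
def bellman_ford(n, edges, s):
    """Single-source shortest paths on vertices 0..n-1.

    edges: list of (u, v, w) triples.
    Returns (d, ok); ok is False when a negative-weight cycle
    is reachable from s, in which case d is not meaningful.
    """
    INF = float("inf")
    d = [INF] * n
    d[s] = 0
    # |V| - 1 passes: after pass k, d[v] is exact for every vertex
    # whose shortest path from s uses at most k edges.
    for _ in range(n - 1):
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    # A further successful relaxation means some d[v] failed to
    # converge, i.e. a negative-weight cycle is reachable from s.
    for u, v, w in edges:
        if d[u] + w < d[v]:
            return d, False
    return d, True
```

Relaxing every edge in each pass, rather than searching for a useful edge, is what makes the O(VE) bound immediate.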
All-pairs shortest paths
Input: Digraph G = (V, E), where V = {1, 2, …, n}, with edge-weight function w: E → ℝ.
Output: n × n matrix of shortest-path lengths δ(i, j) for all i, j ∈ V.

IDEA: Run Bellman-Ford once from each vertex.
Time = O(V²E). For a dense graph (n² edges), this is Θ(n⁴) time in the worst case.
Dynamic programming
Consider the n × n adjacency matrix A = (aij) of the digraph, and define
    dij(m) = weight of a shortest path from i to j that uses at most m edges.
Claim: We have
    dij(0) = 0 if i = j,  ∞ if i ≠ j,
and for m = 1, 2, …, n − 1,
    dij(m) = mink{dik(m−1) + akj}.
Proof of claim
    dij(m) = mink{dik(m−1) + akj}.
[Figure: a shortest path from i to j using at most m edges consists of a path from i to some vertex k using at most m − 1 edges, followed by the edge (k, j); the minimum ranges over all candidate k's.]
for k ← 1 to n
    do if dij > dik + akj
        then dij ← dik + akj        ▹ Relaxation!
Note: No negative-weight cycles implies
    δ(i, j) = dij(n−1) = dij(n) = dij(n+1) = …
All-pairs shortest paths example
[Figure: digraph on vertices 1–5 with edges 1→2 (3), 1→3 (8), 1→5 (−4), 2→4 (1), 2→5 (7), 3→2 (4), 4→1 (2), 4→3 (−5), 5→4 (6).]

         [  0   3   8   ∞  −4 ]
         [  ∞   0   ∞   1   7 ]
D(1)  =  [  ∞   4   0   ∞   ∞ ]
         [  2   ∞  −5   0   ∞ ]
         [  ∞   ∞   ∞   6   0 ]

         [  ∅   1   1   ∅   1 ]
         [  ∅   ∅   ∅   2   2 ]
Π(1)  =  [  ∅   3   ∅   ∅   ∅ ]
         [  4   ∅   4   ∅   ∅ ]
         [  ∅   ∅   ∅   5   ∅ ]
All-pairs shortest paths example
With the same graph and D(1) as above,
    dij(m) = mink{dik(m−1) + akj}
    d14(2) = mink{d1k(1) + ak4}
           = min{(d11(1) + a14), (d12(1) + a24), (d13(1) + a34), (d14(1) + a44), (d15(1) + a54)}
           = min{(0 + ∞), (3 + 1), (8 + ∞), (∞ + 0), (−4 + 6)} = 2.
All-pairs shortest paths example (paths of at most 2 edges)

         [  0   3   8   2  −4 ]
         [  3   0  −4   1   7 ]
D(2)  =  [  ∞   4   0   5  11 ]
         [  2  −1  −5   0  −2 ]
         [  8   ∞   1   6   0 ]

         [  ∅   1   1   5   1 ]
         [  4   ∅   4   2   2 ]
Π(2)  =  [  ∅   3   ∅   2   2 ]
         [  4   3   4   ∅   1 ]
         [  4   ∅   4   5   ∅ ]
All-pairs shortest paths example (paths of at most 3 edges)

         [  0   3  −3   2  −4 ]
         [  3   0  −4   1  −1 ]
D(3)  =  [  7   4   0   5  11 ]
         [  2  −1  −5   0  −2 ]
         [  8   5   1   6   0 ]

         [  ∅   1   4   5   1 ]
         [  4   ∅   4   2   1 ]
Π(3)  =  [  4   3   ∅   2   2 ]
         [  4   3   4   ∅   1 ]
         [  4   3   4   5   ∅ ]
All-pairs shortest paths example (paths of at most 4 edges)

         [  0   1  −3   2  −4 ]
         [  3   0  −4   1  −1 ]
D(4)  =  [  7   4   0   5   3 ]
         [  2  −1  −5   0  −2 ]
         [  8   5   1   6   0 ]

         [  ∅   3   4   5   1 ]
         [  4   ∅   4   2   1 ]
Π(4)  =  [  4   3   ∅   2   1 ]
         [  4   3   4   ∅   1 ]
         [  4   3   4   5   ∅ ]
All-pairs shortest paths example
Reading a path out of Π(4): the shortest path from 1 to 2 is
    Path(1, 2) = 1 → 5 → 4 → 3 → 2,
with weight −4 + 6 + (−5) + 4 = 1 = d12(4).
All-pairs shortest paths example
Similarly, the shortest path from 3 to 5 is
    Path(3, 5) = 3 → 2 → 4 → 1 → 5,
with weight 4 + 1 + 2 + (−4) = 3 = d35(4).
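Paths such as Path(3, 5) are read off the predecessor matrix by walking backward from the target. A minimal sketch in Python (0-based vertices and None for ∅ are my encoding assumptions):

```python
def extract_path(Pi, i, j):
    """Reconstruct the shortest i -> j path from predecessor matrix Pi.

    Pi[i][j] is the predecessor of j on a shortest path from i,
    or None when i == j (or when no path exists).
    """
    path = [j]
    # Follow predecessors from j back toward i, then reverse.
    while Pi[i][j] is not None:
        j = Pi[i][j]
        path.append(j)
    return path[::-1]
```

For example, with vertices renumbered 0–4, row 1 of Π(4) above, (∅, 3, 4, 5, 1), becomes [None, 2, 3, 4, 0], and extracting the path to vertex 2 yields [0, 4, 3, 2, 1], i.e. 1 → 5 → 4 → 3 → 2.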
Matrix multiplication
All-pairs shortest paths for graph G = (V, E).
Dynamic programming: compute D(|V|−1) via
    dij(m) = mink{dik(m−1) + akj}.
Observe that if we make the substitutions
    D(m−1) → A,
    D(0)   → B,
    D(m)   → C,
    min    → +,
    +      → ·,
the problem changes to computing C = A · B, where C, A, and B are n × n matrices:
    cij = ∑k=1..n aik · bkj.
Matrix multiplication
Consequently, we can compute
    D(1) = D(0) · D(0)
    D(2) = D(1) · D(0)
    ...
    D(n−1) = D(n−2) · D(0),
yielding D(n−1) = (δ(i, j)). Time = Θ(n · n³) = Θ(n⁴). No better than running Bellman-Ford once from each vertex.
Improved matrix multiplication
Repeated squaring: D(2k) = D(k) · D(k).
Compute D(1), D(2), D(4), …, D(2⌈lg(n−1)⌉): O(lg n) squarings.
Note: D(n−1) = D(n) = D(n+1) = …
Time = Θ(n³ lg n).
To detect negative-weight cycles, check the diagonal for negative values in O(n) additional time.
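The min-plus "product" and repeated squaring can be sketched as follows; the list-of-lists matrix encoding and function names are my assumptions:

```python
INF = float("inf")

def min_plus(A, B):
    """'Multiply' under (min, +): C[i][j] = min_k (A[i][k] + B[k][j])."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n))
             for j in range(n)]
            for i in range(n)]

def apsp_repeated_squaring(W):
    """All-pairs shortest paths in Theta(n^3 lg n).

    W[i][j]: edge weight, 0 on the diagonal, INF if no edge.
    D(1) = W and D(2m) = D(m) * D(m) under (min, +); since
    D(n-1) = D(n) = ..., O(lg n) squarings suffice.
    """
    n = len(W)
    D, m = W, 1
    while m < n - 1:
        D = min_plus(D, D)     # D(2m) from D(m)
        m *= 2
    return D
```

Overshooting n − 1 by squaring past it is harmless precisely because D(n−1) = D(n) = D(n+1) = … when there are no negative-weight cycles.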
Floyd-Warshall algorithm
Also dynamic programming, but faster!
Define dij(k) = weight of a shortest path from i to j with intermediate vertices belonging to the set {1, 2, …, k}.
Thus, δ(i, j) = dij(n). Also, dij(0) = wij.
Floyd-Warshall algorithm
[Figure: a shortest path p from i to j with all intermediate vertices in {1, 2, …, k} either avoids vertex k, or passes through k exactly once, splitting into two subpaths whose intermediate vertices lie in {1, 2, …, k − 1}.]
    dij(k) = wij                                        if k = 0,
    dij(k) = min(dij(k−1), dik(k−1) + dkj(k−1))         if k ≥ 1.
Example of Floyd-Warshall algorithm (initialization)

         [  0   3   8   ∞  −4 ]
         [  ∞   0   ∞   1   7 ]
D(0)  =  [  ∞   4   0   ∞   ∞ ]
         [  2   ∞  −5   0   ∞ ]
         [  ∞   ∞   ∞   6   0 ]

         [  ∅   1   1   ∅   1 ]
         [  ∅   ∅   ∅   2   2 ]
Π(0)  =  [  ∅   3   ∅   ∅   ∅ ]
         [  4   ∅   4   ∅   ∅ ]
         [  ∅   ∅   ∅   5   ∅ ]
Example of Floyd-Warshall algorithm (after k = 1)

         [  0   3   8   ∞  −4 ]
         [  ∞   0   ∞   1   7 ]
D(1)  =  [  ∞   4   0   ∞   ∞ ]
         [  2   5  −5   0  −2 ]
         [  ∞   ∞   ∞   6   0 ]

         [  ∅   1   1   ∅   1 ]
         [  ∅   ∅   ∅   2   2 ]
Π(1)  =  [  ∅   3   ∅   ∅   ∅ ]
         [  4   1   4   ∅   1 ]
         [  ∅   ∅   ∅   5   ∅ ]
Example of Floyd-Warshall algorithm (after k = 2)

         [  0   3   8   4  −4 ]
         [  ∞   0   ∞   1   7 ]
D(2)  =  [  ∞   4   0   5  11 ]
         [  2   5  −5   0  −2 ]
         [  ∞   ∞   ∞   6   0 ]

         [  ∅   1   1   2   1 ]
         [  ∅   ∅   ∅   2   2 ]
Π(2)  =  [  ∅   3   ∅   2   2 ]
         [  4   1   4   ∅   1 ]
         [  ∅   ∅   ∅   5   ∅ ]
Example of Floyd-Warshall algorithm (after k = 3)

         [  0   3   8   4  −4 ]
         [  ∞   0   ∞   1   7 ]
D(3)  =  [  ∞   4   0   5  11 ]
         [  2  −1  −5   0  −2 ]
         [  ∞   ∞   ∞   6   0 ]

         [  ∅   1   1   2   1 ]
         [  ∅   ∅   ∅   2   2 ]
Π(3)  =  [  ∅   3   ∅   2   2 ]
         [  4   3   4   ∅   1 ]
         [  ∅   ∅   ∅   5   ∅ ]
Example of Floyd-Warshall algorithm (after k = 4)

         [  0   3  −1   4  −4 ]
         [  3   0  −4   1  −1 ]
D(4)  =  [  7   4   0   5   3 ]
         [  2  −1  −5   0  −2 ]
         [  8   5   1   6   0 ]

         [  ∅   1   4   2   1 ]
         [  4   ∅   4   2   1 ]
Π(4)  =  [  4   3   ∅   2   1 ]
         [  4   3   4   ∅   1 ]
         [  4   3   4   5   ∅ ]

Path(1, 3) = 1 → 2 → 4 → 3, with weight 3 + 1 + (−5) = −1 = d13(4).
Example of Floyd-Warshall algorithm (after k = 5)

         [  0   1  −3   2  −4 ]
         [  3   0  −4   1  −1 ]
D(5)  =  [  7   4   0   5   3 ]
         [  2  −1  −5   0  −2 ]
         [  8   5   1   6   0 ]

         [  ∅   3   4   5   1 ]
         [  4   ∅   4   2   1 ]
Π(5)  =  [  4   3   ∅   2   1 ]
         [  4   3   4   ∅   1 ]
         [  4   3   4   5   ∅ ]
Floyd-Warshall algorithm
FLOYD-WARSHALL(W)
1. n ← rows[W]
2. D(0) ← W
3. for k ← 1 to n
4.     do for i ← 1 to n
5.         do for j ← 1 to n
6.             do dij(k) ← min(dij(k−1), dik(k−1) + dkj(k−1))
7. return D(n)
Running time is Θ(n³).
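A direct transcription of FLOYD-WARSHALL in Python. A single matrix updated in place suffices because row k and column k do not change during iteration k; the weight-matrix encoding below is my assumption:

```python
def floyd_warshall(W):
    """All-pairs shortest paths in Theta(n^3).

    W[i][j]: edge weight, 0 on the diagonal, inf if no edge.
    After iteration k, D[i][j] is the weight of a shortest i -> j
    path whose intermediate vertices lie in {0, ..., k}.
    """
    n = len(W)
    D = [row[:] for row in W]          # do not clobber the input
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Either avoid vertex k, or split the path at k.
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D
```

On the 5-vertex example graph from these slides (renumbered 0–4), the first row of the result is [0, 1, −3, 2, −4], matching D(5).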
Graph reweighting
Is it possible to change all negative-weight edges to nonnegative ones? Then we could use Dijkstra's algorithm to compute the shortest paths from each vertex. A reweighting must satisfy two properties:
For all pairs of vertices u, v ∈ V, a path p is a shortest path from u to v using weight function w if and only if p is also a shortest path from u to v using the reweighted weight function ŵ.
For all edges (u, v), the new weight ŵ(u, v) is nonnegative.
Graph reweighting
Theorem. Given a function h: V → ℝ, reweight each edge (u, v) ∈ E by ŵ(u, v) = w(u, v) + h(u) − h(v). Then, for any two vertices, all paths between them are reweighted by the same amount.
Proof. Let p = v0 → v1 → … → vk be a path in G. We have
    ŵ(p) = ∑i=1..k ŵ(vi−1, vi)
         = ∑i=1..k (w(vi−1, vi) + h(vi−1) − h(vi))
         = ∑i=1..k w(vi−1, vi) + h(v0) − h(vk)        (the h-terms telescope)
         = w(p) + h(v0) − h(vk).
Same amount for every path from v0 to vk!
Graph reweighting
[Figure: the example digraph on vertices 1–5, with edge weights 3, 8, −4, 1, 7, 4, 2, −5, 6, some of them negative.]
Graph reweighting
[Figure: a new source vertex v0 is added, with a 0-weight edge (v0, v) to every vertex v of the original graph.]
Run the Bellman-Ford algorithm from vertex v0.
Graph reweighting
[Figure: Bellman-Ford from v0 yields h(vk) = δ(v0, vk) for every vertex: h(1) = 0, h(2) = −1, h(3) = −5, h(4) = 0, h(5) = −4.]
Graph reweighting
[Figure: the reweighted graph; every edge weight ŵ(vi, vj) = w(vi, vj) + h(vi) − h(vj) is now nonnegative: ŵ(1,2) = 4, ŵ(1,3) = 13, ŵ(1,5) = 0, ŵ(2,4) = 0, ŵ(2,5) = 10, ŵ(3,2) = 0, ŵ(4,1) = 2, ŵ(4,3) = 0, ŵ(5,4) = 2.]
h(vk) = δ(v0, vk)
Run Dijkstra's algorithm from each vertex vk.
Graph reweighting
Why is ŵ(u, v) nonnegative?
[Figure: shortest paths from s to u and to v, together with the edge (u, v).]
By the triangle inequality,
    δ(s, v) ≤ δ(s, u) + w(u, v),
so w(u, v) + δ(s, u) − δ(s, v) ≥ 0. With h(u) = δ(s, u) and h(v) = δ(s, v),
    ŵ(u, v) = w(u, v) + h(u) − h(v) ≥ 0.
Johnson's algorithm
JOHNSON(G)
1.  Compute G', where V[G'] = V[G] ∪ {s},
        E[G'] = E[G] ∪ {(s, v) : v ∈ V[G]}, and
        w(s, v) = 0 for all v ∈ V[G]
2.  if BELLMAN-FORD(G', w, s) = FALSE
3.      then print "the input graph contains a negative-weight cycle"
4.      else for each vertex v ∈ V[G']
5.               do set h(v) to the value of δ(s, v)
6.           for each edge (u, v) ∈ E[G']
7.               do ŵ(u, v) ← w(u, v) + h(u) − h(v)
8.           for each vertex u ∈ V[G]
9.               do run DIJKSTRA(G, ŵ, u) to compute δ̂(u, v) for all v ∈ V[G]
10.              for each vertex v ∈ V[G]
11.                  do duv ← δ̂(u, v) + h(v) − h(u)
12. return D
Analysis of Johnson's algorithm
1. Run Bellman-Ford to solve the difference constraints h(v) − h(u) ≤ w(u, v), or determine that a negative-weight cycle exists. Time = O(VE).
2. Run Dijkstra's algorithm from each vertex u ∈ V[G]. Time = O(VE lg V).
3. For each pair (u, v), compute δ(u, v) = δ̂(u, v) + h(v) − h(u). Time = O(V²).
Total time = O(VE lg V).
Johnson's algorithm is particularly suitable for sparse graphs.
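The three steps can be sketched end to end in Python. The edge-list input and the trick of simulating the extra source s by initializing every h-value to 0 (rather than materializing G') are my assumptions:

```python
import heapq

def dijkstra(n, adj, s):
    """Dijkstra with a binary heap; adj[u] lists (v, w) with w >= 0."""
    INF = float("inf")
    d = [INF] * n
    d[s] = 0
    pq = [(0, s)]
    while pq:
        du, u = heapq.heappop(pq)
        if du > d[u]:
            continue            # stale heap entry
        for v, w in adj[u]:
            if du + w < d[v]:
                d[v] = du + w
                heapq.heappush(pq, (d[v], v))
    return d

def johnson(n, edges):
    """All-pairs shortest paths on vertices 0..n-1.

    edges: list of (u, v, w). Returns the distance matrix, or None
    if the graph contains a negative-weight cycle.
    """
    INF = float("inf")
    # 1. Bellman-Ford from the implicit source: starting with h = 0
    #    everywhere is the same as adding s with 0-weight edges.
    h = [0] * n
    for _ in range(n):
        for u, v, w in edges:
            if h[u] + w < h[v]:
                h[v] = h[u] + w
    for u, v, w in edges:
        if h[u] + w < h[v]:
            return None         # negative-weight cycle
    # 2. Reweight: w_hat(u, v) = w(u, v) + h(u) - h(v) >= 0.
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w + h[u] - h[v]))
    # 3. Dijkstra from each vertex, then undo the reweighting.
    D = []
    for u in range(n):
        du = dijkstra(n, adj, u)
        D.append([du[v] + h[v] - h[u] for v in range(n)])
    return D
```

On the slides' example graph the computed h-values are exactly [0, −1, −5, 0, −4], and the reweighted edges match the figure above.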
All-pairs shortest-paths algorithms

Algorithm                                 | Time        | Data structure
Brute-force (Bellman-Ford from each vertex) | O(V²E)    | Adjacency-list or adjacency-matrix
Dynamic programming                       | O(V⁴)       | Adjacency-matrix only
Improved dynamic programming              | O(V³ lg V)  | Adjacency-matrix only
Floyd-Warshall algorithm                  | O(V³)       | Adjacency-matrix only
Johnson's algorithm                       | O(VE lg V)  | Adjacency-list or adjacency-matrix
Flow networks
[Figure: a flow network modeling shipping capacity between cities: source s (Shanghai), sink t (Hongkong), and intermediate vertices u (Hangzhou), v (Ningbo), x (Nanjin), y (Liangyungang), with a capacity labeling each edge.]
Flow networks
Definition. A flow network is a directed graph G = (V, E) with two distinguished vertices: a source s and a sink t. Each edge (u, v) ∈ E has a nonnegative capacity c(u, v). If (u, v) ∉ E, then c(u, v) = 0.
Flow networks
Definition. A flow on G is a function f: V × V → ℝ satisfying the following:
Capacity constraint: For all u, v ∈ V, we require f(u, v) ≤ c(u, v).
Skew symmetry: For all u, v ∈ V, we require f(u, v) = −f(v, u).
Flow conservation: For all u ∈ V − {s, t}, we require ∑v∈V f(u, v) = 0.
A flow on a network
[Figure: the network with each edge labeled "positive flow / capacity"; for example, edge (u, v) carries 12/12 and edge (x, y) carries 11/14.]
Flow conservation at x:
    Flow into x is 8 + 4 = 12.
    Flow out of x is 1 + 11 = 12.
The value of this flow is 11 + 8 = 19.
Maximum-flow problem
Maximum-flow problem. Given a flow network G, find a flow of maximum value on G.
[Figure: the same network carrying a flow of maximum value.]
The maximum flow is 23.
Cuts
Definition. A cut (S, T) of a flow network G = (V, E) is a partition of V such that s ∈ S and t ∈ T. If f is a flow on G, then the flow across the cut is f(S, T).
The maximum flow in a network is bounded by the capacity of a minimum cut of the network. Why?
Cuts of flow networks
[Figure: the example flow network with a cut whose S→T edges are (u, v) and (x, y) and which is also crossed by the edge into x from v's side.]
The net flow across this cut is
    f(u, v) + f(x, v) + f(x, y) = 12 + (−4) + 11 = 19,
and its capacity is
    c(u, v) + c(x, y) = 12 + 14 = 26.
Flow cancellation
Without loss of generality, positive flow goes either from u to v, or from v to u, but not both.
[Figure: flows of 3/5 from u to v and 2/3 from v to u cancel to 1/5 from u to v and 0/3 from v to u; the net flow from u to v in both cases is 1.]
The capacity constraint and flow conservation are preserved by this transformation.
Residual network
Definition. Let f be a flow on G = (V, E). The residual network Gf = (V, Ef) is the graph with strictly positive residual capacities
    cf(u, v) = c(u, v) − f(u, v) > 0.
Edges in Ef admit more flow.
[Figure: G has edge u→v with flow 3/5 and edge v→u with flow 0/1; in Gf, cf(u, v) = 5 − 3 = 2 and cf(v, u) = 1 − (−3) = 4.]
Augmenting paths
Definition. Any path from s to t in Gf is an augmenting path in G with respect to f. The flow value can be increased along an augmenting path p by
    cf(p) = min(u,v)∈p {cf(u, v)}.
Example of maximum-flow problem
[Figures: a run of augmentations on the example network; each slide shows the current flow and the residual network with the chosen augmenting path.]
Augmentation 1: cf(p) = min(16, 12, 9, 14, 4) = 4.
Augmentation 2: cf(p) = min(12, 10, 10, 7, 20) = 7.   Total = 4 + 7 = 11.
Augmentation 3: cf(p) = min(13, 11, 8, 13) = 8.       Total = 4 + 7 + 8 = 19.
Augmentation 4: cf(p) = min(5, 4, 5) = 4.             Total = 4 + 7 + 8 + 4 = 23.
No augmenting path remains: the maximum flow is 23.
Ford-Fulkerson algorithm
FORD-FULKERSON(G, s, t)
1. for each edge (u, v) ∈ E[G]
2.     do f[u, v] ← 0
3.        f[v, u] ← 0
4. while there exists a path p from s to t in the residual network Gf
5.     do cf(p) ← min{cf(u, v) : (u, v) is in p}
6.        for each edge (u, v) in p
7.            do f[u, v] ← f[u, v] + cf(p)
8.               f[v, u] ← −f[u, v]
Analysis of Ford-Fulkerson algorithm
Finding a path in the residual network takes O(V + E) time using either depth-first search or breadth-first search. Let f* denote the maximum flow found by the algorithm. The running time of the Ford-Fulkerson algorithm is O(E |f*|).
Ford-Fulkerson max-flow algorithm
[Figures: a pathological example on vertices s, u, v, t. The four outer edges have large capacities and the middle edge (u, v) has capacity 1. If augmenting paths are chosen badly, the algorithm alternates between s → u → v → t and s → v → u → t; each augmentation pushes only 1 unit of flow, flipping the flow on (u, v) between 1/1 and 0/1.]
Running time is 10,000 on this instance.
Edmonds-Karp algorithm
Edmonds and Karp noticed that many people's implementations of Ford-Fulkerson augment along a breadth-first augmenting path: a shortest path in Gf from s to t where each edge has weight 1. These implementations would always run relatively fast. Since a breadth-first augmenting path can be found in O(V + E) time, their analysis, which provided the first polynomial-time bound on maximum flow, focuses on bounding the number of flow augmentations. The Edmonds-Karp algorithm's running time is O(VE²).
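The breadth-first variant can be sketched in Python. The adjacency-matrix representation of residual capacities is my assumption (fine for small dense examples like the ones in these slides):

```python
from collections import deque

def edmonds_karp(n, cap_edges, s, t):
    """Maximum flow via shortest (BFS) augmenting paths; O(V E^2).

    cap_edges: list of (u, v, c) capacities on vertices 0..n-1.
    """
    # Residual capacities; antiparallel edges simply accumulate.
    cf = [[0] * n for _ in range(n)]
    for u, v, c in cap_edges:
        cf[u][v] += c
    flow = 0
    while True:
        # BFS for a shortest augmenting path in Gf.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cf[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow          # no augmenting path: flow is maximum
        # Bottleneck residual capacity cf(p) along the path.
        b, v = float("inf"), t
        while v != s:
            u = parent[v]
            b = min(b, cf[u][v])
            v = u
        # Augment: subtract b forward, add b of residual backward.
        v = t
        while v != s:
            u = parent[v]
            cf[u][v] -= b
            cf[v][u] += b
            v = u
        flow += b
```

Run on the example network of these slides, the returned value is the maximum flow, 23.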
Any questions?
Xiaoqing Zheng
Fudan University