Dynamic Programming Lecture 16

Upload: myalome

Post on 15-Jul-2015


TRANSCRIPT

Page 1: Dynamic1

Dynamic Programming

Lecture 16

Page 2

Dynamic Programming

Dynamic programming is typically applied to optimization problems. In such problems there can be many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value.

Page 3

Dynamic Programming

Like divide and conquer, Dynamic Programming solves problems by combining solutions to subproblems.

Unlike divide and conquer, the subproblems are not independent: subproblems may share subsubproblems. However, the solution to one subproblem does not affect the solutions to other subproblems of the same problem. (More on this later.)

Dynamic Programming reduces computation by:

1. Solving subproblems in a bottom-up fashion.
2. Storing the solution to a subproblem the first time it is solved.
3. Looking up the solution when the subproblem is encountered again.

Page 4

Dynamic Programming

The development of a dynamic-programming algorithm can be broken into a sequence of four steps.

1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from computed information.

Page 5

Matrix-chain Multiplication

Suppose we have a sequence or chain A1, A2, …, An of n matrices to be multiplied

That is, we want to compute the product A1A2…An

There are many possible ways (parenthesizations) to compute the product

Page 6

Matrix-chain Multiplication

Example: consider the chain A1, A2, A3, A4 of 4 matrices

Let us compute the product A1A2A3A4

There are 5 possible ways:

1. (A1(A2(A3A4)))

2. (A1((A2A3)A4))

3. ((A1A2)(A3A4))

4. ((A1(A2A3))A4)

5. (((A1A2)A3)A4)

Page 7

Algorithm to Multiply 2 Matrices

Input: Matrices Ap×q and Bq×r (with dimensions p×q and q×r)

Result: Matrix Cp×r resulting from the product A·B

MATRIX-MULTIPLY(Ap×q , Bq×r)

1. for i ← 1 to p
2.   for j ← 1 to r
3.     C[i, j] ← 0
4.     for k ← 1 to q
5.       C[i, j] ← C[i, j] + A[i, k] · B[k, j]
6. return C
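As a sketch, the pseudocode above translates directly to Python, with matrices as lists of lists (the function name is my own):

```python
def matrix_multiply(A, B):
    # A is p×q, B is q×r; the inner dimensions must match.
    p, q = len(A), len(A[0])
    if len(B) != q:
        raise ValueError("inner dimensions must match")
    r = len(B[0])
    # C[i][j] accumulates the dot product of row i of A and column j of B,
    # so the whole product uses p·q·r scalar multiplications.
    C = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
    return C
```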

Page 8

Matrix-chain Multiplication

Example: Consider three matrices A10×100, B100×5, and C5×50

There are 2 ways to parenthesize:

((AB)C) = D10×5 · C5×50
AB ⇒ 10·100·5 = 5,000 scalar multiplications
DC ⇒ 10·5·50 = 2,500 scalar multiplications
total: 7,500 scalar multiplications

(A(BC)) = A10×100 · E100×50
BC ⇒ 100·5·50 = 25,000 scalar multiplications
AE ⇒ 10·100·50 = 50,000 scalar multiplications
total: 75,000 scalar multiplications

So the first parenthesization is 10 times cheaper.
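The arithmetic above can be checked with a few lines of Python (the helper name is my own):

```python
def mult_cost(p, q, r):
    # Multiplying a p×q matrix by a q×r matrix costs p·q·r scalar multiplications.
    return p * q * r

# ((AB)C): first AB (10×100 times 100×5), then the 10×5 result times C (5×50)
cost1 = mult_cost(10, 100, 5) + mult_cost(10, 5, 50)

# (A(BC)): first BC (100×5 times 5×50), then A (10×100) times the 100×50 result
cost2 = mult_cost(100, 5, 50) + mult_cost(10, 100, 50)
```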

Page 9

Matrix-chain Multiplication Problem

Given a chain A1, A2, …, An of n matrices, where for i=1, 2, …, n, matrix Ai has dimension pi-1×pi

Parenthesize the product A1A2…An such that the total number of scalar multiplications is minimized

The brute-force method of exhaustive search takes time exponential in n.

Page 10

Step 1: The structure of an optimal parenthesization

The optimal substructure of this problem is as follows.

Suppose that the optimal parenthesization of AiAi+1…Aj splits the product between Ak and Ak+1. Then the parenthesization of the subchain AiAi+1…Ak within this optimal parenthesization of AiAi+1…Aj must be an optimal parenthesization of AiAi+1…Ak.

A similar observation holds for the parenthesization of the subchain Ak+1Ak+2…Aj in the optimal parenthesization of AiAi+1…Aj.

Page 11

Step 2: A recursive solution

Let m[i, j] be the minimum number of scalar multiplications needed to compute the product AiAi+1…Aj; the cost of a cheapest way to compute A1A2…An would thus be m[1, n].

m[i, j] = 0                                                      if i = j
m[i, j] = min{ m[i, k] + m[k + 1, j] + pi-1 pk pj : i ≤ k < j }  if i < j

Page 12

Step 3: Computing the optimal costs

MATRIX-CHAIN-ORDER(p)
1  n ← length[p] - 1
2  for i ← 1 to n
3    do m[i, i] ← 0
4  for l ← 2 to n
5    do for i ← 1 to n - l + 1
6      do j ← i + l - 1
7         m[i, j] ← ∞
8         for k ← i to j - 1
9           do q ← m[i, k] + m[k + 1, j] + pi-1 pk pj
10          if q < m[i, j]
11            then m[i, j] ← q
12                 s[i, j] ← k
13 return m and s

Its running time is O(n³).

Page 13

matrix   dimension
------   ---------
A1       30 × 35
A2       35 × 15
A3       15 × 5
A4       5 × 10
A5       10 × 20
A6       20 × 25
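A Python sketch of MATRIX-CHAIN-ORDER, run on the dimension table above (function and variable names are my own; indices follow the slides' 1-based convention), recovers both the minimum cost and an optimal parenthesization:

```python
from math import inf

def matrix_chain_order(p):
    # p[0..n] holds the dimensions: matrix Ai is p[i-1] × p[i].
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]  # m[i][j]: min scalar mults
    s = [[0] * (n + 1) for _ in range(n + 1)]  # s[i][j]: optimal split k
    for l in range(2, n + 1):                  # l = length of the subchain
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k

    return m, s

def optimal_parens(s, i, j):
    # Rebuild the optimal parenthesization from the split table s.
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + optimal_parens(s, i, k) + optimal_parens(s, k + 1, j) + ")"

p = [30, 35, 15, 5, 10, 20, 25]  # the dimensions from the table above
m, s = matrix_chain_order(p)
```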

Page 14

Step 4: Constructing an optimal solution

PRINT-OPTIMAL-PARENS(s, i, j)
1 if j = i
2   then print "A", i
3   else print "("
4        PRINT-OPTIMAL-PARENS(s, i, s[i, j])
5        PRINT-OPTIMAL-PARENS(s, s[i, j] + 1, j)
6        print ")"

In the above example, the call PRINT-OPTIMAL-PARENS(s, 1, 6) prints the optimal parenthesization

((A1(A2A3))((A4A5)A6)) .

Page 15

Longest common subsequence

Formally, given a sequence X ={x1, x2, . . . , xm}, another sequence Z ={z1, z2, . . . , zk} is a subsequence of X if there exists a strictly increasing sequence i1, i2, . . . , ik of indices of X such that for all j = 1, 2, . . . , k, we have xij = zj. For example, Z ={B, C, D, B} is a subsequence of X ={A, B, C, B, D, A, B } with corresponding index sequence {2, 3, 5, 7}.
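The strictly-increasing-indices condition in this definition can be checked mechanically by a greedy forward scan; a small Python sketch (the function name is my own):

```python
def is_subsequence(z, x):
    # Scan x left to right, matching the characters of z in order;
    # the matched positions form a strictly increasing index sequence.
    i = 0  # next position in x to search from
    for ch in z:
        while i < len(x) and x[i] != ch:
            i += 1
        if i == len(x):
            return False  # ran out of x before matching all of z
        i += 1  # indices must be strictly increasing
    return True
```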

Page 16

Given two sequences X and Y, we say that a sequence Z is a common subsequence of X and Y if Z is a subsequence of both X and Y.

In the longest-common-subsequence problem, we are given two sequences X = {x1, x2, . . . , xm} and Y ={y1, y2, . . . , yn} and wish to find a maximum-length common subsequence of X and Y.

Page 17

Step 1: Characterizing a longest common subsequence

Let X ={x1, x2, . . . , xm} and Y ={y1, y2, . . . , yn} be sequences, and let Z ={z1, z2, . . . , zk} be any LCS of X and Y.

1. If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1.

2. If xm ≠ yn, then zk ≠ xm implies that Z is an LCS of Xm-1 and Y.

3. If xm ≠ yn, then zk ≠ yn implies that Z is an LCS of X and Yn-1.

Page 18

Step 2: A recursive solution to subproblems

Let us define c[i, j] to be the length of an LCS of the sequences Xi and Yj. The optimal substructure of the LCS problem gives the recursive formula

c[i, j] = 0                                 if i = 0 or j = 0
c[i, j] = c[i-1, j-1] + 1                   if i, j > 0 and xi = yj
c[i, j] = max(c[i, j-1], c[i-1, j])         if i, j > 0 and xi ≠ yj

Page 19

Step 3: Computing the length of an LCS

LCS-LENGTH(X, Y)
1  m ← length[X]
2  n ← length[Y]
3  for i ← 1 to m
4    do c[i, 0] ← 0
5  for j ← 0 to n
6    do c[0, j] ← 0
7  for i ← 1 to m
8    do for j ← 1 to n
9      do if xi = yj
10       then c[i, j] ← c[i - 1, j - 1] + 1
11            b[i, j] ← "↖"
12       else if c[i - 1, j] ≥ c[i, j - 1]
13         then c[i, j] ← c[i - 1, j]
14              b[i, j] ← "↑"
15         else c[i, j] ← c[i, j - 1]
16              b[i, j] ← "←"
17 return c and b
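A Python sketch of LCS-LENGTH (names are my own; Python strings are 0-indexed, so xi corresponds to X[i-1]):

```python
def lcs_length(X, Y):
    m, n = len(X), len(Y)
    # c[i][j] = length of an LCS of the prefixes X[:i] and Y[:j];
    # row 0 and column 0 stay 0 (empty-prefix base case).
    c = [[0] * (n + 1) for _ in range(m + 1)]
    b = [[None] * (n + 1) for _ in range(m + 1)]  # back-pointers
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:          # xi = yj
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = "↖"
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
                b[i][j] = "↑"
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = "←"
    return c, b

c, b = lcs_length("ABCBDAB", "BDCABA")  # the slides' example strings
```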

Page 20

Example: x = ABCBDAB and y = BDCABA.

The running time of the procedure is O(mn).

Page 21

Step 4: Constructing an LCS

PRINT-LCS(b, X, i, j)
1 if i = 0 or j = 0
2   then return
3 if b[i, j] = "↖"
4   then PRINT-LCS(b, X, i - 1, j - 1)
5        print xi
6 elseif b[i, j] = "↑"
7   then PRINT-LCS(b, X, i - 1, j)
8 else PRINT-LCS(b, X, i, j - 1)
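A self-contained Python sketch of the same reconstruction (names are my own). Rather than storing the b table, it recomputes at each cell which case of the recurrence produced c[i][j] while walking back:

```python
def lcs(X, Y):
    m, n = len(X), len(Y)
    # Fill the c table exactly as in LCS-LENGTH.
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Walk back from (m, n), collecting matched characters in reverse.
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:       # "↖": part of the LCS
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:  # "↑"
            i -= 1
        else:                             # "←"
            j -= 1
    return "".join(reversed(out))
```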

Page 22

Elements of dynamic programming

For dynamic programming to be applicable, an optimization problem must exhibit:

1. Optimal substructure
2. Overlapping subproblems

Memoization (discussed below) is a technique for exploiting these properties in a top-down recursion.

Page 23

Optimal substructure

The first step in solving an optimization problem by dynamic programming is to characterize the structure of an optimal solution. We say that a problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems. Whenever a problem exhibits optimal substructure, it is a good clue that dynamic programming might apply.

Page 24

Overlapping subproblems

When a recursive algorithm revisits the same subproblem over and over again, we say that the optimization problem has overlapping subproblems.

Page 25

Memoization

A memoized recursive algorithm maintains an entry in a table for the solution to each subproblem.

Each table entry initially contains a special value to indicate that the entry has yet to be filled in.

When the subproblem is first encountered during the execution of the recursive algorithm, its solution is computed and then stored in the table.

Each subsequent time that the subproblem is encountered, the value stored in the table is simply looked up and returned.
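As a sketch, the matrix-chain recurrence from earlier can be memoized top-down in this style, with Python's functools.lru_cache playing the role of the table (the function name is my own):

```python
from functools import lru_cache

def memoized_matrix_chain(p):
    # p is the dimension list as before: matrix Ai is p[i-1] × p[i].
    @lru_cache(maxsize=None)  # the table: one cached entry per subproblem (i, j)
    def m(i, j):
        if i == j:
            return 0  # a single matrix needs no multiplications
        # First encounter of (i, j): compute and cache the solution;
        # every later call with the same (i, j) is just a table lookup.
        return min(m(i, k) + m(k + 1, j) + p[i - 1] * p[k] * p[j]
                   for k in range(i, j))
    return m(1, len(p) - 1)
```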