Dynamic Programming (II)

Page 1: Dynamic Programming (II)


Dynamic Programming (II)

2012/11/27

Page 2: Dynamic Programming (II)

Chapter 15 P.2

15.1 Assembly-line scheduling

An automobile chassis enters each assembly line, has parts added to it at a number of stations, and a finished auto exits at the end of the line.

Each assembly line has n stations, numbered j = 1, 2, …, n. We denote the jth station on line i (where i is 1 or 2) by Si,j. The jth station on line 1 (S1,j) performs the same function as the jth station on line 2 (S2,j).

Page 3: Dynamic Programming (II)


The stations were built at different times and with different technologies, however, so that the time required at each station varies, even between stations at the same position on the two different lines. We denote the assembly time required at station Si,j by ai,j.

As the following figure shows, a chassis enters station 1 of one of the assembly lines, and it progresses from each station to the next. There is also an entry time ei for the chassis to enter assembly line i and an exit time xi for the completed auto to exit assembly line i.

Page 4: Dynamic Programming (II)

A manufacturing problem: find the fastest way through a factory.

Page 5: Dynamic Programming (II)


Normally, once a chassis enters an assembly line, it passes through that line only. The time to go from one station to the next within the same assembly line is negligible.

Occasionally a special rush order comes in, and the customer wants the automobile to be manufactured as quickly as possible.

For the rush orders, the chassis still passes through the n stations in order, but the factory manager may switch the partially-completed auto from one assembly line to the other after any station.

Page 6: Dynamic Programming (II)

The time to transfer a chassis away from assembly line i after having gone through station Si,j is ti,j, where i = 1, 2 and j = 1, 2, …, n-1 (since after the nth station, assembly is complete). The problem is to determine which stations to choose from line 1 and which to choose from line 2 in order to minimize the total time through the factory for one auto.

Page 7: Dynamic Programming (II)


An instance of the assembly-line problem with costs

Page 8: Dynamic Programming (II)

Step 1: The structure of the fastest way through the factory

The fastest way through station S1,j is either the fastest way through station S1,j-1 and then directly through station S1,j, or the fastest way through station S2,j-1, a transfer from line 2 to line 1, and then through station S1,j. Using symmetric reasoning, the fastest way through station S2,j is either the fastest way through station S2,j-1 and then directly through station S2,j, or the fastest way through station S1,j-1, a transfer from line 1 to line 2, and then through station S2,j.

Page 9: Dynamic Programming (II)

Step 2: A recursive solution

f* = min(f1[n] + x1, f2[n] + x2)

f1[j] = e1 + a1,1                                          if j = 1
f1[j] = min(f1[j-1] + a1,j, f2[j-1] + t2,j-1 + a1,j)       if j ≥ 2

f2[j] = e2 + a2,1                                          if j = 1
f2[j] = min(f2[j-1] + a2,j, f1[j-1] + t1,j-1 + a2,j)       if j ≥ 2

Page 10: Dynamic Programming (II)

Step 3: computing the fastest times

Let ri(j) be the number of references made to fi[j] in a recursive algorithm. Then
r1(n) = r2(n) = 1
r1(j) = r2(j) = r1(j+1) + r2(j+1)   for 1 ≤ j < n.

The total number of references to all fi[j] values is Θ(2^n).

We can do much better if we compute the fi[j] values in a different order from the recursive way. Observe that for j ≥ 2, each value of fi[j] depends only on the values of f1[j-1] and f2[j-1].
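As a quick check of this recurrence, a few lines of Python (a sketch; the function name reference_counts is my own) compute ri(j) bottom-up and confirm the exponential total:

```python
def reference_counts(n):
    """r[j] = number of references to f_i[j] (the same for i = 1, 2),
    from r(n) = 1 and r(j) = r1(j+1) + r2(j+1) = 2 * r(j+1)."""
    r = [0] * (n + 1)
    r[n] = 1
    for j in range(n - 1, 0, -1):
        r[j] = 2 * r[j + 1]
    return r[1:]

counts = reference_counts(6)
total = 2 * sum(counts)          # references on both lines together
print(counts)                    # [32, 16, 8, 4, 2, 1]
print(total)                     # 126 = 2^(n+1) - 2, i.e. Theta(2^n)
```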

Page 11: Dynamic Programming (II)

FASTEST-WAY procedure

FASTEST-WAY(a, t, e, x, n)
 1  f1[1] ← e1 + a1,1
 2  f2[1] ← e2 + a2,1
 3  for j ← 2 to n
 4      do if f1[j-1] + a1,j ≤ f2[j-1] + t2,j-1 + a1,j
 5            then f1[j] ← f1[j-1] + a1,j
 6                 l1[j] ← 1
 7            else f1[j] ← f2[j-1] + t2,j-1 + a1,j
 8                 l1[j] ← 2
 9         if f2[j-1] + a2,j ≤ f1[j-1] + t1,j-1 + a2,j

Page 12: Dynamic Programming (II)

10            then f2[j] ← f2[j-1] + a2,j
11                 l2[j] ← 2
12            else f2[j] ← f1[j-1] + t1,j-1 + a2,j
13                 l2[j] ← 1
14  if f1[n] + x1 ≤ f2[n] + x2
15     then f* ← f1[n] + x1
16          l* ← 1
17     else f* ← f2[n] + x2
18          l* ← 2

Page 13: Dynamic Programming (II)

Step 4: constructing the fastest way through the factory

PRINT-STATIONS(l, n)
1  i ← l*
2  print "line" i ", station" n
3  for j ← n downto 2
4      do i ← li[j]
5         print "line" i ", station" j - 1

Output:
line 1, station 6
line 2, station 5
line 2, station 4
line 1, station 3
line 2, station 2
line 1, station 1
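The two procedures can be sketched together in Python (0-indexed; the name fastest_way is my own, and the instance values below are assumed from the standard textbook figure — they reproduce the output listed on this slide):

```python
def fastest_way(a, t, e, x):
    """Bottom-up assembly-line scheduling (0-indexed FASTEST-WAY)."""
    n = len(a[0])
    f = [[0] * n for _ in range(2)]   # f[i][j]: fastest time ending at S(i,j)
    l = [[0] * n for _ in range(2)]   # l[i][j]: line used at station j-1
    f[0][0] = e[0] + a[0][0]
    f[1][0] = e[1] + a[1][0]
    for j in range(1, n):
        for i in range(2):
            other = 1 - i
            stay = f[i][j - 1] + a[i][j]
            switch = f[other][j - 1] + t[other][j - 1] + a[i][j]
            if stay <= switch:
                f[i][j], l[i][j] = stay, i
            else:
                f[i][j], l[i][j] = switch, other
    if f[0][n - 1] + x[0] <= f[1][n - 1] + x[1]:
        fstar, lstar = f[0][n - 1] + x[0], 0
    else:
        fstar, lstar = f[1][n - 1] + x[1], 1
    # Reconstruct the stations used, front to back (PRINT-STATIONS).
    path = [lstar]
    for j in range(n - 1, 0, -1):
        path.append(l[path[-1]][j])
    path.reverse()
    return fstar, path

# Instance assumed from the standard example (entry times 2/4, exit times 3/2):
a = [[7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7]]
t = [[2, 3, 1, 3, 4], [2, 1, 2, 2, 1]]
fstar, path = fastest_way(a, t, e=[2, 4], x=[3, 2])
print(fstar)   # 38
print(path)    # [0, 1, 0, 1, 1, 0] -> lines 1,2,1,2,2,1 for stations 1..6
```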

Page 14: Dynamic Programming (II)

Optimal Binary Search Trees

[Figure: two binary search trees for the same keys — one with expected cost 2.80, and one with expected cost 2.75, which is optimal.]

Page 15: Dynamic Programming (II)

Expected cost

The expected cost of a search in T is

E[search cost in T] = Σ_{i=1..n} (depthT(ki) + 1)·pi + Σ_{i=0..n} (depthT(di) + 1)·qi
                    = 1 + Σ_{i=1..n} depthT(ki)·pi + Σ_{i=0..n} depthT(di)·qi,

since Σ_{i=1..n} pi + Σ_{i=0..n} qi = 1.

Page 16: Dynamic Programming (II)

For a given set of probabilities, our goal is to construct a binary search tree whose expected search cost is smallest. We call such a tree an optimal binary search tree.

Page 17: Dynamic Programming (II)

Step 1: The structure of an optimal binary search tree

Consider any subtree of a binary search tree. It must contain keys in a contiguous range ki, …, kj, for some 1 ≤ i ≤ j ≤ n. In addition, a subtree that contains keys ki, …, kj must also have as its leaves the dummy keys di-1, …, dj.

If an optimal binary search tree T has a subtree T' containing keys ki, …, kj, then this subtree T' must be optimal as well for the subproblem with keys ki, …, kj and dummy keys di-1, …, dj.

Page 18: Dynamic Programming (II)

Step 2: A recursive solution

w(i, j) = Σ_{l=i..j} pl + Σ_{l=i-1..j} ql

e[i, j] = qi-1                                               if j = i - 1
e[i, j] = min_{i≤r≤j} {e[i, r-1] + e[r+1, j] + w(i, j)}      if i ≤ j

If the optimal tree containing ki, …, kj is rooted at kr, then
e[i, j] = pr + (e[i, r-1] + w(i, r-1)) + (e[r+1, j] + w(r+1, j)).

w[i, j] = w[i, j-1] + pj + qj

Page 19: Dynamic Programming (II)

Step 3: computing the expected search cost of an optimal binary search tree

OPTIMAL-BST(p, q, n)
1  for i ← 1 to n + 1
2      do e[i, i-1] ← qi-1
3         w[i, i-1] ← qi-1
4  for l ← 1 to n
5      do for i ← 1 to n - l + 1
6             do j ← i + l - 1
7                e[i, j] ← ∞
8                w[i, j] ← w[i, j-1] + pj + qj

Page 20: Dynamic Programming (II)

 9                for r ← i to j
10                    do t ← e[i, r-1] + e[r+1, j] + w[i, j]
11                       if t < e[i, j]
12                          then e[i, j] ← t
13                               root[i, j] ← r
14  return e and root

The OPTIMAL-BST procedure takes Θ(n^3) time.
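In Python the procedure looks as follows (a sketch, 1-indexed via padded arrays; the probabilities are assumed to be the standard example behind the 2.75 cost quoted on the earlier slide):

```python
def optimal_bst(p, q, n):
    """Bottom-up OPTIMAL-BST; p[1..n] and q[0..n] (p[0] is unused)."""
    INF = float("inf")
    e = [[0.0] * (n + 1) for _ in range(n + 2)]    # expected search costs
    w = [[0.0] * (n + 1) for _ in range(n + 2)]    # probability sums
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):
        e[i][i - 1] = q[i - 1]                     # empty subtree: dummy d_{i-1}
        w[i][i - 1] = q[i - 1]
    for l in range(1, n + 1):                      # subproblem size
        for i in range(1, n - l + 2):
            j = i + l - 1
            e[i][j] = INF
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            for r in range(i, j + 1):              # try every root k_r
                t = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if t < e[i][j]:
                    e[i][j] = t
                    root[i][j] = r
    return e, root

# Probabilities assumed from the standard example:
p = [0, 0.15, 0.10, 0.05, 0.10, 0.20]
q = [0.05, 0.10, 0.05, 0.05, 0.05, 0.10]
e, root = optimal_bst(p, q, 5)
print(round(e[1][5], 2))   # 2.75
print(root[1][5])          # 2 (k2 is the root of the optimal tree)
```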

Page 21: Dynamic Programming (II)

The tables e[i, j], w[i, j], and root[i, j] computed by OPTIMAL-BST on the key distribution.

Page 22: Dynamic Programming (II)

0/1 knapsack problem

n items, with weights w1, w2, …, wn, values (profits) v1, v2, …, vn, and a knapsack of capacity W.

Maximize Σ_{i=1..n} vi·xi
subject to Σ_{i=1..n} wi·xi ≤ W, with xi = 0 or 1 for 1 ≤ i ≤ n.

(We let T = {i | xi = 1} ⊆ S = {1, …, n}.)

Page 23: Dynamic Programming (II)

Brute-force solution

The 0/1 knapsack problem is an optimization problem.

We may try all 2^n possible subsets of S to solve it.

Question: Any solution better than brute force?

Answer: Yes, dynamic programming (DP)! We trade space for time.

Page 24: Dynamic Programming (II)

Table (Array)

We construct an array V[0..n, 0..W]. The entry V[i, w], for 1 ≤ i ≤ n and 0 ≤ w ≤ W, will store the maximum (combined) value of any subset of items 1, 2, …, i of (combined) weight at most w.

If we can compute all the entries of this array, then the array entry V[n, W] will contain the maximum value of any subset of items 1, 2, …, n that can fit into the knapsack of capacity W.

Page 25: Dynamic Programming (II)

Recursive relation

Initial settings: V[0, w] = 0 for 0 ≤ w ≤ W.

V[i, w] = max(V[i-1, w], vi + V[i-1, w-wi])

for 1 ≤ i ≤ n and 0 ≤ w ≤ W, if wi ≤ w.

Otherwise, V[i, w] = V[i-1, w].
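A direct Python transcription of this relation (a sketch; knapsack_01 is my own name for it), tried on the three-item instance used later in these slides:

```python
def knapsack_01(weights, values, W):
    """Bottom-up 0/1 knapsack; V[i][w] as defined on the slides."""
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]    # row 0: V[0, w] = 0
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for w in range(W + 1):
            if wi <= w:
                # Either skip item i or take it and fill the rest optimally.
                V[i][w] = max(V[i - 1][w], vi + V[i - 1][w - wi])
            else:
                V[i][w] = V[i - 1][w]
    return V[n][W]

# The three-item instance that appears below (capacity M = 10):
print(knapsack_01([10, 3, 5], [40, 20, 30], 10))   # 50: take items 2 and 3
```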

Pages 26–31: Dynamic Programming (II)

(figure-only slides)

Page 32: Dynamic Programming (II)


0/1 knapsack problem in multistage representation

n objects, with weights W1, W2, …, Wn, profits P1, P2, …, Pn, and capacity M.

Maximize Σ_{i=1..n} Pi·xi
subject to Σ_{i=1..n} Wi·xi ≤ M, with xi = 0 or 1 for 1 ≤ i ≤ n.

E.g. (M = 10):

i | Wi | Pi
1 | 10 | 40
2 |  3 | 20
3 |  5 | 30

Page 33: Dynamic Programming (II)

0/1 knapsack problem in multistage representation

Multi-stage graph:

[Figure: a multistage graph from S to T with one stage per decision x1, x2, x3. Each node is labeled with the partial solution chosen so far (e.g. 1, 10, 010); an edge setting xi = 1 carries profit Pi (40, 20, 30), and an edge setting xi = 0 carries profit 0.]

Page 34: Dynamic Programming (II)

The resource allocation problem

Given m resources and n projects, with profit p(i, j) obtained when i resources are allocated to project j. The goal is to maximize the total profit.

E.g. (i: # of resources):

Project j | i = 1 | i = 2 | i = 3
    1     |   2   |   8   |   9
    2     |   5   |   6   |   7
    3     |   4   |   4   |   4
    4     |   2   |   4   |   5

Page 35: Dynamic Programming (II)

Multistage graph

(i, j): the state where i resources have been allocated to projects 1, 2, …, j.

E.g. node H = (3, 2): 3 resources have been allocated to projects 1 and 2.

[Figure: source S, sink T, and nodes (i, j) for i = 0, …, 3 and j = 1, 2, 3, labeled A through L; edges are weighted with the profits p(·, ·) of the corresponding allocations.]

The Resource Allocation Problem Described as a Multi-Stage Graph.

Page 36: Dynamic Programming (II)

Multistage graph

Find the longest path from S to T:

S → C → H → L → T, with length 8 + 5 = 13:
2 resources allocated to project 1
1 resource allocated to project 2
0 resources allocated to projects 3 and 4
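A stage-by-stage DP equivalent to this longest-path computation can be sketched in Python. I'm assuming the profit table reads with projects 1–4 as rows and 1–3 resources as columns (which makes 13 the optimum for m = 3); the name max_total_profit is my own:

```python
def max_total_profit(p, m):
    """DP over the multistage graph: each project is one stage, and
    best[i] = max profit over the projects processed so far using i resources."""
    best = [0] * (m + 1)
    for proj in p:                       # proj[k] = profit of k resources here
        nxt = [0] * (m + 1)
        for i in range(m + 1):
            # Give k of the i resources to this project, the rest to earlier ones.
            nxt[i] = max(best[i - k] + proj[k] for k in range(i + 1))
        best = nxt
    return best[m]

# Assumed reading of the slide's table (rows = projects, columns = 1..3 resources):
p = [
    [0, 2, 8, 9],   # project 1
    [0, 5, 6, 7],   # project 2
    [0, 4, 4, 4],   # project 3
    [0, 2, 4, 5],   # project 4
]
print(max_total_profit(p, 3))   # 13 (2 resources on project 1, 1 on project 2)
```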

Page 37: Dynamic Programming (II)

Comment

If a problem can be described by a multistage graph, then it can be solved by dynamic programming.

Page 38: Dynamic Programming (II)

Q&A