
UMass Lowell Computer Science 91.503 Graduate Analysis of Algorithms

Prof. Karen Daniels, Fall 2013

Lecture 3, Monday 9/23/13

Amortized Analysis


Overview

• Amortize: “To pay off a debt, usually by periodic payments” [Webster's]

• Amortized Analysis:
  • “creative accounting” for operations
  • can show the average cost of an operation is small (when averaged over a sequence of operations, not a distribution of inputs) even though a single operation in the sequence is expensive
  • result must hold for any sequence of these operations
  • no probability is involved (unlike average-case analysis)
  • guarantee holds in the worst case
  • analysis method only; no effect on code operation

• Stack (computational geometry application)
• Dynamic Table
• Binary Counter (in book but not lecture)
• Conga Line for Dynamic Closest Pair (computational geometry)


More Motivation

• Depth-First and Breadth-First Search
• Dijkstra’s Single-Source Shortest Path
• Fibonacci Heaps
• Knuth-Morris-Pratt String Matching
• Red-Black Tree Restructuring
• … (see index)


Overview (continued)

• 3 ways to determine amortized cost of an operation that is part of a sequence of operations:

  • Aggregate Method
    • find upper bound T(n) on total cost of sequence of n operations
    • amortized cost = average cost per operation = T(n)/n
    • same for all the (perhaps different types of) operations in the sequence

  • Accounting Method
    • amortized cost can differ across types of operations
    • overcharge some operations early in sequence
    • store overcharge as “prepaid credit” on specific data structure objects

  • Potential Method
    • amortized cost can differ across operations (as in accounting method)
    • overcharge some operations early in sequence (as in accounting method)
    • store overcharge as “potential energy” of the data structure as a whole (unlike accounting method)

Aggregate Method: Stack Operations

• Aggregate Method
  • find upper bound T(n) on total cost of sequence of n operations
  • amortized cost = average cost per operation = T(n)/n
  • same for all the operations in the sequence

• Traditional Stack Operations
  • PUSH(S,x) pushes object x onto stack S
  • POP(S) pops top of stack S, returns popped object
  • O(1) time: consider cost as 1 for our discussion
  • Total actual cost of sequence of n PUSH/POP operations = n
  • STACK-EMPTY(S) time in Θ(1)


Aggregate Method: Stack Operations (continued)

• New Stack Operation
  • MULTIPOP(S,k) pops the top k elements off stack S
  • pops the entire stack if it has < k items

• MULTIPOP actual cost for a stack containing s items:
  • use cost = 1 for each POP
  • cost = min(s,k)
  • worst-case cost is in O(s), which is in O(n)

MULTIPOP(S,k)
  while not STACK-EMPTY(S) and k ≠ 0
      POP(S)
      k = k - 1

source: 91.503 textbook, Cormen et al.
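For concreteness, a minimal Python sketch of these stack operations (not from the textbook; the class and attribute names are illustrative), with a counter of elementary operations so the costs used in the analysis can be observed:

# Minimal sketch of the stack operations above (PUSH, POP, MULTIPOP),
# instrumented with a counter of elementary operations so the costs in
# the analysis can be observed. Names are illustrative, not from CLRS.

class CountingStack:
    def __init__(self):
        self.items = []
        self.elementary_ops = 0   # total actual cost so far

    def stack_empty(self):
        return len(self.items) == 0

    def push(self, x):
        self.items.append(x)
        self.elementary_ops += 1  # cost 1 per PUSH

    def pop(self):
        self.elementary_ops += 1  # cost 1 per POP
        return self.items.pop()

    def multipop(self, k):
        # pops min(k, s) items; cost is 1 per popped item
        while not self.stack_empty() and k != 0:
            self.pop()
            k -= 1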


Aggregate Method: Stack Operations (continued)

• Sequence of n PUSH, POP, MULTIPOP ops
  • initially empty stack
  • MULTIPOP worst-case O(n) ⇒ O(n²) bound for the sequence

• Aggregate method yields a tighter upper bound
  • Sequence of n operations has O(n) worst-case cost
    • Each item can be popped at most once for each time it is pushed
    • # POP calls (including ones in MULTIPOP) <= # PUSH calls <= n
  • Average cost of an operation = O(n)/n = O(1)
    • = amortized cost of each operation
    • holds for PUSH, POP and MULTIPOP

source: 91.503 textbook, Cormen et al.
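A small self-contained check of the aggregate bound (purely illustrative; the operation mix below is an arbitrary assumption of mine): over any sequence of n PUSH/POP/MULTIPOP operations on an initially empty stack, the total elementary cost is at most 2n.

import random

random.seed(1)
stack = []
n = 10000                      # length of the operation sequence
total_cost = 0                 # total actual (elementary) cost

for _ in range(n):
    op = random.choice(["push", "pop", "multipop"])
    if op == "push" or not stack:
        stack.append(object())             # PUSH: cost 1
        total_cost += 1
    elif op == "pop":
        stack.pop()                        # POP: cost 1
        total_cost += 1
    else:
        k = random.randint(1, 5)           # MULTIPOP(S, k): cost min(k, s)
        popped = min(k, len(stack))
        for _ in range(popped):
            stack.pop()
        total_cost += popped

assert total_cost <= 2 * n     # aggregate bound: T(n) is in O(n)
print(total_cost / n)          # average (amortized) cost per operation <= 2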

Accounting Method

• Accounting Method
  • amortized cost can differ across operations
  • overcharge some operations early in sequence
  • store overcharge as “prepaid credit” on specific data structure objects

• Let $c_i$ be the actual cost of the ith operation
• Let $\hat{c}_i$ be the amortized cost of the ith operation (what we charge)

• Total amortized cost of the sequence of operations must be an upper bound on the total actual cost of the sequence:
  $\sum_{i=1}^{n} \hat{c}_i \ge \sum_{i=1}^{n} c_i$

• Total credit in data structure $= \sum_{i=1}^{n} \hat{c}_i - \sum_{i=1}^{n} c_i$
  • must be nonnegative for all n

source: 91.503 textbook, Cormen et al.

Accounting Method: Stack Operations

• Paying for a sequence using amortized cost:
  • start with empty stack
  • PUSH of an item always precedes POP, MULTIPOP
  • pay for PUSH & store 1 unit of credit
  • credit for each item pays for the actual POP/MULTIPOP cost of that item
  • credit never “goes negative”
  • total amortized cost of any sequence of n ops is in O(n)
  • amortized cost is an upper bound on total actual cost

  Operation   Actual Cost   Assigned Amortized Cost
  PUSH        1             2
  POP         1             0
  MULTIPOP    min(k,s)      0

source: 91.503 textbook, Cormen et al.
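A minimal sketch of the accounting method above: each PUSH is charged 2 (1 pays for the push itself, 1 is stored as prepaid credit on the pushed item); POP and MULTIPOP are charged 0 and are paid for by stored credit. The simulation and names are mine, for illustration only.

import random

random.seed(2)
stack = []          # each entry carries 1 unit of prepaid credit
credit = 0          # total credit currently stored in the data structure
total_actual = 0
total_amortized = 0

for _ in range(10000):
    if not stack or random.random() < 0.5:
        stack.append(object())
        total_actual += 1               # actual cost of PUSH
        total_amortized += 2            # amortized charge for PUSH
        credit += 1                     # 1 unit stored on the new item
    else:
        k = random.randint(1, 5)        # MULTIPOP(S, k); k = 1 acts like POP
        popped = min(k, len(stack))
        for _ in range(popped):
            stack.pop()
        total_actual += popped          # actual cost of MULTIPOP
        total_amortized += 0            # amortized charge is 0
        credit -= popped                # each pop spends one stored credit
        assert credit >= 0              # credit never goes negative

assert total_amortized >= total_actual  # amortized total bounds actual total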

Potential Method

• Potential Method
  • amortized cost can differ across operations (as in accounting method)
  • overcharge some operations early in sequence (as in accounting method)
  • store overcharge as “potential energy” of the data structure as a whole (unlike accounting method)

• Let $c_i$ be the actual cost of the ith operation
• Let $D_i$ be the data structure after applying the ith operation
• Let $\Phi(D_i)$ be the potential associated with $D_i$
• Amortized cost of the ith operation:
  $\hat{c}_i = c_i + \Phi(D_i) - \Phi(D_{i-1})$

• Total amortized cost of n operations (terms telescope):
  $\sum_{i=1}^{n} \hat{c}_i = \sum_{i=1}^{n} \big( c_i + \Phi(D_i) - \Phi(D_{i-1}) \big) = \sum_{i=1}^{n} c_i + \Phi(D_n) - \Phi(D_0)$

• Require $\Phi(D_n) \ge \Phi(D_0)$ so the total amortized cost is an upper bound on the total actual cost

• Since n might not be known in advance, guarantee “payment in advance” by requiring $\Phi(D_i) \ge \Phi(D_0)$ for all i

source: 91.503 textbook, Cormen et al.

Potential Method: Stack Operations

• Potential function value = number of items in stack
  • guarantees nonnegative potential after the ith operation
• Amortized operation costs (assuming stack has s items):

  • PUSH:
    • potential difference $= \Phi(D_i) - \Phi(D_{i-1}) = (s+1) - s = 1$
    • amortized cost $= \hat{c}_i = c_i + \Phi(D_i) - \Phi(D_{i-1}) = 1 + 1 = 2$

  • MULTIPOP(S,k) pops $k' = \min(k,s)$ items off the stack:
    • potential difference $= \Phi(D_i) - \Phi(D_{i-1}) = -k'$
    • amortized cost $= \hat{c}_i = c_i + \Phi(D_i) - \Phi(D_{i-1}) = k' - k' = 0$

  • POP amortized cost also $= 0$

• Amortized cost O(1) ⇒ total amortized cost of a sequence of n operations is in O(n)

source: 91.503 textbook, Cormen et al.
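A minimal sketch of the potential method for the stack, with Φ = number of items on the stack: for each operation it computes amortized cost = actual cost + Φ(after) - Φ(before) and checks it matches the per-operation values derived above (PUSH: 2, POP: 0, MULTIPOP: 0). The simulation and names are mine.

import random

random.seed(3)
stack = []

def phi():
    return len(stack)          # potential = number of items in the stack

for _ in range(10000):
    phi_before = phi()
    if not stack or random.random() < 0.5:
        stack.append(object())             # PUSH, actual cost 1
        actual, expected = 1, 2
    elif random.random() < 0.5:
        stack.pop()                        # POP, actual cost 1
        actual, expected = 1, 0
    else:
        k = random.randint(1, 5)           # MULTIPOP(S, k)
        popped = min(k, len(stack))
        for _ in range(popped):
            stack.pop()
        actual, expected = popped, 0       # actual cost k' = min(k, s)
    amortized = actual + phi() - phi_before
    assert amortized == expected           # matches the per-op analysis above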

Dynamic Tables: Overview

• Dynamic Table T:
  • array of slots
  • ignore implementation choices: stack, heap, hash table...
  • if too full, increase size & copy entries to T’
  • if too empty, decrease size & copy entries to T’

• Analyze dynamic table insert and delete
  • actual expansion or contraction cost is large
  • show amortized cost of insert or delete is in O(1)

• Load factor α(T) = num[T]/size[T]
  • num[T] = number of items currently stored in table
  • empty table: α(T) = 1 (convention guaranteeing the load factor can be lower-bounded by a useful constant)
  • full table: α(T) = 1

source: 91.503 textbook, Cormen et al.

Dynamic Tables: Table (Expansion Only)

• Load factor bounds (double size when T is full): after any expansion at least half the slots are in use, so α(T) >= 1/2
• Sequence of n inserts on an initially empty table:
  • worst-case cost of a single insert is in O(n)
  • worst-case cost of the sequence of n inserts is in O(n²) (a LOOSE bound)

• Double only when the table is already full. WHY?

source: 91.503 textbook, Cormen et al.


Dynamic Tables: Table Expansion (cont.)

• Aggregate Method (count only elementary insertions):
  • $c_i = i$ if $i-1$ is an exact power of 2, and $c_i = 1$ otherwise
  • total cost of n inserts:
    $\sum_{i=1}^{n} c_i \le n + \sum_{j=0}^{\lfloor \lg n \rfloor} 2^j < n + 2n = 3n$
    (the n term counts the insertions themselves; the geometric sum counts the copies made during expansions)

• Accounting Method:
  • charge cost = 3 for each element inserted
  • intuition for 3: each item pays for 3 elementary insertions
    • inserting itself into the current table
    • expansion: moving itself (copy)
    • expansion: moving another item that has already been moved

source: 91.503 textbook, Cormen et al.
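A small self-contained check of the 3n bound above: a table that doubles when full, counting elementary insertions (the insert itself plus every copy made during an expansion). This is an illustrative sketch, not textbook code.

n = 10000
size, num = 0, 0        # slots allocated, items stored
total_cost = 0          # total elementary insertions

for i in range(1, n + 1):
    if num == size:                 # table full: expand by doubling
        size = max(1, 2 * size)
        total_cost += num           # copy the existing items
    num += 1
    total_cost += 1                 # insert the new item

assert total_cost < 3 * n           # aggregate bound from the sum above
print(total_cost / n)               # average cost per insert is < 3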

Dynamic Tables: Table Expansion (cont.)

• Potential Method: track 3 functions per step: $size_i$, $num_i$, $\Phi_i$ ($\Phi_i$ = potential after the ith operation)
• Value of potential function $\Phi(T) = 2\,num[T] - size[T]$:
  • 0 right after an expansion, since then $size[T] = 2\,num[T] \Rightarrow \Phi(T) = 2\,num[T] - size[T] = 0$ (then becomes 2: why?)
  • builds to the table size by the time the table is full: when $num[T] = size[T]$, $\Phi(T) = num[T]$
  • always nonnegative, so the sum of amortized costs of n inserts is an upper bound on the sum of actual costs

• Amortized cost of the ith insert:
  • Case 1: insert does not cause expansion
    $\hat{c}_i = c_i + \Phi_i - \Phi_{i-1} = 1 + (2\,num_i - size_i) - (2\,num_{i-1} - size_{i-1}) = 3$
  • Case 2: insert causes expansion (use these: $num_{i-1} = size_{i-1}$, $num_i = num_{i-1} + 1$, $size_i = 2\,size_{i-1}$)
    $\hat{c}_i = c_i + \Phi_i - \Phi_{i-1} = num_i + (2\,num_i - size_i) - (2\,num_{i-1} - size_{i-1}) = 3$
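A minimal sketch checking the potential-method result above for the expansion-only table: with Φ(T) = 2·num - size, every insert has amortized cost at most 3 (exactly 3 after the first insert). Names and the loop bound are mine.

size, num = 0, 0

def phi():
    return 2 * num - size if size > 0 else 0

for i in range(1, 10001):
    phi_before = phi()
    if num == size:                 # expansion (doubling) needed
        size = max(1, 2 * size)
        actual = num + 1            # copy num items, then insert one
    else:
        actual = 1                  # elementary insertion only
    num += 1
    amortized = actual + phi() - phi_before
    assert amortized <= 3           # equals 3 except for the very first insert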

Dynamic Tables: Table Expansion & Contraction

• Load factor bounds: double size when T is full; halve size when T is ¼ full (why ¼?)
• DELETE pseudocode analogous to INSERT (count elementary insertions & deletions)

• Potential Method:
  • Value of potential function:
    $\Phi(T) = 2\,num[T] - size[T]$ if $\alpha(T) \ge 1/2$ (same as INSERT-only case)
    $\Phi(T) = size[T]/2 - num[T]$ if $\alpha(T) < 1/2$ (different from INSERT-only case)
  • = 0 for empty table
  • 0 right after expansion or contraction
  • builds as α(T) increases to 1 or decreases to ¼
  • always nonnegative, so the sum of amortized costs of n operations is an upper bound on the sum of actual costs
  • = 0 when α(T) = 1/2
  • = num[T] when α(T) = 1
  • = num[T] when α(T) = 1/4
  • motivation for choice of potential function: can pay for moving num[T] items

source: 91.503 textbook, Cormen et al.

Dynamic Tables: Table Expansion & Contraction

[Figure: the three functions size_i, num_i, Φ_i plotted over a sequence of operations, motivating the choice of potential function.]

source: 91.503 textbook, Cormen et al.

Dynamic Tables: Table Expansion & Contraction

• Analyze the cost of a sequence of n inserts and/or deletes using the potential
  $\Phi(T) = 2\,num[T] - size[T]$ if $\alpha(T) \ge 1/2$, and $\Phi(T) = size[T]/2 - num[T]$ if $\alpha(T) < 1/2$

• Amortized cost of the ith operation

• Case 1: INSERT
  • Case 1a: $\alpha_{i-1} \ge 1/2$. By the previous insert analysis, $\hat{c}_i \le 3$ (holds whether or not the table expands).
  • Case 1b: $\alpha_{i-1} < 1/2$ and $\alpha_i < 1/2$ (no expansion; why?):
    $\hat{c}_i = c_i + \Phi_i - \Phi_{i-1} = 1 + (size_i/2 - num_i) - (size_{i-1}/2 - num_{i-1}) = 0$
  • Case 1c: $\alpha_{i-1} < 1/2$ and $\alpha_i \ge 1/2$ (no expansion; why?):
    $\hat{c}_i = c_i + \Phi_i - \Phi_{i-1} = 1 + (2\,num_i - size_i) - (size_{i-1}/2 - num_{i-1}) \le 3$
    (see derivation in textbook)

Dynamic Tables: Table Expansion & Contraction

• Amortized cost of the ith operation (continued), with
  $\Phi(T) = 2\,num[T] - size[T]$ if $\alpha(T) \ge 1/2$, and $\Phi(T) = size[T]/2 - num[T]$ if $\alpha(T) < 1/2$

• Case 2: DELETE
  • Case 2a: $\alpha_{i-1} \ge 1/2$ (textbook exercise)
  • Case 2b: $\alpha_{i-1} < 1/2$ and $\alpha_i < 1/2$, no contraction:
    $\hat{c}_i = c_i + \Phi_i - \Phi_{i-1} = 1 + (size_i/2 - num_i) - (size_{i-1}/2 - num_{i-1}) = 2$
  • Case 2c: $\alpha_{i-1} < 1/2$ and the deletion triggers contraction:
    $\hat{c}_i = c_i + \Phi_i - \Phi_{i-1} = (num_i + 1) + (size_i/2 - num_i) - (size_{i-1}/2 - num_{i-1}) = 1$

Conclusion: the amortized cost of each operation is bounded above by a constant, so the time for a sequence of n operations is in O(n).

source: 91.503 textbook, Cormen et al.
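A minimal sketch (not the textbook's code) of a dynamic table with both expansion and contraction, instrumented with the two-piece potential above, to check that every insert or delete has amortized cost (actual + ΔΦ) bounded by a small constant. The class, names, and operation mix are my own illustrative assumptions.

import random

class DynamicTable:
    def __init__(self):
        self.size = 0      # number of slots
        self.num = 0       # number of stored items

    def _phi(self):
        if self.size == 0:
            return 0
        if 2 * self.num >= self.size:          # alpha >= 1/2
            return 2 * self.num - self.size
        return self.size / 2 - self.num        # alpha < 1/2

    def insert(self):
        cost = 0
        if self.num == self.size:              # table full: expand (double)
            self.size = max(1, 2 * self.size)
            cost += self.num                   # copy existing items
        self.num += 1
        cost += 1                              # elementary insertion
        return cost

    def delete(self):
        cost = 1                               # elementary deletion
        self.num -= 1
        if self.size > 1 and self.num < self.size / 4:   # too empty: halve
            cost += self.num                   # copy surviving items
            self.size //= 2
        return cost

random.seed(0)
t = DynamicTable()
for _ in range(10000):
    phi_before = t._phi()
    if t.num > 0 and random.random() < 0.5:
        actual = t.delete()
    else:
        actual = t.insert()
    amortized = actual + t._phi() - phi_before
    assert amortized <= 3                      # constant per-operation bound
print("all amortized costs were <= 3")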

Example: Dynamic Closest Pair

source: “Fast hierarchical clustering and other applications of dynamic closest pairs,” David Eppstein, Journal of Experimental Algorithmics, Vol. 5, August 2000.

[Figure: point set S partitioned into subsets S1, S2, S3, with $S = S_1 \cup S_2 \cup S_3$.]

Goal: fast maintenance of the closest pair in a dynamic setting


Example: Dynamic Closest Pair (continued): Rules

source: “Fast hierarchical clustering and other applications of dynamic closest pairs,” David Eppstein, Journal of Experimental Algorithmics, Vol. 5, August 2000.

• Partition dynamic set S into $k \le \log n$ subsets.
• Each subset Si has an associated digraph Gi consisting of a set of disjoint, directed paths.
• Total number of edges in all graphs remains linear:
  • combine and rebuild if the number of edges reaches 2n.
• Closest pair is always in some Gi.
• Initially all points are in a single set S1.
• Operations:
  • Create Gi for a subset Si.
  • Insert a point.
  • Delete a point.
  • Merge subsets until $k \le \log n$.

(We use log base 2.)

Example: Dynamic Closest Pair (continued): Rules: Operations

source: “Fast hierarchical clustering and other applications of dynamic closest pairs,” David Eppstein, Journal of Experimental Algorithmics, Vol. 5, August 2000.

• Create Gi for a subset Si (see the sketch after this list):
  • Select a starting point (we choose the leftmost, or the higher one in case of a tie).
  • Iteratively extend the path P, selecting the next vertex as:
    • Case 1: nearest neighbor in S \ P if the last point on the path belongs to Si
    • Case 2: nearest neighbor in Si \ P if the last point on the path belongs to S \ Si

• Insert a point x:
  • Create new subset Sk+1 = {x}.
  • Merge subsets if necessary.
  • Create Gi for new or merged subsets.

• Delete a point x:
  • Create new subset Sk+1 = all points y such that (y,x) is a directed edge in some Gi.
  • Remove x and adjacent edges from all Gi. (We also remove y from its subset.)
  • Merge subsets if necessary.
  • Create Gi for new or merged subsets.

• Merge subsets until $k \le \log n$:
  • Choose subsets Si and Sj to minimize the size ratio |Sj|/|Si|.

See handout for example.
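The path-building rule for Gi can be sketched as follows. This is a brute-force, illustrative reading of the two cases above (quadratic-time nearest-neighbor search, my own function and variable names), not Eppstein's actual implementation.

# Brute-force sketch of "Create Gi for a subset Si": build a directed path
# that starts at the leftmost point of Si and alternates between Case 1 and
# Case 2 depending on where the last point lies. Names are mine.

import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest(p, candidates):
    return min(candidates, key=lambda q: dist(p, q))

def create_path(S, Si):
    """Return the directed path (list of points) G_i for subset Si of S."""
    # starting point: leftmost point of Si (higher y breaks ties)
    start = min(Si, key=lambda p: (p[0], -p[1]))
    path = [start]
    on_path = {start}
    while True:
        last = path[-1]
        if last in Si:
            candidates = [q for q in S if q not in on_path]    # Case 1
        else:
            candidates = [q for q in Si if q not in on_path]   # Case 2
        if not candidates:
            return path
        nxt = nearest(last, candidates)
        path.append(nxt)
        on_path.add(nxt)

# Tiny usage example: one subset inside a larger point set.
S  = [(0, 0), (1, 0), (2, 1), (5, 5), (6, 5), (2, 2)]
Si = [(0, 0), (2, 1), (6, 5)]
print(create_path(S, Si))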

Example: Dynamic Closest Pair (continued): Potential Function

• Potential for a subset Si: $\Phi_i = n\,|S_i| \log |S_i|$.
• Total potential $\Phi = n^2 \log n - \sum_i \Phi_i$.
• Paper proves this Theorem in Section 3:

  Theorem: The data structure maintains the closest pair in S in O(n) space, amortized time O(n log n) per insertion, and amortized time O(n log² n) per deletion.

• HW#3 contains a problem related to this paper.
• Please read up through Section 3.
• Later in the semester we will have sufficient background for much of the remainder of the paper.

source: “Fast hierarchical clustering and other applications of dynamic closest pairs,” David Eppstein, Journal of Experimental Algorithmics, Vol. 5, August 2000.