Runtime complexity
ID1218, 2011
Christian Schulte, Johan Montelius
Software and Computer Systems
School of Information and Communication Technology
KTH – Royal Institute of Technology, Stockholm, Sweden



What Are the Questions?

Runtime efficiency
- what is it we are interested in?
- why are we interested?
- how to make statements on runtime and memory?

Clearly, “faster” is better
- we have to know how fast our programs compute
- we have to know how much memory our programs require
- we have to know how “good” a program is

Statements on Runtime

Run your program for some example data sets on some computer and make a statement

- is the statement valid if the data set gets larger?
- is the statement valid if executed on a different computer?
- is the statement valid if executed on the same computer, but with a slightly different configuration?

Testing is insufficient!

Runtime Guarantees

Testing can never yield a guarantee on runtime!

We need a form to express runtime guarantees

- capture size of input data
- capture different computers/configurations

Runtime Guarantees

Express runtime in relation to input size

T(n) runtime function where n is size of input

for example
- for an integer argument: the integer itself
- for a list argument: the length of the list

Runtime Guarantees

We need to ignore marginal differences between runtime functions

- differences for small input sizes: only consider n ≥ n0

- differences by a constant: c1 × T(n) is the same as c2 × T(n)

Asymptotic Complexity

Asymptotic complexity is such a runtime guarantee:
- best upper bound on runtime
- up to a constant factor
- for sufficiently large inputs

Discuss runtime by using “big-oh” notation

Big-Oh Notation

Assume
- T(n) runtime, where n is the size of the input
- f(n) a function on non-negative integers

T(n) is of O(f(n)), “T(n) is of order f(n)”, iff
T(n) ≤ c × f(n) for all n ≥ n0
- for some c (whatever computer)
- for some n0 (sufficiently large)

Big-Oh Notation

[Figure: plot of T(n) against c × n, with f(n) = n and c = 2; T(n) stays below c × n for all n ≥ n0]

Big-Oh Notation

T(n) is of O(f(n)) is sometimes written as
- T(n) = O(f(n))
- T(n) ∈ O(f(n))

Examples
- T(n) = 4n + 42 is O(n)
- T(n) = 5n + 42 is O(n)
- T(n) = 7n + 11 is O(n)
- T(n) = 4n + n^3 is O(n^3)
- T(n) = n^100 + 2^n + 8 is O(2^n)
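The definition can be checked mechanically for concrete witnesses. A minimal Python sketch for the first and last examples; the constants c and n0 below are hand-picked (not from the slides), and a finite range is of course a sanity check, not a proof:

```python
# Check T(n) <= c * f(n) for all n0 <= n < limit.
# c and n0 are hand-picked witnesses for the big-oh claim.
def bounded_by(T, f, c, n0, limit=10_000):
    return all(T(n) <= c * f(n) for n in range(n0, limit))

# T(n) = 4n + 42 is O(n): the witnesses c = 5, n0 = 42 work,
# since 4n + 42 <= 5n exactly when n >= 42.
assert bounded_by(lambda n: 4*n + 42, lambda n: n, c=5, n0=42)

# T(n) = n^100 + 2^n + 8 is O(2^n): eventually 2^n dominates n^100.
assert bounded_by(lambda n: n**100 + 2**n + 8, lambda n: 2**n,
                  c=2, n0=1000, limit=2000)
```

Note that smaller witnesses fail: with c = 4 the first check is false for every n0, because 4n + 42 > 4n everywhere.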

Big-Oh Notation

Often one just says, for
- O(n): linear complexity
- O(n^2): quadratic complexity
- O(n^3): cubic complexity
- O(log n): logarithmic complexity
- O(2^n): exponential complexity

Summary

Formulate statements on runtime as runtime guarantees: asymptotic complexity
- for large input
- independent of computer

Expressed in big-oh notation
- abstracts away behavior of function for small numbers
- abstracts away constants
- captures asymptotic behavior of functions

Determining Complexity

Approach

1. Take MiniErlang program
2. Take execution time for each expression
3. Give equations for runtime of functions
4. Solve equations
5. Determine asymptotic complexity

Simplification Here

We will only consider programs with one function

Generalization to many functions is straightforward

- complication: solving the resulting equations

Execution Times for Expressions

Give inductive definition based on function definition and structure of expressions

Function definition: pattern matching and guards

Simple expressions: values and list construction

More involved expressions: function calls
- a recursive function call leads to a recursive equation
- often called: recurrence equation

Execution Time: T(E)

Value: T(V) = c_value

List construction: T([E1|E2]) = c_cons + T(E1) + T(E2)

Time T(E) needed for executing expression E

how MiniErlang machine executes E

Execution Time: T(E)

Value: T(V) = c

List construction: T([E1|E2]) = c + T(E1) + T(E2)

Time T(E) needed for executing expression E

how MiniErlang machine executes E

to ease notation, just forget the subscripts
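The two rules can be transcribed directly. A Python sketch, where the tuple encoding of MiniErlang expressions and the unit cost c = 1 are assumptions for illustration:

```python
# An expression is either a value (anything that is not a 'cons' tuple)
# or a cons of two subexpressions ('cons', E1, E2), modelling [E1|E2].
C = 1  # after dropping subscripts, every step costs the same constant c

def t(e):
    """T(E): execution time of expression E under the two rules."""
    if isinstance(e, tuple) and e[0] == 'cons':   # [E1|E2]
        _, e1, e2 = e
        return C + t(e1) + t(e2)                  # c + T(E1) + T(E2)
    return C                                      # a value V costs c

# The list [1,2,3] is [1|[2|[3|[]]]]: three cons cells, four values.
lst = ('cons', 1, ('cons', 2, ('cons', 3, 'nil')))
assert t(lst) == 7   # 3 * c for the conses + 4 * c for 1, 2, 3, nil
```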

Function Call

For a function F, define a function TF(n) for its runtime,

and determine the size of the input for a call to F

Function Call

T(F(E1, ..., Ek)) = c_call + T(E1) + ... + T(Ek) + TF(size(IF({1, ..., k})))

- IF({1, ..., k}): the input arguments for F
- size(IF({1, ..., k})): the size of the input for F

Function Definition

Assume function F defined by clauses H1 -> B1; …; Hk -> Bk.

TF(n) = c_select + max { T(B1), …, T(Bk) }

Example: app/2

app([], Ys) -> Ys;
app([X|Xr], Ys) -> [X | app(Xr, Ys)].

What do we want to compute: Tapp(n)

Knowledge needed
- input argument: the first argument
- size function: length of the list

Computing Tapp

let’s use the whiteboard…

Append: Recurrence Equation

Analysis yields
Tapp(0) = c1
Tapp(n) = c2 + Tapp(n-1)

Solution to the recurrence is Tapp(n) = c1 + c2 × n

Asymptotic complexity: Tapp(n) is of O(n), “linear complexity”
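The recurrence can be unrolled mechanically. A small Python sketch confirming the closed form; the constants c1 = 3 and c2 = 5 are arbitrary placeholders, not values from the slides:

```python
# Unroll Tapp(0) = c1, Tapp(n) = c2 + Tapp(n-1) and compare with the
# claimed closed form c1 + c2 * n.
C1, C2 = 3, 5  # arbitrary placeholder constants

def t_app(n):
    t = C1                 # base case: the empty-list clause
    for _ in range(n):     # each list element adds one recursive step
        t += C2
    return t

assert all(t_app(n) == C1 + C2 * n for n in range(200))
```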

Recurrence Equations

Analysis in general yields a system
- T(n) defined in terms of T(m1), …, T(mk), for m1, …, mk < n
- T(0), T(1), …: values for certain n

Possibilities
- solve the recurrence equation (difficult in general)
- look up the asymptotic complexity for a common case

Common Recurrence Equations

T(n)                     Asymptotic Complexity
c + T(n–1)               O(n)
c1 + c2×n + T(n–1)       O(n^2)
c + T(n/2)               O(log n)
c1 + c2×n + T(n/2)       O(n)
c + 2T(n/2)              O(n)
c + 2T(n–1)              O(2^n)
c1 + c2×n + 2T(n/2)      O(n log n)
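Each row can be checked numerically. A Python sketch for four of the rows, assuming unit constants c = c1 = c2 = 1 and reading n/2 as integer division (the table leaves rounding unspecified); the bounding constants are again hand-picked witnesses:

```python
from functools import lru_cache
from math import log2

def check(T, f, c, ns):
    # finite sanity check: T(n) <= c * f(n) on the sampled sizes
    return all(T(n) <= c * f(n) for n in ns)

@lru_cache(maxsize=None)
def t1(n):  # c + T(n-1)            -> claimed O(n)
    return 1 if n == 0 else 1 + t1(n - 1)

@lru_cache(maxsize=None)
def t2(n):  # c1 + c2*n + T(n-1)    -> claimed O(n^2)
    return 1 if n == 0 else 1 + n + t2(n - 1)

@lru_cache(maxsize=None)
def t3(n):  # c + T(n/2)            -> claimed O(log n)
    return 1 if n <= 1 else 1 + t3(n // 2)

@lru_cache(maxsize=None)
def t7(n):  # c1 + c2*n + 2T(n/2)   -> claimed O(n log n)
    return 1 if n <= 1 else 1 + n + 2 * t7(n // 2)

ns = range(2, 2000)  # ascending, so the memoized recursion stays shallow
assert check(t1, lambda n: n, 2, ns)
assert check(t2, lambda n: n * n, 2, ns)
assert check(t3, lambda n: log2(n), 3, ns)
assert check(t7, lambda n: n * log2(n), 3, ns)
```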

Example: Naïve Reverse

rev([]) -> [];
rev([X|Xr]) -> app(rev(Xr), [X]).

Analysis yields
Trev(0) = c1
Trev(n) = c2 + Tapp(n-1) + Trev(n-1)
        = c2 + c3×(n-1) + Trev(n-1)

Asymptotic complexity: Trev(n) is of O(n^2), “quadratic complexity”
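The same unrolling works here. A Python sketch with assumed unit constants c1 = c2 = c3 = 1:

```python
# Unroll Trev(0) = c1, Trev(n) = c2 + c3*(n-1) + Trev(n-1).
def t_rev(n):
    t = 1                       # Trev(0) = c1
    for k in range(1, n + 1):
        t += 1 + (k - 1)        # c2 + c3*(k-1): one append of length k-1
    return t

# Closed form: Trev(n) = 1 + n + n*(n-1)/2, so Trev(n) <= 2*n^2 for n >= 2.
assert all(t_rev(n) == 1 + n + n * (n - 1) // 2 for n in range(500))
assert all(t_rev(n) <= 2 * n * n for n in range(2, 500))
```

The quadratic term comes entirely from the appends: each level of the recursion pays for an append whose cost is linear in the already-reversed tail.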

What Is to be Done?

- size functions and input functions
- recurrence equations
- finding asymptotic complexity

…understanding of programs might be necessary…

Obs!

Making computations iterative can change runtime!

Can improve:
- naïve reverse O(n^2) → reverse O(n)

Can be worse:
- appending lists O(n) → appending lists O(n^2)
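The reverse claim can be illustrated by counting cons operations. A Python sketch modelling the Erlang versions with Python lists; the accumulator-based rev_acc and the counting scheme are illustrations, not from the slides:

```python
# Count cons operations for naive (append-based) reverse versus an
# accumulator-based (iterative) reverse. cnt is a one-element list used
# as a mutable counter.

def app(xs, ys, cnt):
    cnt[0] += len(xs)          # app/2 performs one cons per element of xs
    return xs + ys

def rev_naive(xs, cnt):
    if not xs:
        return []
    return app(rev_naive(xs[1:], cnt), [xs[0]], cnt)

def rev_acc(xs, acc=None, cnt=None):
    acc = [] if acc is None else acc
    if not xs:
        return acc
    cnt[0] += 1                # one cons per element, total n
    return rev_acc(xs[1:], [xs[0]] + acc, cnt)

n = 100
naive, iterative = [0], [0]
assert rev_naive(list(range(n)), naive) == list(range(n))[::-1]
assert rev_acc(list(range(n)), cnt=iterative) == list(range(n))[::-1]
assert naive[0] == n * (n - 1) // 2   # quadratically many conses
assert iterative[0] == n              # linearly many conses
```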

Judging Asymptotic Complexity

What does it mean? good/bad? even optimal?

Consider the size of the computed output
- if the output size is of the same order: perfect?
- otherwise: O(n log n) is very good (optimal for sorting); check a book on algorithms

Very difficult problem!

Hard Problems

Computer science is a tough business: most optimization and combinatorial problems are hard (non-tractable)

Graph coloring
- minimal number of colors such that no two connected nodes have the same color
- used in: compilation, frequency allocation, …

Travelling salesman

Hard Problems

NP problems: checking whether a certain candidate is a solution to the problem is easy (polynomial complexity)

much harder: finding a solution

NP-complete problems: as hard as any other NP-complete problem

strong suspicion: no polynomial algorithm for finding a solution exists

examples: graph coloring, satisfiability testing, cryptography, …

Summary: Runtime Efficiency

Asymptotic complexity gives a runtime guarantee
- abstracts from constants
- for all possible input
- given in big-oh notation

Computing asymptotic complexity

Think about meaning of asymptotic complexity