
1

TCSS 342, Winter 2005 Lecture Notes

Course Overview, Review of Math Concepts,

Algorithm Analysis and Big-Oh Notation

Weiss book, Chapter 5, pp. 147-181

2

Course objectives

(broad) prepare you to be a good software engineer

(specific) learn basic data structures and algorithms
data structures – how data is organized
algorithms – unambiguous sequence of steps to compute something

3

Software design goals

What are some goals one should have for good software?

4

Course content
data structures
algorithms

data structures + algorithms = programs

algorithm analysis – determining how long an algorithm will take to solve a problem

Who cares? Aren't computers fast enough and getting faster?

5

Given an array of 1,000,000 integers, find the maximum integer in the array.

Now suppose we are asked to find the kth largest element (The Selection Problem)

[figure: an array with indices 0, 1, 2, ..., 999,999]

An example

6

candidate solution 1
sort the entire array (from small to large), using Java's Arrays.sort()
pick out the (1,000,000 – k)th element

candidate solution 2
sort the first k elements
for each of the remaining 1,000,000 – k elements, keep the k largest in an array
pick out the smallest of the k survivors

Candidate solutions
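A minimal Java sketch of candidate solution 1 (the class and method names are my own, not from the lecture), assuming a 1-based k:

    import java.util.Arrays;

    public class Selection {   // hypothetical class name
        // Candidate solution 1: sort ascending with Arrays.sort(), then the
        // kth largest element sits at index (length - k).
        public static int kthLargestBySorting(int[] a, int k) {
            int[] copy = Arrays.copyOf(a, a.length);   // leave the caller's array untouched
            Arrays.sort(copy);
            return copy[copy.length - k];
        }
    }

Candidate solution 2 would instead keep only the k largest elements seen so far in a small array, replacing its minimum whenever a larger element arrives.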

7

Is either solution good? Is there a better solution?

What makes a solution "better" than another? Is it entirely based on runtime?

How would you go about determining which solution is better?
could code them and test them
could somehow predict and analyze each solution, without coding

8

Why algorithm analysis?

as computers get faster and problem sizes get bigger, analysis will become more important

The difference between good and bad algorithms will get bigger

being able to analyze algorithms will help us identify good ones without having to program them and test them first

9

Why data structures?

when programming, you are an engineer
engineers have a bag of tools and tricks – and the knowledge of which tool is the right one for a given problem

Examples: arrays, lists, stacks, queues, trees, hash tables, heaps, graphs

10

Development practices
modular (flexible) code
appropriate commenting of code

each method needs a comment explaining its parameters and its behavior

writing code to match a rigid specification
being able to choose the right data structures to solve a variety of programming problems

using an integrated development environment (IDE)

incremental development

11

Math background: exponents

Exponents
X^Y = "X to the Yth power"; X multiplied by itself Y times

Some useful identities
X^A · X^B = X^(A+B)
X^A / X^B = X^(A-B)
(X^A)^B = X^(AB)
X^N + X^N = 2X^N
2^N + 2^N = 2^(N+1)
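One way to see the last identity, written out in LaTeX, is to combine the next-to-last one (with X = 2) with the product rule:

    \[ 2^N + 2^N = 2 \cdot 2^N = 2^1 \cdot 2^N = 2^{N+1} \]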

12

Logarithms

definition: X^A = B if and only if log_X B = A
intuition: log_X B means "the power X must be raised to, to get B"

a logarithm with no base implies base 2: log B means log_2 B

Examples
log_2 16 = 4 (because 2^4 = 16)
log_10 1000 = 3 (because 10^3 = 1000)

13

log AB = log A + log B
Proof: (let's write it together!)

log A/B = log A – log B
log (A^B) = B log A

change of base: log_B A = (log_C A) / (log_C B), for A, B, C > 0 and B, C ≠ 1

example: log_4 32 = (log_2 32) / (log_2 4) = 5 / 2

Logarithms, continued
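A small Java check of the change-of-base identity (the class name is my own; java.lang.Math only provides natural and base-10 logs, so base 4 has to go through the identity):

    public class LogDemo {   // hypothetical class name
        public static void main(String[] args) {
            // log_4 32 = (ln 32) / (ln 4) = 5/2
            double log4of32 = Math.log(32) / Math.log(4);
            System.out.println(log4of32);   // prints 2.5
        }
    }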

14

Series

Σ (i = j to k) Expr, for some expression Expr (possibly containing i), means the sum of all values of Expr with each value of i between j and k inclusive

Example (arithmetic series):
Σ (i = 0 to 4) (2i + 1)
= (2(0) + 1) + (2(1) + 1) + (2(2) + 1) + (2(3) + 1) + (2(4) + 1)
= 1 + 3 + 5 + 7 + 9
= 25

15

Series identities

sum from 1 through N inclusive:
Σ (i = 1 to N) i = N(N+1)/2

is there an intuition for this identity?
sum of all numbers from 1 to N:
1 + 2 + 3 + ... + (N-2) + (N-1) + N
how many terms are in this sum? Can we rearrange them?
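One intuition, sketched in LaTeX: pair the first term with the last, the second with the next-to-last, and so on; each pair sums to N + 1, and there are N/2 such pairs (the same count works for odd N by averaging):

    \[ \sum_{i=1}^{N} i = (1 + N) + (2 + (N-1)) + \cdots = \frac{N}{2}\,(N+1) = \frac{N(N+1)}{2} \]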

16

Series

sum of powers of 2:
Σ (i = 0 to N) 2^i = 2^(N+1) - 1
example: 1 + 2 + 4 + 8 + 16 + 32 = 64 - 1
think about the binary representation of numbers...

sum of powers of any number a ("geometric progression", for a > 0, a ≠ 1):
Σ (i = 0 to N) a^i = (a^(N+1) - 1) / (a - 1)
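A short derivation of the geometric-progression formula (not spelled out on the slide): multiply the sum S by a and subtract, so that all the middle terms cancel:

    \[ S = \sum_{i=0}^{N} a^i, \qquad aS - S = a^{N+1} - 1, \qquad S = \frac{a^{N+1} - 1}{a - 1} \quad (a \ne 1) \]

Setting a = 2 recovers the powers-of-two identity 2^(N+1) - 1.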

17

Algorithm performance

How to determine how much time an algorithm A uses to solve problem X?
Depends on input; use input size "N" as parameter
Determine function f(N) representing cost

empirical analysis: code it and use timer running on many inputs

algorithm analysis: Analyze steps of algorithm, estimating amount of work each step takes

18

Typically use a simple model for basic operation costs

RAM (Random Access Machine) model
RAM model has all the basic operations: +, -, *, /, =, comparisons
fixed-size integers (e.g., 32-bit)
infinite memory
All basic operations take exactly one time unit (one CPU instruction) to execute

RAM model

19

Critique of the model

Strengths:
simple
easier to prove things about the model than the real machine
can estimate algorithm behavior on any hardware/software

Weaknesses:
not all operations take the same amount of time in a real machine
does not account for page faults, disk accesses, limited memory, floating point math, etc.

20

model → real world

Idea: useful statements using the model translate into useful statements about real computers

Why use models?

21

Relative rates of growth

most algorithms' runtime can be expressed as a function of the input size N

rate of growth: measure of how quickly the graph of a function rises

goal: distinguish between fast- and slow-growing functions
we only care about very large input sizes (for small sizes, most any algorithm is fast enough)

this helps us discover which algorithms will run more quickly or slowly, for large input sizes

22

Growth rate example

Consider these graphs of functions. Perhaps each one represents an algorithm:
n^3 + 2n^2
100n^2 + 1000

• Which grows faster?

23

Growth rate example

• How about now?

24

Defn: T(N) = O(f(N)) if there exist positive constants c, n0 such that T(N) ≤ c · f(N) for all N ≥ n0

idea: We are concerned with how the function grows when N is large. We are not picky about constant factors: coarse distinctions among functions

Lingo: "T(N) grows no faster than f(N)."

Big-Oh notation

25

Examples

n = O(2n) ?

2n = O(n) ?

n = O(n^2) ?

n^2 = O(n) ?

n = O(1) ?

100 = O(n) ?

10 log n = O(n) ?

214n + 34 = O(2n^2 + 8n) ?
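For the last example, a worked choice of constants (one of many possible) showing that 214n + 34 = O(2n^2 + 8n): take c = 31 and n_0 = 1, since

    \[ 214n + 34 \le 214n + 34n = 248n = 31 \cdot 8n \le 31\,(2n^2 + 8n) \quad \text{for all } n \ge 1. \]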

26

pick the tightest bound. If f(N) = 5N, then:
f(N) = O(N^5)
f(N) = O(N^3)
f(N) = O(N)  ← preferred
f(N) = O(N log N)

ignore constant factors and low-order terms
T(N) = O(N), not T(N) = O(5N)
T(N) = O(N^3), not T(N) = O(N^3 + N^2 + N log N)

Bad style: f(N) ≤ O(g(N))
Wrong: f(N) ≥ O(g(N))

Preferred big-Oh usage

27

Big-Oh of selected functions

28

Ten-fold processor speedup

29

Defn: T(N) = Ω(g(N)) if there are positive constants c and n0 such that T(N) ≥ c g(N) for all N ≥ n0
Lingo: "T(N) grows no slower than g(N)."

Defn: T(N) = Θ(h(N)) if and only if T(N) = O(h(N)) and T(N) = Ω(h(N)).

Big-Oh, Omega, and Theta establish a relative order among all functions of N

Big omega, theta

30

Defn: T(N) = o(p(N)) if T(N) = O(p(N)) and T(N) ≠ Θ(p(N))

notation        intuition
O (Big-Oh)      ≤
Ω (Big-Omega)   ≥
Θ (Theta)       =
o (little-Oh)   <

Intuition, little-Oh

31

Fact: If f(N) = O(g(N)), then g(N) = Ω(f(N)).

Proof: Suppose f(N) = O(g(N)). Then there exist constants c and n0 such that f(N) ≤ c g(N) for all N ≥ n0.
Then g(N) ≥ (1/c) f(N) for all N ≥ n0, and so g(N) = Ω(f(N)).

More about asymptotics

32

T(N) = O(f(N))
f(N) is an upper bound on T(N)
T(N) grows no faster than f(N)

T(N) = Ω(g(N))
g(N) is a lower bound on T(N)
T(N) grows at least as fast as g(N)

T(N) = o(h(N))
T(N) grows strictly slower than h(N)

More terminology

33

Facts about big-Oh

If T1(N) = O(f(N)) and T2(N) = O(g(N)), then
T1(N) + T2(N) = O(f(N) + g(N))
T1(N) * T2(N) = O(f(N) * g(N))

If T(N) is a polynomial of degree k, then T(N) = Θ(N^k)
example: 17n^3 + 2n^2 + 4n + 1 = Θ(n^3)

log^k N = O(N), for any constant k

34

Algebra
ex. f(N) = N / log N, g(N) = log N
same as asking which grows faster, N or log^2 N

Evaluate:
lim (N → ∞) f(N) / g(N)

limit        Big-Oh relation
0            f(N) = o(g(N))
c ≠ 0        f(N) = Θ(g(N))
∞            g(N) = o(f(N))
no limit     no relation

Techniques

35

L'Hôpital's rule:If and , then

example: f(N) = N, g(N) = log NUse L'Hôpital's rulef'(N) = 1, g'(N) = 1/N g(N) = o(f(N))

)(lim NfN

)(lim NgN

)('

)('lim

)(

)(lim

Ng

Nf

Ng

NfNN

Techniques, cont'd
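Applying the same rule to the earlier question (N versus log^2 N, written with natural logs since constant factors do not matter) takes two rounds of L'Hôpital:

    \[ \lim_{N\to\infty}\frac{N}{\ln^2 N} = \lim_{N\to\infty}\frac{1}{2\ln N / N} = \lim_{N\to\infty}\frac{N}{2\ln N} = \lim_{N\to\infty}\frac{1}{2/N} = \infty \]

so log^2 N = o(N); that is, g(N) = o(f(N)) for f(N) = N / log N and g(N) = log N.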

36

for (int i = 0; i < n; i += c)   // O(n)
    statement(s);
Adding to the loop counter means that the loop runtime grows linearly when compared to its maximum value n.

for (int i = 1; i < n; i *= c)   // O(log n)
    statement(s);
Multiplying the loop counter means that the maximum value n must grow exponentially to linearly increase the loop runtime; therefore, it is logarithmic.

for (int i = 0; i < n * n; i += c)   // O(n^2)
    statement(s);
The loop maximum is n^2, so the runtime is quadratic.

Program loop runtimes

37

for (int i = 0; i < n; i += c)       // O(n^2)
    for (int j = 0; j < n; j += c)
        statement;
Nesting loops multiplies their runtimes.

for (int i = 0; i < n; i += c)
    statement;

for (int i = 0; i < n; i += c)       // O(n log n)
    for (int j = 1; j < n; j *= c)
        statement;
Loops in sequence add together their runtimes, which means the loop set with the larger runtime dominates.

More loop runtimes

38

Maximum subsequence sum

The maximum contiguous subsequence sum problem:

Given a sequence of integers A_0, A_1, ..., A_(n-1),
find the maximum value of Σ (k = i to j) A_k for any integers 0 ≤ i ≤ j < n.
(This sum is zero if all numbers in the sequence are negative.)

39

First algorithm (brute force)

try all possible combinations of subsequences

// implement together
function maxSubsequence(array[]):
    max sum = 0
    for each starting index i,
        for each ending index j,
            add up the sum from A_i to A_j
            if this sum is bigger than max, max sum = this sum
    return max sum

What is the runtime (Big-Oh) of this algorithm? How could it be improved?
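One possible Java rendering of this brute-force pseudocode (a sketch in my own style, not the official in-class solution):

    public class MaxSubsequenceSum {   // hypothetical class name
        // Brute force: try every (i, j) pair and re-add each sum from scratch.
        public static int maxSubsequence(int[] a) {
            int maxSum = 0;                      // the empty subsequence counts as 0
            for (int i = 0; i < a.length; i++) {
                for (int j = i; j < a.length; j++) {
                    int sum = 0;
                    for (int k = i; k <= j; k++) {
                        sum += a[k];             // adds A_i .. A_j all over again
                    }
                    if (sum > maxSum) {
                        maxSum = sum;
                    }
                }
            }
            return maxSum;                       // three nested loops over n elements: O(n^3)
        }
    }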

40

still try all possible combinations, but don't redundantly add the sums

key observation:
Σ (k = i to j) A_k = A_j + Σ (k = i to j-1) A_k

in other words, we don't need to throw away partial sums

can we use this information to remove one of the loops from our algorithm?

// implement together
function maxSubsequence2(array[]):

What is the runtime (Big-Oh) of this new algorithm? Can it still be improved further?

Second algorithm (improved)
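A sketch of how maxSubsequence2 might look in Java once the key observation is applied (again my own rendering, not the in-class solution): the running sum is extended by one element as j advances, so the innermost re-adding loop disappears.

    public static int maxSubsequence2(int[] a) {
        int maxSum = 0;
        for (int i = 0; i < a.length; i++) {
            int sum = 0;                         // sum of A_i .. A_j, kept between iterations
            for (int j = i; j < a.length; j++) {
                sum += a[j];                     // key observation: reuse the partial sum
                if (sum > maxSum) {
                    maxSum = sum;
                }
            }
        }
        return maxSum;                           // two nested loops: O(n^2)
    }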

41

Third algorithm (improved!)

must avoid trying all possible combinations; to do this, we must find a way to broadly eliminate many potential combinations from consideration

Claim #1: A subsequence with a negative sum cannot be the start of the maximum-sum subsequence.

42

Claim #1, more formally:
If A_(i,j) is a subsequence such that Σ (k = i to j) A_k < 0,
then there is no q such that A_(i,q) is the maximum-sum subsequence.

Proof: (do it together in class)

Can this help us produce a better algorithm?

Third algorithm, continued

43

Claim #2: when examining subsequences left-to-right, for some starting index i, if A_(i,j) becomes the first subsequence starting with i such that Σ (k = i to j) A_k < 0,
then no part of A_(i,j) can be part of the maximum-sum subsequence.

(Why is this a stronger claim than Claim #1?)

Proof: (do it together in class)

Third algorithm, continued

44

Third algorithm, continued

These figures show the possible contents of A_(i,j)

45

Can we eliminate another loop from our algorithm?

// implement together
function maxSubsequence3(array[]):

What is its runtime (Big-Oh)?

Is there an even better algorithm than this third algorithm? Can you make a strong argument about why or why not?

Third algorithm, continued
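A sketch of a linear-time maxSubsequence3 built on Claims #1 and #2 (my own rendering of the idea the slides lead up to): whenever the running sum goes negative, drop it and restart just past j.

    public static int maxSubsequence3(int[] a) {
        int maxSum = 0;
        int sum = 0;
        for (int j = 0; j < a.length; j++) {
            sum += a[j];
            if (sum > maxSum) {
                maxSum = sum;
            } else if (sum < 0) {
                sum = 0;    // Claim #2: no subsequence ending here with a negative
                            // sum can be part of the maximum-sum subsequence
            }
        }
        return maxSum;      // single pass: O(n)
    }

One argument that nothing asymptotically faster exists: any correct algorithm must examine every element at least once (an unread element could be a huge positive number), so Ω(n) work is unavoidable.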

46

Express the running time as f(N), where N is the size of the input

worst case: your enemy gets to pick the input

average case: need to assume a probability distribution on the inputs

amortized: your enemy gets to pick the inputs/operations, but you only have to guarantee speed over a large number of operations

Kinds of runtime analysis

47

References

Weiss book, Chapter 5, pp. 147-181
