

Chapter 10 A

Algorithm Efficiency

© 2004 Pearson Addison-Wesley. All rights reserved 10 A-2

Determining the Efficiency of Algorithms

• Analysis of algorithms
– Provides tools for contrasting the efficiency of different methods of solution

• A comparison of algorithms
– Should focus on significant differences in efficiency
– Should not consider reductions in computing costs due to clever coding tricks


Determining the Efficiency of Algorithms

• Three difficulties with comparing programs instead of algorithms
– How are the algorithms coded?
– What computer should you use?
– What data should the programs use?

• Algorithm analysis should be independent of
– Specific implementations
– Computers
– Data


The Execution Time of Algorithms

• Counting an algorithm's operations is a way to assess its efficiency
– An algorithm’s execution time is related to the number of operations it requires
– Examples

• Traversal of a linked list

• The Towers of Hanoi

• Nested Loops
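As an illustration of relating execution time to operation count (a minimal sketch, not from the slides), doubly nested loops over n items execute their body n * n times:

```python
# A minimal sketch (not from the slides) relating execution time to an
# operation count: doubly nested loops over n items run the body n * n times.

def count_inner_operations(n):
    """Return how many times the inner loop body executes."""
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1  # one unit of work per inner iteration
    return count

# Doubling the problem size quadruples the work: growth proportional to n * n.
assert count_inner_operations(20) == 4 * count_inner_operations(10)
```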


Algorithm Growth Rates

• An algorithm’s time requirements can be measured as a function of the problem size

• An algorithm’s growth rate
– Enables the comparison of one algorithm with another
– Examples
• Algorithm A requires time proportional to n²
• Algorithm B requires time proportional to n

• Algorithm efficiency is typically a concern for large problems only


Algorithm Growth Rates

Figure 10.1

Time requirements as a function of the problem size n


Order-of-Magnitude Analysis and Big O Notation

• Definition of the order of an algorithm
Algorithm A is order f(n) – denoted O(f(n)) – if constants k and n₀ exist such that A requires no more than k * f(n) time units to solve a problem of size n ≥ n₀

• Growth-rate function
– A mathematical function used to specify an algorithm’s order in terms of the size of the problem

• Big O notation
– A notation that uses the capital letter O to specify an algorithm’s order
– Example: O(f(n))
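The definition can be checked concretely. For a hypothetical algorithm needing 3n² + 10n time units (a cost function invented for illustration), the constants k = 4 and n₀ = 10 witness that it is O(n²), since 10n ≤ n² once n ≥ 10:

```python
# Hypothetical cost function, invented for illustration: 3n^2 + 10n time units.
def time_units(n):
    return 3 * n**2 + 10 * n

# Witnesses for the definition: k = 4, n0 = 10, f(n) = n^2.
k, n0 = 4, 10

# The algorithm requires no more than k * f(n) time units for every n >= n0,
# so it is O(n^2).
assert all(time_units(n) <= k * n**2 for n in range(n0, 10_000))
```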


Order-of-Magnitude Analysis and Big O Notation

Figure 10.3a

A comparison of growth-rate functions: a) in tabular form


Order-of-Magnitude Analysis and Big O Notation

Figure 10.3b

A comparison of growth-rate functions: b) in graphical form


Order-of-Magnitude Analysis and Big O Notation

• Order of growth of some common functions
O(1) < O(log₂n) < O(n) < O(n * log₂n) < O(n²) < O(n³) < O(2ⁿ)

• Properties of growth-rate functions
– You can ignore low-order terms
– You can ignore a multiplicative constant in the high-order term
– O(f(n)) + O(g(n)) = O(f(n) + g(n))
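These properties can be seen numerically. In an invented cost function 5n² + 3n + 7, the low-order terms become negligible for large n, leaving O(n²):

```python
def exact_cost(n):
    # invented operation count with low-order terms and a constant factor
    return 5 * n**2 + 3 * n + 7

# For large n, cost / n^2 approaches the multiplicative constant 5: the
# low-order terms 3n and 7 are negligible, so exact_cost is O(n^2).
n = 10**6
assert abs(exact_cost(n) / n**2 - 5) < 1e-5
```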


Order-of-Magnitude Analysis and Big O Notation

• Worst-case and average-case analyses
– An algorithm can require different times to solve different problems of the same size

• Worst-case analysis
– A determination of the maximum amount of time that an algorithm requires to solve problems of size n

• Average-case analysis
– A determination of the average amount of time that an algorithm requires to solve problems of size n

Algorithm Growth Rate Analysis


Sequential Statements
– If the orders of code segments p1 and p2 are f1(n) and f2(n), respectively, then the order of p1;p2 is O(f1(n) + f2(n)) = O(max(f1(n), f2(n)))
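The sequential-statements rule can be seen numerically; a minimal Python sketch (function and data invented for illustration) runs an O(n) phase followed by an O(n²) phase, and the larger term dominates:

```python
# Invented example: an O(n) phase followed by an O(n^2) phase. The combined
# order is O(n + n^2) = O(max(n, n^2)) = O(n^2).

def two_phases(a):
    total = sum(a)          # phase p1: O(n)
    pairs = 0
    for x in a:             # phase p2: O(n^2)
        for y in a:
            pairs += 1
    return total, pairs

assert two_phases([1, 2, 3, 4]) == (10, 16)
```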

For Loops (simple ones)
– Count the total number of iterations
– Shortcut: multiply the number of iterations by the cost of the loop body

While Loops
– No general rule; you may be able to count the iterations directly

Recursion
– Set up and solve a recurrence relation


Important Recurrences

Constant operation reduces problem size by one
T(n) = T(n-1) + c
Solution: T(n) = O(n) linear

Examples: find largest version 2 and linear search

find_max2(myArray, n) // finds largest value in myArray of size n
  if (n == 1) return myArray[0];
  x = find_max2(myArray, n-1);
  if (myArray[n-1] > x) return myArray[n-1];
  else return x;

linear_search(myArray, n, key) // returns index of key if found in myArray of size n, -1 otherwise
  if (n == 0) return -1;
  index = linear_search(myArray, n-1, key);
  if (index != -1) return index;
  if (myArray[n-1] == key) return n-1;
  else return -1;
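One way to see that T(n) = T(n-1) + c solves to O(n) is to count activations; the sketch below is a Python rendering of find_max2 with a call counter added (the counter is an addition for observation, not part of the original pseudocode):

```python
# Python rendering of find_max2 with a call counter added (the counter is an
# addition for observation, not part of the original pseudocode).

def find_max2(a, n, calls):
    calls[0] += 1                     # one constant-time activation
    if n == 1:
        return a[0]
    x = find_max2(a, n - 1, calls)    # T(n-1)
    return a[n - 1] if a[n - 1] > x else x

calls = [0]
assert find_max2([3, 1, 4, 1, 5, 9, 2, 6], 8, calls) == 9
assert calls[0] == 8                  # exactly n calls, so T(n) = O(n)
```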


Important Recurrences

A pass through input reduces problem size by one
T(n) = T(n-1) + n (or T(n) = T(n-1) + (n-1))
Solution: T(n) = O(n²) quadratic
Example: insertion sort

insertion_sort(myArray, n) // sort myArray of size n
  if (n == 1) return;
  insertion_sort(myArray, n-1);
  find the right position to insert myArray[n-1] // it takes O(n)
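The pseudocode leaves the insert step abstract; below is a runnable Python sketch, with the O(n) shifting insert written out (an implementation choice of mine):

```python
# Runnable Python version of the recursive insertion sort; the O(n) insert
# step is written out as a shifting loop (an implementation choice).

def insertion_sort(a, n):
    if n <= 1:
        return
    insertion_sort(a, n - 1)          # T(n-1): sort the first n-1 elements
    key, i = a[n - 1], n - 2
    while i >= 0 and a[i] > key:      # up to n-1 shifts: the "+ n" term
        a[i + 1] = a[i]
        i -= 1
    a[i + 1] = key

data = [5, 2, 9, 1, 5, 6]
insertion_sort(data, len(data))
assert data == [1, 2, 5, 5, 6, 9]
```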

Constant operation divides problem into two subproblems of half size
T(n) = 2T(n/2) + c
Solution: T(n) = O(n) linear
Example: find largest version 1

find_max1(myArray, i, j) // find largest value in myArray[i .. j]
  if (i == j) return myArray[i];
  x = find_max1(myArray, i, (i+j)/2);
  y = find_max1(myArray, (i+j)/2+1, j);
  return (x > y) ? x : y;
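The claim that T(n) = 2T(n/2) + c is linear, not n log n, can be checked by counting calls; the sketch below is a Python rendering of find_max1 with a call counter (my addition):

```python
# Python rendering of find_max1 with a call counter (an addition for
# observation): a recursion over n elements makes 2n - 1 calls in total,
# so T(n) = 2T(n/2) + c is linear, not n log n.

def find_max1(a, i, j, calls):
    calls[0] += 1
    if i == j:
        return a[i]
    mid = (i + j) // 2
    x = find_max1(a, i, mid, calls)
    y = find_max1(a, mid + 1, j, calls)
    return x if x > y else y

calls = [0]
assert find_max1(list(range(16)), 0, 15, calls) == 15
assert calls[0] == 2 * 16 - 1   # n leaves plus n-1 internal calls
```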


Important Recurrences

Constant operation reduces problem size by half
T(n) = T(n/2) + c
Solution: T(n) = O(lg n) logarithmic
Example: binary search

A pass through input divides problem into 2 subproblems of input size reduced by half
T(n) = 2T(n/2) + n
Solution: T(n) = O(n lg n) n log n
Example: merge sort
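The logarithmic recurrence can be observed directly; the sketch below is a standard iterative binary search (my rendering, not the textbook's code) with a probe counter:

```python
# Iterative binary search (a standard rendering, not the textbook's code)
# with a probe counter: each probe halves the remaining range, so the count
# is about log2(n) + 1 in the worst case.

def binary_search(a, key):
    lo, hi, probes = 0, len(a) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1
        if a[mid] == key:
            return mid, probes
        if a[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, probes

index, probes = binary_search(list(range(1024)), 1023)
assert index == 1023 and probes <= 11   # log2(1024) = 10
```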


Keeping Your Perspective

• Throughout the course of an analysis, keep in mind that you are interested only in significant differences in efficiency

• When choosing an ADT’s implementation, consider how frequently particular ADT operations occur in a given application

• Some seldom-used but critical operations must be efficient


Keeping Your Perspective

• If the problem size is always small, you can probably ignore an algorithm’s efficiency

• Weigh the trade-offs between an algorithm’s time requirements and its memory requirements

• Compare algorithms for both style and efficiency

• Order-of-magnitude analysis focuses on large problems


The Efficiency of Searching Algorithms

• Sequential search
– Strategy
• Look at each item in the data collection in turn, beginning with the first one
• Stop when
– You find the desired item
– You reach the end of the data collection


The Efficiency of Searching Algorithms

• Sequential search
– Efficiency

• Worst case: O(n)

• Average case: O(n)

• Best case: O(1)
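These cases can be observed with a probe counter; a minimal sketch (my code, not the textbook's):

```python
# Sequential search with a probe counter (illustrative, not the textbook's code).

def sequential_search(a, key):
    probes = 0
    for i, x in enumerate(a):
        probes += 1
        if x == key:
            return i, probes
    return -1, probes

assert sequential_search([7, 3, 5], 7) == (0, 1)    # best case: 1 probe, O(1)
assert sequential_search([7, 3, 5], 9) == (-1, 3)   # worst case: n probes, O(n)
```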


The Efficiency of Searching Algorithms

• Binary search
– Strategy
• To search a sorted array for a particular item
– Repeatedly divide the array in half
– Determine which half the item must be in, if it is indeed present, and discard the other half
– Efficiency
• Worst case: O(log₂n)
• For large arrays, the binary search has an enormous advantage over a sequential search
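A back-of-the-envelope sketch of that advantage, using worst-case comparison counts for one million items:

```python
import math

# Worst-case comparison counts for one million items.
n = 1_000_000
sequential_worst = n                            # O(n): may examine every item
binary_worst = math.floor(math.log2(n)) + 1     # O(log2 n): halve each time

assert sequential_worst == 1_000_000
assert binary_worst == 20
```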
