M.Tech-DAA

TRANSCRIPT


    Algorithm types

    Simple recursive algorithms

    Divide and conquer algorithms

    Dynamic programming algorithms

    Greedy algorithms

    Backtracking algorithms

    Branch and bound algorithms

    Randomized algorithms


    Recursive Algorithm

    Definition

    An algorithm that calls itself

    Approach

    1. Solve small problems directly.

    2. Simplify a large problem into one or more smaller subproblem(s)
       and solve them recursively.

    3. Calculate the solution from the solution(s) to the subproblem(s).
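    As a rough illustration (the function and the sum-of-a-list example are
    ours, not from the slides), a minimal recursive sketch in Python might
    look like this:

    def recursive_sum(values):
        """Sum a list of numbers recursively."""
        # 1. Solve the small problem directly.
        if not values:
            return 0
        # 2. Simplify into a smaller subproblem and solve it recursively.
        rest = recursive_sum(values[1:])
        # 3. Calculate the solution from the subproblem's solution.
        return values[0] + rest

    print(recursive_sum([3, 1, 4, 1, 5]))   # prints 14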


    Divide-and-Conquer

    The most well-known algorithm design strategy:

    1. Divide an instance of the problem into two or more independent
       smaller instances.

    2. Solve the smaller instances recursively.

    3. Obtain the solution to the original (larger) instance by combining
       these solutions.
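    A hedged Python sketch of this strategy, using merge sort (one of the
    examples on the next slide); the helper names are illustrative only:

    def merge_sort(a):
        """Divide-and-conquer sort: divide, solve recursively, combine."""
        if len(a) <= 1:                 # instance small enough to solve directly
            return a
        mid = len(a) // 2
        left = merge_sort(a[:mid])      # solve the smaller instances recursively
        right = merge_sort(a[mid:])
        return merge(left, right)       # combine the two solutions

    def merge(left, right):
        """Combine two sorted lists into one sorted list."""
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]

    print(merge_sort([5, 2, 9, 1, 7]))  # prints [1, 2, 5, 7, 9]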


    Divide-and-Conquer

    Binary Search

    Merge Sort

    Quick Sort

    Strassen's Matrix Multiplication

    Convex Hull


    Dynamic Programming

    The principle of optimality applies if the optimal solution to a
    problem always contains optimal solutions to all of its subproblems.

    Dynamic Programming (DP) is an algorithm design technique for
    optimization problems: often minimizing or maximizing.

    Like divide and conquer, DP solves problems by combining solutions
    to subproblems.

    Unlike divide and conquer, the subproblems are not independent:
    subproblems may share subsubproblems.

    However, the solution to one subproblem does not affect the
    solutions to other subproblems of the same problem.


    Dynamic Programming

    DP reduces computation by

    Solving subproblems in a bottom-up fashion.

    Storing solution to a subproblem the first time it is solved.

    Looking up the solution when the subproblem is encountered again.

    Examples:

    Assembly line scheduling

    Matrix chain multiplication

    Longest common subsequence

    Optimal binary search trees

    Knapsack Problem

    Shortest Paths
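    A minimal sketch of the "store and look up" idea, using Fibonacci
    numbers as the subproblems (our own example, not from the list above):

    def fib(n, memo=None):
        """Fibonacci with memoization: each subproblem is solved only once."""
        if memo is None:
            memo = {}
        if n in memo:                      # look it up when encountered again
            return memo[n]
        result = n if n < 2 else fib(n - 1, memo) + fib(n - 2, memo)
        memo[n] = result                   # store it the first time it is solved
        return result

    print(fib(40))    # prints 102334155 using only O(n) subproblem evaluations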


    Steps in Dynamic Programming

    1. Characterize structure of an optimal solution.

    2. Define value of optimal solution recursively.

    3. Compute optimal solution values either top-down with

    caching or bottom-up in a table.

    4. Construct an optimal solution from computed values.
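    These four steps can be seen in a small bottom-up sketch for the longest
    common subsequence problem (one of the examples listed earlier); the
    variable names are ours:

    def lcs(x, y):
        """Longest common subsequence via the four DP steps."""
        m, n = len(x), len(y)
        # Steps 1-2: the optimal value for prefixes x[:i], y[:j] satisfies
        #   c[i][j] = c[i-1][j-1] + 1            if x[i-1] == y[j-1]
        #   c[i][j] = max(c[i-1][j], c[i][j-1])  otherwise
        c = [[0] * (n + 1) for _ in range(m + 1)]
        # Step 3: compute the values bottom-up in a table.
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if x[i - 1] == y[j - 1]:
                    c[i][j] = c[i - 1][j - 1] + 1
                else:
                    c[i][j] = max(c[i - 1][j], c[i][j - 1])
        # Step 4: construct an optimal solution from the computed values.
        out, i, j = [], m, n
        while i > 0 and j > 0:
            if x[i - 1] == y[j - 1]:
                out.append(x[i - 1]); i -= 1; j -= 1
            elif c[i - 1][j] >= c[i][j - 1]:
                i -= 1
            else:
                j -= 1
        return "".join(reversed(out))

    print(lcs("ABCBDAB", "BDCABA"))   # one optimal LCS of length 4, e.g. "BCBA"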


    Comparison with divide-and-conquer

    Divide-and-conquer algorithms split a problem into separate
    subproblems, solve the subproblems, and combine the results
    for a solution to the original problem.

    Example: Quicksort

    Example: Mergesort

    Example: Binary search

    Divide-and-conquer algorithms can be thought of as top-down
    algorithms.

    In contrast, a dynamic programming algorithm proceeds by solving
    small problems, then combining them to find the solution to
    larger problems.

    Dynamic programming can be thought of as bottom-up.


    Greedy Algorithms

    Similar to dynamic programming, but simpler approach

    Also used for optimization problems

    Idea: when we have a choice to make, make the one that looks
    best right now.

    Make a locally optimal choice in the hope of getting a globally
    optimal solution.

    Greedy algorithms don't always yield an optimal solution.

    When the problem has certain general characteristics, greedy

    algorithms give optimal solutions


    Greedy Algorithms

    Minimum spanning trees

    Shortest path

    Knapsack problem

    Task Scheduling

    Activity selection problem
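    A minimal greedy sketch for the activity selection problem above (the
    activity data are made up): always pick the compatible activity that
    finishes earliest, a locally optimal choice that is known to give a
    globally optimal solution for this particular problem.

    def select_activities(activities):
        """Greedy activity selection: take the activity that finishes first."""
        chosen, last_finish = [], float("-inf")
        for start, finish in sorted(activities, key=lambda a: a[1]):
            if start >= last_finish:        # compatible with the choices so far
                chosen.append((start, finish))
                last_finish = finish
        return chosen

    acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
    print(select_activities(acts))          # prints [(1, 4), (5, 7), (8, 11)]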


    Dynamic Programming vs. Greedy Algorithms

    Dynamic programming

    We make a choice at each step

    The choice depends on solutions to subproblems

    Bottom up solution, from smaller to larger subproblems

    Greedy algorithm

    Make the greedy choice and THEN

    Solve the subproblem arising after the choice is made

    The choice we make may depend on previous choices, but

    not on solutions to subproblems

    Top down solution, problems decrease in size


    Backtracking Algorithm

    Is used to solve problems for which a sequence of objects is to be
    selected from a set such that the sequence satisfies some
    constraint.

    Traverses the state space tree using a depth-first search with

    pruning.

    Performs a depth-first traversal of a tree.

    Continues until it reaches a node that is non-viable or non-

    promising.

    Prunes the subtree rooted at this node and continues the
    depth-first traversal of the tree.

    This gives a significant advantage over an exhaustive search of

    the tree for the average problem.

    Worst case: the algorithm tries every path, traversing the entire
    search space, as in an exhaustive search.


    Backtracking Algorithm

    Travelling Salesman Problem

    Graph Coloring

    n-Queen Problem

    Hamiltonian Cycles

    Sum of subsets

    Knapsack problem
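    A compact backtracking sketch for the n-Queens problem from the list
    above: the state space tree is traversed depth-first, and a branch is
    pruned as soon as the partial placement becomes non-promising. The
    helper names are ours.

    def solve_n_queens(n):
        """Count n-Queens solutions by depth-first search with pruning."""
        solutions = []

        def promising(cols, col):
            row = len(cols)
            return all(c != col and abs(c - col) != row - r
                       for r, c in enumerate(cols))

        def place(cols):
            if len(cols) == n:             # every row has a queen: a solution
                solutions.append(tuple(cols))
                return
            for col in range(n):
                if promising(cols, col):   # prune non-promising branches
                    cols.append(col)
                    place(cols)            # go deeper in the state space tree
                    cols.pop()             # backtrack

        place([])
        return solutions

    print(len(solve_n_queens(6)))          # prints 4 (solutions for n = 6)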


    Branch and Bound

    Branch and bound (BB) is a general search method, especially

    in discrete optimization.

    There is a way to split the solution space (branch)

    There is a way to predict a lower bound for a class of solutions.
    There is also a way to find an upper bound on an optimal
    solution. If the lower bound of a solution exceeds the upper
    bound, this solution cannot be optimal, and thus we should
    terminate the branching associated with this solution.

    If a subproblem cannot be solved or pruned directly, we divide it
    into subproblems. The algorithm is applied recursively to the
    subproblems. The search proceeds until all nodes have been solved
    or pruned.


    Branch and Bound

    Backtracking uses a depth-first search with pruning, while the
    branch and bound algorithm uses a breadth-first search with
    pruning.

    Branch and bound uses a queue as an auxiliary data structure

    In many types of problems, branch and bound is faster than

    backtracking, due to the use of a breadth-first search instead of

    a depth-first search

    The worst case scenario is the same, as it will still visit every

    node in the tree


    Branch and Bound

    Travelling Salesman Problem

    Graph Coloring

    n-Queen Problem

    Hamiltonian Cycles

    Sum of subsets

    Knapsack problem

    Assignment problem


    Randomized algorithms

    A randomized algorithm is just one that depends on random

    numbers for its operation

    These are randomized algorithms:

    Using random numbers to help find a solution to a problem

    Using random numbers to improve a solution to a problem

    These are related topics:

    Getting or generating random numbers

    Generating random data for testing (or other) purposes
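    A small randomized sketch (our own example, not from the slides):
    quickselect uses random numbers to help find the k-th smallest element;
    the random pivot keeps the expected running time linear regardless of
    the input order.

    import random

    def quickselect(values, k):
        """Return the k-th smallest element (k counted from 0), random pivot."""
        pivot = random.choice(values)          # the random choice drives the run
        smaller = [v for v in values if v < pivot]
        equal = [v for v in values if v == pivot]
        larger = [v for v in values if v > pivot]
        if k < len(smaller):
            return quickselect(smaller, k)
        if k < len(smaller) + len(equal):
            return pivot
        return quickselect(larger, k - len(smaller) - len(equal))

    print(quickselect([7, 2, 9, 4, 1, 8, 3], 3))   # prints 4, the 4th smallest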


    The 0/1 knapsack problem using the Backtracking Approach

    Suppose that K = 16 and n = 4, and we have the following set of
    objects, ordered by their value density.

    Item i   Profit pi   Weight wi   Density pi/wi
       1        $45          3           $15
       2        $30          5           $ 6
       3        $45          9           $ 5
       4        $10          5           $ 2


    totalSize = currentSize + size of the remaining objects that can be
    fully placed

    bound (maximum potential value) = currentValue + value of the
    remaining objects fully placed + (K - totalSize) * (value density
    of the item that is partially placed)

    for a node at level i in the state space tree (the first i items have

    been considered for selection) and for the kth object as the one

    that will not completely fit into the remaining space in the

    knapsack, these formulae can be written:

    When the bound of a node is less than or equal to the current
    maximum value, or adding the current item to the node causes the
    size of the contents to exceed the capacity of the knapsack, the
    subtree rooted at that node is pruned, and the traversal
    backtracks to the previous parent in the state space tree.


    totalSize = currentSize + \sum_{j=i+1}^{k-1} w_j

    bound = currentValue + \sum_{j=i+1}^{k-1} p_j + (K - totalSize) * (p_k / w_k)

    For the root node, currentSize = 0 and currentValue = 0:

    totalSize = 0 + w_1 + w_2 = 0 + 3 + 5 = 8

    bound = 0 + p_1 + p_2 + (K - totalSize) * (p_3 / w_3)
          = 0 + $45 + $30 + (16 - 8) * ($5) = $75 + $40 = $115
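    These formulae can be checked with a short Python sketch for the instance
    above (K = 16 and the four items ordered by value density); the function
    name is ours:

    def node_bound(items, K, chosen):
        """totalSize and bound for a node where the first len(chosen) items are fixed.

        items  : list of (profit, weight) pairs, ordered by value density
        chosen : 0/1 decisions already made for those first items
        """
        current_value = sum(p for (p, w), x in zip(items, chosen) if x)
        current_size = sum(w for (p, w), x in zip(items, chosen) if x)
        total_size, bound = current_size, current_value
        for p, w in items[len(chosen):]:
            if total_size + w <= K:        # items i+1 .. k-1 are fully placed
                total_size += w
                bound += p
            else:                          # item k is only partially placed
                bound += (K - total_size) * (p / w)
                break
        return total_size, bound

    items = [(45, 3), (30, 5), (45, 9), (10, 5)]   # (p_i, w_i) from the table
    print(node_bound(items, 16, []))               # prints (8, 115.0) at the root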


    The 0/1 knapsack problem using B&B

    Positive integers P1, P2, ..., Pn (profits),
    W1, W2, ..., Wn (weights), and M (capacity).

    maximize   \sum_{i=1}^{n} P_i X_i

    subject to \sum_{i=1}^{n} W_i X_i <= M,
               X_i = 0 or 1,  i = 1, ..., n.

    The problem is modified so that branch and bound minimizes:

    minimize   -\sum_{i=1}^{n} P_i X_i

    (hence the negative bound values on the following slides)


    How to find the upper bound?

    Ans: by quickly finding a feasible solution in a greedy manner:
    starting from the smallest available i, scanning towards the
    largest i until M is exceeded. The upper bound can then be
    calculated.


    The 0/1 knapsack problem

    E.g. n = 6, M = 34

    A feasible solution: X1 = 1, X2 = 1, X3 = 0, X4 = 0,

    X5 = 0, X6 = 0

    -(P1+P2) = -16 (upper bound)

    Any solution higher than -16 cannot be an optimal solution.

    i      1    2    3    4    5    6
    Pi     6   10    4    5    6    4
    Wi    10   19    8   10   12    8

    (items are ordered so that Pi/Wi >= Pi+1/Wi+1)


    How to find the lower bound?

    Ans: by relaxing the restriction from X_i = 0 or 1 to
    0 <= X_i <= 1 (the fractional knapsack problem).

    Let -\sum_{i=1}^{n} P_i X_i be the optimal value of the 0/1
    knapsack problem and -\sum_{i=1}^{n} P_i X_i' be the optimal
    value of the fractional knapsack problem.

    Let Y = -\sum_{i=1}^{n} P_i X_i and Y' = -\sum_{i=1}^{n} P_i X_i'.

    Then Y' <= Y, so the fractional optimum is a lower bound for the
    0/1 optimum.


    The knapsack problem

    We can use the greedy method to find an optimal solution for the
    fractional knapsack problem.

    For example, for the state X1 = 1 and X2 = 1, we have
    X1 = 1, X2 = 1, X3 = (34-10-19)/8 = 5/8, X4 = 0, X5 = 0, X6 = 0.

    -(P1 + P2 + (5/8)P3) = -18.5 (lower bound)

    -18 is our lower bound, since all profits are integers.
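    Both bounds can be reproduced with a short sketch (the variable names
    are ours): the upper bound scans the items greedily with 0/1 choices,
    and the lower bound fills the remaining capacity with a fractional item.

    import math

    P = [6, 10, 4, 5, 6, 4]       # profits, ordered by non-increasing P[i]/W[i]
    W = [10, 19, 8, 10, 12, 8]    # weights
    M = 34                        # capacity

    # Upper bound: greedy 0/1 scan from the smallest i until M would be exceeded.
    value, room = 0, M
    for p, w in zip(P, W):
        if w > room:
            break
        value += p
        room -= w
    print(-value)                                  # prints -16

    # Lower bound at the state X1 = X2 = 1: fill the rest fractionally.
    frac_value = P[0] + P[1] + P[2] * (M - W[0] - W[1]) / W[2]
    print(-frac_value, math.ceil(-frac_value))     # prints -18.5 -18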


    How to expand the tree?

    By the best-first search scheme

    That is, by expanding the node with the best

    lower bound. If two nodes have the same

    lower bounds, expand the node with the

    lower upper bound.
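    A best-first branch-and-bound sketch for this instance, using Python's
    heapq as the priority queue of live nodes (the node layout and helper
    names are our own simplification of the slides' tree):

    import heapq

    P = [6, 10, 4, 5, 6, 4]       # profits (ordered by non-increasing density)
    W = [10, 19, 8, 10, 12, 8]    # weights
    M = 34                        # capacity
    n = len(P)

    def bounds(decided):
        """(lower bound, profit of a greedy feasible completion) for a node."""
        value = sum(p for p, x in zip(P, decided) if x)
        room = M - sum(w for w, x in zip(W, decided) if x)
        lb_value, feasible_value = value, value
        for p, w in list(zip(P, W))[len(decided):]:
            if w <= room:
                lb_value += p
                feasible_value += p
                room -= w
            else:
                lb_value += p * room / w   # fractional item counts only for the bound
                break
        return -lb_value, feasible_value

    def best_first_knapsack():
        lb, best = bounds([])              # incumbent = profit of a feasible solution
        heap = [(lb, [])]                  # expand the smallest lower bound first
        while heap:
            lb, decided = heapq.heappop(heap)
            if lb >= -best or len(decided) == n:   # cannot beat the incumbent
                continue
            for choice in (1, 0):                  # branch: take / skip the next item
                child = decided + [choice]
                if sum(w for w, x in zip(W, child) if x) > M:
                    continue                       # taking the item overflows the knapsack
                child_lb, child_feasible = bounds(child)
                best = max(best, child_feasible)
                heapq.heappush(heap, (child_lb, child))
        return best

    print(best_first_knapsack())    # prints 17, i.e. objective value -17 here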


    0/1 Knapsack Problem Solved by the Branch-and-Bound Strategy (figure)


    Node 2 is terminated because its lower bound

    is equal to the upper bound of node 14.

    Nodes 16, 18 and others are terminated

    because the local lower bound is equal to the

    local upper bound.

    (lower bound <= optimal solution <= upper bound)


    The assignment problem

    We want to assign n people to n jobs so that

    the total cost of the assignment is as small as

    possible (lower bound)


    Example: The assignment problem

    Select one element in each row of the cost matrix C so that:

    no two selected elements are in the same column; and

    the sum is minimized.

    For example:

                Job 1   Job 2   Job 3   Job 4
    Person a      9       2       7       8
    Person b      6       4       3       7
    Person c      5       8       1       8
    Person d      7       6       9       4

    Lower bound: any solution to this problem will have a total cost
    of at least the sum of the smallest element in each row =
    2 + 3 + 1 + 4 = 10
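    A small brute-force check of this bound (itertools.permutations simply
    enumerates the complete state space shown on the following slides); the
    cost matrix is the one above:

    from itertools import permutations

    C = [[9, 2, 7, 8],    # person a
         [6, 4, 3, 7],    # person b
         [5, 8, 1, 8],    # person c
         [7, 6, 9, 4]]    # person d

    # Lower bound: the sum of the smallest element in each row.
    print(sum(min(row) for row in C))       # prints 10

    # Exhaustive search over all n! assignments (feasible only for small n).
    best_cost, best_jobs = min(
        (sum(C[person][job] for person, job in enumerate(jobs)), jobs)
        for jobs in permutations(range(4)))
    print(best_cost, best_jobs)   # prints 13 (1, 0, 2, 3): a->Job2, b->Job1, c->Job3, d->Job4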


    Assignment problem: lower bounds


    State-space levels 0, 1, 2


    Complete state-space


    Basic Matrix Multiplication

    Suppose we want to multiply two matrices of size N x N:
    for example, A x B = C.

    C11 = a11*b11 + a12*b21
    C12 = a11*b12 + a12*b22
    C21 = a21*b11 + a22*b21
    C22 = a21*b12 + a22*b22

    2x2 matrix multiplication can be accomplished with 8
    multiplications, giving an overall running time of
    O(N^(log2 8)) = O(N^3).


    Strassen's Matrix Multiplication

    Strassen showed that 2x2 matrix multiplication can be
    accomplished with 7 multiplications and 18 additions or
    subtractions, giving an overall running time of
    O(N^(log2 7)) = O(N^2.807).

    This reduction can be achieved with a Divide and Conquer
    approach.


    Divide-and-Conquer

    Divide-and-conquer is a general algorithm design paradigm:

    Divide: divide the input data S into two or more disjoint
    subsets S1, S2, ...

    Recur: solve the subproblems recursively

    Conquer: combine the solutions for S1, S2, ..., into a
    solution for S

    The base cases for the recursion are subproblems of constant size.

    Analysis can be done using recurrence equations.


    Strassen's Matrix Multiplication

    P1 = (A11 + A22) * (B11 + B22)
    P2 = (A21 + A22) * B11
    P3 = A11 * (B12 - B22)
    P4 = A22 * (B21 - B11)
    P5 = (A11 + A12) * B22
    P6 = (A21 - A11) * (B11 + B12)
    P7 = (A12 - A22) * (B21 + B22)

    C11 = P1 + P4 - P5 + P7
    C12 = P3 + P5
    C21 = P2 + P4
    C22 = P1 + P3 - P2 + P6
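    The identities can be checked directly for the 2x2 case with a short
    sketch (in the recursive algorithm the entries would be N/2 x N/2 blocks
    instead of numbers); the function names are ours:

    def strassen_2x2(A, B):
        """Multiply two 2x2 matrices with 7 multiplications (Strassen)."""
        (a11, a12), (a21, a22) = A
        (b11, b12), (b21, b22) = B
        p1 = (a11 + a22) * (b11 + b22)
        p2 = (a21 + a22) * b11
        p3 = a11 * (b12 - b22)
        p4 = a22 * (b21 - b11)
        p5 = (a11 + a12) * b22
        p6 = (a21 - a11) * (b11 + b12)
        p7 = (a12 - a22) * (b21 + b22)
        return [[p1 + p4 - p5 + p7, p3 + p5],
                [p2 + p4, p1 + p3 - p2 + p6]]

    def classic_2x2(A, B):
        """The usual 8-multiplication formula, for comparison."""
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
    print(strassen_2x2(A, B))                        # prints [[19, 22], [43, 50]]
    print(strassen_2x2(A, B) == classic_2x2(A, B))   # prints True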


    Convex Hull

    The ever-present structure in computational geometry

    Used to construct other structures

    Useful in many applications: robot motion planning, shape

    analysis etc.

    One of the early success stories in computational geometry that
    sparked interest among computer scientists, through the invention
    of an O(n log n) algorithm rather than an O(n^3) algorithm.


    Preliminaries and definitions

    Intuitive definition

    Given a set S = {P0, P1, ..., Pn-1} of n points in the plane,
    the convex hull H(S) is the smallest convex polygon in the plane
    that contains all of the points of S.

    Imagine nails pounded halfway into the plane at the points of S.

    The convex hull corresponds to a rubber band stretched around them.


    Preliminaries and definitions

    Convex polygon

    A polygon is convex iff for any two points in the polygon
    (interior ∪ boundary) the segment connecting the points is
    entirely within the polygon.

    (figure: a convex polygon and a non-convex polygon)


    Quickhull

    Quicksort

    The Quickhull algorithm is based on the Quicksort algorithm.

    Recall how quicksort operates: at each level of recursion,

    an array of numbers to be sorted is partitioned into two subarrays,
    such that each term of the first (left) subarray is not larger
    than each term of the second (right) subarray.

    (figure: an array with the LEFT and RIGHT pointers at its extreme ends)

    Two pointers to the array cells (LEFT and RIGHT)

    initially point to the opposite extreme ends of the array.

    LEFT and RIGHT move towards each other, one cell at a time.

    At any given time, one pointer is moving and one is not.

    If the numbers pointed to by LEFT and RIGHT violate

    the desired sort order, they are swapped, then the moving pointer

    is halted and the halted pointer becomes the moving pointer.

    When the two pointers point to the same cell, the array is split

    at that cell and the process recurses on the subarrays.

    Quicksort runs in expected O(N log N) time, if the subarrays are
    well balanced, but can require as much as O(N^2) time in the
    worst case.


    Quickhull

    Quickhull overview

    Quickhull operates in a similar manner. It recursively partitions
    the point set S,

    so as to find the convex hull for each subset.

    The hull at each level of the recursion is formed by

    concatenating the hulls found at the next level down.


    Quickhull

    Initial partition

    The initial partition of S is determined by a line L through the
    points l, r ∈ S with the smallest and largest abscissa.

    S(1) ⊆ S is the subset of S on or above L.
    S(2) ⊆ S is the subset of S on or below L.

    Note that {S(1), S(2)} is not a strict partition of S, as
    S(1) ∩ S(2) ⊇ {l, r}. This is not a difficulty.

    The idea now is to construct the hulls H(S(1)) and H(S(2)),
    then concatenate them to get H(S).

    The process is the same for S(1) and S(2); we consider S(1).

    (figure: the line L through l and r, with S(1) above and S(2) below)


    Quickhull

    Finding the apex

    Find the point h ∈ S(1) such that
    (1) the triangle h-l-r has the maximum area of all triangles
        {p-l-r : p ∈ S(1)}, and, if there is more than one triangle
        with maximum area,
    (2) the one where the angle h-l-r is maximum.

    This condition ensures that h ∈ H(S). Why?
    Construct a line parallel to line L through h, and call it L'.
    There will be no points of S(1) (or S) above L', by condition (1).
    There may be other points on L', but h will be the leftmost, by
    condition (2), hence it is not a convex combination of any two
    points of S. Therefore h ∈ H(S).

    The apex h can be found in O(N) time by checking each point of S(1).

    (figure: the apex h on the line L' parallel to L)


    Quickhull

    Partitioning the point set

    Construct two directed lines, L1 from l to h, and L2 from h to r.

    Each point of S(1) is classified relative to L1 and L2
    (e.g., point-line classification).

    No point can be to the left of both L1 and L2.

    Points to the right of both are not in H(S), as they are within
    triangle h-l-r, and are eliminated from further consideration.

    Points left of L1 are S(1,1). Points left of L2 are S(1,2).

    (figure: lines L1 and L2 with the subsets S(1,1), S(1,2) and the
    eliminated points inside the triangle)


    Quickhull

    Recursion

    The process recurses on S(1,1) and S(1,2).

    Each node of the recursion is a triple
    (set, left endpoint, right endpoint):

    (S(...), l, r)
    (S(...,1), l, h)    (S(...,2), h, r)

    The recursion continues until S(...) has 0 points, i.e., all
    internal points have been eliminated, which implies that the
    segment l-r is an edge of H(S).

    (figure: the recursion applied to S(1,2), producing S(1,2,2))


    Quickhull

    Geometric primitives

    The geometric primitives used by this algorithm are:

    1. Point-line classification
    2. Area of a triangle

    Both of these require O(1) time.


    Quickhull

    General function

    S is assumed to have at least 2 elements
    (the recursion ends otherwise).

    FURTHEST(S, l, r) is a function, not given here, that finds the
    apex point h as previously defined.

    The operator || denotes list concatenation.

    Procedure QUICKHULL returns an ordered list of points.

    procedure QUICKHULL(S, l, r)
    begin
        if S = {l, r} then
            return (l, r)             /* l-r is an edge of H(S) */
        else
            h = FURTHEST(S, l, r)
            S(1) = { p in S : p is on or left of line l-h }
            S(2) = { p in S : p is on or left of line h-r }
            return QUICKHULL(S(1), l, h) || (QUICKHULL(S(2), h, r) - h)
    end

    Initial call

    begin
        l0 = (x0, y0)                 /* point of S with smallest abscissa */
        r0 = (x0, y0 - ε)             /* a point directly below l0 */
        result = QUICKHULL(S, l0, r0) - r0
        /* the point r0 is eliminated from the final list */
    end
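    For reference, a hedged runnable version of the same idea in Python; it
    follows the pseudocode above but, for simplicity, starts from the
    leftmost and rightmost points of S and builds the two partial hulls
    directly instead of using the virtual point r0. The helper names are ours.

    def cross(o, a, b):
        """Twice the signed area of triangle o-a-b; > 0 if b is left of line o->a."""
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def hull_side(points, l, r):
        """Hull vertices strictly left of the line l->r, in order, excluding l and r."""
        left = [p for p in points if cross(l, r, p) > 0]
        if not left:
            return []                                # l-r is an edge of the hull
        h = max(left, key=lambda p: cross(l, r, p))  # apex: maximum triangle area
        return hull_side(left, l, h) + [h] + hull_side(left, h, r)

    def quickhull(points):
        """Ordered list of convex hull vertices (clockwise, starting at l)."""
        pts = sorted(set(map(tuple, points)))
        if len(pts) < 3:
            return pts
        l, r = pts[0], pts[-1]                   # smallest and largest abscissa
        return ([l] + hull_side(pts, l, r)       # hull of S(1): points above l-r
                + [r] + hull_side(pts, r, l))    # hull of S(2): points below l-r

    print(quickhull([(0, 0), (3, 1), (1, 1), (2, 3), (4, 4), (0, 3), (2, 2)]))
    # prints [(0, 0), (0, 3), (4, 4), (3, 1)]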


    Quickhull

    Analysis

    Worst case time: O(N^2)
    Expected time: O(N log N)
    Storage: O(N^2)

    At each level of the recursion, partitioning S into S(1) and S(2)
    requires O(N) time. If S(1) and S(2) were guaranteed to have a
    size equal to a fixed portion of S, and this held at each level,
    the worst case time would be O(N log N).

    However, those criteria do not apply: S(1) and S(2) may have size
    in O(N) (they are not balanced). Hence the worst case time is
    O(N^2), O(N) at each of O(N) levels of recursion.

    The same applies to storage.