(20-21)-search method.pdf


TRANSCRIPT

• Slide 1/29

    Optimization

    MEL 806

    Thermal System Simulation (2-0-2)

Dr. Prabal Talukdar, Associate Professor

    Department of Mechanical Engineering


• Slide 2/29

Introduction

• In the preceding lectures, we focused our attention on obtaining a workable, feasible, or acceptable design of a system. Such a design satisfies the requirements for the given application, without violating any imposed constraints. A system fabricated or assembled because of this design is expected to perform the appropriate tasks for which the effort was undertaken.

• However, the design would generally not be the best design, where the definition of best is based on cost, performance, efficiency, or some other such measure.

• In actual practice, we are usually interested in obtaining the best quality or performance per unit cost, with acceptable environmental effects. This brings in the concept of optimization, which minimizes or maximizes quantities and characteristics of particular interest to a given application.

• Slide 3/29

UNCONSTRAINED SEARCH WITH MULTIPLE VARIABLES

Let us now consider the search for an optimal design when the system is governed by two or more independent variables.

However, the complexity of the problem rises sharply with the number of variables; therefore, attention is generally directed at the most important variables, usually restricting these to two or three.

• Slide 4/29

• In many practical cases, the system can be well characterized in terms of two or three predominant variables.

• Examples of this include the length and diameter of a heat exchanger, fluid flow rate and evaporator temperature in a refrigeration system, and so on.

• Slide 5/29

To represent the approach to the optimum design graphically, contours or lines of constant values of the objective function are plotted in the space of the design variables.

• Slide 6/29

    Lattice Search Method

    Lattice search method in a two-variable space.
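The slide itself carries only this figure, so the sketch below illustrates the general idea of a lattice (grid) search, assuming the usual formulation: evaluate the objective function at every node of a regular grid over the two-variable domain and keep the best node. The objective function used here is a hypothetical placeholder, not the one from the lecture.

```python
import itertools

def lattice_search(U, x_range, y_range, nx=20, ny=20):
    """Evaluate U on an nx-by-ny lattice and return the best (lowest-U) node.

    U        -- objective function of two variables (to be minimized)
    x_range  -- (x_min, x_max) bounds for the first variable
    y_range  -- (y_min, y_max) bounds for the second variable
    """
    xs = [x_range[0] + i * (x_range[1] - x_range[0]) / (nx - 1) for i in range(nx)]
    ys = [y_range[0] + j * (y_range[1] - y_range[0]) / (ny - 1) for j in range(ny)]
    # Exhaustively evaluate the objective at every lattice node.
    return min(((x, y, U(x, y)) for x, y in itertools.product(xs, ys)),
               key=lambda node: node[2])

# Hypothetical objective, for illustration only.
U = lambda x, y: (x - 2.0) ** 2 + (y - 1.0) ** 2
print(lattice_search(U, (0.0, 4.0), (0.0, 3.0)))
```

In practice the lattice is then refined around the best node and the search repeated, which is presumably what the figure illustrates.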

• Slide 7/29

Univariate Search

• This method optimizes the objective function with respect to one variable at a time. Therefore, the multivariable problem is reduced to a series of single-variable optimization problems, with the process converging to the optimum as the variables are alternated.

• Slide 8/29

    Graphical presentation

    Various steps in the univariate search method.

• Slide 9/29

The method

A starting point is chosen based on available information on the system.

First, one of the variables, say x, is held constant and the function is optimized with respect to the other variable y. Point A represents the optimum thus obtained.

Then y is held constant at the value at point A and the function is optimized with respect to x to obtain the optimum given by point B.

Again, x is held constant at the value at point B and y is varied to obtain the optimum, given by point C.

This process is continued, alternating the variable that is changed while keeping the others constant, until the optimum is attained. This is indicated by the change in the objective function from one step to the next becoming less than a chosen convergence criterion or tolerance.
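As a concrete illustration of the alternating procedure just described, here is a minimal sketch. The single-variable optimizer (a golden-section search) and the objective function are placeholder choices for illustration, not the lecture's own equations.

```python
def golden_section_min(f, a, b, tol=1e-8):
    """Minimize a single-variable function f on [a, b] by golden-section search."""
    g = (5 ** 0.5 - 1) / 2
    c, d = b - g * (b - a), a + g * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2

def univariate_search(U, x, y, bounds, tol=1e-6, max_cycles=100):
    """Alternate single-variable minimizations of U(x, y) until U stops changing."""
    u_old = U(x, y)
    for _ in range(max_cycles):
        y = golden_section_min(lambda yy: U(x, yy), *bounds)   # hold x, vary y
        x = golden_section_min(lambda xx: U(xx, y), *bounds)   # hold y, vary x
        u_new = U(x, y)
        if abs(u_old - u_new) < tol:   # convergence criterion on the change in U
            break
        u_old = u_new
    return x, y, u_new

# Hypothetical objective, for illustration only.
U = lambda x, y: (x - 2.0) ** 2 + 2.0 * (y - 1.0) ** 2 + 0.5 * x * y
print(univariate_search(U, 0.5, 0.5, bounds=(0.0, 5.0)))
```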

• Slide 10/29

• Slide 11/29

If y is kept constant, the value of x at the optimum is obtained by setting the partial derivative of U with respect to x to zero. Similarly, if x is held constant, the value of y at the optimum is obtained by setting the partial derivative of U with respect to y to zero.

Let us choose x = y = 0.5 as the starting point. First x is held constant and y is varied to obtain an optimum value of U. Then y is held constant and x is varied to obtain an optimum value of U. In both cases, the preceding equations are used.

• Slide 12/29

Calculations

x          y          U
0.5        1.632993   9.839626
1.944161   0.828139   5.598794
2.437957   0.739531   5.427791
...        ...        ...
2.547644   0.723436   5.422363
2.550314   0.723057   5.422359
2.55076    0.722994   5.422359
2.550834   0.722983   5.422359
2.550847   0.722982   5.422359

• Slide 13/29

Steepest Ascent/Descent Method

• The steepest ascent/descent method is a very efficient search method for multivariable optimization and is widely used for a variety of applications, including thermal systems.

• It is a hill-climbing technique in that it attempts to move toward the peak, for maximizing the objective function, or toward the valley, for minimizing the objective function, over the shortest possible path. The method is known as steepest ascent in the former case and steepest descent in the latter.

• Slide 14/29

• At each step, starting with the initial trial point, the direction in which the objective function changes at the greatest rate is chosen for moving the location of the point, which represents the design in the multivariable space.

Steepest ascent method, shown in terms of (a) the climb toward the peak of a hill and (b) constant U contours.

• Slide 15/29

• It was shown that the gradient vector ∇U is normal to the constant U contour line in a two-variable space, to the constant U surface in a three-variable space, and so on.

• Since the normal direction represents the shortest distance between two contour lines, the direction of the gradient vector ∇U is the direction in which U changes at the greatest rate.

• The gradient vector may be written as
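The expression itself is not reproduced in the transcript; the standard definition, presumably what the slide showed, is, for two variables,

\nabla U = \frac{\partial U}{\partial x}\,\mathbf{i} + \frac{\partial U}{\partial y}\,\mathbf{j}

and, for n variables in general,

\nabla U = \frac{\partial U}{\partial x_1}\,\mathbf{e}_1 + \frac{\partial U}{\partial x_2}\,\mathbf{e}_2 + \cdots + \frac{\partial U}{\partial x_n}\,\mathbf{e}_n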

• Slide 16/29

• At each trial point the gradient vector is determined and the search is moved along this vector, the direction being chosen so that U increases if a maximum is sought, or decreases if a minimum is sought.

• The direction represented by the gradient vector is given by the relationship between the changes in the independent variables. Denoting these by Δx1, Δx2, ..., Δxn, we have
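The relation itself is not reproduced in the transcript; for a move along the gradient direction, each change Δxi is proportional to the corresponding partial derivative, which is commonly written as

\frac{\Delta x_1}{\partial U / \partial x_1} = \frac{\Delta x_2}{\partial U / \partial x_2} = \cdots = \frac{\Delta x_n}{\partial U / \partial x_n}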

• Slide 17/29

First approach

• Choose a starting point. Select Δx. Calculate the derivatives.

• Decide the direction of movement, i.e., whether Δx is positive or negative. Calculate Δy. Obtain the new values of x, y, and U.

• Calculate the derivatives again at this point. Repeat the previous steps to attain a new point.

• This procedure is continued until the change in the variables between two consecutive iterations is within a desired convergence criterion; a sketch of this procedure is given below.
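As a rough illustration of these steps, here is a minimal two-variable steepest descent sketch. It uses finite-difference derivatives and a fixed step factor in place of the slide's explicit Δx/Δy bookkeeping, and the objective function is again a hypothetical placeholder.

```python
def grad(U, x, y, h=1e-6):
    """Approximate the partial derivatives of U at (x, y) by central differences."""
    dUdx = (U(x + h, y) - U(x - h, y)) / (2 * h)
    dUdy = (U(x, y + h) - U(x, y - h)) / (2 * h)
    return dUdx, dUdy

def steepest_descent(U, x, y, step=0.05, tol=1e-8, max_iter=100000):
    """Move against the gradient until the change in the variables is within tol."""
    for _ in range(max_iter):
        dUdx, dUdy = grad(U, x, y)
        # The change in each variable is proportional to its partial derivative.
        x_new, y_new = x - step * dUdx, y - step * dUdy
        if abs(x_new - x) < tol and abs(y_new - y) < tol:
            break
        x, y = x_new, y_new
    return x, y, U(x, y)

# Hypothetical objective, for illustration only.
U = lambda x, y: (x - 2.0) ** 2 + 2.0 * (y - 1.0) ** 2 + 0.5 * x * y
print(steepest_descent(U, 0.5, 0.5))
```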

• Slide 18/29

• Slide 19/29

Example Problem

• Let us consider the same problem as before and apply the two approaches just described for the steepest ascent/descent method to obtain the minimum cost U.

• Slide 20/29

• The starting point is taken as x = y = 0.5. The results obtained for different values of Δx are given on the following slides.

• Slide 21/29

• Slide 22/29

Multivariable Constrained Optimization

• Let us now consider multivariable constrained optimization, which is much more involved than the various unconstrained optimization cases considered thus far.

• The number of independent variables must be larger than the number of equality constraints; otherwise, these constraints may simply be used to determine the values of the variables and no optimization is possible.

• Slide 23/29

Penalty Function Method

• The basic approach of this method is to convert the constrained problem into an unconstrained one by constructing a composite function using the objective function and the constraints.

• Let us consider the optimization problem given by the equations for the objective function U and the equality constraints G.

• The composite function, also known as the penalty function, may be formulated in many different ways.

• Slide 24/29

• If a maximum in U is being sought, a new objective function V is defined as shown below.

• Here the r's are scalar quantities, known as penalty parameters, that vary the importance given to the various constraints.

• They may all be taken as equal or different. Higher values may be taken for the constraints that are critical and smaller values for those that are not as important.
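The slide's expression is not reproduced in the transcript; a common formulation of the penalty function for a maximum in U, consistent with the discussion that follows, is

V(x_1, \ldots, x_n) = U(x_1, \ldots, x_n) - \sum_{k} r_k \left[ G_k(x_1, \ldots, x_n) \right]^2

with the penalty term added instead of subtracted when a minimum in U is sought, so that any violation of a constraint G_k = 0 worsens the composite function V.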

• Slide 25/29

• If the penalty parameters are all taken as zero, the constraints have no effect on the solution and therefore the constraints are not satisfied.

• On the other hand, if these parameters are taken as large, the constraints are satisfied but the convergence to the optimum is slow.

• Therefore, by varying the penalty parameters we can vary the rate of convergence and the effect of the different constraints on the solution.

• The general approach is to start with small values of the penalty parameters and gradually increase these as the G's, which represent the constraints, become small.

• This implies going gradually and systematically from an unconstrained problem to a constrained one.

• Slide 26/29

     

Results obtained for different values of the penalty parameter r.

• Slide 27/29

• Slide 28/29

Example Problem

In a two-component system, the cost U(x, y) is the objective function, where x and y represent the specifications of the two components. These variables are also linked by mass conservation to yield the constraint

G(x, y) = xy - 12 = 0

Solve this problem by the penalty function method to obtain the minimum cost.

The new objective function V(x, y), consisting of the objective function and the constraint, is defined as before by adding a penalty term r[G(x, y)]^2 to U(x, y), and this composite function is minimized to obtain the optimum. An exhaustive search can be used because of the simplicity of the method and the given functions.

If the penalty parameter r is taken as small, the constraint is not satisfied; if it is taken as large, the constraint is satisfied, but the convergence is slow.
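Since the slide's cost expression is not reproduced, the sketch below uses a hypothetical cost U(x, y) = 2x + 3y purely to illustrate the mechanics: the constraint G(x, y) = xy - 12 from the slide is folded into a composite function V = U + r G^2, which is minimized by an exhaustive lattice search for several values of the penalty parameter r.

```python
import itertools

def penalty_search(U, G, r, x_range=(0.1, 10.0), y_range=(0.1, 10.0), n=400):
    """Minimize V = U + r*G**2 by exhaustive (lattice) search over the given ranges."""
    xs = [x_range[0] + i * (x_range[1] - x_range[0]) / (n - 1) for i in range(n)]
    ys = [y_range[0] + j * (y_range[1] - y_range[0]) / (n - 1) for j in range(n)]
    V = lambda x, y: U(x, y) + r * G(x, y) ** 2   # composite (penalty) function
    return min(((x, y, V(x, y)) for x, y in itertools.product(xs, ys)),
               key=lambda node: node[2])

# Constraint taken from the slide; the cost function U is a hypothetical placeholder.
G = lambda x, y: x * y - 12.0
U = lambda x, y: 2.0 * x + 3.0 * y

# Increasing r drives the search toward designs that satisfy G(x, y) = 0.
for r in (0.01, 0.1, 1.0, 10.0):
    x, y, v = penalty_search(U, G, r)
    print(f"r = {r:5.2f}:  x = {x:.3f}, y = {y:.3f}, G = {G(x, y):+.3f}, U = {U(x, y):.3f}")
```

With small r the search drifts toward low cost while ignoring the constraint; with large r the constraint xy = 12 is enforced closely, mirroring the trade-off described on the preceding slides.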

• Slide 29/29

We may also derive x and y in terms of the penalty parameter r, by differentiating V(x, y) with respect to x and y and setting the resulting expressions to zero.

See the spreadsheet.