

(c)2001 American Institute of Aeronautics & Astronautics or Published with Permission of Author(s) and/or Author(s)' Sponsoring Organization.

Seattle, WA, 16-19 April 2001

AIAA-2001-1520

A CONVEX SET BASED OPTIMIZATION ALGORITHM FOR STRUCTURAL RELIABILITY ANALYSIS

M.A. Elseifi*, M.R. Khalessi†, H-Z Lin‡
Unipass Technologies, Inc.
Irvine, California

Abstract

In certain applications of structural reliability, it is required to find points on the limit-state surface with minimal distance to the origin of the standard normal space. This problem can be formulated as a constrained optimization problem, and several algorithms can be used in its solution. In this paper, a convex set approach is used along with a penalty method for the solution of this constrained minimization problem. Several problems are solved to demonstrate the generality, robustness, and efficiency of the new method.

1.0 Introduction

Probabilistic methods attempt to model the variability of a given system's parameters with random variables, resulting in a realistic assessment of the reliability of the system. Reliability is defined as the probabilistic measure of assurance of performance of a design in its intended environment [1]. Various methods have been proposed for the calculation of the reliability of a system. The efficiency of any particular method depends on its ability to calculate the probability of failure accurately using the minimum number of limit-state function evaluations.

In current structural component reliability analysis methods, the limit-state function, g, defines the failure boundary separating the failure region from the safe region. Given the joint probability density function of all the random variables (expressed in vector form x) as f(x), the probability of failure of the system can be expressed as:

$$P_f = \int_{\Omega} f(x)\, dx \qquad (1)$$

where Ω is the failure region g(x) < 0. Except for very special cases, the multifold integral in Equation (1) cannot be evaluated explicitly. Alternatively, Monte Carlo simulation procedures can be applied to evaluate the probability of failure, but at a higher cost, because a large number of limit-state function calculations is usually required.
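As a rough illustration of Equation (1) and of the cost issue just mentioned, the sketch below estimates P_f by crude Monte Carlo for an assumed toy limit state; the choice g(x) = 3 − x1 with a standard normal x1 is not from the paper and is picked only because the exact answer, Φ(−3) ≈ 1.35 × 10⁻³, is known.

```python
import numpy as np

# Crude Monte Carlo estimate of Equation (1) for the assumed limit state
# g(x) = 3 - x1, x1 standard normal; exact answer is Phi(-3) ~ 1.35e-3.
rng = np.random.default_rng(seed=0)

n = 1_000_000
x1 = rng.standard_normal(n)
p_f = np.mean(3.0 - x1 < 0.0)   # fraction of samples in the failure region g < 0
print(f"Monte Carlo estimate of P_f: {p_f:.2e}")
```

Each sample costs one evaluation of g, which is what makes crude simulation expensive when g involves, for example, a finite element model.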

In the most widely used reliability methods [2], approximations are made in the space of standard, uncorrelated normal variates u, obtained from a transformation of the basic variables x,

$$u = T(x) \qquad (2)$$

where the transformation T depends on the distributions of the basic variables x [3]. In the standard space u, the limit-state surface is defined as:

$$g(u) = 0 \qquad (3)$$
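For a single independent non-normal variable, one common form of this transformation maps x through its own CDF and the inverse standard normal CDF; the sketch below shows this for a hypothetical lognormal variable (the lognormal choice is an assumption for illustration, and the general dependent case requires the transformation of Ref. [3]).

```python
from scipy.stats import lognorm, norm

# Marginal transformation u = Phi^{-1}(F_X(x)) for one independent basic
# variable; X ~ lognormal is an assumed example, not taken from the paper.
x_dist = lognorm(s=0.25, scale=1.0)    # underlying normal has sigma = 0.25, mu = 0
x = 1.3                                # a realization of X
u = norm.ppf(x_dist.cdf(x))            # corresponding standard normal value
print(f"x = {x}  ->  u = {u:.4f}")     # here u = ln(1.3)/0.25 ~ 1.05
```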

This limit-state surface is then approximated using first- or second-order surfaces to obtain an approximate estimate of the exact probability of failure P_f. These surfaces are fitted to the exact limit-state surface at the point(s) with minimal distance to the origin in the standard normal space, known as the design point(s) or the Most Probable Point(s), denoted as the MPP.

Copyright © 2001 by the American Institute of Aeronautics and Astronautics. All rights reserved.
* Principal Investigator, Member AIAA
† CPDO, Member AIAA
‡ CRDO, Member AIAA


In the first-order reliability method (FORM), the limit-state surface is replaced with its tangent hyperplane at the MPP, and the first-order approximation to P_f is given as:

$$P_{f1} = \Phi(-\beta) \qquad (4)$$

where Φ(·) is the standard normal cumulative probability and β, known as the reliability index, is the distance from the origin to the MPP in the standard normal space, as shown in Figure (1). Better approximations can be obtained by fitting a second-order surface at the MPP (SORM) or by multiple fittings at the locally minimal distance points [2].
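Equation (4) is a one-line computation once β is known; the value of β below is purely illustrative.

```python
from scipy.stats import norm

beta = 2.425                 # illustrative reliability index, not a paper result
p_f1 = norm.cdf(-beta)       # Equation (4): first-order probability of failure
print(f"beta = {beta:.3f}  ->  P_f1 = {p_f1:.2e}")
```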

Although the concept of the above approximate reliability method is simple, it may not be easy to find minimal-distance points on the limit-state surface. This is because in real applications the number of random variables (the size of x) can be very large, and the performance function (used to formulate the limit-state function) is often difficult to compute, as it may require computational routines such as finite element analyses, eigenvalue solutions, or numerical integrations.

The determination of the MPP requires the solution of the following constrained problem:

Minimize: $\sqrt{u^T u}$
Subject to: $g(u) = 0$
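Any general-purpose constrained optimizer can, in principle, be applied to this formulation. The sketch below is not the paper's method; it simply hands the problem to SciPy's SLSQP solver, using the linear limit state of the demonstration example given later in the paper, just to make the formulation concrete.

```python
import numpy as np
from scipy.optimize import minimize

def g(u):
    # Linear limit state from the paper's later demonstration example.
    return u[0] + 4.0 * u[1] - 10.0

# MPP search: minimize the distance to the origin subject to g(u) = 0.
res = minimize(fun=lambda u: float(np.sqrt(u @ u)),
               x0=np.array([1.0, 1.0]),            # arbitrary starting point
               method="SLSQP",
               constraints=[{"type": "eq", "fun": g}])

u_mpp, beta = res.x, float(np.linalg.norm(res.x))
print("MPP:", u_mpp, "  reliability index beta =", beta)
```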

Almost all existing iterative algorithms have the same structure: starting at a point u_k, one determines a direction of search d_k and then searches for a new point along this direction,

$$u_{k+1} = u_k + \lambda_k d_k \qquad (5)$$

which minimizes the objective function. Although the new point is the optimal point along d_k, it is generally not the optimal point of the entire feasible set. Hence, at the new point this process is repeated until the point satisfies the optimality conditions [4]. The main difference between the various optimization algorithms is the method used in identifying the search direction. A detailed discussion of the most popular methods is presented in [5]. The choice of a minimization algorithm is generally not dominated by the objective function, since it is clear from the previous formulation that the objective is convex, pure quadratic, and has a smooth continuous gradient. For the constraint, however, the value of g(u) is usually difficult to obtain, for two reasons. First, the inverse transformation between the X-space and the U-space is not available in closed form, and thus the limit-state function g(u) is not an explicit function of u. Second, even when the inverse transformation is available in closed form, the limit-state function g(x) may not be an explicit function of x. For example, g(x) may be defined in terms of stresses, which are themselves implicit functions of the loads (defined as the basic random variables). Thus, finite difference algorithms are generally used for the computation of the gradient vector ∇g(u), which can require overwhelming computational work. Obviously, algorithms that require the computation of the Hessian matrix ∇²g(u) should be considered impractical for this problem.
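The finite-difference gradient mentioned above is straightforward but costly; the minimal forward-difference sketch below (illustrative, not from the paper) makes the N + 1 function calls per gradient explicit.

```python
import numpy as np

def fd_gradient(g, u, h=1e-6):
    """Forward-difference approximation of the gradient of g at u.

    Each component requires one extra call to the limit-state function, so
    a single gradient costs N + 1 evaluations of g -- the expense referred
    to in the text when g involves, e.g., a finite element analysis.
    """
    u = np.asarray(u, dtype=float)
    g0 = g(u)
    grad = np.zeros_like(u)
    for n in range(u.size):
        u_step = u.copy()
        u_step[n] += h
        grad[n] = (g(u_step) - g0) / h
    return grad
```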

In this paper, a new first-order technique based on convex sets is presented. Contrary to some of the most popular methods discussed above, this new method is not based on a line search and does not start by identifying a search direction. In addition, it does not require a feasible starting point, which can be very hard to find in this problem (due to the equality constraint); rather, the process can be started from any point, whether in the safe or in the unsafe domain.

2.0 The Unconstrained Minimization Problem

In this section, the new method is introduced in the context of unconstrained minimization. The method is then extended into a penalty approach for the constrained minimization problem.


2.1 Convex Sets in Optimization Problems

Using convex sets in optimization is an iterative procedure. In each iteration, the objective function to be minimized is linearized about the center of a convex region. This convex region represents a temporary constraint, defined in a way that leads to convergence to the minimum of the original optimization problem.

Consider the following unconstrained minimization problem:

Minimize: $\beta(u)$

The suggested iterative procedure for the solution of this problem starts by identifying any starting point; define this point as u_i. The value of the objective function at this point is identified as β_i. The objective now is to find a new point u_{i+1} satisfying the condition β_{i+1} < β_i.

Assume that the two successive points are related by the equation:

$$u_{i+1} = u_i + \delta u \qquad (6)$$

Thus β_{i+1} can be expressed in a linear form in terms of β_i and u_i as:

$$\beta_{i+1} = \beta_i + \varphi^T \delta u \qquad (7)$$

In the present work, the deviation δu from u_i is assumed to vary on the following ellipsoidal set:

$$Z(\alpha, \omega) = \left\{ \delta u : \sum_{n=1}^{N} \left( \frac{\delta u_n}{\omega_n} \right)^2 \le \alpha^2 \right\} \qquad (8)$$

where the size parameter α and the semiaxes ω_1, ..., ω_N vary from iteration to iteration as the method converges to the optimum. A more detailed discussion of the effect of this variation and of the choice of the starting values for these optimization parameters will be given later in this paper. An intermediate minimization problem can now be formulated: for all possible deviations δu from the starting point u_i contained in the ellipsoidal convex set defined by Z(α, ω), it is required to determine the point with the minimum value of β (identified as β*). This problem can be written in mathematical form as:

$$\beta^*(\alpha, \omega) = \min_{\delta u \in Z(\alpha, \omega)} \left( \beta_i + \varphi^T \delta u \right) \qquad (9)$$

where the vector φ is the gradient vector used in Equation (7), defined as:

$$\varphi = \left. \frac{\partial \beta}{\partial u} \right|_{u = u_i} \qquad (10)$$

Equation (9) calls for finding the minimum of the linear functional φ^T δu on the convex set Z(α, ω). Based on the inherent properties of a convex set, this extreme value will occur on the set of extreme points of the ensemble Z, i.e., the boundary of the ellipsoid [6], which is the collection of vectors c = (c_1, ..., c_N) in the following set:

$$C(\alpha, \omega) = \left\{ c : \sum_{n=1}^{N} \left( \frac{c_n}{\omega_n} \right)^2 = \alpha^2 \right\} \qquad (11)$$

Thus the minimum distance in Equation (9) becomes:

$$\beta^*(\alpha, \omega) = \min_{c \in C(\alpha, \omega)} \left( \beta_i + \varphi^T c \right) \qquad (12)$$

Define Ω as an N × N diagonal matrix whose nth diagonal element is 1/ω_n. Then, as seen from Equation (12), it is required to minimize φ^T c subject to the constraint:


$$c^T \Omega^2 c = \alpha^2 \qquad (13)$$

The method of Lagrangian multipliers is used here. Define the Hamiltonian as:

$$H = \varphi^T c + \gamma \left( c^T \Omega^2 c - \alpha^2 \right) \qquad (14)$$

where γ is a constant multiplier whose value must be determined. For an extremum it is required that the derivative of the Hamiltonian vanish:

$$\frac{\partial H}{\partial c} = \varphi + 2\gamma\, \Omega^2 c = 0 \qquad (15)$$

Thus,

$$c = -\frac{1}{2\gamma}\, \Omega^{-2} \varphi \qquad (16)$$

Substituting this into the constraint, Equation (13), yields the following expression for the multiplier:

$$\gamma = \pm \frac{\lVert \Omega^{-1} \varphi \rVert}{2\alpha} \qquad (17)$$
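As a quick check of the algebra (using the reconstructed forms of Equations (13) and (16) above), the substitution works out as:

$$c^T \Omega^2 c = \frac{1}{4\gamma^2}\, \varphi^T \Omega^{-2} \Omega^{2} \Omega^{-2} \varphi = \frac{1}{4\gamma^2}\, \varphi^T \Omega^{-2} \varphi = \alpha^2 \;\;\Longrightarrow\;\; \gamma = \pm \frac{\sqrt{\varphi^T \Omega^{-2} \varphi}}{2\alpha} = \pm \frac{\lVert \Omega^{-1} \varphi \rVert}{2\alpha}.$$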

Back-substituting for γ from Equation (17) into Equation (16), we find that the extremum deviation vector c is:

$$c = \mp \frac{\alpha}{\lVert \Omega^{-1} \varphi \rVert}\, \Omega^{-2} \varphi \qquad (18)$$

Thus, the minimum objective function over the ellipsoidal set is given by:

$$\beta^*(\alpha, \omega) = \beta_i - \alpha \lVert \Omega^{-1} \varphi \rVert \qquad (19)$$

To find the point on the ellipsoidal surface that corresponds to the minimum calculated in Equation (19), it is required to check only the two points given by:

$$u_{i+1} = u_i \pm \frac{\alpha}{\lVert \Omega^{-1} \varphi \rVert}\, \Omega^{-2} \varphi \qquad (20)$$

This process is repeated until convergence to the unconstrained minimum.

The iterative procedure for identifying the unconstrained minimum can be summarized as follows:

1- Identify a starting point u_i.

2- Determine the values of the parameters α and ω delimiting the ellipsoidal set.

3- Calculate the gradient vector φ at the starting point u_i.

4- Find a new approximation u_{i+1} using Equations (18) and (20).

5- Check convergence: |β_{i+1} − β_i| < ε.

6- If not converged, go to step 2 and continue the process.

2.2 Example for Unconstrained Minimization

Consider the unconstrained minimization of:

$$\beta(u) = \sqrt{u^T u}$$

where u is a two-dimensional vector.

The previously introduced procedure is applied to this problem. Notice that this is the same objective function as the one used in the constrained reliability problem (the distance from the origin). It is obvious that the unconstrained minimum of this objective function is equal to zero. Figure (2) shows the objective function contours along with the minimization steps needed.

The starting point used was: u_0 = (−10, 5)

The size of the ellipsoid: α = 0.3


The semiaxes lengths ω_n were taken inversely proportional to the coordinates of the current iteration point, while the size of the ellipsoid was held constant. The efficiency of the procedure (speed of convergence) depends greatly on the choice of α and ω; further studies are required for accurate tuning of the method. To demonstrate the previous statement, consider re-solving the same problem but with

α = 0.1

The convergence history is shown in Figure (3). It is obvious that the number of iterations was greatly reduced by reducing the value of α from 0.3 to 0.1.
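The following is a minimal sketch of the procedure summarized in Section 2.1, applied to this two-dimensional example. It holds α fixed and uses unit semiaxes by default; the paper's example instead scales the semiaxes inversely with the current coordinates, and the exact rule and parameter schedules are not reproduced here, so the sketch should be read as an illustration of Equations (18)-(20) rather than a reproduction of Figure (2).

```python
import numpy as np

def convex_set_minimize(beta, grad_beta, u0, alpha=0.3, omega=None,
                        eps=1e-6, max_iter=1000):
    """Minimal sketch of the convex-set iteration of Section 2.

    At each iteration the linearized objective is minimized over the
    ellipsoidal set Z(alpha, omega) via Equations (18)-(20), and the better
    of the two candidate points is kept.  Here alpha and the semiaxes omega
    are held fixed; the paper lets them vary from iteration to iteration.
    """
    u = np.asarray(u0, dtype=float)
    omega = np.ones_like(u) if omega is None else np.asarray(omega, dtype=float)
    b = beta(u)
    for _ in range(max_iter):
        phi = grad_beta(u)                           # Equation (10)
        scale = np.linalg.norm(omega * phi)          # ||Omega^{-1} phi||
        if scale == 0.0:
            break                                    # stationary point
        step = alpha * (omega ** 2) * phi / scale    # Equation (18)
        u_new = min([u - step, u + step], key=beta)  # Equation (20)
        b_new = beta(u_new)
        if b_new >= b - eps:                         # no significant progress
            break
        u, b = u_new, b_new
    return u, b

# Two-dimensional example of Section 2.2: beta(u) = sqrt(u'u), u0 = (-10, 5),
# alpha = 0.3.  The iterates move toward the origin.
beta = lambda u: float(np.sqrt(u @ u))
grad_beta = lambda u: u / np.sqrt(u @ u)
u_min, b_min = convex_set_minimize(beta, grad_beta, [-10.0, 5.0], alpha=0.3)
print("approximate minimizer:", u_min, "  beta =", b_min)
```

With the ellipsoid held fixed, the final accuracy of the sketch is limited to roughly α, which is consistent with the remark that α and the semiaxes should vary as the method converges.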

In the next section, the constrained minimization problem is addressed by using the previously described unconstrained minimization procedure along with a penalty function approach.

3.0 The Constrained Minimization Problem

In this section, the convex-set based unconstrained minimization procedure introduced in the previous section is combined with a penalty method for the solution of the constrained minimization problem described in Section (1). Penalty methods are a class of optimization algorithms that transform a constrained problem into an unconstrained one by adding a penalty term c P(u) to the original objective function, where c is a positive penalty parameter and P(u) is a penalty function that satisfies P(u) = 0 in the feasible region and P(u) > 0 elsewhere. Thus the unconstrained problem can be formulated as:

Minimize: $q(u) = \sqrt{u^T u} + c\, P(u)$

where: $P(u) = g(u)^2$

As the penalty parameter c approaches infinity, the minimization process will force the solution to satisfy g(u) = 0 and minimize √(u^T u) simultaneously, which is the solution of the constrained minimization problem described in Section (1). Since realistically it is impossible for c to reach infinity, the solution of the unconstrained formulation can only provide a good approximation of the constrained minimum if c is sufficiently large.

Once the penalty problem is formulated, one can use the previously described unconstrained minimization technique to solve the constrained problem. The only difference between the penalty problem and a general unconstrained problem is that c varies throughout the solution procedure. The value of c is usually chosen to be a small number at the beginning of the minimization and is then increased in subsequent steps.
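A minimal sketch of this outer penalty loop is given below. The `inner` argument is any unconstrained minimizer with the calling convention of the convex_set_minimize sketch above; the default schedule (c starting at 0.01 and growing by 15%) follows the demonstration example later in this section, while the gradual tightening of α is an added detail, since the paper only notes that the ellipsoid parameters may vary.

```python
import numpy as np

def penalty_minimize(g, grad_g, u0, inner, c0=0.01, c_growth=1.15,
                     n_outer=100, alpha0=0.3):
    """Sketch of the penalty approach of Section 3.

    The constrained problem (minimize sqrt(u'u) subject to g(u) = 0) is
    replaced by q(u) = sqrt(u'u) + c * g(u)**2, and the penalty parameter c
    is increased between successive unconstrained minimizations.
    """
    u = np.asarray(u0, dtype=float)
    c, alpha = c0, alpha0
    for _ in range(n_outer):
        def q(v, c=c):
            return float(np.sqrt(v @ v)) + c * g(v) ** 2
        def grad_q(v, c=c):
            return v / np.sqrt(v @ v) + 2.0 * c * g(v) * grad_g(v)
        u, _ = inner(q, grad_q, u, alpha=alpha)
        c *= c_growth                      # e.g. 15% increase per step
        alpha = max(0.95 * alpha, 1e-3)    # gradually tighten the ellipsoid
    return u
```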

Consider now the following constrained minimization problem, previously described in Section (1):

Minimize: $\sqrt{u^T u}$
Subject to: $g(u) = 0$

For demonstration purposes it is assumed that the problem is two-dimensional, with the limit-state function in standard normal space given by:

$$g(u) = u_1 + 4u_2 - 10$$

Figure (4) shows the objective function contours along with the equality constraint. The problem is now re-formulated as an unconstrained minimization problem:

Minimize: $q(u) = \sqrt{u^T u} + c\, g(u)^2 = \sqrt{u^T u} + c\,(u_1 + 4u_2 - 10)^2$

where the penalty parameter c is assumed to start with a value of 0.01 and is increased by 15% after each minimization step. The contours of the new unconstrained objective function are shown in Figure (5). As the penalty parameter is increased, the constraint becomes more dominant and the combined objective function contours become less related to the constrained problem contours. The unconstrained problem is now minimized using the convex-set based technique along with the same parameter values used in Section (2). Figure (6) shows the obtained convergence history. Once again, it is important to reiterate that the efficiency of the technique depends mainly on the ellipsoid definition parameters; a detailed discussion of the effects of these parameters on the overall performance of the technique is beyond the scope of this paper. To demonstrate the ability of the procedure to converge regardless of the choice of the starting point (feasible, safe, or failed), the same problem is solved again, this time starting in the failed domain. The convergence history is shown in Figure (7).
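For completeness, here is how the two sketches above could be combined on this demonstration problem. The starting point is chosen for illustration; for this linear limit state the exact minimum-distance point is (10/17)(1, 4) ≈ (0.588, 2.353), i.e. β = 10/√17 ≈ 2.425, which can be used as a reference value.

```python
import numpy as np

# Uses the convex_set_minimize and penalty_minimize sketches defined above.
g = lambda u: u[0] + 4.0 * u[1] - 10.0          # limit state of the example
grad_g = lambda u: np.array([1.0, 4.0])         # its (constant) gradient

u_start = np.array([1.0, 1.0])                  # a point in the failed domain (g < 0)
u_mpp = penalty_minimize(g, grad_g, u_start, inner=convex_set_minimize)
print("approximate MPP:", u_mpp, "  beta =", float(np.linalg.norm(u_mpp)))
```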

4.0 Conclusions and Recommendations

In certain applications of reliability calculations, it is required to find points on the limit-state surface with minimal distance to the origin of the standard normal space. This problem can be formulated as a constrained optimization problem and can be solved by several standard algorithms. In this paper, a new iterative procedure has been developed based on the inherent properties of convex sets. Contrary to the most popular minimization algorithms, the new method does not contain a line search step. And unlike some commonly used search optimization algorithms (e.g., the gradient projection method), it does not require the identification of a feasible point before the start of the procedure; identifying such a point can be tedious due to the equality constraint. The proposed technique can be started from anywhere in the design space, which adds generality and robustness to the procedure. Moreover, only the gradient vector is required for the procedure, which makes it suitable for the optimization problem faced in reliability calculations.

References:

1 - Ang, A. H-S., and Tang, W. H., Probability Concepts in Engineering Planning and Design, Vol. 1, John Wiley and Sons, New York.

2 - Madsen, H. O., Krenk, S., and Lind, N. C., Methods of Structural Safety, pp. 66-84, Prentice-Hall, Englewood Cliffs, New Jersey, 1986.

3 - Hohenbichler, M., and Rackwitz, R., "Non-Normal Dependent Vectors in Structural Safety", Journal of the Engineering Mechanics Division, ASCE, Vol. 107, No. EM6, pp. 1227-1238, December 1981.

4 - Haftka, R. T., and Gürdal, Z., Elements of Structural Optimization, Kluwer Academic Publishers, 1992.

5 - Liu, P.-L., and Der Kiureghian, A., "Optimization Algorithms for Structural Reliability Analysis", Report No. UCB/SESM-86/09, University of California, Berkeley, July 1986.

6 - Ben-Haim, Y., and Elishakoff, I., Convex Models of Uncertainty in Applied Mechanics, Elsevier Science Publishers, 1990.


Figure (1): Illustration of the MPP identification problem parameters in standard normal space


Figure (2): Objective function contours and convergence history for α = 0.3.


Figure (3): Objective function contours and convergence history for α = 0.1.


Figure (4): Objective function contours along with the equality constraint.


Figure (5): Objective function contours of the equivalent unconstrained problem.


Figure (6): Convergence history for the constrained problem.


Figure (7): Convergence history for the constrained problem (started in the failure domain).
