


American Institute of Aeronautics and Astronautics


Directed Optimization on Pareto Frontier

Vladimir Sevastyanov,¹ eArtius, Inc., Irvine, CA 92614, US

EXTENDED ABSTRACT

A new multi-objective optimization technology is presented which considers the Pareto frontier as the search space for finding Pareto optimal solutions that meet the user’s preferences. Typically, 80-90% of the points evaluated by the new optimization algorithms are Pareto optimal, and the majority of them are located in the user’s area of interest on the Pareto frontier. In contrast, conventional optimization techniques search for Pareto optimal solutions in the entire domain, which increases the computational effort by orders of magnitude. The new optimization technology is represented by two new algorithms: Multi-Gradient Pathfinder (MGP) and Hybrid Multi-Gradient Pathfinder (HMGP) (patent pending). MGP is a pure gradient-based algorithm; it starts from a Pareto optimal point and steps along the Pareto surface in the direction that improves a subset of objective functions with higher priority. HMGP is a hybrid of a gradient-based technique and genetic algorithms (GA); it works similarly to MGP but, in addition, searches for dominating Pareto frontiers. HMGP is designed to find the global Pareto frontier and the best Pareto optimal points on this frontier with respect to the preferable objectives. Both algorithms are designed for optimizing very expensive models, and are able to optimize models ranging from a few to thousands of design variables.

1. Introduction

Low computational efficiency and low scalability of current multi-objective optimization algorithms are the biggest obstacles in design optimization practice. They cause engineers to artificially reduce the number of design variables, and to rely on brute-force methods such as parallelization of optimization algorithms.

In our opinion, the main reasons for the low computational efficiency and low scalability are: (a) searching for optimal solutions in the entire design space when the search space can be reduced; (b) attempts to maximize the diversity of optimization results and to cover the entire Pareto frontier when the user needs only a small part of it; (c) the absence of efficient algorithms for estimating gradients.

In this paper we discuss in detail the limiting issues in current design optimization technologies, and offer a breakthrough optimization technology which improves algorithm efficiency by orders of magnitude and provides equal optimization efficiency across a large variety of task dimensions, ranging from a few to thousands of design variables.

A. Searching the Entire Design Space

Multi-objective optimization algorithms can be classified in the following way: algorithms based on uniformly distributed sequences (such as the Monte Carlo algorithm), genetic algorithms, and gradient-based algorithms. The algorithms have different levels of convergence and computational efficiency, which can be characterized, for instance, by the ratio between the total number of model evaluations and the number of Pareto optimal points found. However, most multi-objective optimization algorithms perform a search for Pareto optimal points in the entire design space.

1 Chief Executive Officer



Let’s consider the benchmark (1) with two objective functions and two design variables.

(1)
Minimize f1(X) = x1
Minimize f2(X) = 1 + x2² − x1 − 0.1·sin(3·π·x1)
subject to: 0 ≤ x1 ≤ 1; −2 ≤ x2 ≤ 2

The following FIG.1 and FIG.2 illustrate the results of two multi-objective optimization algorithms: Monte Carlo and HMGE.

FIG.1A FIG.1B

FIG. 1 Results of the Monte Carlo optimization algorithm for the benchmark (1). The algorithm is based on Sobol points (a uniformly distributed sequence). 8192 uniformly distributed points cover the design space evenly and completely (see the small yellow markers on FIG.1B), and then a Pareto filter was applied. Only 3% (258 out of 8192) of the evaluated points are Pareto optimal.
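The Monte Carlo procedure just described (uniform sampling followed by a Pareto filter) can be sketched in a few lines of Python. This is a minimal illustration, not eArtius code: plain pseudo-random sampling stands in for the Sobol sequence, the objective functions follow the benchmark (1) formulas as recovered from the text, and the exact counts will differ from the 258/8192 reported above.

```python
import math
import random

def f1(x1, x2):
    # Benchmark (1), first objective
    return x1

def f2(x1, x2):
    # Benchmark (1), second objective
    return 1.0 + x2 ** 2 - x1 - 0.1 * math.sin(3.0 * math.pi * x1)

def pareto_filter(points):
    """Keep only non-dominated points (both objectives minimized)."""
    front = []
    for p in points:
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return front

random.seed(0)
samples = [(random.uniform(0.0, 1.0), random.uniform(-2.0, 2.0)) for _ in range(2000)]
evaluated = [(f1(x1, x2), f2(x1, x2)) for x1, x2 in samples]
front = pareto_filter(evaluated)
print(len(front), "of", len(evaluated), "evaluated points are Pareto optimal")
```

As in FIG.1, only a small fraction of uniformly sampled points survives the Pareto filter, which is exactly the inefficiency this section points out.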

FIG. 2 Results of the HMGE optimization algorithm for the benchmark (1). 35% (89 out of 251) of the evaluated points are Pareto optimal.

The eArtius HMGE algorithm provides high convergence, and does not need to explore the design space as evenly as the Monte Carlo algorithm does. Thus, it has improved the ratio between the number of Pareto optimal points and the number of evaluated points from 3% to 35% compared with the Monte Carlo algorithm. But HMGE still needs to iterate through the entire design space towards the Pareto frontier, and spends 65% of its evaluations on transitional points (see the yellow points on FIG.2).

As can be seen in FIG.1 and FIG.2, the Pareto frontier for the benchmark (1) is the straight line x2=0. The rest of the design space in this task does not contain any Pareto optimal points and should be avoided throughout the optimization search.

Now, let us consider another benchmark problem (2) with three objective functions and three design variables.



(2)
Minimize f1(X) = −3·(1 + x3)·cos(π·x1/2)·cos(π·x2/2)
Minimize f2(X) = −3·(1 + x3)·cos(π·x1/2)·sin(π·x2/2)
Minimize f3(X) = −3·(1 + x3)·sin(π·x1/2)
subject to: 0 ≤ x1 ≤ 1; 0 ≤ x2 ≤ 1; 0 ≤ x3 ≤ 1

FIG.3A FIG.3B

FIG.3 shows the optimization results found by the HMGE algorithm for the benchmark problem (2). The HMGE algorithm has found 2225 Pareto optimal points out of 3500 model evaluations. The Pareto frontier is visualized by green markers representing Pareto optimal points.

As follows from FIG.3B, the Pareto frontier for the benchmark (2) belongs to the plane x3=1. Again, the rest of the design space does not contain any Pareto optimal points.

Conventional optimization algorithms perform a search in the entire design space (see FIG.1 and FIG.2); typically, the algorithms iterate from initial points towards Pareto optimal points, and spend a significant number of model evaluations on these iterations, which in turn reduces the algorithms’ computational efficiency.

FIG.2B and FIG.3B show that the Pareto frontier has a smaller dimension than the design space (a line in a plane on FIG.2B, and a plane in a 3D volume on FIG.3B). Reducing the search space by avoiding areas that do not contain Pareto optimal points would improve the efficiency of optimization algorithms. The biggest question is how to reduce the search space.

As follows from general considerations and from FIG.1-FIG.3, the Pareto frontier is the best possible search space for two reasons:

• the Pareto frontier always has a lower dimension than the entire design space;
• the Pareto frontier contains all Pareto optimal points, and nothing else.

Our goal is to develop a multi-objective optimization algorithm which performs its search on the Pareto frontier.

B. Approximation of the Entire Pareto Frontier

One of the important features required from current multi-objective optimization algorithms is the ability to cover the entire Pareto frontier as uniformly as possible. Thus, the conventional approach in multi-objective optimization assumes finding a sufficiently accurate approximation of the entire Pareto frontier.

However, approximating a Pareto frontier can be a resource-consuming task because of the “curse of dimensionality” phenomenon [1]. According to the phenomenon, adding extra dimensions to the design space requires an exponential increase in the number of Pareto optimal points to maintain the same distance between neighboring optimal points in the design space. For example, 100 evenly spaced sample points in a unit interval have no more than 0.01 distance between neighboring points. In the 10-dimensional unit hypercube, a lattice with a spacing of 0.01 between neighboring points would require 10^20 sample points [1].
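The arithmetic behind this example is worth making explicit: a lattice with spacing 0.01 needs on the order of 100 points per axis, so the required number of points grows as 100^d with dimension d.

```python
# Lattice size needed to keep a 0.01 spacing in a d-dimensional unit hypercube
per_axis = 100          # ~1/0.01 points along each axis

for d in (1, 2, 3, 10):
    print("d =", d, "->", per_axis ** d, "sample points")

# d = 10 gives 100**10 = 10**20 sample points, the 10^20 figure from [1]
```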

Now, let’s consider two 3-objective optimization tasks with 3 and 30 design variables. Three hundred evenly distributed Pareto optimal points look just as equally spaced in the criteria space for both tasks. However, the



distance between neighboring Pareto optimal points in the design space is much larger for the high-dimensional task. As follows from the above example, the high-dimensional task requires an overwhelmingly large number of uniformly distributed Pareto optimal points to maintain the same distance between neighboring points in the design space.
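A back-of-the-envelope estimate makes the same point. N points spread uniformly over a d-dimensional region of unit size have a typical neighbor spacing of about N^(-1/d), so a fixed budget of 300 points gives a rapidly coarsening cover as the dimension of the Pareto frontier in the design space grows. The dimensions below are illustrative, not taken from the paper's benchmarks.

```python
# Typical neighbor spacing of N uniform points in a d-dimensional unit region
N = 300
for d in (1, 2, 5, 10, 29):
    spacing = N ** (-1.0 / d)
    print("dimension", d, "-> spacing ~", round(spacing, 3))
```

For d = 1 the spacing is about 0.003, while for d = 29 it is already about 0.8: with the same budget, neighboring optimal points end up far apart in a high-dimensional design space.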

The above consideration can be illustrated by comparing FIG.4A and FIG.4B.

FIG.4A FIG.4B

FIG.4A shows Pareto optimal points for the benchmark (1). The green points are distributed along the straight line x2=0. FIG.4B shows Pareto optimal points for the benchmark (2). All the points are distributed over the x1-x2 plane, and belong to the plane x3=1 (see FIG.3B).

We can notice that the distance between neighboring Pareto optimal points in the design space is approximately the same on FIG.4A and FIG.4B. However, the one-dimensional Pareto frontier is covered by 89 points (green markers), while the two-dimensional Pareto frontier is covered by 2225 Pareto optimal points, which is 25 times more resource-consuming.

The increase in the number of design variables causes the distance between neighboring points in the design space to increase exponentially. Thus, the ability of conventional optimization algorithms to find desired trade-offs is very low for tasks with more than 3-5 design variables.

In order to improve the efficiency of multi-objective optimization, and resolve the “curse of dimensionality” issue, we need to avoid approximating the entire Pareto frontier, and instead perform a search in the user’s area of interest on the Pareto frontier.

Searching in the area of interest on the Pareto frontier does not require approximating that area by uniformly distributed Pareto optimal points; it means moving along the Pareto frontier from a given point towards a desired point in a minimum number of steps. Such an algorithm efficiently resolves the “curse of dimensionality” issue because it does not attempt to cover the entire Pareto frontier.

C. Computationally Expensive Estimation of Gradients

Gradient estimation is required by all gradient-based optimization algorithms. However, existing methods of gradient estimation have considerable limitations related to task dimension. The most straightforward finite difference method requires N+1 model evaluations (where N is the number of design variables) to estimate gradients on each step of an optimization process.

Simulation models can be computationally expensive, taking hours or even days for a single model evaluation. Thus, the necessity to evaluate a model N+1 times just to perform a single optimization step is a fundamental obstacle in design optimization practice. It prevents simulation models with more than 5-10 design variables from being optimized efficiently.
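The N+1 cost of finite differencing is easy to see in code. The sketch below is generic, not the paper's implementation: a forward-difference gradient of a hypothetical three-variable model costs exactly four model evaluations.

```python
def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient: len(x) + 1 evaluations of f."""
    base = f(x)                       # 1 evaluation
    grad = []
    for i in range(len(x)):           # N further evaluations
        shifted = list(x)
        shifted[i] += h
        grad.append((f(shifted) - base) / h)
    return grad

calls = 0
def model(x):                         # stand-in for an expensive simulation
    global calls
    calls += 1
    return sum(xi * xi for xi in x)

g = fd_gradient(model, [1.0, 2.0, 3.0])
print("gradient ~", [round(gi, 3) for gi in g], "| model evaluations:", calls)
# -> gradient ~ [2.0, 4.0, 6.0] | model evaluations: 4
```

If one evaluation takes an hour, a single optimization step for a 100-variable model costs 101 hours, which is exactly the obstacle described above.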


In order to avoid this obstacle, the Response Surface Method (RSM) was invented. The most common RSM approach in design optimization builds global approximations (surrogate models) based on sample points generated by DOE (Design of Experiments) methods. The surrogate models are computationally inexpensive to evaluate, and can substitute for expensive simulation models in further optimization. Hence, RSM helps avoid direct gradient estimation for computationally expensive models.
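The RSM idea can be illustrated with a deliberately simple surrogate (a generic numpy sketch; commercial tools use far richer model families such as kriging or radial basis functions):

```python
import numpy as np

def expensive_model(x):
    """Stand-in for a costly simulation with one output."""
    return np.sin(3.0 * x) + 0.5 * x ** 2

# DOE: a small set of sample points in the design space
x_doe = np.linspace(0.0, 2.0, 8)
y_doe = expensive_model(x_doe)            # the only "expensive" evaluations

# Fit an inexpensive surrogate (here, a cubic polynomial) to the DOE data
surrogate = np.poly1d(np.polyfit(x_doe, y_doe, deg=3))

# The surrogate can now be evaluated thousands of times at negligible cost,
# e.g. to drive an optimizer or to estimate gradients
x_dense = np.linspace(0.0, 2.0, 1001)
err = np.max(np.abs(surrogate(x_dense) - expensive_model(x_dense)))
print("8 expensive evaluations; max surrogate error on [0, 2]:", round(float(err), 3))
```

As the section notes, the weakness appears in higher dimensions: an adequate global surrogate needs exponentially many DOE points as the number of design variables grows.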

All commercial design optimization tools have impressive libraries of DOE and RSM algorithms, which indicates the importance of RSM algorithms in current design optimization technology. However, RSM is also limited by task dimension, because high-dimensional tasks require a large number of sample points; this is the same “curse of dimensionality” issue. According to [1], adding extra dimensions to the design space requires an exponential increase in the number of sample points necessary to build an adequate global surrogate model.

This is a strong limitation for all known response surface approaches, causing engineers to artificially reduce the optimization task dimension by assigning constant values to most of the design variables. Also, the quality of surrogate models developed by RSM algorithms is very low for highly non-linear simulation models.

In order to address this issue, eArtius has developed the Dynamically Dimensioned Response Surface Method (DDRSM) [2], which successfully resolves the “curse of dimensionality” problem. DDRSM builds local approximations and estimates gradients based on a few (5-7) model evaluations regardless of task dimension. This allows optimization tasks with dimensions ranging from a few to thousands of design variables to be solved efficiently.

Two new multi-objective optimization algorithms performing directed optimization on Pareto frontier have been developed at eArtius:

• Multi-Gradient Pathfinder (MGP) is a pure gradient-based algorithm, which steps along Pareto frontier from a given Pareto optimal point to a desired Pareto optimal point.

• Hybrid Multi-Gradient Pathfinder (HMGP) is a hybrid algorithm combining a gradient-based technique with GA techniques. It works similarly to MGP, but in addition to gradient-based steps, it uses GA techniques to search for dominating Pareto frontiers or other disjoint areas on Pareto frontier.

Since both mentioned optimization algorithms use Multi-Gradient Analysis (MGA) and DDRSM for gradient estimation, let us consider MGA and DDRSM first, and then we’ll discuss the MGP and HMGP optimization algorithms.

2. Multi-Gradient Analysis

Any traditional gradient-based optimization method comprises sequential steps from an initial point to an optimal point. Each step improves the current point with respect to the objective function. The most important element of such an algorithm is determining the direction of the next step. Traditional gradient-based algorithms use the fact that the gradient of the objective function indicates the direction of its steepest increase; but what if several objective functions need to be optimized? In this case we need to find a point improving all objective functions simultaneously. The diagrams in FIG.5 illustrate graphically how MGA determines the area of simultaneous improvement for all objective functions. It is illustrated for the simplest multi-objective optimization task, with two independent variables and two objective functions that need to be maximized.

FIG. 5A FIG. 5B FIG. 5C

FIG. 5A illustrates how the gradient G1 and the line L1 (G1 is perpendicular to L1) help to split the sub-region into the area of increased values A1 and the area of decreased values for the first objective function;

FIG. 5B similarly illustrates splitting the sub-region for the second objective function;


FIG. 5C illustrates that the Area of Simultaneous Increasing (ASI) for both objective functions F1 and F2 is equal to the intersection of areas A1 and A2: A1∩A2.

The main problem of the Multi-Gradient Analysis is to find a point X′ ∈ ASI, which guarantees that the point X0 will be improved by the point X′ with respect to all objective functions.

MGA is illustrated with two objective functions on FIG.5, but it works in the same way with any reasonable number of objective functions and any number of design variables. The MGA pseudo-code:

1 Begin
2 Input initial point X*.
3 Evaluate criteria gradients on X*.
4 Determine ASI for all criteria.
5 Determine the direction of simultaneous improvement of all objectives for the next step.
6 Determine the length of the step.
7 Perform the step, and evaluate the new point X′ belonging to ASI.
8 If X′ dominates X*, report the improved point X′ and go to 10.
9 If X′ does not dominate X*, report X* as a Pareto optimal point.
10 End
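The key geometric test behind steps 4-7 can be sketched directly: a step direction d lies in the ASI exactly when it has a positive projection on every objective's gradient (objectives being maximized, as in FIG.5). The gradient values and the choice of the normalized-gradient bisector as a step direction are purely illustrative.

```python
import math

def in_asi(gradients, d):
    """True if direction d simultaneously increases every objective,
    i.e. d has a positive projection on each gradient."""
    return all(sum(gi * di for gi, di in zip(g, d)) > 0.0 for g in gradients)

def unit(v):
    n = math.hypot(*v)
    return (v[0] / n, v[1] / n)

# Gradients of two objectives at the current point (illustrative values)
G1 = (1.0, 0.2)
G2 = (0.3, 1.0)

# The bisector of the normalized gradients always lies in the ASI
# whenever the gradients are not exactly opposed
u1, u2 = unit(G1), unit(G2)
d = (u1[0] + u2[0], u1[1] + u2[1])

print(in_asi((G1, G2), d))            # True  -> step improves both objectives
print(in_asi((G1, G2), (0.0, -1.0)))  # False -> this step does not improve both objectives
```

On a Pareto optimal point the gradients oppose each other and the ASI becomes empty, which is exactly the stopping condition of steps 8-9.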

MGA can be implemented in a number of different ways; some of them are discussed in [2, 3]. Actually, the same technique is widely used for constrained gradient-based optimization with a single objective function [4]. However, there are no efficient algorithms based on this technique for multi-objective optimization.

Since the MGA technique results in an improved point, it can be used as an element of any multi-objective optimization algorithm. eArtius has developed four multi-objective optimization algorithms so far, and all of them employ the MGA technique.

MGP and HMGP optimization algorithms use DDRSM for gradient estimation. Let us consider DDRSM first, and then we’ll discuss both algorithms in detail.

3. Dynamically Dimensioned Response Surface Method

DDRSM (patent pending) is based on the realistic assumption that most real-life design problems have a few significant design variables, while the rest of the design variables are not significant. Based on this assumption, DDRSM estimates the most significant projections of the gradients for each output variable, on each optimization step.

In order to do that, DDRSM generates 5-7 sample points in the current sub-region, and uses the points to recognize the most significant design variables for each objective function. Then DDRSM builds local approximations, which are used to estimate the gradients.

Since an approximation does not include non-significant variables, the estimated gradient has non-zero projections only for the significant variables; all other projections of the gradient are equal to zero. Ignoring non-significant variables slightly reduces the accuracy, but allows estimating gradients at the price of 5-7 evaluations for tasks of practically any dimension.
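A very rough sketch of this idea is given below. The real DDRSM construction is proprietary [2]; here, for illustration only, a minimum-norm linear fit over a handful of local samples is used to rank the variables, and the gradient is then re-fit using only the top-ranked variable, leaving all other projections at zero. All names, sample counts, and thresholds are assumptions of the sketch.

```python
import numpy as np

def sparse_local_gradient(f, x, n_samples=7, radius=0.05, top_k=1):
    """DDRSM-style sketch: estimate a gradient whose only non-zero
    projections belong to the most significant design variables."""
    rng = np.random.default_rng(42)
    x = np.asarray(x, dtype=float)
    steps = rng.uniform(-radius, radius, size=(n_samples, x.size))
    deltas = np.array([f(x + s) for s in steps]) - f(x)   # a few model evaluations
    # Minimum-norm linear fit: significant variables get the largest components
    g_full, *_ = np.linalg.lstsq(steps, deltas, rcond=None)
    significant = np.argsort(-np.abs(g_full))[:top_k]
    # Re-fit using only the significant variables; the rest stay at zero
    g = np.zeros_like(x)
    g[significant], *_ = np.linalg.lstsq(steps[:, significant], deltas, rcond=None)
    return g

# Ten design variables, but the model output actually depends only on x2
model = lambda x: 5.0 * x[2]
g = sparse_local_gradient(model, np.ones(10))
print(g)   # only the x2 component is non-zero, close to 5
```

The point of the sketch is the shape of the result, not the screening statistic: the estimated gradient is sparse, and its cost does not grow with the number of non-significant variables.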

DDRSM recognizes the most significant design variables for each output variable (objective functions and constraints) individually. Thus, each output variable has its own list of significant variables that will be included in its approximating function. Also, DDRSM recognizes significant variables repeatedly on each optimization step, each time an optimization algorithm needs to estimate gradients. This is important because the topology of objective functions and constraints can differ in different parts of the design space, which requires frequent re-estimation of the local importance of each design variable throughout the optimization process.

As follows from the above explanation, DDRSM dynamically reduces the task dimension in each sub-region, and does it independently for each output variable by ignoring non-significant design variables. The same variable can be critically important for one of the objective functions in the current sub-region, and not significant for other


objective functions and constraints. Later, in a different sub-region, the situation with significant design variable lists can be very different.

DDRSM combines elements of response surface methods and methods of sensitivity analysis. Thus, it makes sense to compare DDRSM with traditional sensitivity analysis approach.

State-of-the-art sensitivity analysis tools are designed to be used before starting an optimization process. Thus, engineers are forced to determine a single static list of significant variables for all objective and constraint functions, based on their variation over the entire design space. After the sensitivity analysis is completed, all non-significant design variables are assigned constant values and never change over the optimization process.

The above approach gives satisfactory results for tasks with a small number of output variables, and has difficulties when the number of constraint and objective functions is large.

Generally speaking, each output variable has its own topology, its own level of non-linearity, and its own list of significant variables. The same design variable can be significant for some of the output variables and non-significant for others. Thus, it is difficult or even impossible to determine a list of design variables that are equally significant for dozens or hundreds of output variables. Also, traditional sensitivity analysis technology requires too many sample points when the number of design variables is large, which reduces the usefulness of the approach for high-dimensional tasks.

A detailed explanation of DDRSM algorithm can be found in [2]. Now let us consider directed optimization on Pareto frontier, and how it is implemented in the MGP algorithm.

4. Multi-Gradient Pathfinder Algorithm

Multi-Gradient Pathfinder (MGP) is the first multi-objective optimization algorithm which implements the idea of directed optimization on the Pareto frontier based on the user’s preferences.

Directed optimization on the Pareto frontier means that a search algorithm steps along the Pareto frontier from a given initial Pareto optimal point towards a desired Pareto optimal point. The search algorithm is supposed to stay on the Pareto frontier throughout the optimization process until the desired Pareto optimal point is reached. Then all (or most) of the evaluated points will also be Pareto optimal.

Moving along the Pareto frontier improves some objectives and compromises others. This consideration gives a clue to how directed optimization needs to be organized to become beneficial for users: it is enough to specify which objective functions are preferable and need to be improved first and foremost. This formulates a goal for the directed search on the Pareto frontier.

In the case of L=2 objective functions, the Pareto frontier is a line in the objective space. Thus, the MGP algorithm has only two directions to choose from: improving the 1st or the 2nd objective function.

In the case of L>2 objective functions, the Pareto frontier is a multi-dimensional surface, and the algorithm has an infinite number of directions in which to move from a given point along the surface. In either case, the user needs to determine the direction of movement based on his preferences.

Based on the above considerations, the task of directed optimization on Pareto frontier can be formulated in the following way:

Minimize F(X) = [F1(X), F2(X), …, Fm(X)]^T
Minimize+ P(X) = [P1(X), P2(X), …, Pn(X)]^T
subject to: qj(X) ≤ 0; j = 1, 2, …, k                  (3)
X = {x1, x2, …, xn} ∈ S ⊂ ℝⁿ; X ∈ X_PF

where S ⊂ ℝⁿ is the design (parameter) space; X_PF ⊂ S is the subset of the design space which belongs to the Pareto frontier; m is the number of non-preferable objective functions F(X); n is the number of preferable objective functions P(X) determining the direction of the move (directed search) on the Pareto frontier; and L = m + n is the total number of objective functions. The Pareto frontier is determined by both sets of objectives, F(X) and P(X).


The operator Minimize+ applied to P(X) means that it is required to find the best points on the Pareto frontier with respect to the preferable objectives P(X).

How MGP operates: first of all, the user needs to determine which objective(s) are preferable (more important). In this way the user indicates his area of interest on the Pareto frontier.

MGP starts from a given Pareto optimal point and performs a required number of steps along the Pareto frontier in a direction of simultaneous improvement of the preferable objectives. On each step MGP solves two tasks (see FIG.6, green and blue arrows):

• improve the preferable objectives’ values;
• maintain a short distance from the current point to the Pareto frontier.

It is important to note that if a given initial point is not Pareto optimal, then MGP works exactly like the MGE algorithm: it approaches the Pareto frontier first, and then starts stepping along the Pareto frontier in the direction determined by the preferable objectives.

FIG.6 illustrates the basic idea of the MGP algorithm for the case when both objective functions F1 and F2 need to be minimized and F2 is considered the preferable objective.

On the first half-step, MGP steps in a direction of improvement of the preferable objective (see the green arrows on FIG.6). On the second half-step, MGP steps in a direction of simultaneous improvement of ALL objectives (see the blue arrows), and in this way maintains a short distance to the Pareto frontier. Then MGP starts the next step from the newly found Pareto optimal point.

Main features of the MGP algorithm are explained in the following pseudo-code:

1 Begin
2 Input initial Pareto optimal point X* and required number of steps N.
3 i = 1.
4 Declare current point: Xc = X*.
5 Evaluate gradients of all objective functions on Xc.
6 Determine ASI(1) for the preferable objectives.
7 Make a step in ASI(1), improving only the preferable objectives.
8 Determine ASI(2) for ALL objectives.
9 Make a step in ASI(2), improving ALL objectives; the resulting Pareto point is X**.
10 If i < N then declare current point Xc = X**; i = i+1; go to 5.
11 Report all the solutions found.
12 End
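Assuming benchmark (1) has the objectives f1 = x1 and f2 = 1 + x2² − x1 − 0.1·sin(3π·x1) (both minimized, with the Pareto frontier on x2 = 0), the two half-steps of the pseudo-code can be sketched as follows. This is a toy illustration only: analytic gradients replace DDRSM, a fixed step length replaces the algorithm's step-size logic, and no convergence test is made.

```python
import math

def grad_f2(x):
    # Analytic gradient of f2 = 1 + x2**2 - x1 - 0.1*sin(3*pi*x1)
    return (-1.0 - 0.3 * math.pi * math.cos(3.0 * math.pi * x[0]), 2.0 * x[1])

GRAD_F1 = (1.0, 0.0)      # gradient of f1 = x1

def unit(v):
    n = math.hypot(*v)
    return (v[0] / n, v[1] / n) if n > 1e-12 else (0.0, 0.0)

h = 0.05                  # fixed step length (illustrative)
x = (0.0, 0.3)            # start near the Pareto frontier x2 = 0
for _ in range(12):
    # Half-step 1 (green arrow in FIG.6): improve the preferable objective f2 only
    d = unit(grad_f2(x))
    x = (x[0] - h * d[0], x[1] - h * d[1])
    # Half-step 2 (blue arrow): improve ALL objectives simultaneously,
    # which pulls the point back towards the Pareto frontier
    d = unit(tuple(a + b for a, b in zip(unit(GRAD_F1), unit(grad_f2(x)))))
    x = (x[0] - h * d[0], x[1] - h * d[1])

print("x1 = %.2f, x2 = %.2f" % x)   # x1 advances along the frontier; x2 stays near 0
```

Note the behavior near the frontier: the two gradients become nearly opposed there, so half-step 2 mostly corrects the x2 coordinate, keeping the evaluated points close to the Pareto frontier while the preferable objective keeps improving.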



The abbreviations ASI(1) and ASI(2) in the above pseudo-code stand for the Area of Simultaneous Improvement (ASI) for the preferable objectives and for all objectives, respectively (see FIG.5A-5C).

The following FIG.7 illustrates MGP results for the benchmark (1). Objective function f2 is assigned by the user as the preferable objective. MGP starts from the point {x1=0; x2=0}, and steps along the Pareto frontier until it is covered completely.

FIG. 7 shows all points evaluated by the MGP algorithm for the benchmark task (1). Green points are Pareto optimal, and yellow points are transitional. All points evaluated by MGP are located on the Pareto frontier or a short distance from it (see the yellow points).

The benchmark problem (1) and FIG.7 illustrate that in the case of two objective functions, MGP is able to start from one end of Pareto frontier, and cover it completely to the other end. In this case MGP successfully solves the task of approximating the entire Pareto frontier. This can be done with different step sizes, which determine the distance between neighboring points in the design space.

Now let us consider the benchmark problem (2) with three objectives and three design variables, and see how MGP can be used when Pareto frontier is a surface in a multi-dimensional space.

The following FIG.8 illustrates MGP results for the benchmark problem (2), whose Pareto frontier is a plane surface in the design space. Pareto optimal points, represented by small green markers, visualize this plane.

FIG. 8 shows Pareto optimal points found by the MGP algorithm for the benchmark task (2). MGP started optimization three times from the same point {x1=1; x2=1; x3=1}, but with different preferable objectives. The green trajectory of Pareto optimal points corresponds to the preferable objective f3; the red one to f1; the blue one to the preferable objectives f1 and f3. Light-green small markers visualize the Pareto frontier.


In order to illustrate the capability of the MGP algorithm to navigate along the Pareto frontier in different directions, we performed optimization three times starting from the same initial point, but with different preferable objectives. For instance, the preferable objective f3 causes MGP to move down and minimize f3. In the case of preferable objectives f1 and f3, MGP moves along the Pareto frontier in the direction of minimizing both f1 and f3. The left diagram on FIG.8 shows that all three trajectories belong to the Pareto surface in the criteria space. The right diagram shows that all three trajectories belong to the plane surface x3=1, which represents the Pareto frontier in the design space. With this task MGP never evaluated a single transitional point that does not belong to the Pareto frontier, or that lies a noticeable distance from it.

All evaluated points (optimal and non-optimal) are visualized on FIG. 7 and FIG.8, and we can make a few observations confirming that MGP performs directed optimization on Pareto frontier:

(a) MGP algorithm performs search exclusively on Pareto frontier, and only in the user’s area of interest; only a few of the evaluated points are non-Pareto optimal.

(b) The direction of movement along the Pareto frontier depends on the selection of preferable objectives, as expected. The green trajectory clearly indicates improvement of f3, the red trajectory indicates improvement of f1, and the blue trajectory indicates simultaneous improvement of f1 and f3;

(c) MGP is extremely efficient. The majority of evaluated points are Pareto optimal.

5. Benchmark problems for MGP algorithm

In this study, three state of the art multi-objective optimization algorithms have been compared to the proposed MGP algorithm. The algorithms Pointer, NSGA-II, and AMGA are developed by a leading company of the Process Integration and Design Optimization (PIDO) market. These commercial algorithms represent the highest level of optimization technology currently available in the PIDO market.

NSGA-II and AMGA are pure multi-objective optimization algorithms that are suitable to compare with MGP. Pointer is a more questionable candidate with regard to multi-objective optimization because it works as an automatic optimization engine that controls four different optimization algorithms, and only one of them is a true multi-objective algorithm. The three other algorithms in Pointer use a weighted sum method for solving multi-objective optimization tasks. Thus, Pointer is not the most suitable algorithm to compare with other multi-objective techniques. However, Pointer is a great optimization tool, and it is widely used for multi-objective optimization in engineering practice. Therefore, comparing Pointer with MGP algorithm on a number of multi-objective optimization benchmark problems makes practical sense.

For the algorithms AMGA, NSGA-II, Pointer, and MGP only default parameter values have been used to make sure that all algorithms are compared under equal conditions.

The following benchmarks have been collected to demonstrate the ability of MGP algorithm to work with very different topologies of Pareto frontier. MGP is able to recognize the topology and find a way along the Pareto frontier based on multi-gradient analysis, while allowing for the improvement of the preferable objective(s).

FIG.9-FIG.17 visualize all evaluated points in the criteria space and the design space, which allows one to see the sequence of steps and all of the details of the optimization process. Pareto optimal points are visualized in green; transitional points, which were used for gradient estimation and other intermediate operations, in red.

Preferred objectives are indicated as ‘Minimize+’ or ‘Maximize+’ in optimization task formulations. The sign ‘+’ means that the objective is preferable.

TP7 - Constrained Pareto Front

The following benchmark (4) is a sample of a constrained multi-objective optimization problem. FIG.9 shows a constrained Pareto front found by MGP algorithm.

(4)
Minimize F1 = x1
Minimize+ F2 = (1 + x2)/x1
Subject to: g1 = 9·x1 + x2 − 6 > 0;
g2 = 9·x1 − x2 − 1 > 0
x1 ∈ [0.1; 1], x2 ∈ [0; 5]
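Benchmark (4) matches the well-known CONSTR test problem; a sketch in code (standard form assumed; helper names are illustrative), together with the dominance test used to report Pareto optimal points:

```python
def constr_problem(x1, x2):
    """Benchmark (4) in its common CONSTR form: two objectives, two
    linear constraints, x1 in [0.1, 1], x2 in [0, 5]."""
    f1 = x1
    f2 = (1.0 + x2) / x1
    g1 = 9.0 * x1 + x2 - 6.0  # feasible when g1 > 0
    g2 = 9.0 * x1 - x2 - 1.0  # feasible when g2 > 0
    return (f1, f2), (g1, g2)

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
```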


FIG.9 The shape of the constrained Pareto frontier is not smooth for the benchmark problem (4). However, MGP is able to move from the beginning to the end of the Pareto frontier. 76 Pareto optimal points have been found out of 279 model evaluations.

TP8 - Disjointed Pareto Front

(5)
Minimize F1 = 1 + (A1 − B1)² + (A2 − B2)²
Minimize+ F2 = (x1 + 3)² + (x2 + 1)²
A1 = 0.5·sin(1) − 2·cos(1) + sin(2) − 1.5·cos(2)
A2 = 1.5·sin(1) − cos(1) + 2·sin(2) − 0.5·cos(2)
B1 = 0.5·sin(x1) − 2·cos(x1) + sin(x2) − 1.5·cos(x2)
B2 = 1.5·sin(x1) − cos(x1) + 2·sin(x2) − 0.5·cos(x2)
x1, x2 ∈ [−π; π]

FIG.10 Benchmark problem (5) has a disjoint Pareto frontier, which is a complication for other multi-objective optimization algorithms. MGP was able to start from one end of the Pareto frontier, and find a way to another disjoint part of the Pareto frontier based on multi-gradient analysis. The path from one disjoint segment to another is visualized by yellow markers. 63 Pareto optimal points have been found out of 356 model evaluations.

TP9 - Non-Convex Pareto Front

(6)
Minimize F1 = 1 − exp[−(x1 − 1/√2)² − (x2 − 1/√2)²]
Minimize+ F2 = 1 − exp[−(x1 + 1/√2)² − (x2 + 1/√2)²]
x1, x2 ∈ [−4; 4]
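Benchmark (6) follows the well-known Fonseca–Fleming non-convex problem; a sketch (the standard 1/√2 constants are assumed), with a brute-force nondominated filter of the kind used to report Pareto optimal points:

```python
import numpy as np

def fonseca(x):
    """Two-objective non-convex benchmark (Fonseca-Fleming form)."""
    a = 1.0 / np.sqrt(2.0)
    f1 = 1.0 - np.exp(-((x[0] - a) ** 2 + (x[1] - a) ** 2))
    f2 = 1.0 - np.exp(-((x[0] + a) ** 2 + (x[1] + a) ** 2))
    return np.array([f1, f2])

def pareto_filter(points):
    """Keep only nondominated objective vectors (minimization)."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            all(q <= p) and any(q < p) for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(p)
    return keep
```

The Pareto set of this problem is the diagonal segment x1 = x2 ∈ [−1/√2, 1/√2]; samples outside it are removed by the filter.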


FIG.11 Benchmark problem (6) has a non-convex Pareto frontier, which is a complication for traditional optimization algorithms based on gradients and scalarization techniques. MGP has found 36 evenly distributed Pareto optimal points out of 85 model evaluations. A finite difference method was used to estimate gradients.

TP10 - Disjoint Pareto Front

(7)
Minimize+ F1 = −10·exp[−0.2·√(x1² + x2²)] − 10·exp[−0.2·√(x2² + x3²)]
Minimize F2 = |x1|^0.8 + 5·sin(x1³) + |x2|^0.8 + 5·sin(x2³) + |x3|^0.8 + 5·sin(x3³)
x1, x2, x3 ∈ [−5; 5]

The benchmark (7) has a disjoint Pareto frontier with four separate segments. MGP was able to start from one end of the Pareto frontier, and find a way to all other disjoint parts of the Pareto frontier based on multi-gradient analysis. The path from one disjoint segment to another is visualized by red markers on FIG.12A.

FIG.12A MGP algorithm has started from the initial point (see blue marker), and stepped along the Pareto frontier to the end point. The last segment of the Pareto frontier consists of a single point, and MGP has found it. 62 Pareto optimal points have been found out of 614 model evaluations.


FIG.12B visualizes only Pareto optimal points, which allows one to see all four segments of the Pareto frontier in the design space.

TP5 – Multi-Modal Pareto Front

TP5 benchmark problem (8) is a challenging task because it has dozens of Pareto frontiers and five disjoint segments of the global Pareto frontier. MGP results for this benchmark problem will be compared to the results of the algorithms Pointer, NSGA-II and AMGA.

(8)
Minimize F1 = x1
Minimize+ F2 = g·h
g = 1 + 10·(n − 1) + [x2² + x3² + … + xn²] − 10·[cos(4πx2) + cos(4πx3) + … + cos(4πxn)]; n = 10
h = 1 − √(F1/g) − (F1/g)·sin(10πF1)
xi ∈ [0; 1]

The global Pareto frontier for the benchmark (8) belongs to the straight line {x1=0…1, x2=x3=…=x10=0}. As can be seen from FIG.13, MGP has found global Pareto optimal points very accurately. All transitional points (red markers) are located at a distance of 0.0001 from the optimal points, which is the step used by the finite difference method to estimate gradients.
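A sketch of benchmark (8) under the form the text describes (a ZDT4-style multimodal g combined with a ZDT3-style five-segment h; the exact constants are an assumption):

```python
import numpy as np

def tp5(x):
    """Benchmark (8) sketch: f1 = x1; f2 = g*h with a multimodal g
    (dozens of local fronts) and a sign-changing h (five disjoint
    segments of the global front)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    f1 = x[0]
    g = (1.0 + 10.0 * (n - 1)
         + np.sum(x[1:] ** 2)
         - 10.0 * np.sum(np.cos(4.0 * np.pi * x[1:])))
    h = 1.0 - np.sqrt(f1 / g) - (f1 / g) * np.sin(10.0 * np.pi * f1)
    return f1, g * h
```

On the line x2=…=x10=0 the cosine terms cancel the 10·(n−1) offset, so g = 1 and F2 reduces to the five-segment ZDT3-style curve.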


FIG.13 shows all points evaluated by MGP algorithm based on the finite difference method for gradient estimation. MGP has found 33 Pareto optimal points (green markers) out of 805 model evaluations. Red markers indicate transitional points from two categories: (a) the points that have been evaluated for gradient estimation; (b) the points that belong to dominated Pareto frontiers.

Two approaches to gradient estimation have been used for the TP5 benchmark problem. FIG.13 shows optimization results found by MGP employing the finite difference method for gradient estimation. FIG.14 and FIG.15 show optimization results found by MGP-RS (MGP algorithm with the response surface method DDRSM used to estimate gradients).

Both MGP and MGP-RS have found the global Pareto frontier, and covered it accurately and evenly. However, MGP-RS spent 805/185 = 4.3 times fewer model evaluations because DDRSM does not have to perform N+1 model evaluations for each gradient estimation.

MGP-RS has not compromised accuracy because DDRSM was able to recognize that x1 is the only significant design variable for this task. Also, DDRSM helped MGP algorithm to determine the optimal value for all other design variables: x2=x3=…=x10=0.
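DDRSM itself is proprietary, but the idea of detecting significant variables from a handful of local samples can be sketched with a crude linear screen (illustrative only, not the actual DDRSM):

```python
import numpy as np

def significant_variables(f, x0, radius=0.01, n_samples=15, threshold=0.1, seed=0):
    """Crude significance screen: fit a local linear model from a few
    random samples around x0 and keep the variables whose normalized
    |coefficient| exceeds threshold."""
    rng = np.random.default_rng(seed)
    X = x0 + rng.uniform(-radius, radius, size=(n_samples, len(x0)))
    y = np.array([f(x) for x in X])
    A = np.hstack([X - x0, np.ones((n_samples, 1))])  # linear terms + intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    w = np.abs(coef[:-1])
    w = w / w.max() if w.max() > 0 else w
    return np.flatnonzero(w > threshold)
```

For a model dominated by x1 the screen returns only index 0, mirroring the behavior described for TP5.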


FIG. 14 shows all points evaluated by MGP algorithm. MGP has found 24 Pareto optimal points (green markers) out of 185 evaluations. Red markers indicate transitional points which have been evaluated for gradient estimation.


The multi-objective optimization task (8) was solved by MGP algorithm with the use of DDRSM for gradient estimation. The majority of red points on FIG.14 were generated to build local DDRSM approximations and estimate gradients. The range of red points for most of the design variables is relatively small: [0, 0.028]. This can be clearly seen on FIG.14B for the design variable x10, and on most of the diagrams on FIG.14. It means that all the points evaluated by MGP algorithm are located in a small area around the global Pareto frontier. As a result, MGP spent 185 model evaluations, and covered all five segments of the global Pareto frontier.

Pay attention to the green markers located on the FIG.14 diagrams at the position where x2=x3=…=x10=0, and x1=0…1. This confirms that MGP has found exact global Pareto optimal solutions.

FIG. 15 shows all points evaluated by MGP algorithm and the algorithms Pointer, NSGA-II, AMGA for multi-objective optimization. MGP has spent 185 evaluations, and covered all five segments of the global Pareto frontier. Each of the other optimization algorithms spent 2000 model evaluations with noticeably worse results: NSGA-II was able to approach 3 of 5 segments of the global Pareto frontier; AMGA and Pointer have not found a single Pareto optimal solution.

In contrast with MGP, the algorithms Pointer, NSGA-II, and AMGA performed their search in the entire design space, where each design variable ranges from 0 to 1 (see the diagrams x1-x10, x2-x3, x4-x5, x6-x7, x8-x9 on FIG.15). Despite significant computational effort (each algorithm performed 2000 model evaluations), Pointer and AMGA have not found a single Pareto optimal point, and NSGA-II approached just 3 of 5 segments of the Pareto frontier. This can be seen on the left-top diagram of FIG.15.

The algorithms Pointer, NSGA-II, and AMGA represent state of the art design optimization technology in the field of multi-objective optimization. These algorithms illustrate how optimization search is performed in the design space by current technologies (see FIG.15.)


ZDT2 – Multiple Pareto frontiers

(9)
Minimize+ F1 = x1
Minimize F2 = g·[1 − (F1/g)²]
g = 1 + 9·(x2 + x3 + … + xn)/(n − 1)
0 ≤ xi ≤ 1, i = 1,…,n; n = 30


FIG. 16 compares optimization results for MGP algorithm with results of the algorithms Pointer, NSGA-II and AMGA. All points evaluated by each optimization algorithm are visualized. Green markers on FIG.16B show the global Pareto frontier in the design space.

F1 was assigned as the preferable objective for MGP algorithm. MGP started from the initial point X={1,0,0,…,0}, and stepped along the Pareto frontier (see FIG.16B) from one end to the other. MGP has found 18 optimal points out of 38 model evaluations. All the points belong to the global Pareto frontier, and cover it evenly and completely. MGP used DDRSM to estimate gradients. Red markers show all the points calculated by DDRSM to build local approximations and to estimate gradients. As can be seen from FIG.16B, all transitional points evaluated by DDRSM are located in a narrow area along the global Pareto frontier.

In contrast, each of the other algorithms spent 1500 evaluations. Pointer was able to find several Pareto optimal points in the area of maximum F1 values. NSGA-II algorithm has approached the global Pareto frontier in the area of low F1 values. AMGA algorithm was not able to even approach the global Pareto frontier after 1500 model evaluations.

The previous observations confirm that directed optimization on Pareto frontier is significantly more efficient and accurate when compared to searching the entire design space. MGP spent 40 times fewer model evaluations than the other algorithms, and covered the entire Pareto frontier, while the other algorithms would still need an additional 2000-3000 model evaluations to accurately solve the optimization task for the benchmark problem ZDT2.
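The ZDT2 function used in this comparison (standard form assumed) can be written as a short sketch:

```python
import numpy as np

def zdt2(x):
    """ZDT2: two objectives with a concave global front; n = 30 in the text."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * np.sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - (f1 / g) ** 2)
    return f1, f2
```

On the global Pareto front x2 = … = xn = 0, so g = 1 and F2 = 1 − F1².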

6. Hybrid Multi-Gradient Pathfinder Algorithm

Hybrid Multi-Gradient Pathfinder (HMGP) algorithm is a new multi-objective optimization algorithm which combines elements of MGP algorithm with elements of genetic algorithms (GA).

The main idea of the HMGP algorithm is the following: HMGP steps along a Pareto frontier in a way similar to MGP, but periodically performs a GA-based iteration with random mutation based on archived Pareto optimal points. If a random mutation brings a dominating point then the point is declared as the current point, and HMGP makes the next gradient-based step from that point. Essentially, HMGP shifts to the dominating Pareto frontier as soon as it finds the first dominating point belonging to that frontier, and continues stepping along the dominating Pareto frontier. If the task has multiple Pareto frontiers then HMGP sequentially steps from one Pareto frontier to another until it finds the global Pareto frontier. HMGP stops when it finds the best point on the global Pareto front with respect to preferable objective(s), or when the maximum number of model evaluations is exceeded.

Since HMGP performs both (a) searches for dominating Pareto frontiers in the entire design space, and (b) a directed gradient-based search on a local Pareto frontier, we need to reflect it in the optimization task formulation (10):

(10)
Minimize [F1(X), F2(X), …, Fm(X)]^T
Minimize+ [P1(X), P2(X), …, Pn(X)]^T
Subject to: qj(X) ≤ 0; j = 1, 2, …, k
X = {x1, x2, …, xN} ∈ S; S ⊂ ℝ^N

Multiobjective optimization task formulations (3) and (10) are identical with the exception of the search space formulation: XPF in (3) and the entire design space X in (10). Let us consider the HMGP algorithm in detail.

Similar to GA algorithms, HMGP keeps all evaluated points in an archive. The points in the archive are sorted by the value of the utility function (11), and the best point is used as an initial point for gradient-based steps in a direction of simultaneous improvement for n preferable objectives.

(11) U = P1 + P2 + … + Pn
where all preferable objective functions Pi are normalized.

The GA-based part of the HMGP algorithm periodically performs a random mutation based on archived solutions found on previous iterations. HMGP (a) finds all the 1st rank points in the archive with respect to all objectives, (b) applies the crowding distance method to the 1st rank points to find an initial point for random mutation, and (c) performs the random mutation. This allows HMGP to find a dominating or global Pareto frontier.
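Steps (a) and (b) correspond to the standard nondominated-ranking and crowding-distance machinery popularized by NSGA-II; a minimal sketch:

```python
import numpy as np

def first_rank(points):
    """Indices of nondominated (1st rank) objective vectors (minimization)."""
    pts = np.asarray(points, dtype=float)
    idx = []
    for i, p in enumerate(pts):
        if not any(np.all(q <= p) and np.any(q < p)
                   for j, q in enumerate(pts) if j != i):
            idx.append(i)
    return idx

def crowding_distance(front):
    """NSGA-II style crowding distance for a set of objective vectors."""
    f = np.asarray(front, dtype=float)
    n, m = f.shape
    d = np.zeros(n)
    for k in range(m):
        order = np.argsort(f[:, k])
        d[order[0]] = d[order[-1]] = np.inf  # boundary points always kept
        span = f[order[-1], k] - f[order[0], k]
        if span > 0:
            for t in range(1, n - 1):
                d[order[t]] += (f[order[t + 1], k] - f[order[t - 1], k]) / span
    return d
```

Points with the largest crowding distance lie in the least populated parts of the 1st rank set, which makes them good seeds for the random mutation step.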

The random mutation element of the algorithm transforms the local MGP optimization algorithm into a global optimization algorithm, which is efficient in finding the global Pareto frontier and disjoint parts of the Pareto frontier.

The HMGP algorithm comprises the following steps:

1 Begin
2 Generate the required number of initial points X1,…,XN using Latin hypercube sampling
3 Add newly calculated points to the archive
4 Sort solutions by preferred objectives using the utility function (11); select the best solution with respect to preferred objectives as the initial point for MGA steps
5 Improve the solution by an MGA step with respect to preferred objectives
6 Extract points of first rank (with respect to all objectives) from the archive, and apply to them the crowding distance method; select in this way the best initial solution
7 Improve the solution by an MGA step with respect to ALL objectives
8 Apply the random mutation operator to obtain a solution with probability equal to t
9 Extract m points of first rank (with respect to all objectives) from the archive and produce k children by applying the SBX crossover operator
10 Apply the random mutation operator to each child with probability equal to t
11 If the stop criterion is not satisfied then go to 3
12 Report all the solutions found
13 End
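Steps 8-10 rely on standard GA operators; minimal sketches of SBX crossover and uniform random mutation (common textbook forms, not eArtius's exact operators):

```python
import numpy as np

def sbx_crossover(p1, p2, eta=15.0, rng=None):
    """Simulated Binary Crossover (SBX): two children whose mean equals
    the parents' mean; eta controls how close children stay to parents."""
    rng = rng or np.random.default_rng()
    u = rng.random(len(p1))
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2

def random_mutation(x, bounds, t=0.1, rng=None):
    """With probability t per variable, resample it uniformly in its bounds."""
    rng = rng or np.random.default_rng()
    x = np.array(x, dtype=float)
    for i, (lo, hi) in enumerate(bounds):
        if rng.random() < t:
            x[i] = rng.uniform(lo, hi)
    return x
```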



The HMGP algorithm can use two exit conditions: a) the maximum number of model evaluations has been exceeded; b) the best solution with respect to preferable objectives could not be improved over the last N iterations.

The second exit condition can be used to stop HMGP after the best solution with respect to preferred objectives has been found. In this case HMGP can be stopped before the maximum allowable number of evaluations is exceeded.

The first exit condition can be used if the user wants to not only find the best solution, but to cover the entire Pareto front, and agrees to spend the maximum allowable number of model evaluations. If for some reason further improvement of the best solution (with respect to preferable objectives) is impossible then HMGP starts working similarly to HMGE, and covers the entire Pareto frontier evenly.

Similar to the MGP algorithm, HMGP estimates gradients based on the Dynamically Dimensioned Response Surface Method (DDRSM) [2].

7. Benchmark problems for HMGP algorithm

HMGP is designed to efficiently find global Pareto frontiers for tasks with multiple Pareto frontiers. Thus, we collected a few benchmarks that are challenging for multi-objective optimization algorithms because they have dozens of Pareto frontiers and 30 design variables. HMGP optimization results will be compared to the results of the commercial optimization algorithms Pointer, NSGA-II, and AMGA, which represent the state of the art in current design optimization technology.

Benchmark ZDT1 with multiple convex Pareto frontiers

The optimization task formulation used is as follows:

FIG.18 Results of HMGP, Pointer, NSGA-II and AMGA after 400 evaluations. All points evaluated by each algorithm are visualized on the charts. HMGP has found and evenly covered the global Pareto frontier. Other algorithms could not even approach the global Pareto frontier after 400 evaluations.

Minimize F1 = x1
Minimize+ F2 = g·[1 − √(F1/g)]
g = 1 + 9·(x2 + x3 + … + xn)/(n − 1)
0 ≤ xi ≤ 1, i = 1,…,n; n = 30     (12)



FIG.19 Results of HMGP after 400 evaluations, and results of Pointer, NSGA-II and AMGA after 1000 evaluations. All points evaluated by each algorithm are visualized on the charts.

HMGP has found and evenly covered the global Pareto frontier. Other algorithms are closer to the global Pareto frontier when compared with FIG.18, but still cannot show good results. Pointer has found a few Pareto optimal points corresponding to low values of the objective F1. NSGA-II and AMGA could not approach the global Pareto frontier.

Vertical lines on FIG.19B indicate that Pointer, NSGA-II and AMGA got stuck on local Pareto frontiers, and spent too many model evaluations trying to cover the local frontiers evenly. This indicates that the algorithms have little ability to find dominating frontiers and to move onto them quickly, which reduces the algorithms' efficiency.

In contrast, HMGP (see green markers on FIG.19) has not created vertical lines corresponding to local Pareto frontiers. This means that HMGP was able to discover dominating Pareto frontiers and the global Pareto frontier quickly, and has avoided collecting local Pareto optimal points. Therefore, 400 evaluations were enough for HMGP to find and fill in the global Pareto frontier.

Benchmark problem ZDT2 with multiple concave Pareto frontiers

ZDT2 benchmark (13) has 30 design variables and dozens of local Pareto fronts. The optimization task formulation used is as follows:

Minimize F1 = x1
Minimize+ F2 = g·[1 − (F1/g)²]
g = 1 + 9·(x2 + x3 + … + xn)/(n − 1)
0 ≤ xi ≤ 1, i = 1,…,n; n = 30     (13)


FIG. 20A shows Pareto optimal points found by HMGP algorithm for the benchmark task (13). At the price of 600 evaluations HMGP found the exact global Pareto front, and covered the front completely. HMGP started from the blue initial point (see FIG.20A), and sequentially found several local Pareto frontiers. Fragments of local Pareto frontiers parallel to the green front can be seen on FIG. 20B in red. At the very end of the optimization session HMGP found the global Pareto frontier, and covered it from beginning to end.

FIG.20B Results of HMGP after 400 evaluations, and results of Pointer after 1200 evaluations, NSGA-II and AMGA after 1500 evaluations. All points evaluated by each algorithm are visualized. HMGP has found and evenly covered the global Pareto frontier. Pointer has found a few Pareto optimal points corresponding to low values of F1. NSGA-II and AMGA could not approach the global Pareto frontier after 1500 model evaluations.

Benchmark problem ZDT3 with multiple disjoint Pareto frontiers

The optimization task formulation used is as follows:


(14)
Minimize F1 = x1
Minimize+ F2 = g·[1 − √(F1/g) − (F1/g)·sin(10πF1)]
g = 1 + 9·(x2 + x3 + … + xn)/(n − 1)
0 ≤ xi ≤ 1, i = 1,…,n; n = 30

FIG.21 Results of HMGP after 800 evaluations, and results of Pointer, NSGA-II and AMGA after 1500 evaluations. Only Pareto optimal points and 1st rank points are visualized on the charts.

HMGP has found and covered all five disjoint segments of the global Pareto frontier. Pointer has covered only three of five segments of the Pareto frontier. NSGA-II and AMGA were not able to approach the global Pareto frontier.

The optimization results shown on the diagrams FIG.19-FIG.21 confirm that HMGP algorithm consistently shows better efficiency and accuracy compared with the Pointer, NSGA-II and AMGA optimization algorithms.

8. eArtius Design Optimization Tool

eArtius has developed a commercial product, Pareto Explorer, which is a multi-objective optimization and design environment combining a process integration platform with sophisticated optimization algorithms and powerful post-processing capabilities.

Pareto Explorer 2010 implements the optimization algorithms described above, and provides a complete set of functionality necessary for a design optimization tool:

• Intuitive and easy to use Graphical User Interface; advanced IDE paradigm similar to Microsoft Developer Studio 2010 (see FIG.22);

• Interactive 2D/3D graphics based on OpenGL technology;
• Graphical visualization of optimization process in real time;
• Process integration functionality;
• Statistical Analysis tools embedded in the system;
• Design of Experiments techniques;
• Response Surface Modeling;
• Pre- and post-processing of design information;
• Data import and export.



All the diagrams included in this paper are generated by Pareto Explorer 2010. The diagrams give an idea of the quality of data visualization, the ability to compare different datasets, and the flexible control over diagram appearance.

FIG. 22 shows a screenshot of Pareto Explorer main window.

In addition to the design optimization environment implemented in Pareto Explorer, eArtius provides all the described algorithms as plug-ins for Noesis OPTIMUS, ESTECO modeFrontier, and Simulia Isight design optimization environments.

Additional information about eArtius products and design optimization technology can be found at www.eartius.com.

9. Conclusion

A new concept of directed optimization on Pareto frontier is introduced, and the Multi-Gradient Pathfinder (MGP) algorithm is developed based on this concept. According to the concept, MGP performs the optimization search directly on the Pareto frontier in a preferred direction determined by the user's preferences. This allows the following:

(a) Avoiding a search in the areas that do not contain Pareto optimal points; as a result, 80-95% of evaluated points are Pareto optimal;

(b) Performing a search for the best optimal solutions only in the user’s area of interest and dramatically reducing computational effort;

(c) Precise approach to a desired solution on the Pareto frontier instead of the inaccurate approach typical of GAs and other conventional optimization techniques.

MGP has unparalleled efficiency because of the reasons (a)-(c) explained above, and also because of the increased control over the optimization process given to the user. For instance, MGP is able to perform a number of steps determined by the user, and then stop. In this mode, the user can precisely find a desirable improvement for the best known design at the price of just 10-15 evaluations. Thus, MGP can be used for optimization of extremely computationally expensive simulation models taking hours and even days for a single evaluation. Obviously, MGP is good for fast models as well.

Hybrid Multi-Gradient Pathfinder (HMGP) algorithm is also developed based on the same concept of directed optimization on Pareto frontier. HMGP employs a gradient-based technique, and behaves similarly to MGP. In addition, HMGP employs a GA technique to search for dominating Pareto fronts. HMGP starts gradient-based steps along a dominating Pareto front as soon as the first dominating Pareto optimal point is found by the GA-based part of the algorithm. HMGP is very efficient in finding the global Pareto frontier, and in finding the best point on it with respect to preferable objectives.

Both MGP and HMGP algorithms employ the eArtius response surface method DDRSM [2], which allows efficient optimization of models with dozens or hundreds of design variables.

Comparison of HMGP with the state of the art commercial multi-objective optimization algorithms NSGA-II, AMGA, and Pointer on a number of challenging benchmarks has shown that HMGP finds global Pareto frontiers 2-10 times faster. This makes it possible to avoid using DOE and surrogate models for global approximation, and instead to apply HMGP directly for the optimization of computationally expensive simulation models.

HMGP is the best choice for solving global multi-objective optimization tasks for simulation models with moderate estimation time when 200-500 model evaluations are considered as a reasonable number of model evaluations for finding global Pareto optimal solutions.

10. References

1. Marler, R. T., and Arora, J. S. (2004), "Survey of Multi-objective Optimization Methods for Engineering", Structural and Multidisciplinary Optimization, 26, 6, 369-395.
2. Sevastyanov, V., and Shaposhnikov, O., "Gradient-based Methods for Multi-Objective Optimization", Patent Application Serial No. 11/116,503, filed April 28, 2005.
3. Levitan, L., and Sevastyanov, V., "The Exclusion of Regions Method for Multi-Objective Optimization", US Patent No. 7,593,834, 2009.
4. Vanderplaats, G. N. (1984), Numerical Optimization Techniques for Engineering Design: With Applications, McGraw-Hill Series in Mechanical Engineering.
5. Bellman, R. E. (1957), Dynamic Programming, Princeton University Press, Princeton, NJ.