
International Journal of Computing & Information Sciences, Vol. 2, No. 1, April 2004

Evolutionary Algorithms for Multi-Criterion Optimization: A Survey

Ashish Ghosh

    Machine Intelligence Unit, Indian Statistical Institute, Kolkata-108, India.

    Satchidananda Dehuri

Department of Computer Science & Engineering, Silicon Institute of Technology, BBSR-24, India.

Abstract: In this paper, we review some of the most popular evolutionary algorithms and provide a systematic comparison among them. We then show their importance in single- and multi-objective optimization problems. Thereafter we focus on some of the multi-objective evolutionary algorithms (MOEAs) currently used by many researchers, and on the merits and demerits of MOEAs. Finally, the future trends in this area and some possible paths of further research are addressed.

Keywords: Evolutionary algorithms, multi-criterion optimization, Pareto-optimal solution, dominated and non-dominated solution.

Received: March 01, 2004 | Revised: February 03, 2005 | Accepted: March 10, 2005

    1. Introduction

Evolutionary techniques have been used for the purpose of single-objective optimization for more than three decades [1]. Gradually, people discovered that many real-world problems are naturally posed as multi-objective. Nowadays multi-objective optimization is no doubt a very popular topic for both researchers and engineers. But there are still many open questions in this area. In fact, there is not even a universally accepted definition of "optimum" as in single-objective optimization, which makes it difficult to even compare the results of one method to another, because normally the decision about what the best answer is rests with the so-called (human) decision-maker.

Since multi-criterion optimization requires simultaneous optimization of multiple, often competing or conflicting, criteria (or objectives), the solution to such problems is usually computed by combining them into a single-criterion optimization problem. But the resulting solution to the single-objective optimization problem is usually subject to the parameter settings chosen by the user [2], [3]. Moreover, since usually a classical optimization method is used, only one solution (hopefully a Pareto-optimal solution) can be found in one simulation run. So, in order to find multiple Pareto-optimal solutions, evolutionary algorithms are the best choice, because they deal with a population of solutions. This allows an entire set of Pareto-optimal solutions to be found in a single run of the algorithm. In addition, evolutionary algorithms are less susceptible to the shape or continuity of the Pareto front.

The rest of the paper is organized as follows. Section 2 gives a brief description of evolutionary algorithms. Section 3 addresses the key concepts of multi-objective evolutionary algorithms. The different approaches of MOEA and their merits and demerits are the subject of Section 4. A discussion of the future perspectives is given in Section 5. Section 6 concludes the article. Some applications of multi-objective evolutionary algorithms are also indicated.

2. Evolutionary Algorithms

Evolution is in essence a two-step process of random variation and selection [4]. It can be modeled mathematically as $x[t+1] = s(v(x[t]))$, where the population at time t, denoted $x[t]$, is operated on by random variation $v$ and selection $s$ to give rise to a new population $x[t+1]$. The process of evolution can be modeled algorithmically and simulated on a computer. The modeled algorithms are known as evolutionary algorithms (EAs). Evolutionary computation is the field that studies the properties of these algorithms and similar

procedures for simulating evolution on a computer. The subject is still relatively new and represents an effort to bring together researchers who have been working in these and other closely related fields.
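To make the model concrete, here is a minimal Python sketch of the $x[t+1] = s(v(x[t]))$ loop; the quadratic objective, population size and mutation scale are illustrative assumptions, not part of the original formulation.

    import random

    def objective(x):
        # hypothetical single-objective fitness: minimize x^2
        return x * x

    def evolve(pop_size=20, generations=50):
        # x[0]: random initial population
        pop = [random.uniform(-10, 10) for _ in range(pop_size)]
        for t in range(generations):
            # v: random variation (Gaussian perturbation of each member)
            varied = [x + random.gauss(0.0, 0.5) for x in pop]
            # s: selection keeps the best pop_size members of parents + variants
            pop = sorted(pop + varied, key=objective)[:pop_size]
        return pop  # x[t+1] after the final generation

    print(min(evolve(), key=objective))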


In essence, the EAs are used to describe computer-based problem-solving algorithms that use computational models of evolutionary processes as key elements in their design and implementation. A variety of evolutionary algorithms have evolved during the last 30 years. The major ones are: i) genetic algorithms, ii) evolution strategies, iii) evolutionary programming and iv) genetic programming. Each of these constitutes a different approach; however, they are all inspired by the same principle of natural evolution. They all share a common conceptual base of simulating the evolution of individual structures via processes of selection, mutation and crossover. More precisely, EAs maintain a population of structures that evolve according to rules of selection and other operators, referred to as genetic operators, such as crossover and mutation. Let us discuss each of these evolutionary algorithms.

2.1. Genetic Algorithms

A genetic algorithm (GA), developed by J.H. Holland (1975) [5][1], is a model of machine learning which derives its behavior from a metaphor of the processes of evolution in nature. GAs are executed iteratively on a set of coded chromosomes, called a population, with three basic genetic operators: selection, crossover and mutation. Each member of the population is called a chromosome (or individual) and is represented by a string. GAs use only the objective function information and probabilistic transition rules for genetic operations. The primary operator of GAs is crossover. Figure 1 shows the basic flow of a GA.

(Figure 1. Basic structure of a genetic algorithm: initialize population, then fitness assignment, selection, crossover and mutation; if performance is good, output the result, otherwise loop back to fitness assignment.)
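As an illustration of these three operators on bit-string chromosomes, the following Python sketch shows textbook roulette-wheel selection, one-point crossover and bit-flip mutation; the mutation rate and the non-negative-fitness assumption are ours.

    import random

    def select(pop, fitness):
        # roulette-wheel (fitness-proportionate) selection of one parent;
        # assumes non-negative fitness values
        return random.choices(pop, weights=[fitness(c) for c in pop], k=1)[0]

    def crossover(a, b):
        # one-point crossover of two equal-length bit strings (lists of 0/1)
        point = random.randint(1, len(a) - 1)
        return a[:point] + b[point:], b[:point] + a[point:]

    def mutate(c, rate=0.01):
        # bit-flip mutation: flip each bit independently with probability `rate`
        return [bit ^ 1 if random.random() < rate else bit for bit in c]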

    2.2. Evolutionary Strategy

Evolution strategies (ES) were developed in Germany by I. Rechenberg and H.P. Schwefel [6-8]. They imitate mutation, selection and recombination by using normally distributed mutation, a deterministic mechanism for selection (which chooses the best set of offspring individuals for the next generation) and a broad repertoire of recombination operators. The primary operator of the ES is mutation. There are two variants of selection commonly used in ES. In the elitist variant, the parent individuals for the new generation are selected from the set of both parents and offspring of the old generation. This is called the plus strategy, denoted (μ + λ).

If the parent individuals do not take part in the selection, then the individuals of the next generation are selected only from the offspring, which is called the comma strategy and denoted (μ, λ). The notation (μ +, λ) is used to subsume both selection schemes. The following algorithm shows the basic principle of ES.

Algorithm
Begin
    t = 0
    initialize P(t)
    evaluate P(t)
    while (termination condition not satisfied) do
    begin
        t = t + 1
        select P(t) from P(t-1)
        mutate P(t)
        evaluate P(t)
    end
End
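A Python sketch of one ES generation, showing normally distributed mutation and the difference between the plus and comma selection schemes; the offspring count lam, the mutation strength sigma and minimization of the objective are illustrative assumptions.

    import random

    def es_generation(parents, objective, lam, sigma=0.1, plus=True):
        # normally distributed mutation: each of the lam offspring perturbs
        # every component of a randomly chosen parent
        offspring = [[x + random.gauss(0.0, sigma) for x in random.choice(parents)]
                     for _ in range(lam)]
        # plus strategy: deterministic selection from parents and offspring;
        # comma strategy: deterministic selection from the offspring only
        pool = parents + offspring if plus else offspring
        mu = len(parents)
        return sorted(pool, key=objective)[:mu]  # best mu survive (minimization)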

2.3 Evolutionary Programming

Fogel (1960) proposed a stochastic optimization strategy similar to GAs, called evolutionary programming (EP) [9]. EP is also similar to evolution strategies (ESs). Like both ES and GAs, EP is a useful method of optimization when other techniques such as gradient descent or direct analytical discovery are not possible. Combinatorial and real-valued function optimization, in which the optimization surface or fitness landscape is "rugged", possessing many locally optimal solutions, is well suited for evolutionary programming. The basic EP method involves 3 steps (repeated until a certain number of iterations is exceeded or an acceptable solution is obtained):

- Generate the initial population of solutions randomly.
- Replicate each solution into a new population. Each of these offspring solutions is mutated according to a distribution of mutation types, ranging from minor to extreme, with a continuum of mutation types. The severity of mutation is


  • 8/13/2019 38-57S

    3/20

    Evolutionary Algorithms for Multi-criterion Optimization: A Survey 40

judged on the basis of the functional change imposed on the parents.
- Each offspring solution is assessed by computing its fitness. Typically, a stochastic tournament is held to determine the N solutions to be retained for the population of solutions, although this is occasionally performed deterministically. There is no requirement that the population size be held constant; note, however, that only a single offspring is generated from each parent.

It should be pointed out that EP typically does not use any crossover as a genetic operator.
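The three EP steps can be sketched in Python as follows; the Gaussian mutation distribution and the tournament size q are illustrative assumptions.

    import random

    def ep_generation(pop, objective, q=10, sigma=0.5):
        # step 2: each solution is replicated and the replica is mutated
        offspring = [[x + random.gauss(0.0, sigma) for x in sol] for sol in pop]
        combined = pop + offspring
        # step 3: stochastic tournament -- each solution scores a win whenever
        # it is at least as good as a randomly drawn opponent (minimization);
        # the N best scorers are retained (note: no crossover anywhere)
        wins = [sum(objective(sol) <= objective(random.choice(combined))
                    for _ in range(q))
                for sol in combined]
        ranked = sorted(range(len(combined)), key=lambda i: wins[i], reverse=True)
        return [combined[i] for i in ranked[:len(pop)]]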

2.4 Genetic Programming

Genetic programming (GP) was introduced by John Koza (1992) [10-12]. GP is the extension of the genetic model of learning into the space of programs. That is, the objects that constitute the population are not fixed-length character strings that encode possible solutions to the problem at hand; they are programs that, when executed, are the candidate solutions to the problem. These programs are expressed in genetic programming as parse trees, rather than as lines of code. Thus, for example, the simple program a + b*c would be represented as:

        +
       / \
      a   *
         / \
        b   c

or, to be precise, as suitable data structures linked together to achieve this effect. Because this is a very simple thing to do in the programming language LISP, many genetic programmers tend to use LISP. There are straightforward methods to implement GP in a non-LISP programming environment as well.

The programs in the population are composed of elements from the function set and the terminal set, which are typically fixed sets of symbols selected to be appropriate to the solution of problems in the domain of

interest.

In GP the crossover operation is implemented by taking randomly selected subtrees in the individuals (selected according to fitness) and exchanging them. It should be noted that GP usually does not use mutation as a genetic operator.
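A minimal Python sketch of the parse-tree representation and of subtree crossover; the nested-tuple encoding and the uniform random subtree picker are illustrative choices, not Koza's original implementation.

    import random

    # a program is a nested tuple: ('+', 'a', ('*', 'b', 'c')) encodes a + b*c

    def subtrees(tree, path=()):
        # enumerate every subtree together with its position in the tree
        yield path, tree
        if isinstance(tree, tuple):
            for i, child in enumerate(tree[1:], start=1):
                yield from subtrees(child, path + (i,))

    def replace(tree, path, new):
        # return a copy of `tree` with the subtree at `path` replaced by `new`
        if not path:
            return new
        i = path[0]
        return tree[:i] + (replace(tree[i], path[1:], new),) + tree[i + 1:]

    def gp_crossover(parent1, parent2):
        # exchange randomly selected subtrees between the two parents
        p1, s1 = random.choice(list(subtrees(parent1)))
        p2, s2 = random.choice(list(subtrees(parent2)))
        return replace(parent1, p1, s2), replace(parent2, p2, s1)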

3. Multi-objective Evolutionary Algorithms

Multi-objective optimization methods, as the name suggests, deal with finding optimal solutions to problems having multiple objectives [13]. So in this type of problem the user is never satisfied by finding one solution that is optimal with respect to a single criterion.

The principle of a multi-criterion optimization procedure is different from that of a single-criterion optimization. In single-criterion optimization the main goal is to find the global optimal solution. However, in a multi-criterion optimization problem there is more than one objective function, each of which may have a different individual optimal solution. If there is a sufficient difference in the optimal solutions corresponding to different objectives, then we say that the objective functions conflict with each other. Multi-criterion optimization with such conflicting objective functions gives rise to a set of optimal solutions, instead of one optimal solution. The reason for the optimality of many solutions is that none of them can be considered better than any other with respect to all objective functions. These optimal solutions have a special name, Pareto-optimal solutions, after the economist Vilfredo Pareto, who stated in 1896 the concept named after him, known as Pareto optimality.

The concept is that the solution to a multi-objective optimization problem is normally not a single value but instead a set of values, also called the Pareto set. Let us illustrate Pareto-optimal solutions with the time and space complexity of an algorithm, shown in Figure 2. In this problem we have to minimize both time and space complexity.

The point p represents a solution which has minimal time but high space complexity. On the other hand, the point r represents a solution with high time complexity but minimum space complexity. Considering both objectives, no single solution is optimal. So in this case we can't say that solution p is better than r. In fact, there exist many such solutions, like q, that also belong to the Pareto-optimal set, and one can't sort the solutions according to the performance metrics considering both


(Figure 2. Pareto-optimal solutions: space complexity on the horizontal axis and time complexity on the vertical axis. Solutions p, q and r lie on the Pareto-optimal front; solution t lies away from the front and is dominated.)


objectives. All the solutions on the curve are known as Pareto-optimal solutions.

From Figure 2 it is clear that there exist solutions which do not belong to the Pareto-optimal set, such as the solution t. The reason why t does not belong to the Pareto-optimal set is obvious: if we compare t with solution q, t is not better than q with respect to any of the objectives. So we can say that t is a dominated, or inferior, solution.

Now we are clear about the optimality concept of multi-criterion optimization. Based on the above discussions, we first define conditions for a solution to be dominated with respect to another solution, and then present conditions for a set of solutions to be a Pareto-optimal set.

Let us consider a problem having m objectives (say $f_i$, $i = 1, 2, 3, \ldots, m$ and $m > 1$). Any two solutions $u^{(1)}$ and $u^{(2)}$ (having t decision variables each) can have one of two possibilities: one dominates the other, or neither dominates the other. A solution $u^{(1)}$ is said to dominate the other solution $u^{(2)}$ if the following conditions are both true:

1. The solution $u^{(1)}$ is no worse than $u^{(2)}$ in all objectives, i.e., $f_i(u^{(1)})$ is not worse than $f_i(u^{(2)})$ for all $i = 1, 2, 3, \ldots, m$.

2. The solution $u^{(1)}$ is strictly better than $u^{(2)}$ in at least one objective, i.e., $f_i(u^{(1)})$ is better than $f_i(u^{(2)})$ for at least one $i \in \{1, 2, 3, \ldots, m\}$.

If either of the above conditions is violated, the solution $u^{(1)}$ does not dominate the solution $u^{(2)}$. If $u^{(1)}$ dominates the solution $u^{(2)}$, then we can also say that $u^{(2)}$ is dominated by $u^{(1)}$, or that $u^{(1)}$ is non-dominated by $u^{(2)}$, or, simply, that between the two solutions, $u^{(1)}$ is the non-dominated solution.
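These two conditions translate directly into code. The following sketch assumes every objective is to be minimized and represents each solution by its objective vector.

    def dominates(u1, u2):
        # u1, u2 are objective vectors; all objectives are to be minimized
        no_worse = all(a <= b for a, b in zip(u1, u2))        # condition 1
        strictly_better = any(a < b for a, b in zip(u1, u2))  # condition 2
        return no_worse and strictly_better

    # example: (1, 4) dominates (2, 4); (1, 4) and (2, 3) are mutually non-dominated
    assert dominates((1, 4), (2, 4))
    assert not dominates((1, 4), (2, 3)) and not dominates((2, 3), (1, 4))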

The following definitions determine whether a set of solutions belongs to a local or global Pareto-optimal set.

Local Pareto-optimal set: If for every member u in a set S there exists no solution v satisfying $\|v - u\|_\infty \le \epsilon$, where $\epsilon$ is a small positive number, that dominates any member of the set S, then the solutions belonging to the set S constitute a local Pareto-optimal set. In other words, if perturbing a solution u in S by a small amount never yields a solution v that dominates any member of the set, then the set is called a local Pareto-optimal set.

Global Pareto-optimal set: If there exists no solution in the search space which dominates any member of the set S, then the solutions belonging to the set S constitute a global Pareto-optimal set.

Difference between a non-dominated set and a Pareto-optimal set: A non-dominated set is defined in the context of a sample of the search space (it need not be the entire search space). In a sample of search points, the solutions that are not dominated (according to the previous definition) by any other solution in the sample space constitute the non-dominated set. A Pareto-optimal set is a non-dominated set when the sample is the entire search space. The location of the Pareto-optimal set in the search space is sometimes loosely called the Pareto-optimal region.

From the above discussions, we observe that there are primarily two goals that a multi-criterion optimization algorithm must try to achieve:

1. Guide the search towards the global Pareto-optimal region, and
2. Maintain population diversity in the Pareto-optimal front.

The first task is a natural goal of any optimization algorithm. The second task is unique to multi-criterion optimization.

Multi-criterion optimization is not a new field of

research and application in the context of classical optimization. The weighted-sum approach, the ε-perturbation method, goal programming, the Tchebycheff method, the min-max method and others are all popular methods often used in practice [14]. The core of these algorithms is a classical optimizer, which can at best find a single optimal solution in one simulation. In solving multi-criterion optimization problems, they have to be used many times, hopefully finding a different Pareto-optimal solution each time. Moreover, these classical methods have difficulties with problems having non-convex search spaces.

Evolutionary algorithms (EAs) are a natural choice for solving multi-criterion optimization problems because of their population-based nature. A number of Pareto-optimal solutions can, in principle, be captured in an EA population, thereby allowing a user to find multiple Pareto-optimal solutions in one simulation run. Although the possibility of using EAs to solve multi-objective optimization problems was proposed in the seventies, David Schaffer first implemented the vector evaluated genetic algorithm (VEGA) in the year 1984. There was lukewarm interest for a decade, but the major popularity


of the field began in 1993, following a suggestion by David Goldberg based on the use of the non-domination concept and a diversity-preserving mechanism. Various multi-criterion EAs have been proposed so far by different authors.

4. Different Approaches of MOEA

The following popular approaches have been used by different researchers for multi-objective optimization, each one having its merits and demerits:

- Weighted-sum approach (using randomly generated weights and elitism)
- Schaffer's Vector Evaluated Genetic Algorithm (VEGA)
- Fonseca and Fleming's Multi-Objective Genetic Algorithm (MOGA)
- Horn, Nafploitis, and Goldberg's Niched Pareto Genetic Algorithm (NPGA)
- Zitzler and Thiele's Strength Pareto Evolutionary Algorithm (SPEA)
- Srinivas and Deb's Non-dominated Sorting Genetic Algorithm (NSGA)
- Vector-optimized evolution strategy (VOES)
- Weight-based genetic algorithm (WBGA)
- Sharing function approach
- Vector evaluated approach
- Predator-prey evolution strategy (PPES)
- Rudolph's elitist multi-objective evolutionary algorithm (REMOEA)
- Elitist non-dominated sorting genetic algorithm (ENSGA)
- Distance-based Pareto genetic algorithm (DBPGA)
- Thermodynamical genetic algorithm (TGA)
- Pareto-archived evolution strategy (PAES)

4.1. Weighted-Sum Approach

This approach was proposed by Ishibuchi and Murata in 1996 [15]. They used randomly generated weights and elitism. Let us see how this method works with the help of the following algorithm. Suppose there are m objective functions $f_i$, $i = 1, 2, 3, \ldots, m$. Our task is to optimize (i.e., maximize) all these objectives.

Algorithm
1. Generate the initial population randomly.
2. Compute the value of the m objectives for each individual in the population. Then determine which are the non-dominated solutions and store them in a separate population that we will call "external", to distinguish it from the current population, which we call "current".
3. If n represents the number of non-dominated solutions in external and p is the size of current, then we select (p - n) pairs of parents using the following procedure. The fitness function used for each individual is
   $f(x) = w_1 f_1(x) + w_2 f_2(x) + \ldots + w_m f_m(x)$.
   Generate m random numbers $r_1, r_2, \ldots, r_m$ in the interval [0, 1]. The weight values are determined as
   $w_i = r_i / (r_1 + r_2 + \ldots + r_m)$, $i = 1, 2, 3, \ldots, m$.
   This ensures that $w_i \ge 0$ for all $i = 1, 2, 3, \ldots, m$ and that $w_1 + w_2 + \ldots + w_m = 1$. Select a parent with probability
   $p(x) = \dfrac{f(x) - f_{\min}(\text{current})}{\sum_{x' \in \text{current}} \left( f(x') - f_{\min}(\text{current}) \right)}$,
   where $f_{\min}(\text{current})$ is the minimum fitness in the current population.
4. Apply crossover to the selected (p - n) pairs of parents. Apply mutation to the new solutions generated by crossover.
5. Randomly select n solutions from external. Then add the selected solutions to the (p - n) solutions generated in the previous step to construct a population of size p.
6. Stop if a pre-specified stopping criterion is satisfied (e.g., the pre-specified maximum number of generations has been reached). Otherwise, return to step 2.
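A Python sketch of the random-weight fitness and the selection probability used in step 3; the helper `evaluate`, assumed to return the m objective values of a solution (all to be maximized), is ours.

    import random

    def random_weights(m):
        # w_i = r_i / (r_1 + ... + r_m), so w_i >= 0 and sum(w) == 1
        r = [random.random() for _ in range(m)]
        total = sum(r)
        return [ri / total for ri in r]

    def select_parent(current, evaluate, weights):
        # f(x) = w_1 f_1(x) + ... + w_m f_m(x)
        f = [sum(w * obj for w, obj in zip(weights, evaluate(x))) for x in current]
        f_min = min(f)
        # p(x) proportional to f(x) - f_min(current)
        probs = [fx - f_min for fx in f]
        if sum(probs) == 0:            # all fitnesses equal: select uniformly
            return random.choice(current)
        return random.choices(current, weights=probs, k=1)[0]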

Merits & Demerits
The advantages of this method are its computational efficiency and its suitability for generating a strongly non-dominated solution that can be used as an initial solution for other techniques.

Its main demerit is the difficulty of determining weights that appropriately scale the objectives when we do not have enough information about the problem, particularly if we consider that any optimal point obtained will be a function of such weights. Still more important is the fact that this approach does not generate proper Pareto-optimal solutions in the presence of non-convex search spaces, regardless of the weights used. However, the incorporation of elitism is an important aspect of this method.


4.2. Schaffer's Vector Evaluated Genetic Algorithm (VEGA)

David Schaffer (1984) [16-18] extended Grefenstette's GENESIS program [Grefenstette 1984] to include multiple objective functions. Schaffer's approach was to use an extension of the simple genetic algorithm (SGA) that he called the vector evaluated genetic algorithm (VEGA), which differed from the SGA in selection only. This operator was modified so that at each generation a number of sub-populations was generated by performing proportional selection according to each objective in turn. Thus, for a problem with k objectives, k sub-populations of size N/k would be generated (assuming a total population size of N). These sub-populations would be shuffled together to obtain a new population of size N, on which the GA would apply the crossover and mutation operators in the usual way. Figure 3 shows the structural representation of this process. Schaffer realized that the solutions generated by his system were non-dominated in a local sense, because their non-dominance was limited to the current

population, and while a locally dominated individual is also globally dominated, the converse is not true [Schaffer 1985]. An individual that is non-dominated in one generation may become dominated by an individual that emerges in a later generation. He also noted a problem that in genetics is known as speciation (i.e., we could have the evolution of species within the population which excel on different aspects of performance). This problem arises because this technique selects individuals who excel in one dimension of performance, without looking at the other dimensions. The potential danger in doing this is that we could have individuals with what Schaffer calls "middling" performance in all dimensions, which could be very useful for compromise solutions, but that will not survive under this selection scheme, since they are not extreme in any dimension of performance (i.e., they do not produce the best value for any objective function, but only moderately good values for all of them). Speciation is undesirable because it works against our goal of finding a compromise solution. Schaffer tried to minimize this speciation by developing two heuristics: the non-dominated selection heuristic (a wealth redistribution scheme), and the mate selection heuristic (a cross-breeding scheme).

In the non-dominated selection heuristic, dominated individuals are penalized by subtracting a small fixed penalty from their expected number of copies during selection. The total penalty for dominated individuals is then divided among the non-dominated individuals and added to their expected number of copies during selection.

But this algorithm fails when the population has very few non-dominated individuals, resulting in a large fitness value for those few non-dominated points and eventually leading to a high selection pressure. The mate selection heuristic was implemented by the following

procedure: initially, select an individual randomly from the population. Then the corresponding mate is selected as the individual which has the maximum Euclidean distance from it.

But this failed too to prevent the participation of poorer individuals in mate selection, because of the random selection of the first mate and the possibility of a large Euclidean distance between a champion and a less fit solution. Schaffer concluded that random mate selection is far superior to this heuristic.
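The VEGA selection step can be sketched in Python as follows; approximating proportional selection with `random.choices` assumes all objective values are non-negative and to be maximized.

    import random

    def vega_selection(pop, objectives):
        # objectives: list of k functions, each to be maximized
        k, N = len(objectives), len(pop)
        new_pop = []
        for f in objectives:
            # proportional selection of an N/k sub-population based on objective f
            sub = random.choices(pop, weights=[f(x) for x in pop], k=N // k)
            new_pop.extend(sub)
        random.shuffle(new_pop)   # shuffle the sub-populations together
        return new_pop            # crossover and mutation follow as usual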

(Figure 3. Schematic representation of VEGA: the population is initialized and evaluated for each objective function f1, f2, ..., fk; N/k sub-populations are selected based on each objective in turn; the sub-populations are combined, the entire population is shuffled, and crossover and mutation are applied; the loop repeats until performance is satisfactory.)


Merits & Demerits
The main advantage of this algorithm is its simplicity; the approach is easy to implement. Richardson et al. (1989) [19] noted that the shuffling and merging of all the sub-populations corresponds to averaging the fitness components associated with each of the objectives. Since Schaffer used proportional fitness assignment, these fitness components were in turn proportional to the objectives themselves. Therefore, the resulting expected fitness corresponds to a linear combination of the objectives, where the weights depend on the distribution of the population at each generation, as shown by Richardson et al.

The main consequence of this is that when we have a concave trade-off surface, certain points in concave regions will not be found through this optimization procedure, in which we are using just a linear combination of the objectives, and it has been proved that this is true regardless of the set of weights used. Therefore, the main weakness of this approach is its inability to produce Pareto-optimal solutions in the presence of non-convex search spaces.

4.3. Fonseca and Fleming's MOGA

Fonseca and Fleming (1993) [20-22] implemented Goldberg's suggestion in a different way. Let us first discuss Goldberg's suggestion about Pareto ranking.

Goldberg proposed the Pareto ranking scheme in 1989: where a solution x at generation t has a corresponding objective vector $x_u$, and n is the population size, the solution's rank is defined by the following algorithm.

Algorithm
curr_rank = 1
m = n
while n != 0 do
    for i = 1, ..., m do
        if x_u(i) is non-dominated then
            rank(x_i, t) = curr_rank
    enddo
    for i = 1, ..., m do
        if rank(x_i, t) = curr_rank then
            remove x_i from the population
            n = n - 1
    enddo
    curr_rank = curr_rank + 1
    m = n
endwhile

This means that, in Pareto ranking, the whole population is checked and all non-dominated individuals are assigned rank 1. Then these rank-1 individuals are removed from the population. Again all non-dominated individuals are detected in the rest of the population and assigned rank 2. In this way the procedure goes on till all solutions have been assigned a rank. In MOGA, by contrast, the whole population is checked and all non-dominated individuals are assigned rank 1; other individuals are ranked by checking their non-dominance with respect to the rest of the population in the following way.

For example, consider an individual $x_i$ at generation t which is dominated by $p_i^{(t)}$ individuals in the current generation. Its rank is given by

Rank$(x_i, t) = 1 + p_i^{(t)}$.
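MOGA's ranking is thus a one-pass count of dominators, in contrast to the iterative peeling of Goldberg's scheme. A sketch, reusing the `dominates` predicate of Section 3 on minimized objective vectors:

    def moga_ranks(objs):
        # rank(x_i) = 1 + p_i, where p_i is the number of individuals
        # in the current generation that dominate x_i
        return [1 + sum(dominates(objs[j], objs[i])
                        for j in range(len(objs)) if j != i)
                for i in range(len(objs))]

    # individuals on the non-dominated front receive rank 1
    print(moga_ranks([(1, 4), (2, 4), (2, 3)]))  # -> [1, 3, 1]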

After completing the ranking procedure, it is time to assign the fitness to each individual. Fonseca and Fleming proposed two methods of assigning fitness:

- the rank-based fitness assignment method, and
- niche-formation methods.

Rank-based fitness assignment is performed in the following way [1993]:

1. Sort the population according to rank.
2. Assign fitness to individuals by interpolating from the best (rank 1) to the worst (rank n ≤ N) in the usual way, according to some function, usually linear but not necessarily.
3. Average the fitness of individuals with the same rank, so that all of them will be sampled at the same rate. This procedure keeps the global population fitness constant while maintaining appropriate selection pressure, as defined by the function used.

As Goldberg and Deb have pointed out, this type of blocked fitness assignment is likely to produce a large selection pressure that might cause premature convergence. To avoid this, Fonseca and Fleming use the second method (i.e., the niche-formation method) to distribute the population over the Pareto-optimal region; but instead of performing sharing on the parameter values, they have used sharing on the objective function values.

Merits & Demerits
The main advantage of MOGA is that it is efficient and relatively easy to implement. The performance of this method is highly dependent on the sharing factor $\sigma_{\text{share}}$. However, Fonseca and Fleming have developed a good methodology for computing such a value for their approach.


4.5. Zitzler and Thiele's Strength Pareto Evolutionary Algorithm (SPEA)

...dominated solutions having more dominated solutions in the combined population. After generating a population for the next generation, it is required to update the external population. The updating procedure is as follows: for each individual in the current population, check whether it is dominated by any member of the current population or of the external population. If it is not dominated by either, then add this individual to the external population (EXTERNAL). Then all solutions dominated by the added one are eliminated from the external population. In this way the external population is updated in every generation. The following algorithm shows the idea:

Algorithm
1. Initialize the population of size n.
2. Determine all the non-dominated solutions from the current population and store them in a separate population called EXTERNAL.
3. Combine the external population with the current population and assign fitness to each individual according to how many solutions it dominates.
4. After assigning fitness to all the individuals, select n individuals for the next generation.
5. Apply crossover and mutation to get the new population of size n.
6. Update the EXTERNAL population.
7. Test whether the performance criterion is satisfied. If not, go to step 3; otherwise stop.

Zitzler and Thiele [25] solved the 0/1-knapsack problem using this approach. They have reported better results than other methods.
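The external-population update can be sketched as follows, again reusing the `dominates` predicate on objective vectors; keeping the external population as a plain list is an illustrative choice.

    def update_external(external, candidate):
        # reject the candidate if any archived solution dominates it
        if any(dominates(a, candidate) for a in external):
            return external
        # otherwise add it and drop every archived solution it dominates
        return [a for a in external if not dominates(candidate, a)] + [candidate]

    archive = []
    for sol in [(2, 4), (1, 4), (2, 3), (3, 1)]:
        archive = update_external(archive, sol)
    print(archive)  # -> [(1, 4), (2, 3), (3, 1)]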

Merits & Demerits
The main merit of this method is that it shows the utility of introducing elitism in evolutionary multi-criterion optimization. Since this method does not use sharing parameters such as $\sigma_{\text{share}}$ and $t_{\text{dom}}$ as in MOGA and NPGA, this gives it a clear advantage.

There exist multi-objective problems where the resulting Pareto-optimal front is non-convex. In such cases a GA's success in maintaining diverse Pareto-optimal solutions largely depends on the fitness assignment procedure. This method may not converge to the true Pareto-optimal solutions there, because its fitness assignment procedure is very sensitive to concave surfaces.

4.6. Srinivas and Deb's Non-dominated Sorting Genetic Algorithm (NSGA)

The non-dominated sorting GA varies from simple GAs only in the way the selection operator is used. The crossover and mutation operators remain as usual [28-30]. Let us discuss the selection operator they proposed.

For selection, two steps are needed. First, the population is ranked on the basis of each individual's non-domination level, and then sharing is used to assign fitness to each individual.

Ranking individuals based on non-domination level

Consider a population of size n, each member having m (>1) objective function values. The following algorithm can be used to find the non-dominated set of solutions:

Algorithm
for i = 1 to n do
    for j = 1 to n do
        if (j != i) then
            compare solutions x(i) and x(j) for domination
            using the two conditions for all m objectives
            if x(i) is dominated by x(j) for any j,
                mark x(i) as "dominated"
        endif
    endfor
endfor

Solutions which are not marked "dominated" are non-dominated solutions. All these non-dominated solutions are assumed to constitute the first non-dominated front in the population. In order to find the solutions belonging to the second level of non-domination, we temporarily disregard the solutions of the first level of non-domination and follow the above procedure. The resulting non-dominated solutions are the solutions of the second level of non-domination. This procedure is continued till all solutions are classified into a level of non-domination. It is important to realize that the number of different non-domination levels can vary between 1 and n.
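Repeating this marking procedure while peeling off each front yields the non-domination levels. A direct Python sketch, assuming minimized objective vectors and the `dominates` predicate of Section 3:

    def non_dominated_fronts(objs):
        remaining = list(range(len(objs)))
        fronts = []
        while remaining:
            # keep solutions not dominated by any other remaining solution
            front = [i for i in remaining
                     if not any(dominates(objs[j], objs[i])
                                for j in remaining if j != i)]
            fronts.append(front)
            remaining = [i for i in remaining if i not in front]
        return fronts  # fronts[0] is the first non-dominated front

    print(non_dominated_fronts([(1, 4), (2, 4), (2, 3), (3, 3)]))
    # -> [[0, 2], [1, 3]]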

Fitness assignment
In this approach fitness is assigned to each individual according to its non-domination level. An individual in a higher level gets lower fitness. This is done in order to maintain a selection pressure for choosing solutions from the lower levels of non-domination. Since solutions in lower levels of non-domination are better, a selection mechanism that selects individuals with higher fitness provides a search direction towards the Pareto-optimal region.

Setting a search direction towards the Pareto-optimal region is one of the two tasks of multi-objective optimization. Providing diversity among current non-


dominated solutions is also important in order to get a good distribution of solutions on the Pareto-optimal front. The fitness assignment is performed in two stages:

1. Assign the same dummy fitness to all the solutions of a particular non-domination level.
2. Apply the sharing strategy.

Now we discuss the details of these two stages. First, all solutions in the first non-dominated front are assigned a fitness equal to the population size. This becomes the maximum fitness that any solution can have in any population. Based on the sharing strategy, if a solution has many neighboring solutions in the same front, its dummy fitness is reduced by a factor and a shared fitness is computed. The factor depends on the number and proximity of neighboring solutions. Once all solutions in the first front are assigned their fitness values, the smallest shared fitness value is determined.

Thereafter, the individuals in the second non-domination level are all assigned a dummy fitness equal to a number smaller than the smallest shared fitness of the previous front. This makes sure that no solution in the second front has a shared fitness better than that of any solution in the first front, which maintains a pressure for the solutions to move towards the Pareto-optimal region. The sharing method is again used among the individuals of the second front, and the shared fitness of each individual is found. This procedure is continued till all individuals are assigned a shared fitness.

After the fitness assignment, a stochastic remainder roulette-wheel selection is used for selecting N individuals; thereafter crossover and mutation are applied. The shared fitness is calculated as follows. Given a set of $n_l$ solutions in the l-th non-dominated front, each having a dummy fitness value $f_l$, the sharing procedure is performed in the following way for each solution $i = 1, 2, 3, \ldots, n_l$:

1. Compute a normalized Euclidean distance measure with another solution j in the l-th non-dominated front, as follows:

$d_{ij} = \sqrt{ \sum_{p=1}^{P} \left( \dfrac{x_p^{(i)} - x_p^{(j)}}{x_p^{(u)} - x_p^{(l)}} \right)^2 }$

where P is the number of variables in the problem, and $x_p^{(u)}$ and $x_p^{(l)}$ are the upper and lower bounds of variable $x_p$.

2. This distance $d_{ij}$ is compared with a pre-specified parameter $\sigma_{\text{share}}$ and the following sharing function value is computed:

$Sh(d_{ij}) = \begin{cases} 1 - \left( d_{ij} / \sigma_{\text{share}} \right)^2, & \text{if } d_{ij} \le \sigma_{\text{share}} \\ 0, & \text{otherwise.} \end{cases}$

3. Increment j. If $j \le n_l$, go to step 1; otherwise compute the niche count of solution i as $m_i = \sum_{j=1}^{n_l} Sh(d_{ij})$ and the shared fitness as $f_l / m_i$.
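The sharing procedure for one front can be sketched in Python as follows; the decision-variable bounds and sigma_share are assumed inputs.

    import math

    def shared_fitness(front, dummy_f, lower, upper, sigma_share):
        # front: list of decision-variable vectors in one non-dominated level
        def dist(a, b):   # normalized Euclidean distance of step 1
            return math.sqrt(sum(((ap - bp) / (up - lp)) ** 2
                                 for ap, bp, lp, up in zip(a, b, lower, upper)))

        def sh(d):        # sharing function of step 2
            return 1 - (d / sigma_share) ** 2 if d <= sigma_share else 0.0

        # niche count includes the self-distance d_ii = 0, so it is always >= 1
        return [dummy_f / sum(sh(dist(a, b)) for b in front) for a in front]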


4.7. Vector-Optimized Evolution Strategy (VOES)

... are used as an individual in a population. Thus an individual i results in two objective vector evaluations: (i) $f_d$, calculated by using the decision variables of the dominant genotype, and (ii) $f_r$, calculated by using the decision variables of the recessive genotype. Let us see the procedure of fitness evaluation and the selection scheme.

Let n be the number of objective functions. The selection process is completed in n steps. For each step a user-defined probability is used to choose an objective; this probability vector can either be kept fixed or varied with generations. If the m-th objective function is chosen, the fitness of an individual i is computed as a weighted average of its dominant objective value $(f_d^{(i)})_m$ and recessive objective value $(f_r^{(i)})_m$ in a 2:1 ratio:

$F^{(i)} = \frac{2}{3} (f_d^{(i)})_m + \frac{1}{3} (f_r^{(i)})_m$

For each selection step, the population is sorted according to the chosen objective function and the best $((n-1)/n)$-th portion of the population is chosen as parents. This procedure is repeated n times, every time using the population that survived the previous sorting. Thus the relationship between μ and λ is:

$\mu = \lambda \left( \frac{n-1}{n} \right)^n$

All new solutions are copied into an external set, which stores all non-dominated solutions found since the beginning of the simulation run. After new solutions are inserted into the external set, a non-domination check is made on the whole external set and only non-dominated solutions are retained. If the size of the external set exceeds a certain limit, a niching mechanism is used to eliminate closely packed solutions. This allows maintenance of diversity in the solutions of the external set.

Merits & Demerits
We observe that this algorithm performs a domination check to retain non-dominated solutions and uses a niching mechanism to eliminate crowded solutions. These features are essential in a good MOEA. Since Kursawe did not simulate it extensively for multi-objective optimization, it remains a challenging task to investigate how this algorithm fares on more complex problems. Because of its complications it is not much used by current researchers.

4.8 Weight-based Genetic Algorithm (WBGA)

Hajela and Lin (1993) [32][33] proposed a weight-based genetic algorithm for multi-objective optimization. As the name suggests, each objective function $f_i$ is multiplied by a weight $w_i$. A GA string represents all decision variables $x_i$ as well as their associated weights $w_i$. The weighted objective function values are then added together to calculate the fitness of a solution. Each individual in a GA population is assigned a different weight vector. The key issue in WBGAs is to maintain diversity in the weight vectors among the population members. In WBGAs the diversity in the weight vectors is maintained in two ways:

- A niching method is used only on the sub-string representing the weight vector.
- Carefully chosen sub-populations are evaluated for different predefined weight vectors, an approach similar to that of VEGA.

Sharing function approach

In addition to the n decision variables, the user codes a weight vector of size M in the string; the weights must be chosen in a way that makes the sum of all weights in every string equal to one:

$w_i \leftarrow w_i / \textstyle\sum_j w_j$

Otherwise, if only a few Pareto-optimal solutions are desired, a fixed set of weight vectors may be chosen beforehand and an integer variable $x_w$ can be used to represent one of the weight vectors, $w^{(k)}$.

A solution $x^{(i)}$ is then assigned a fitness equal to the weighted sum of the normalized objective function values, computed as follows:

$F(x^{(i)}) = \sum_j w_j^{(x_w)} \, \dfrac{f_j(x^{(i)}) - f_j^{\min}}{f_j^{\max} - f_j^{\min}}$

In order to preserve diversity in the weight vectors, a sharing strategy is proposed. Define the distance

$d_{ij} = \left\| x_w^{(i)} - x_w^{(j)} \right\|$

The sharing function $Sh(d_{ij})$ is computed with a niching parameter $\sigma_{\text{share}}$; the shared fitness $F'$ is then calculated.

Algorithm
1. For each objective function j, set the upper and lower bounds $f_j^{\max}$ and $f_j^{\min}$.
2. For each solution $i = 1, 2, 3, \ldots, n$, calculate the distance $d_{ik} = \| x_w^{(i)} - x_w^{(k)} \|$ to all solutions $k = 1, 2, 3, \ldots, n$. Then calculate the sharing function values as follows:

$Sh(d_{ik}) = \begin{cases} 1 - d_{ik} / \sigma_{\text{share}}, & \text{if } d_{ik} \le \sigma_{\text{share}} \\ 0, & \text{otherwise.} \end{cases}$

Thereafter, calculate the niche count of solution i as $nc_i = \sum_k Sh(d_{ik})$.


3. For each solution $i = 1, 2, \ldots, n$, assign fitness $F_i$ according to $F(x^{(i)})$ and calculate the shared fitness as $F_i' = F_i / nc_i$.

When all population members are assigned a shared fitness $F'$, proportionate selection is applied to create the mating pool. Thereafter the crossover and mutation operators are applied on the entire string, including the sub-string representing $x_w$.
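A sketch of the WBGA fitness of one solution; the weight-vector table and the normalization bounds are assumed inputs, and all objectives are assumed to be maximized.

    def wbga_fitness(x, x_w, weight_vectors, objectives, f_min, f_max):
        # x_w is the integer gene selecting one predefined weight vector w^(k)
        w = weight_vectors[x_w]
        # weighted sum of normalized objective values
        return sum(wj * (fj(x) - lo) / (hi - lo)
                   for wj, fj, lo, hi in zip(w, objectives, f_min, f_max))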

Merits & Demerits
Since a WBGA uses a single-objective GA, not much change is needed to convert a simple GA implementation into a WBGA. Moreover, the complexity of the algorithm (with a single extra variable $x_w$) is smaller than that of other multi-objective evolutionary algorithms.

The WBGA uses a proportionate selection procedure on the shared fitness values. Thus, in principle, the WBGA works in a straightforward manner in finding solutions for maximization problems. However, if objective functions are to be minimized, they must be converted into a maximization problem. Moreover, for mixed types of objective functions (some to be minimized and some to be maximized), complications may arise in trying to construct a fitness function.

Weighted-sum approaches share a common difficulty in finding Pareto-optimal solutions in problems having a non-convex Pareto-optimal region. Thus, a WBGA may also face difficulty in solving such problems.

Vector evaluated approach
This approach is similar to VEGA. First a set of K different weight vectors $w^{(k)}$ (where k = 1, 2, ..., K) is chosen. Thereafter, each weight vector $w^{(k)}$ is used to compute the weighted normalized fitness for all n population members. Among them, the best n/K members are grouped together into a sub-population $p_k$. Perform selection, crossover and mutation on $p_k$ to create a new population of size n/K; if k < K, increment k by one and repeat. Otherwise, combine all sub-populations to create the new population $p = \cup_k p_k$. If |p| < n, add randomly created solutions to make the population size equal to n.

Merits & Demerits
As in any weight-based method, knowledge of the weight vectors is essential in this approach. This method has better complexity than the sharing function approach, because no pair-wise distance metric is required here. There is also no need to keep any additional variable for the weight vectors. However, since GA operators are applied to each sub-population independently, an adequate sub-population size is required for finding the optimal solution corresponding to each weight vector. This requires a large population size.

By applying both methods to a number of problems, investigators concluded that the vector-evaluated approach is better than the sharing function approach.

4.9. Predator-Prey Evolution Strategy (PPES)

Laumanns et al. (1998) [34] suggested a multi-objective evolution strategy (ES) which is different from any of the methods discussed so far. This algorithm does not use a domination check to assign fitness to a solution. Instead the concept of a predator-prey model is used. Preys represent a set of solutions ($x^{(i)}$, i = 1, 2, 3, ..., n) which are placed on the vertices of an undirected connected graph. First each predator is randomly placed on a vertex of the graph. Each predator is associated with a particular objective function; thus at least M predators are initialized on the graph. Secondly, staying at a vertex, a predator looks around for preys in its neighboring vertices. The predator catches the prey having the worst value of its associated objective function. When a prey $x^{(i)}$ is caught, it is erased from the vertex and a new solution is obtained by mutating (and recombining) a random prey in the neighborhood of $x^{(i)}$. The new solution is then kept at the vertex. After this event is over, the predator takes a random walk to one of its neighboring vertices. The above procedure continues in parallel (or sequentially) with all predators until a pre-specified number of iterations has elapsed.

Merits & Demerits
The main advantage of this method is its simplicity. Random walks and the replacement of the worst solutions of individual objectives with mutated solutions are all simple operations which are easy to implement. Another advantage of this algorithm is that it does not emphasize non-dominated solutions directly. It has been shown experimentally that the algorithm is sensitive to the choice of the mutation strength parameter and to the ratio of predators to preys. No explicit operator is used to maintain a spread of solutions in the obtained non-dominated set.

4.10. Thermodynamical Genetic Algorithm (TDGA)

Kita et al. (1996) [35] suggested a fitness function which allows convergence near the Pareto-optimal front and diversity preservation among the obtained solutions. The fitness function is motivated by the thermodynamic


equilibrium condition, which corresponds to the minimum of the Gibbs free energy F, defined as

$F = \langle E \rangle - TH$,

where $\langle E \rangle$ is the mean energy, H is the entropy, and T is the temperature of the system. The relevance of the above term in multi-objective optimization is as follows. The mean energy is considered as the fitness measure. The second term TH controls the diversity of the population members. In this way, minimization of F means minimization of $\langle E \rangle$ and maximization of TH. In the parlance of multi-objective optimization, this means minimization of the objective function (convergence towards the Pareto-optimal set) and maximization of diversity in the obtained solutions. Since these two goals are precisely the focus of an MOEA, the analogy between the principle of finding the minimum free-energy state in a thermodynamic system and the working principle of an MOEA is applicable.

Algorithm
1. Select a population size N, a maximum number of generations $t_{\max}$ and an annealing schedule T(t) (a monotonically non-increasing function) for the temperature variation.
2. Set the generation counter t = 0 and initialize a random population P(t) of size N.
3. Using the crossover and mutation operators, create the offspring population Q(t) of size N from P(t). Combine the parent and offspring populations together and call this $R(t) = P(t) \cup Q(t)$.
4. Create an empty set P(t+1) and set the individual counter i of P(t+1) to one.
5. For every solution $j \in R(t)$, construct $P'(t, j) = P(t+1) \cup \{j\}$ and calculate the free-energy fitness of the j-th member as

$F(j) = E(j) - T(t) \sum_k H_k(j)$,

where $E(j) = \langle E(P'(t, j)) \rangle$ and the entropy is defined as $H_k(j) = -\sum_l P_l^k \log P_l^k$. The term $P_l^k$ is the proportion of bit value l on the locus k of the population $P'(t, j)$. After all 2N population members are assigned a fitness, find the solution $j \in R(t)$ which minimizes F given above and include it in P(t+1).
6. If i ≤ N, then set i = i + 1 and go to step 5.
7. If t ≤ $t_{\max}$, then set t = t + 1 and go to step 3.

The term $E_k$ can be used as the non-dominated rank of the k-th solution in the population $P'(t, j)$.
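The locus-wise entropy of step 5 can be sketched in Python for binary strings as follows; the natural-logarithm base is an illustrative choice.

    import math

    def locus_entropy(pop, k):
        # H_k = -sum_l P_l^k log P_l^k, where P_l^k is the proportion of
        # bit value l (0 or 1) at locus k in the population
        n = len(pop)
        p1 = sum(ind[k] for ind in pop) / n
        h = 0.0
        for p in (p1, 1 - p1):
            if p > 0:
                h -= p * math.log(p)
        return h

    def free_energy(mean_energy, temperature, pop):
        # F = <E> - T * sum_k H_k over all loci
        return mean_energy - temperature * sum(
            locus_entropy(pop, k) for k in range(len(pop[0])))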

Merits & Demerits
Since both the parent and offspring populations are combined and the best set of N solutions is selected from the diversity-preservation point of view, the algorithm is an elitist algorithm.

The TDGA requires a predefined annealing schedule T(t). It is interesting to note that this temperature parameter acts like a normalization parameter for the entropy term, needed to balance mean-energy minimization against entropy maximization. Since this requires a crucial balancing act for finding a good convergence and spread of solutions, an arbitrary T(t) distribution may not work well in most problems.

4.11. Pareto-Archived Evolution Strategy (PAES)

Knowles and Corne (2000) suggested an MOEA which uses an evolution strategy. In its simplest form, the PAES uses a (1+1)-ES. The main motivation for using an ES came from their experience in solving a real-world telecommunications network design problem.

At first, a random solution $x_0$ (call it the parent) is chosen. It is then mutated using a normally distributed probability function with zero mean and a fixed mutation strength. Let us say that the mutated offspring is $C_0$. Now both of these solutions are compared and the winner becomes the parent of the next generation. The main crux of the PAES lies in the way a winner is chosen in the presence of multiple objectives.

At any generation t, in addition to the parent $P_t$ and the offspring $C_t$, the PAES maintains an archive of the best solutions found so far. Initially, this archive is empty; as the generations proceed, good solutions are added to the archive and it is updated. However, a maximum archive size is always maintained. First the parent $P_t$ and the offspring $C_t$ are compared for domination. This generates three scenarios:

- If $P_t$ dominates $C_t$, the offspring $C_t$ is not accepted and a new mutated solution is created for further processing.
- If $C_t$ dominates $P_t$, the offspring is better than the parent. Then the solution $C_t$ is accepted as the parent of the next generation and a copy of it is kept in the archive.
- If $P_t$ and $C_t$ are non-dominated with respect to each other, then the offspring is compared with the current archive:


- The offspring is dominated by a member of the archive. This means it is not a worthy candidate, so the parent $P_t$ is mutated again to find a new offspring for further processing.
- The offspring dominates a member of the archive. Then the dominated members are deleted and the offspring is accepted. The offspring becomes the parent of the next generation.
- The offspring is not dominated by any member of the archive, and it also does not dominate any member of the archive. In this case, it belongs to the same non-dominated front as the archive solutions, and the offspring is added to the archive only if there is a slot available in the latter.

The acceptance of the offspring into the archive does not by itself qualify it to become the parent of the next generation. This is because, like the offspring $C_t$, the current parent $P_t$ is also a member of the archive.

To decide who qualifies as the parent of the next generation, the density of solutions in their neighborhoods is checked. The one residing in the least crowded area of the search space qualifies as the parent. If the archive is full, the above density-based comparison is performed between the parent and the offspring to decide who remains in the archive. The density calculation is different from the other methods we have discussed so far.

Each objective is divided into $2^d$ equal divisions, where d is a user-defined depth parameter. In this way, the entire search space is divided into $(2^d)^M$ unique, equal-sized M-dimensional hypercubes. The archived solutions are placed in these hypercubes according to their locations in the objective space. Thereafter the number of solutions in each hypercube is counted. If the offspring resides in a less crowded hypercube than the parent, the offspring becomes the parent of the next generation. Otherwise the parent solution continues to be the parent of the next generation.
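Locating a solution's hypercube reduces to quantizing each normalized objective into 2^d divisions. A sketch, with the objective-space bounds assumed known in advance:

    from collections import Counter

    def hypercube_index(obj, lower, upper, depth):
        # each objective axis is split into 2**depth equal divisions
        divisions = 2 ** depth
        index = []
        for v, lo, hi in zip(obj, lower, upper):
            cell = int((v - lo) / (hi - lo) * divisions)
            index.append(min(cell, divisions - 1))  # clamp the upper boundary
        return tuple(index)  # one of (2**depth)**M hypercubes

    def grid_counts(archive, lower, upper, depth):
        # archive occupancy per hypercube, used to compare n_p and n_c
        return Counter(hypercube_index(a, lower, upper, depth) for a in archive)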

Algorithm (single iteration)
Given: parent $P_t$, offspring $C_t$ and an archive $A_t$. The archive is initialized with the initial random parent solution.

1. If $C_t$ is dominated by any member of $A_t$, set $P_{t+1} = P_t$ ($A_t$ is not updated); the iteration is complete. Otherwise, if $C_t$ dominates a set of members of $A_t$,

$D(C_t) = \{ i \mid i \in A_t, \; C_t \text{ dominates } i \}$,

perform the following steps:

$A_t = A_t \setminus D(C_t)$
$A_t = A_t \cup \{ C_t \}$
$P_{t+1} = C_t$

The iteration is complete. Otherwise, go to step 2.

2. Count the number of archived solutions in each hypercube. The parent $P_t$ belongs to a hypercube containing $n_p$ solutions, while the offspring belongs to a hypercube containing $n_c$ solutions. The highest-count hypercube contains the maximum number of archived solutions. If $|A_t| < N$, include the offspring in the archive, i.e., $A_t = A_t \cup \{ C_t \}$, set $P_{t+1} = \text{winner}(C_t, P_t)$ and return. Otherwise (if $|A_t| = N$), check whether $C_t$ belongs to the highest-count hypercube. If yes, reject $C_t$, set $P_{t+1} = P_t$, and return. Otherwise, replace a random solution r from the highest-count hypercube with $C_t$:

$A_t = A_t \setminus \{ r \}$
$A_t = A_t \cup \{ C_t \}$
$P_{t+1} = \text{winner}(C_t, P_t)$

The iteration is complete. The function winner($C_t$, $P_t$) chooses $C_t$ if $n_c < n_p$; otherwise it chooses $P_t$.

Merits & Demerits
The PAES has direct control over the diversity that can be achieved in the Pareto-optimal solutions: step 2 of the algorithm favors the survival of solutions in the less populated hypercubes, thereby ensuring diversity. Choosing an appropriate value of d controls the size of these hypercubes. In addition to choosing an appropriate archive size (N), the depth parameter d, which directly controls the hypercube size, is an important parameter.

4.12. Rudolph's Elitist Multi-objective Evolutionary Algorithm (REMOEA)

Rudolph (2001) [36][37] suggested, but did not simulate, a multi-objective evolutionary algorithm that uses elitism. Nevertheless, the algorithm provides a useful plan for introducing elitism into a multi-objective EA. In its general format, parents are used to create offspring using genetic operators. There are then two populations: the parent population $P_t$ and the offspring population $Q_t$. The algorithm works in three steps, as follows:


Algorithm
1. Use the parent population $P_t$ to create an offspring population $Q_t$. Find the non-dominated solutions of $Q_t$ and put them in $Q^*$. Delete $Q^*$ from $Q_t$, i.e., $Q_t = Q_t \setminus Q^*$. Set $P' = Q' = \emptyset$.
2. For each $q \in Q^*$, find all solutions $p \in P_t$ that are dominated by q and put these solutions of $P_t$ in a set D(q). If $D(q) \ne \emptyset$, carry out the following:

$P_t = P_t \setminus D(q)$
$P' = P' \cup \{ q \}$
$Q^* = Q^* \setminus \{ q \}$

Otherwise, if $D(q) = \emptyset$ and q is non-dominated with respect to all existing members of $P_t$, then perform the following:

$Q' = Q' \cup \{ q \}$, $Q^* = Q^* \setminus \{ q \}$

3. Form a combined population $P_{t+1} = P_t \cup P'$. If $|P_{t+1}| < N$, fill $P_{t+1}$ with elements from the sets $Q^*$, $Q'$ and $Q_t$, in the order mentioned, until $|P_{t+1}| = N$.

Merits & Demerits
Rudolph has proven that the above algorithm, with a positive variation kernel of its search operators, allows convergence (of at least one solution in the population) to the Pareto-optimal front in a finite number of trials on finite search-space problems. However, what has not been proved is a guaranteed maintenance of diversity among the Pareto-optimal solutions. If a population-based evolutionary algorithm converges to a single Pareto-optimal solution, its use over other classical algorithms cannot be justified. What makes evolutionary algorithms attractive is their ability to find multiple Pareto-optimal solutions in one single run.

4.13. Elitist Non-dominated Sorting Genetic Algorithm (ENSGA)

Like Rudolph's algorithm, in ENSGA the offspring population Qt is first created by using the parent population Pt. However, instead of finding the non-dominated front of Qt only, the two populations are first combined to form Rt of size 2N. Then a non-dominated sorting is used to classify the entire population Rt. Although this requires more effort than processing Qt alone, it allows a global non-dominated check among the offspring and parent solutions. Once the non-dominated sorting is over, the new population is filled by solutions of the different non-dominated fronts, one front at a time. The filling starts with the best non-dominated front and continues with the second non-dominated front, followed by the third, and so on.

Since the overall size of Rt is 2N, not all fronts can be accommodated in the N slots available in the new population; all fronts that cannot be accommodated are simply deleted. When the last allowed front is being considered, there may exist more solutions in it than remaining slots in the new population. Instead of arbitrarily discarding some members of this last front, it is wiser to use a niching strategy and choose those of its members that reside in the least crowded regions of the front.

A strategy like this does not affect the proceedings of the algorithm much in the early stages of evolution, because early on there exist many fronts in the combined population: solutions of many good non-dominated fronts are already included in the new population before the count reaches N, and it then hardly matters which solutions fill up the remaining slots. During the later stages of the simulation, however, most solutions in the population are likely to lie in the best non-dominated front, and in the combined population Rt of size 2N the number of solutions in the first non-dominated front may well exceed N. The above algorithm then ensures that niching chooses a diverse set of solutions from this front. When the entire population converges to the Pareto-optimal front, the continuation of this algorithm ensures a better spread among the solutions. The step-by-step format of the algorithm follows.

Figure 5. Structural representation of ENSGA: the combined population Rt = Pt ∪ Qt is classified by non-dominated sorting into fronts F1, F2, F3, F4, ...; whole fronts enter Pt+1 until the last fitting front, and the remainder is rejected.


Algorithm

1. Combine the parent and offspring populations to create Rt = Pt ∪ Qt. Perform a non-dominated sorting on Rt and identify the different fronts Fi, i = 1, 2, etc.

2. Set the new population Pt+1 = ∅ and a counter i = 1. While |Pt+1| + |Fi| ≤ N, perform Pt+1 = Pt+1 ∪ Fi and i = i + 1.

3. Perform the crowding-sort (Fi, <c) procedure (described below) and include the most widely spread (N - |Pt+1|) solutions, chosen by their crowding distance values in the sorted Fi, in Pt+1.

4. Create the offspring population Qt+1 from Pt+1 by using the crowded tournament selection, crossover and mutation operators.
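The following Python sketch puts steps 1-3 together. The front identification shown here is a simple, readability-first version, and `crowding_sort` is a stand-in hook (assumed to order a front by descending crowding distance); `dominates` is the usual Pareto test. The actual implementation uses faster bookkeeping, so this is only an illustration of the control flow.

```python
def nondominated_fronts(pop, dominates):
    """Partition pop (a list of solutions) into fronts F1, F2, ..."""
    fronts, remaining = [], list(range(len(pop)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(pop[j], pop[i])
                            for j in remaining if j != i)]
        fronts.append([pop[i] for i in front])
        remaining = [i for i in remaining if i not in front]
    return fronts

def ensga_survivor_selection(parents, offspring, n, dominates, crowding_sort):
    """Fill P_{t+1} front by front from Rt = Pt U Qt; the last partially
    fitting front is truncated by crowding (least crowded solutions first)."""
    r_t = parents + offspring                  # combined population, size 2N
    next_pop = []
    for front in nondominated_fronts(r_t, dominates):
        if len(next_pop) + len(front) <= n:    # whole front fits
            next_pop.extend(front)
        else:                                  # last front: truncate
            next_pop.extend(crowding_sort(front)[: n - len(next_pop)])
            break
    return next_pop
```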

Let us now see the crowding-sort (F, <c) procedure.

1. Call the number of solutions in F as l = |F|. For each solution i in the set, first assign a crowding distance di = 0.

2. For each objective function m = 1, 2, ..., M, sort the set in worse order of fm and obtain the sorted index vector I^m.

3. For m = 1, 2, ..., M, assign a large distance to the boundary solutions, i.e. d(I1^m) = d(Il^m) = ∞, and for all other solutions j = 2 to (l - 1), assign

d(Ij^m) = d(Ij^m) + (fm(I(j+1)^m) - fm(I(j-1)^m)) / (fm^max - fm^min).

The index Ij^m denotes the solution index of the j-th member in the list sorted on objective m.
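In Python, this crowding-distance computation can be sketched as follows for a front given as a list of objective vectors (a minimal version; how ties and degenerate objective ranges are treated is our simplification):

```python
def crowding_distances(front):
    """Crowding distance of each objective vector in `front`,
    following the per-objective update above."""
    l, n_obj = len(front), len(front[0])
    d = [0.0] * l
    for m in range(n_obj):
        order = sorted(range(l), key=lambda i: front[i][m])  # sort on f_m
        d[order[0]] = d[order[-1]] = float("inf")            # boundaries
        fmin, fmax = front[order[0]][m], front[order[-1]][m]
        if fmax == fmin:
            continue                                         # degenerate axis
        for j in range(1, l - 1):
            d[order[j]] += (front[order[j + 1]][m]
                            - front[order[j - 1]][m]) / (fmax - fmin)
    return d
```

A `crowding_sort` for step 3 of the main algorithm is then simply a sort of the front by these values in descending order.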

Crowded Tournament Selection Operator

The crowded comparison operator (<c) compares two solutions and prefers the one with the better (lower) non-domination rank; when both solutions have the same rank, the one residing in the less crowded area (with the larger crowding distance di) wins. The crowding distance di can be computed in various ways; above we have mentioned one method.
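The resulting tournament rule can be written in a few lines (again a sketch; `rank` and `dist` are assumed lookup tables holding each solution's non-domination rank and crowding distance):

```python
def crowded_tournament(i, j, rank, dist):
    """Solution i beats j if it has a better (lower) non-domination rank;
    on equal ranks, the larger crowding distance (less crowded area) wins."""
    if rank[i] != rank[j]:
        return i if rank[i] < rank[j] else j
    return i if dist[i] >= dist[j] else j
```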

Merits & Demerits

Since the solutions compete based on their crowding distances, no niching parameter (such as the σshare needed in MOGA, NSGA and NPGA) is required here. In the absence of the crowded comparison operator, this algorithm also exhibits a convergence proof to the Pareto-optimal solution set, similar to that of Rudolph's algorithm, but the population size would grow with the generation counter: the elitism mechanism does not allow an already found Pareto-optimal solution to be deleted. However, when the crowded comparison operator is used to restrict the population size, the algorithm loses this convergence property.

4.14. Distance Based Pareto Genetic Algorithm (DBPGA)

Osyczka and Kundu (1995) [38][39] suggested an elitist GA, the distance-based Pareto genetic algorithm (DPGA), which attempts to emphasize both the progress towards the Pareto-optimal front and the diversity along the obtained front by using a single fitness measure. This algorithm maintains two populations: one standard GA population Pt, where the genetic operations are performed, and an elite population Et containing all non-dominated solutions found so far. The working principle of the algorithm is as follows:

Algorithm

1. Create an initial random population P0 of size N and set the fitness of the first solution as F1. Set the generation counter t = 0.

2. If t = 0, insert the first element of P0 in an elite set E0 = {1}. For each population member j ≥ 2 for t = 0, and j ≥ 1 for t > 0, perform the following steps to assign a fitness value. Calculate the distance dj(k) of j from each elite member k (with objective values em(k), m = 1, 2, ..., M) as follows:

dj(k) = sqrt( Σ (m = 1..M) ((em(k) - fm(j)) / em(k))^2 )

Find the minimum distance and the index of the elite member closest to solution j:

dj_min = min over k of dj(k),  kj* = {k : dj(k) = dj_min}

If any elite member dominates solution j, the fitness of j is

Fj = max[0, F(e(kj*)) - dj_min];

otherwise, the fitness of j is

Fj = F(e(kj*)) + dj_min

and j is included in Et by eliminating all elite members that are dominated by j.

3. Find the maximum fitness value over all elite members:

Fmax = max{Fk : k = 1, ..., |Et|}


All elite solutions are assigned the fitness Fmax.

4. If t ≥ tmax or any other termination criterion is satisfied, the process is complete. Otherwise go to step 5.

5. Perform selection, crossover and mutation on Pt to create a new population Pt+1. Set t = t + 1 and go to step 2.

Note that no separate genetic processing is performed explicitly on the elite population Et. The fitness Fmax of the elite members is used in the assignment of fitness to the solutions of Pt.
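A small Python sketch of the fitness assignment in step 2 follows, assuming minimization, an elite list of objective vectors e(k) with known fitness values F(e(k)), and a standard `dominates` test; the square-root form of the distance follows the formula given above, and the division by em(k) assumes non-zero elite objective values.

```python
import math

def dpga_fitness(f_j, elites, elite_fitness, dominates):
    """Fitness of solution j from its distance to the closest elite member.

    f_j: objective vector of j; elites: list of elite objective vectors e(k);
    elite_fitness[k]: fitness F(e(k)) of elite member k.
    """
    # Relative distance of j to each elite member (assumes e_m(k) != 0).
    dists = [math.sqrt(sum(((em - fm) / em) ** 2 for em, fm in zip(e, f_j)))
             for e in elites]
    d_min = min(dists)
    k_star = dists.index(d_min)                  # closest elite member

    if any(dominates(e, f_j) for e in elites):   # j is dominated: penalized
        return max(0.0, elite_fitness[k_star] - d_min)
    return elite_fitness[k_star] + d_min         # j non-dominated: rewarded
```

The caller is then expected to insert a non-dominated j into Et and delete the elite members it dominates, as in step 2 of the algorithm.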

Merits & Demerits

For some sequences of fitness evaluations, both goals, progressing towards the Pareto-optimal front and maintaining diversity among solutions, are achieved without any explicit niching method.

Since the elite size is not restricted, the elite set can grow to any size, which increases the computational complexity of the algorithm over the generations. The choice of the initial fitness F1 is also important, and the DPGA fitness assignment scheme is sensitive to the ordering of the individuals in a population.

5. Future Research Directions

Although a lot of work has been done in this area, most of it has concentrated on the application of conventional or ad-hoc techniques to certain difficult problems. Therefore, several research issues still remain to be solved; some of them are discussed here.

1. Applicability of MOEAs to more difficult real-world problems.

2. The stopping criteria of MOEAs, because it is not obvious when the population has reached a point from which no further improvement can be achieved.

3. Choosing the best solution from the Pareto-optimal set.

4. Hybridization of multi-objective EAs.

5. Although a lot of work has been done in this area, the theoretical side has received much less attention. A theory of evolutionary multi-objective optimization is much needed, examining different fitness assignment methods in combination with different selection schemes.

6. The influence of mating restrictions might be investigated, although restricted mating is not very widespread in the field of multi-objective EAs.

6. Conclusion and Discussion

In this paper, we have given an overview of EAs such as EP, ES, GP and GA, and discussed the importance of EAs in multi-objective optimization problems. We attempted to provide a comprehensive review of the most popular evolutionary-based approaches to multi-objective optimization, together with a brief discussion of their advantages and disadvantages. Finally, we discussed the most promising areas for future research.

Recently, multi-objective evolutionary algorithms have been applied in various fields such as engineering design problems, computer networks, goal programming, gas turbine engine controller design, and resource scheduling [40-43].

Hybrid approaches typically enjoy the generic and application-specific merits of the individual MOEA techniques that they integrate [44-47].

References

[1] Goldberg, D.E. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, Massachusetts.

[2] Stadler, W. (1984). A Survey of Multi-criteria Optimization or the Vector Maximum Problem, Part I: 1776-1960. Journal of Optimization Theory and Applications, 29(1), 1-52.

[3] Charnes, A. and Cooper, W.W. (1961). Management Models and Industrial Applications of Linear Programming, Vol. 1. John Wiley, New York.

[4] Goldberg, D.E. and Deb, K. (1991). A Comparative Analysis of Selection Schemes Used in Genetic Algorithms. In G.J.E. Rawlins (Ed.), Foundations of Genetic Algorithms, pp. 69-93. San Mateo, CA: Morgan Kaufmann.

[5] Peck, C.C. and Dhawan, A.P. (1995). Genetic Algorithms as Global Random Search Methods: An Alternative Perspective. Evolutionary Computation, 3(1), 39-80.

[6] Back, T., Hoffmeister, F., and Schwefel, H.P. (1992). A Survey of Evolution Strategies. Proceedings of the Fourth International Conference on Genetic Algorithms, San Mateo, CA: Morgan Kaufmann, 2-9.

[7] Claus, V., Hopf, J., and Schwefel, H.P. (Eds.) (1996). Evolutionary Algorithms and their Applications. Dagstuhl Seminar Report No. 140, March 1996.

[8] Oyman, A.I., Beyer, H.G., and Schwefel, H.P. (2000). Analysis of the (1,λ)-ES on the Parabolic Ridge. Evolutionary Computation, 8(3), 249-265.

[9] Fogel, D.B. (Ed.) (1998). Evolutionary Computation: The Fossil Record. IEEE Press.

[10] Koza, J.R. (1992). Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press.


[11] Koza, J.R. (1994). Genetic Programming II: Automatic Discovery of Reusable Programs. MIT Press.

[12] Banzhaf, W., Nordin, P., Keller, R.E., and Francone, F.D. (1998). Genetic Programming, An Introduction: On the Automatic Evolution of Computer Programs and its Applications. Morgan Kaufmann.

[13] Evans, G.W. (1984). An Overview of Techniques for Solving Multi-objective Mathematical Programs. Management Science, 30(11), 1268-1282.

[14] Horn, J. (1997). F1.9 Multi-criteria Decision Making. In Back, T., Fogel, D.B., and Michalewicz, Z. (Eds.), Handbook of Evolutionary Computation. Institute of Physics Publishing, Bristol, England.

[15] Ishibuchi, H. and Murata, T. (1996). Multi-objective Genetic Local Search Algorithm. In T. Fukuda and T. Furuhashi (Eds.), Proceedings of the 1996 International Conference on Evolutionary Computation (Nagoya, Japan), pp. 119-124. IEEE.

[16] Schaffer, J.D. (1984). Some Experiments in Machine Learning Using Vector Evaluated Genetic Algorithms (Doctoral Dissertation). Nashville, TN: Vanderbilt University.

[17] Schaffer, J.D. (1984). Multiobjective Optimization with Vector Evaluated Genetic Algorithms. Unpublished Ph.D. Thesis, Vanderbilt University, Nashville, Tennessee.

[18] Schaffer, J.D. (1985). Multiple Objective Optimization with Vector Evaluated Genetic Algorithms. In Grefenstette, J.J. (Ed.), Proceedings of an International Conference on Genetic Algorithms and Their Applications, pp. 93-100, sponsored by Texas Instruments and the U.S. Navy Center for Applied Research in Artificial Intelligence (NCARAI).

[19] Richardson, J.T., Palmer, M.R., Liepins, G., and Hilliard, M. (1989). Some Guidelines for Genetic Algorithms with Penalty Functions. In J.D. Schaffer (Ed.), Proceedings of the Third International Conference on Genetic Algorithms (George Mason University, 1989), pp. 191-197. Morgan Kaufmann Publishers.

[20] Fonseca, C.M. and Fleming, P.J. (1993). Genetic Algorithms for Multi-objective Optimization: Formulation, Discussion, and Generalization. Proceedings of the Fifth International Conference on Genetic Algorithms, 416-423.

[21] Fonseca, C.M. and Fleming, P.J. (1995). An Overview of Evolutionary Algorithms in Multi-objective Optimization. Evolutionary Computation, 3(1), 1-16.

[22] Fonseca, C.M. and Fleming, P.J. (1998). Multi-objective Optimization and Multiple Constraint Handling with Evolutionary Algorithms, Part II: Application Example. IEEE Transactions on Systems, Man, and Cybernetics, 28(1), 38-47.

[23] Horn, J., Nafpliotis, N., and Goldberg, D.E. (1994). A Niched Pareto Genetic Algorithm for Multi-objective Optimization. Proceedings of the First IEEE Conference on Evolutionary Computation, 82-87.

[24] Horn, J. and Nafpliotis, N. (1993). Multi-objective Optimization Using the Niched Pareto Genetic Algorithm. IlliGAL Technical Report 93005, Illinois Genetic Algorithms Laboratory, University of Illinois, Urbana, Illinois.

[25] Zitzler, E. and Thiele, L. (1998). Multi-objective Optimization Using Evolutionary Algorithms: A Comparative Case Study. Parallel Problem Solving from Nature, V, pp. 292-301, Springer, Berlin, Germany.

[26] Zitzler, E. (1999). Evolutionary Algorithms for Multiobjective Optimization: Methods and Applications. Ph.D. Thesis, Swiss Federal Institute of Technology (ETH) Zurich, Switzerland. Shaker Verlag, Aachen, Germany, ISBN 3-8265-6831-1.

[27] Zitzler, E. and Thiele, L. (1999). Multiobjective Evolutionary Algorithms: A Comparative Case Study and the Strength Pareto Approach. IEEE Transactions on Evolutionary Computation, 3(4), 257-271.

[28] Deb, K. (1999). Multi-objective Genetic Algorithms: Problem Difficulties and Construction of Test Problems. Evolutionary Computation Journal, 7(3), 205-230.

[29] Srinivas, N. and Deb, K. (1994). Multiobjective Optimization Using Nondominated Sorting in Genetic Algorithms. Evolutionary Computation, 2(3), 221-248.

[30] Srinivas, N. and Deb, K. (1993). Multi-objective Optimization Using Nondominated Sorting in Genetic Algorithms. Technical Report, Department of Mechanical Engineering, Indian Institute of Technology, Kanpur, India.

[31] Kursawe, F. (1991). A Variant of Evolution Strategies for Vector Optimization. In H.P. Schwefel and R. Manner (Eds.), Parallel Problem Solving from Nature, 1st Workshop, PPSN I, Volume 496 of Lecture Notes in Computer Science (Berlin, Germany, Oct. 1991), pp. 193-197. Springer-Verlag.

[32] Hajela, P. and Lin, C.Y. (1992). Genetic Search Strategies in Multicriterion Optimal Design. Structural Optimization, 4, 99-107.


[33] Hajela, P. and Lee, J. (1996). Constrained Genetic Search via Scheme Adaptation: An Immune Network Solution. Structural Optimization, 12(1), 11-15.

[34] Laumanns, M., Rudolph, G., and Schwefel, H.P. (1998). A Spatial Predator-Prey Approach to Multi-objective Optimization: A Preliminary Study. Proceedings of Parallel Problem Solving from Nature, V, 241-249.

[35] Kita, H., Yabumoto, Y., Mori, N., and Nishikawa, Y. (1996). Multi-objective Optimization by Means of the Thermodynamical Genetic Algorithm. In H.-M. Voigt, W. Ebeling, I. Rechenberg, and H.-P. Schwefel (Eds.), Parallel Problem Solving from Nature, PPSN IV, Lecture Notes in Computer Science, pp. 504-512. Berlin, Germany: Springer-Verlag.

[36] Rudolph, G. (1994). Convergence Properties of Canonical Genetic Algorithms. IEEE Transactions on Neural Networks, NN-5, 96-101.

[37] Rudolph, G. (1998). Evolutionary Search for Minimal Elements in Partially Ordered Finite Sets. Evolutionary Programming VII, 345-353.

[38] Osyczka, A. and Kundu, S. (1995). A New Method to Solve Generalized Multicriteria Optimization Problems Using the Simple Genetic Algorithm. Structural Optimization, 10, 94-99.

[39] Osyczka, A. and Koski, J. (1982). Selected Works Related to Multicriterion Optimization Methods for Engineering Design. In Proceedings of Euromech Colloquium (University of Siegen, 1982).

[40] Deb, K. (2002). Multi-objective Optimization Using Evolutionary Algorithms. John Wiley & Sons, Ltd.

[41] Deb, K. and Horn, J. (2000). Introduction to the Special Issue: Multicriterion Optimization. Evolutionary Computation, 8(2), iii-iv.

[42] Chipperfield, A.J. and Fleming, P.J. (1995). Gas Turbine Engine Controller Design Using Multi-objective Genetic Algorithms. In A.M.S. Zalzala (Ed.), Proceedings of the First IEE/IEEE International Conference on Genetic Algorithms in Engineering Systems: Innovations and Applications, GALESIA '95, pp. 214-219, Halifax Hall, University of Sheffield, UK. IEEE.

[43] Anchor, K.P., Zydallis, J.B., Gunsch, G.H., and Lamont, G.B. (2003). Different Multi-objective Evolutionary Programming Approaches for Detecting Computer Network Attacks. In C.M. Fonseca, P.J. Fleming, E. Zitzler, K. Deb, and L. Thiele (Eds.), Evolutionary Multi-Criterion Optimization, Second International Conference, EMO 2003, pp. 707-721. Springer, Lecture Notes in Computer Science, Volume 2632, Faro, Portugal.

[44] Andersson, J. (2003). Applications of a Multi-objective Genetic Algorithm to Engineering Design Problems. In C.M. Fonseca, P.J. Fleming, E. Zitzler, K. Deb, and L. Thiele (Eds.), Evolutionary Multi-Criterion Optimization, Second International Conference, EMO 2003, pp. 737-751. Springer, Lecture Notes in Computer Science, Volume 2632, Faro, Portugal.

[45] Aerts, J.C.J.H., Herwijnen, M.V., and Stewart, T.J. (2003). Using Simulated Annealing and Spatial Goal Programming for Solving a Multi Site Land Use Allocation Problem. In C.M. Fonseca, P.J. Fleming, E. Zitzler, K. Deb, and L. Thiele (Eds.), Evolutionary Multi-Criterion Optimization, Second International Conference, EMO 2003, pp. 448-463. Springer, Lecture Notes in Computer Science, Volume 2632, Faro, Portugal.

[46] Obayashi, S. (1997). Pareto Genetic Algorithm for Aerodynamic Design Using the Navier-Stokes Equations. In D. Quagliarella, J. Periaux, C. Poloni, and G. Winter (Eds.), Genetic Algorithms and Evolution Strategies in Engineering and Computer Science: Recent Advances and Industrial Applications, Chapter 12, pp. 245-266. John Wiley and Sons, West Sussex.

[47] Tan, K.C. and Li, Y. (1997). Multi-Objective Genetic Algorithm Based Time and Frequency Domain Design Unification of Linear Control Systems. Technical Report CSC-97007, Department of Electronics and Electrical Engineering, University of Glasgow, Glasgow.

[48] Periaux, J., Sefrioui, M., and Mantel, B. (1997). GA Multiple Objective Optimization Strategies for Electromagnetic Backscattering. In D. Quagliarella, J. Periaux, C. Poloni, and G. Winter (Eds.), Genetic Algorithms and Evolution Strategies in Engineering and Computer Science: Recent Advances and Industrial Applications, Chapter 11, pp. 225-243. John Wiley and Sons, West Sussex.

[49] Belegundu, A.D., Murty, D.V., Salagame, R.R., and Constans, E.W. (1994). Multiobjective Optimization of Laminated Ceramic Composites Using Genetic Algorithms. In Fifth AIAA/USAF/NASA Symposium on Multidisciplinary Analysis and Optimization, pp. 1015-1022, Panama City, Florida. AIAA Paper 84-4363-CP.

[50] Abbass, H.A. (2003). Speeding up Backpropagation Using Multiobjective Evolutionary Algorithms. Neural Computation, 15(11), 2705-2726.


[51] Vedarajan, G., Chan, L.C., and Goldberg, D.E. (1997). Investment Portfolio Optimization Using Genetic Algorithms. In J.R. Koza (Ed.), Late Breaking Papers at the Genetic Programming 1997 Conference, pp. 255-263, Stanford University, California, July 1997. Stanford Bookstore.

[52] Poloni, C. and Pediroda, V. (1997). GA Coupled with Computationally Expensive Simulations: Tools to Improve Efficiency. In D. Quagliarella, J. Periaux, C. Poloni, and G. Winter (Eds.), Genetic Algorithms and Evolution Strategies in Engineering and Computer Science: Recent Advances and Industrial Applications, Chapter 13, pp. 267-288. John Wiley and Sons, West Sussex.

[53] Quagliarella, D. and Vicini, A. (1997). Coupling Genetic Algorithms and Gradient Based Optimization Techniques. In D. Quagliarella, J. Periaux, C. Poloni, and G. Winter (Eds.), Genetic Algorithms and Evolution Strategies in Engineering and Computer Science: Recent Advances and Industrial Applications, Chapter 14, pp. 289-309. John Wiley and Sons, West Sussex.

About the Authors

Ashish Ghosh is an Associate Professor in the Machine Intelligence Unit at the Indian Statistical Institute, Calcutta. He received the B.E. degree in Electronics and Telecommunication from Jadavpur University, Calcutta in 1987, and the M.Tech. and Ph.D. degrees in Computer Science from the Indian Statistical Institute, Calcutta in 1989 and 1993, respectively. He received the prestigious and most coveted Young Scientists award in Engineering Sciences from the Indian National Science Academy in 1995, and in Computer Science from the Indian Science Congress Association in 1992. He was selected as an Associate of the Indian Academy of Sciences, Bangalore in 1997. He visited Osaka Prefecture University, Japan, with a post-doctoral fellowship from October 1995 to March 1997, and Hannan University, Japan, as a visiting scholar during September-October 1997. During May 1999 he was at the Institute of Automation, Chinese Academy of Sciences, Beijing, with a CIMPA (France) fellowship. He was at the German National Research Center for Information Technology, Germany, with a German Government (DFG) fellowship during January-April 2000. During October-December 2003 he was a Visiting Professor at the University of California, Los Angeles. He has also visited various universities and academic institutes and delivered lectures in different countries, including South Korea, Poland, and The Netherlands. His research interests include evolutionary computation, neural networks, image processing, fuzzy sets and systems, pattern recognition, and data mining. He has published about 55 research papers in internationally reputed journals and refereed conferences, has edited 4 books, and is acting as guest editor of various journals.

Satchidananda Dehuri is currently working as a lecturer at the Silicon Institute of Technology, India. He received his M.Sc. (Mathematics) from Sambalpur University in 1998, and M.Tech. (Computer Science) from Utkal University in 2001. He is working toward the Ph.D. degree in the Department of Computer Science and Application of Utkal University. He has published 10 papers in various international conferences. His research interests include pattern recognition, evolutionary algorithms for multi-criterion optimization, and data mining and data warehousing.