

INTERNATIONAL JOURNAL OF SCIENTIFIC & TECHNOLOGY RESEARCH VOLUME 3, ISSUE 9, SEPTEMBER 2014 ISSN 2277-8616

135 IJSTR©2014 www.ijstr.org

Multiobjective Programming With Continuous Genetic Algorithm

Adugna Fita

Abstract: Nowadays, we want to have a good life, which may mean more wealth, more power, more respect and more time for ourselves, together with good health and a good second generation, etc. Indeed, all important political, economic and cultural events have involved multiple criteria in their evolution. Multiobjective optimization deals with optimization problems that possess more than one objective function. Usually there is no single solution that optimizes all functions simultaneously; instead there is a solution set, called the nondominated set, whose elements are usually infinite in number. It is from this set that the decision is made, by taking elements of the nondominated set as alternatives, which are supplied by analysts. In practice, however, extracting nondominated solutions and setting a fitness function are difficult. This paper tries to solve the problems that the decision maker faces in extracting Pareto optimal solutions with a continuous-variable genetic algorithm, taking the objective functions as fitness functions without modification, considering box constraints, generating the initial solutions within the box constraints, and using a penalty function for the constrained case. Solutions are kept in the feasible region during mutation and recombination.

Keywords: Chromosome, Crossover, Heuristics, Mutation, Optimization, Population, Ranking, Genetic Algorithms, Multi-Objective, Pareto Optimal Solutions, Parent Selection.

———————————————————— 

1. Introduction and Literature Survey
EVERYDAY we encounter various kinds of decision-making problems as managers, resource planners, designers, administrative officers, mere individuals, and so on. In these problems, the final decision is usually made through several steps: the structural model, the impact model, and the evaluation model, even though these steps may not be perceived explicitly. Through this process, the objective of the problem and the alternatives to achieve it are specified. Hereafter, we shall use the notation Oi for the i-th objective and X for the set of alternatives, which is supposed to be a subset of an n-dimensional vector space. In order to solve our decision-making problem by systems-analytical methods, we usually require that degrees of objectives be represented in numerical terms, which may be of multiple kinds even for one objective. In order to exclude subjective value judgment at this stage, we restrict these numerical terms to physical measures (for example money, weight, length, and time). As a performance index for the objective Oi, an objective function fi : X → ℝ¹ is introduced, where ℝ¹ denotes the one-dimensional Euclidean space. The value fi(x) indicates how much impact is given on objective Oi by performing an alternative x. In this paper we assume that a smaller value of each objective function is preferred to a larger one. Now we can formulate our decision-making problem as a multiobjective optimization problem:

Minimize f(x) = (f1(x), f2(x), …, fp(x)) over x ∈ X   (1.1)

where

X = {x ∈ ℝⁿ : gj(x) ≤ 0, j = 1, 2, …, m, x ≥ 0}   (1.2)

Y = {y ∈ ℝᵖ : y1 = f1(x), y2 = f2(x), …, yp = fp(x), x ∈ X}   (1.3)
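As an illustration, the formulation (1.1)-(1.2) can be sketched in code. The two objective functions and the single constraint below are hypothetical examples chosen for this sketch, not instances taken from the paper:

```python
import numpy as np

# Hypothetical two-objective instance of problem (1.1)-(1.2):
# minimize f(x) = (f1(x), f2(x)) subject to g1(x) <= 0 and x >= 0.
def f(x):
    """Vector of objective values f(x) = (f1(x), f2(x))."""
    f1 = x[0] ** 2 + x[1] ** 2          # e.g. a cost criterion
    f2 = (x[0] - 2.0) ** 2 + x[1] ** 2  # e.g. deviation from a target
    return np.array([f1, f2])

def g(x):
    """Constraint functions g_j(x); the point is feasible iff all <= 0."""
    return np.array([x[0] + x[1] - 3.0])  # g1(x) = x1 + x2 - 3 <= 0

def is_feasible(x):
    """Membership test for X = {x : g_j(x) <= 0, x >= 0}."""
    return bool(np.all(g(x) <= 0.0) and np.all(x >= 0.0))

x = np.array([1.0, 1.0])
print(f(x), is_feasible(x))  # [2. 2.] True
```

Minimizing f1 pulls x toward the origin while f2 pulls it toward (2, 0), so no single feasible point minimizes both, which is exactly the conflict the paper addresses.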

In some cases, some of the objective functions are required to be maintained under given levels prior to minimizing the other objective functions. Denoting these objective functions by gj, we require that

gj(x) ≤ 0, j = 1, 2, …, m.   (1.4)

Such a function gj is generally called a constraint function. According to the situation, we consider either problem (1.1) itself or (1.1) accompanied by the constraint conditions (1.4). Of course, an equality constraint h(x) = 0 can be embedded within the two inequalities h(x) ≤ 0 and −h(x) ≤ 0, and hence it does not appear explicitly in this paper (R. E. Steuer, 1986). We call the set of alternatives X the feasible set of the problem. The space containing the feasible set is said to be the decision space, whereas the space that contains the image of the feasible set Y = f(X) is referred to as the criterion space. Unlike traditional mathematical programming with a single objective function, an optimal solution in the sense of one that minimizes all the objective functions simultaneously does not necessarily exist in a multiobjective optimization problem; hence we face conflicts among objectives in decision making, and the final decision should be made by taking the total balance of objectives into account. Here we assume a decision maker who is responsible for the final decision [Y. Sawaragi, H. Nakayama and T. Tanino, 1985]. The decision maker's value is usually represented by saying whether or not an alternative x is preferred to another alternative x′, or equivalently, whether or not f(x) is preferred to f(x′). In other words, the decision maker's value is represented by some binary relation over X or f(X). Since such a binary relation representing the decision maker's preference usually becomes an order, it is called a preference order, and it is supposed to be defined on the so-called criterion space, which includes the set f(X). Several kinds of preference orders are possible; sometimes the decision maker cannot judge whether or not f(x) is preferred to f(x′). An order that admits incomparability for a pair of objects is called a partial order, whereas an order requiring comparability for every pair of objects is called a weak order or total order. In practice, we often observe a partial order for the decision maker's preference. Unfortunately,

__________________________

Adugna Fita is currently lecturer and researcher at Adama Science and Technology University, Department of Mathematics, Adama, Ethiopia. E-mail: [email protected]


however, an optimal solution in the sense of one that is most preferred with respect to the order does not necessarily exist for partial orders; the notion of optimality must be relaxed. Instead of strict optimality, we introduce in multiobjective optimization the notion of efficiency. A point x′ ∈ X is said to be efficient if there is no x ∈ X such that f(x) is preferred to f(x′) with respect to the preference order. The final decision is made among the set of efficient solutions.
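Under the natural componentwise order, "f(x) is preferred to f(x′)" becomes Pareto dominance, and the efficient points of a finite sample can be extracted directly. A minimal sketch, with illustrative function names:

```python
import numpy as np

def dominates(fa, fb):
    """fa dominates fb (all objectives minimized): fa <= fb componentwise
    and fa is strictly better in at least one component."""
    fa, fb = np.asarray(fa), np.asarray(fb)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def efficient_points(F):
    """Indices of the nondominated rows of the objective-value matrix F."""
    return [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi)
                       for j, fj in enumerate(F) if j != i)]

F = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
print(efficient_points(F))  # [0, 1, 3]: the point [3, 3] is dominated by [2, 2]
```

This pairwise check is O(n²) in the sample size, which is acceptable inside a GA generation loop for moderate population sizes.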

The scalarization approach is one of the solution methods in multiobjective programming studied by different scholars; it solves scalarized problems obtained by a weighted sum of the objective functions, reducing them to a single objective. Each substitute problem is characterized by a unique weighting parameter vector [13], the entries of which are the specific weights for the individual objective functions. Solutions to these problems constitute so-called properly efficient points with respect to the initial multiobjective optimization problem. The advantage of this approach is that the scalarized problems can be solved efficiently by well-known algorithms, for example interior-point methods. Using such an algorithm, a single solution is obtained at a time; to obtain the whole Pareto front we have to run the algorithm many times with different weights. A fundamental challenge lies in the selection of the 'right' weighting parameters that lead to different optimal solutions. Another problem is that assigning a large weight to the most important objective does not necessarily yield the desired solution. Besides, stochastic methods and evolutionary approaches, as well as deterministic optimization techniques such as branch-and-bound algorithms, are other established strategies for solving multiobjective optimization problems. The vector evaluated genetic algorithm (VEGA) of Schaffer (1985) presents one of the first treatments of multi-objective genetic algorithms, although it only considers unconstrained problems. The general idea behind Schaffer's approach involves producing smaller subsets of the original population, or sub-populations, within a given generation [1],[4]. One sub-population is created by evaluating one objective function at a time rather than aggregating all of the functions. The selection process is composed of a series of computational loops, and during each loop the fitness of each member of the population is evaluated using a single objective function. Then certain members of the population are selected and passed on to the next generation using stochastic selection processes. This selection process is repeated for each objective function. The process is based on the idea that the minimum of a single objective function is a Pareto optimal point (assuming the minimum is unique). Such minima generally define the vertices of the Pareto optimal set. However, considering only one objective function at a time is comparable to setting all but one of the weights to zero. The vector of weights represents a search direction in the criterion space, and if only one component of the vector is non-zero, then only orthogonal search directions, parallel to the axes of the criterion space, are used (Murata et al. 1996). Such a process can be relatively ineffective. Goldberg (1989), and Fonseca and Fleming (1993), provide detailed explanations and critiques of Schaffer's ideas. A class of alternatives to VEGA involves giving each member of a population a rank based on whether or not it is dominated (Goldberg 1989; Fonseca and Fleming 1993). Fitness then is based on a design's rank

within a population. The means of determining rank and assigning fitness values associated with rank may vary from method to method, but the general approach is common, as described below. For a given population, the objective functions are evaluated at each point. All non-dominated points receive a rank of one. Determining whether a point is dominated or not (performing a non-dominated check) entails comparing the vector of objective function values at the point to the vectors at all other points. Then the points with a rank of one are temporarily removed from consideration, and the points that are non-dominated relative to the remaining group are given a rank of two. This process is repeated until all points are ranked. The points with the lowest rank have the highest fitness value; that is, fitness is determined such that it is inversely proportional to the rank. This may be done using any of several methods ([7]; Srinivas and Deb 1995; Cheng and Li 1998; Narayanan and Azarm 1999). Belegundu et al. (1994) suggest discarding points with higher ranks and immigrating new randomly generated points. It is possible to have a Pareto optimal point in a particular generation that does not survive into the next generation. In response to this condition, Cheng and Li (1997) propose the use of a Pareto-set filter, which is described as follows. The algorithm stores two sets of solutions: a current population and a filter. The filter is called an approximate Pareto set, and it provides an approximation of the theoretical Pareto optimal set. With each generation, points with a rank of one are saved in the filter. When new points from subsequent generations are added to the filter, they are subjected to a non-dominated check within the filter, and the dominated points are discarded. Ishibuchi and Murata (1996), and Murata et al. (1996), use a similar procedure called an elitist strategy, which functions independently of rank. As with the Pareto-set filter, two sets of solutions are stored: a current population and a tentative set of non-dominated solutions, which is an approximate Pareto set. With each generation, all points in the current population that are not dominated by any points in the tentative set are added to the set, and dominated points in the set are discarded. After crossover and mutation operations are applied, a user-specified number of points from the tentative set is reintroduced into the current population. These points are called elite points. In addition, the k solutions with the best values for each objective function can be regarded as elite points and preserved for the next generation (Murata et al. 1996). The tournament selection technique for the selection process proceeds as follows. Two points, called candidate points, are randomly selected from the current population and compete for survival in the next generation [9]. A separate set of points called a tournament set or comparison set is also randomly compiled. The candidate points are then compared with each member of the tournament set. If there is only one candidate that is non-dominated relative to the tournament set, that candidate is selected to be in the next generation. However, if there is no preference between candidates, or when there is a tie, fitness sharing is used to select a candidate [2],[4]. A niche in genetic algorithms is a group of points that are close to each other, typically in the criterion space. Niche techniques (also called niche schemes or niche-formation methods) are methods for ensuring that a population does not converge to a niche, i.e., a limited number of Pareto


points. Thus, these techniques foster an even spread of points (in the criterion space). Genetic multi-objective algorithms tend to create a limited number of niches; they converge to or cluster around a limited set of Pareto points. Fitness sharing is a common niche technique, the basic idea of which is to penalize the fitness of points in crowded areas, thus reducing the probability of their survival to the next generation (Goldberg and Richardson 1987; Deb 1989; Srinivas and Deb 1995) [3]. The fitness of a given point is divided by a constant that is proportional to the number of other points within a specified distance in the criterion space [4]. Narayana and Azarm (1999) present a method in which a limit is placed on the Euclidean distance in the criterion space between points (parents) that are selected for crossover. If the distance is too small, then the parents are not selected for crossover. In addition, the authors suggest that only non-dominated points (points with a rank of 1) be evaluated for constraint violation. Those points that violate constraints then receive a fitness penalty, which is imposed by reassigning their rank to be a large number (e.g., the population size). However, Pareto ranking is the most appropriate way to generate an entire Pareto front in a single run of a genetic algorithm, and its main advantage is that the approach is less susceptible to the shape or continuity of the front. The primary questions when developing genetic algorithms for multi-objective problems are how to evaluate fitness, how to determine the potential solution points that should be passed on to the next generation, and how to incorporate the idea of Pareto optimality. The approaches described so far collectively address these issues, and the different techniques discussed serve as potential ingredients of a genetic multi-objective optimization algorithm. In accordance with much of the literature on multi-objective genetic algorithms, constraints are not addressed directly; it is assumed that a penalty approach is used to treat constraints. But with those approaches it is difficult to model the fitness function that best evaluates the solutions (chromosomes) for further selection, and to keep those solutions in the feasible region during mutation and recombination [16]. In this paper the objective functions and variables are taken without modification, and a continuous-variable genetic algorithm is used. Variables are considered within box constraints; the initial solutions are generated within the box constraints and are kept in the feasible region during mutation and recombination.
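The box-constraint handling just described can be sketched as follows. The blend crossover and uniform mutation below are assumed operator choices for this sketch, not the paper's exact operators; the point is that a convex combination of feasible parents stays inside the box automatically, and clipping after mutation restores feasibility:

```python
import numpy as np

rng = np.random.default_rng(0)
lower = np.array([0.0, 0.0])   # box constraint: lower <= x <= upper
upper = np.array([3.0, 3.0])

def random_solution():
    """Generate an initial solution uniformly inside the box constraint."""
    return lower + rng.random(lower.size) * (upper - lower)

def crossover(p1, p2):
    """Blend recombination: a componentwise convex combination of two
    feasible parents, which cannot leave the box."""
    a = rng.random(p1.size)
    return a * p1 + (1.0 - a) * p2

def mutate(x, rate=0.1):
    """Uniform mutation with Gaussian noise; clipping keeps the
    child inside the box."""
    mask = rng.random(x.size) < rate
    x = np.where(mask, x + rng.normal(0.0, 0.3, x.size), x)
    return np.clip(x, lower, upper)

child = mutate(crossover(random_solution(), random_solution()))
assert np.all(child >= lower) and np.all(child <= upper)
```

Because both operators preserve box feasibility by construction, no repair step or feasibility-based fitness penalty is needed for the box constraints themselves.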

2. Mathematical Preliminaries
In this section we investigate optimization problems that feature more than one objective function. In the beginning, fundamental concepts that are essential for multiobjective optimization are established: cones and ordering cones are introduced, and key properties of efficient sets are examined in detail. Subsequently, the existence of efficient points and techniques to compute them are analyzed.

2.1 Convex Sets
Definition 2.1.1 (algebraic sum of two sets) [9]
1. The algebraic sum of two sets is defined as S1 + S2 := {s1 + s2 : s1 ∈ S1, s2 ∈ S2}. In case S1 = {s1} is a singleton, we use the form s1 + S2 instead of {s1} + S2.
2. Let S, S1, S2 ⊂ ℝⁿ and α ∈ ℝ. The multiplication of a scalar with a set is given by αS := {αs : s ∈ S}; in particular, −S := {−s : s ∈ S}.
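For small finite sets, Definition 2.1.1 can be checked directly; the helper names below are illustrative:

```python
# Definition 2.1.1 on small finite sets of reals: the algebraic
# (Minkowski) sum S1 + S2 and the scalar multiple a*S.
def algebraic_sum(S1, S2):
    return {s1 + s2 for s1 in S1 for s2 in S2}

def scale(a, S):
    return {a * s for s in S}

S1, S2 = {0, 1}, {10, 20}
print(algebraic_sum(S1, S2))  # the set {10, 11, 20, 21}
print(scale(-1, S2))          # the set {-10, -20}
```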

Definition 2.1.2
i. The point x ∈ ℝⁿ is said to be a convex combination of two points x1, x2 ∈ ℝⁿ if x = λx1 + (1 − λ)x2 for some λ ∈ ℝ, 0 ≤ λ ≤ 1.
ii. The point x ∈ ℝⁿ is said to be a convex combination of the k points x1, x2, …, xk ∈ ℝⁿ if x = λ1x1 + λ2x2 + … + λkxk with λi ≥ 0 and λ1 + λ2 + … + λk = 1.

Definition 2.1.3 A set S ⊂ ℝⁿ is said to be convex if for any x1, x2 ∈ S and every real number λ ∈ ℝ, 0 ≤ λ ≤ 1, the point λx1 + (1 − λ)x2 ∈ S. In other words, S is convex if the convex combination of every pair of points in S lies in S. The intersection of all convex sets containing a given subset S of ℝⁿ is called the convex hull of S and is denoted by conv(S).
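A convex combination as in Definition 2.1.2 (ii) can be computed and validated as follows (a sketch; the helper name is illustrative):

```python
import numpy as np

def convex_combination(points, weights):
    """Return sum_i w_i * x_i after checking w_i >= 0 and sum w_i = 1
    (Definition 2.1.2 ii)."""
    w = np.asarray(weights, dtype=float)
    if np.any(w < 0) or not np.isclose(w.sum(), 1.0):
        raise ValueError("weights must be nonnegative and sum to 1")
    return np.asarray(points, dtype=float).T @ w

x = convex_combination([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]],
                       [0.25, 0.25, 0.5])
print(x)  # [0.5 1. ], an interior point of the triangle
```

This is the same operation the blend crossover of a continuous GA performs with k = 2 parents, which is why recombined children remain inside a convex feasible set.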

Definition 2.1.4 (Cone) A set C ⊂ ℝⁿ is called a cone if αd ∈ C for all d ∈ C and for all α ∈ ℝ, α > 0. A cone C is referred to as:
- nontrivial or proper, if C ≠ ∅ and C ≠ ℝⁿ;
- convex, if αd1 + (1 − α)d2 ∈ C for all d1, d2 ∈ C and for all 0 < α < 1;
- pointed, if for d ∈ C, d ≠ 0, we have −d ∉ C, i.e., C ∩ (−C) ⊆ {0}.

Theorem 2.1.1 A cone C is convex if and only if it is closed under addition; in other words,

αd1 + (1 − α)d2 ∈ C ⟺ d1 + d2 ∈ C, for all d1, d2 ∈ C and all α ∈ [0, 1].

Proof. First, assume that the cone C is convex. Then for d1, d2 ∈ C and α = 1/2, in combination with the cone property of C, we conclude that

(1/2)d1 + (1 − 1/2)d2 ∈ C ⟹ 2((1/2)d1 + (1/2)d2) ∈ C ⟹ d1 + d2 ∈ C.

Secondly, suppose that the cone C is closed under addition. Exploiting the fact that C is a cone, we deduce for d1, d2 ∈ C and α ∈ [0, 1] that αd1 ∈ C and (1 − α)d2 ∈ C. Furthermore, since the cone is closed under addition, we derive for these elements that αd1 + (1 − α)d2 ∈ C, i.e., C is convex. □

2.2 Preference Orders and Domination Structures
A preference order represents the preference attitude of the decision maker in the objective space. It is a binary relation on a set Y = f(X) ⊂ ℝᵖ, where f is a vector-valued objective function and X is the feasible decision set. The basic binary relation ≻ means strict preference, i.e., y ≻ z for y, z ∈ Y means that objective value y is preferred to z.

Definition 2.2.1 (Binary relation) Let S be any set. A binary relation R on S is a subset of S × S. It is called [5]
1. reflexive, if (s, s) ∈ R for all s ∈ S,
2. irreflexive, if (s, s) ∉ R for all s ∈ S,
3. symmetric, if (s1, s2) ∈ R ⇒ (s2, s1) ∈ R for all s1, s2 ∈ S,
4. asymmetric, if (s1, s2) ∈ R ⇒ (s2, s1) ∉ R for all s1, s2 ∈ S,
5. antisymmetric, if (s1, s2) ∈ R and (s2, s1) ∈ R ⇒ s1 = s2 for all s1, s2 ∈ S,
6. transitive, if (s1, s2) ∈ R and (s2, s3) ∈ R ⇒ (s1, s3) ∈ R for all s1, s2, s3 ∈ S,
7. negatively transitive, if (s1, s2) ∉ R and (s2, s3) ∉ R ⇒ (s1, s3) ∉ R for all s1, s2, s3 ∈ S,
8. connected, if (s1, s2) ∈ R or (s2, s1) ∈ R for all s1, s2 ∈ S with s1 ≠ s2,
9. strongly connected (or total), if (s1, s2) ∈ R or (s2, s1) ∈ R for all s1, s2 ∈ S.

Definition 2.2.2 A binary relation R on a set S is a preorder (quasi-order) if it is reflexive and transitive. Instead of (s1, s2) ∈ R we shall also write s1 R s2. In the case of R being a preorder, the pair (S, R) is called a preordered set. In the context of (pre)orders, yet another notation for the relation R is convenient: we shall write s1 ≤ s2 as shorthand for (s1, s2) ∈ R and s1 ≰ s2 for (s1, s2) ∉ R, and refer indiscriminately to the relation R or the relation ≤. This notation can be read as "preferred to" [6],[2]. Given any preorder ≤, two other relations are closely associated with it. We define them as follows:

s1 < s2 ⟺ s1 ≤ s2 and s2 ≰ s1,
s1 ∼ s2 ⟺ s1 ≤ s2 and s2 ≤ s1.

Actually, < and ∼ can be seen as the strict preference and the equivalence (or indifference) relation, respectively, associated with the preference defined by the preorder ≤.
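The derived relations < and ∼ can be obtained mechanically from any preorder given as a membership test. The comparison-by-first-coordinate preorder below is an illustrative example: it is reflexive and transitive but not antisymmetric, so distinct points can be indifferent.

```python
# Derive strict preference < and indifference ~ from a preorder <=,
# given as a membership test leq(s1, s2).
def strict(leq):
    return lambda a, b: leq(a, b) and not leq(b, a)

def indifferent(leq):
    return lambda a, b: leq(a, b) and leq(b, a)

# Example preorder on pairs: compare by the first coordinate only.
leq = lambda a, b: a[0] <= b[0]
lt, sim = strict(leq), indifferent(leq)
print(lt((1, 5), (2, 0)))   # True: strictly preferred
print(sim((1, 5), (1, 0)))  # True: indifferent (equal first coordinates)
```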

Definition 2.2.3 A relation is called a partial order if it is reflexive, transitive and antisymmetric. If it solely satisfies the first two properties it is called a preorder (quasi-order). A preference order (and, more generally, a binary relation) on a set Y can be represented by a point-to-set map from Y into Y. In fact, a binary relation may be considered a subset of the product set Y × Y, and so it can be regarded as the graph of a point-to-set map from Y into Y. That means we identify the preference order ≻ with the graph of the set-valued map P [13],[15]:

P(y) := {y′ ∈ Y | y ≻ y′}

(P(y) is the set of elements in Y less preferred to y) [C. J. Goh and X. Q. Yang, (2002)]. Another approach for specifying the preferences of the decision maker is the so-called domination structure (ordering cone), where the preference order is represented by a set-valued map. Therefore, for all y ∈ ℝᵖ, the set of domination factors

C(y) := {d ∈ ℝᵖ | y ≻ y + d} ∪ {0}

is defined, where y ≻ y′ means that the decision maker prefers y to y′. A deviation of d ∈ C(y) from y is hence less preferred than the original y. The most important and interesting special case of a domination structure is when C(·) is a constant set-valued map, especially when C(y) equals a pointed convex cone for all y ∈ ℝᵖ, i.e., C(·) = C. A cone C is called pointed if C ∩ (−C) = {0p}. Given an order relation R on ℝᵖ, we can define a set

CR := {y2 − y1 : y1 R y2},   (**)

which we would like to interpret as the set of nonnegative elements of ℝᵖ according to R. It is interesting to consider the definition (**) with y1 ∈ ℝᵖ fixed, i.e., CR(y1) = {y2 − y1 : y1 R y2}. If R is an order relation, y1 + CR(y1) is the set of elements of ℝᵖ that y1 is preferred to (or that are dominated by y1). A natural question to ask is: under what conditions is CR(y) the same for all y ∈ ℝᵖ? In order to answer that question, we need another assumption on the order relation R.

Definitions 2.2.4 [5]
1. A binary relation R is said to be compatible with addition if (y1 + z, y2 + z) ∈ R for all z ∈ ℝᵖ and all (y1, y2) ∈ R.
2. A binary relation R is said to be compatible with scalar multiplication if (αs1, αs2) ∈ R holds for all (s1, s2) ∈ R and for all α ∈ ℝ, α > 0.

Theorem 2.2.1 Let R be a relation which is compatible with scalar multiplication. Then CR is a cone.

Proof. Let d ∈ CR. Then d = y2 − y1 for some y1, y2 ∈ ℝᵖ with (y1, y2) ∈ R. Thus (αy1, αy2) ∈ R for all α > 0, and hence αd = α(y2 − y1) = αy2 − αy1 ∈ CR for all α > 0. □

The following result shows how certain properties of a binary relation R affect the characteristics of the cone CR.

Lemma 2.3.1 If R is compatible with addition and d ∈ CR, then 0 R d.

Proof. Let d ∈ CR. Then there are y1, y2 ∈ ℝᵖ with y1 R y2 such that d = y2 − y1. Using z = −y1, compatibility with addition implies (y1 + z) R (y2 + z), i.e., 0 R d. □

The above lemma indicates that if R is compatible with addition, the sets CR(y), y ∈ ℝᵖ, do not depend on y. In this paper we will be mainly concerned with this case.

Theorem 2.2.2 Let R be a binary relation on ℝᵖ which is compatible with scalar multiplication and addition. Then the following statements hold.

1. 0 ∈ CR if and only if R is reflexive.
2. CR is pointed if and only if R is antisymmetric.
3. CR is convex if and only if R is transitive.


Proof [5]
1. Let R be reflexive and let y ∈ ℝᵖ. Then y R y and d = y − y = 0 ∈ CR. Conversely, let 0 ∈ CR. Then there is some y ∈ ℝᵖ with y R y. Now let y′ ∈ ℝᵖ. Then y′ = y + z for some z ∈ ℝᵖ. Since y R y and R is compatible with addition, we get y′ R y′, i.e., R is reflexive.
2. Let R be antisymmetric and let d ∈ CR such that −d ∈ CR too. Then there are y1, y2 ∈ ℝᵖ such that y1 R y2 and d = y2 − y1, as well as y3, y4 ∈ ℝᵖ such that y3 R y4 and −d = y4 − y3. Thus y2 − y1 = y3 − y4, and there must be y ∈ ℝᵖ such that y2 = y3 + y and y1 = y4 + y. Therefore compatibility with addition implies y2 R y1. Antisymmetry of R now yields y2 = y1 and therefore d = 0, i.e., CR is pointed. Conversely, let y1, y2 ∈ ℝᵖ with y1 R y2 and y2 R y1. Thus d = y2 − y1 ∈ CR and −d = y1 − y2 ∈ CR. If CR is pointed, we know that {d, −d} ⊂ CR implies d = 0 and therefore y1 = y2, i.e., R is antisymmetric.
3. Let R be transitive and let d1, d2 ∈ CR. Since R is compatible with scalar multiplication, CR is a cone, and we only need to show d1 + d2 ∈ CR. By Lemma 2.3.1 we have 0 R d1 and 0 R d2. Compatibility with addition implies d1 R (d1 + d2); transitivity yields 0 R (d1 + d2), from which d1 + d2 ∈ CR. Conversely, let CR be convex and let y1, y2, y3 ∈ ℝᵖ be such that y1 R y2 and y2 R y3. Then d1 = y2 − y1 ∈ CR and d2 = y3 − y2 ∈ CR. Because CR is convex (and a cone), d1 + d2 = y3 − y1 ∈ CR. By Lemma 2.3.1 we get 0 R (y3 − y1), and by compatibility with addition, y1 R y3, i.e., R is transitive. □

Example. The weak componentwise order ≤ is compatible with addition and scalar multiplication; C≤ = ℝ₊ᵖ contains 0, is pointed, and is convex. We have defined the cone CR given a relation R; we can also use a cone to define an order relation. The concept of a constant set-valued map defining a domination structure has a direct connection to partial orderings. We recall the definition of partial orderings.

Definition 2.2.2 [2],[8]
i) A nonempty subset R ⊂ ℝᵖ × ℝᵖ is called a binary relation on ℝᵖ. We write x R y for (x, y) ∈ R.
ii) A binary relation ≤ on ℝᵖ is called a partial ordering on ℝᵖ if for arbitrary w, x, y, z ∈ ℝᵖ:
(1) x ≤ x (reflexivity),
(2) x ≤ y, y ≤ z ⇒ x ≤ z (transitivity),
(3) x ≤ y, w ≤ z ⇒ x + w ≤ y + z (compatibility with addition),
(4) x ≤ y, α ∈ ℝ₊ ⇒ αx ≤ αy (compatibility with scalar multiplication).
iii) A partial ordering ≤ on ℝᵖ is called antisymmetric if for arbitrary x, y ∈ ℝᵖ: x ≤ y, y ≤ x ⇒ x = y.

A linear space ℝᵖ equipped with a partial ordering is called a partially ordered linear space. An example of a partial ordering on ℝᵖ is the natural (or componentwise) ordering ≤p defined by

≤p := {(x, y) ∈ ℝᵖ × ℝᵖ | xᵢ ≤ yᵢ for all i = 1, . . . , p}.

Partial orderings can be characterized by convex cones. Any partial ordering ≤ on ℝᵖ defines a convex cone [3] by C := {x ∈ ℝᵖ | 0 ≤ x}, and any convex cone, then also called an ordering cone, defines a partial ordering on ℝᵖ by

≤C := {(x,y) ∈ ℝP  × ℝP  | y −  x ∈  C}. 

For example, the ordering cone representing the natural ordering on ℝᵖ is the positive orthant ℝᵖ₊. A partial ordering ≤C is antisymmetric if and only if C is pointed. Thus preference orders which are partial orderings correspond to domination structures whose constant set-valued map equals a convex cone. The point-to-set map D : Y → ℝᵖ represents the preference order, and we call C the domination structure (ordering cone).

3. Properties and Existence of the Efficient Solution

3.1 Efficient Solutions
The concept of optimal solutions in multiobjective optimization is not trivial. It is closely related to the preference attitudes of the decision makers. The most fundamental solution concept is that of an efficient (nondominated, or noninferior) solution with respect to the domination structure of the decision maker [5]. Let Y = f(X) be the image of the feasible set X and y = f(x) for x ∈ X. Hence, a multiobjective optimization problem is given by

Minimize f(x) = (f1(x), . . . , fp(x)) (3.1)
subject to x ∈ X ⊂ ℝⁿ.

A domination structure representing a preference attitude of the decision maker is supposed to be given as a point-to-set map D from Y to ℝᵖ. Hence, optimization problem (3.1) can be restated as:

"Minimize" y = f(x) (3.2)
s.t. x ∈ X, y ∈ Y = f(X).

Definition 3.1.1 (Efficiency via order relations) An element y∗ ∈ Y is called efficient if there exists no other element y ∈ Y that differs from y∗ and is smaller with respect to the employed ordering relation ≤C. That is, ∄ y ∈ Y : y ≠ y∗, y ≤C y∗.

Definition 3.1.2 [R. I. Bot, S-M. Grad, G. Wanka, (2009)] (Efficiency via ordering cones) Let Y be a set and C a related ordering cone. Then y∗ ∈ Y is called efficient if there exists no y ∈ Y such that y∗ − y ∈ C \ {0}. Moreover, all efficient elements of problem (P) are collected in the efficient set E(Y, C) := {y ∈ Y | Y ∩ (y − C) = {y}}. In addition, if Y is the image of the feasible set X, x∗ ∈ X is labelled as Pareto optimal if y∗ ∈ Y satisfying y∗ = f(x∗) is efficient.

Definition 3.1.3 A feasible solution x′ ∈ X is called Pareto optimal if there is no other x ∈ X such that f(x) ≤ f(x′). If x′ is Pareto optimal, f(x′) is called a nondominated point. If x1, x2 ∈ X and f(x1) ≤ f(x2), we say x1 dominates x2 and f(x1) dominates f(x2). A feasible vector x′ ∈ X is called efficient or Pareto optimal (C = ℝᵖ₊) if there is no other decision vector x ∈ X such that fᵢ(x) ≤ fᵢ(x′) for all i = 1, 2, . . . , p and fⱼ(x) < fⱼ(x′) for at least one objective function. In this case, (f(x′) − ℝᵖ₊) ∩ Y = {f(x′)}, or equivalently


(Y − y′) ∩ (−ℝᵖ₊) = {0}, where y′ = f(x′). A feasible vector x′ ∈ X is called a weak Pareto optimal solution to the problem if there does not exist another decision vector x ∈ X such that fᵢ(x) < fᵢ(x′) for all i = 1, 2, . . . , p. This means

(Y − y′) ∩ (−int C) = ∅, (3.2)

while efficiency with respect to an ordering cone C reads

(Y − y′) ∩ (−C ∖ {0}) = ∅, (3.3)

and weak Pareto optimality (C = ℝᵖ₊) reads

(Y − y′) ∩ (−int ℝᵖ₊) = ∅, (3.4)

i.e., there is no other decision vector x ∈ X such that fᵢ(x) < fᵢ(x′) for all i = 1, 2, . . . , p.
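The componentwise dominance test of Definition 3.1.3 (with C = ℝᵖ₊) can be sketched in code. The paper's own snippets are MATLAB-style, so this Python translation and the name `dominates` are ours:

```python
def dominates(f1, f2):
    """True if objective vector f1 dominates f2: f1 <= f2 componentwise
    and strictly smaller in at least one component (minimization)."""
    return all(a <= b for a, b in zip(f1, f2)) and any(a < b for a, b in zip(f1, f2))

# (3, 1) dominates (4, 1); (1, 2) and (2, 1) are mutually nondominated.
assert dominates((3, 1), (4, 1))
assert not dominates((1, 2), (2, 1)) and not dominates((2, 1), (1, 2))
```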

Proposition 3.1.1 Let Y and Z be two sets in ℝᵖ, and let C be a constant ordering cone on ℝᵖ. Then E(Y + Z, C) ⊂ E(Y, C) + E(Z, C).

Proof Let y∗ + z∗ ∈ E(Y + Z, C) with y∗ ∈ Y and z∗ ∈ Z. We want to show y∗ ∈ E(Y, C) and z∗ ∈ E(Z, C). Suppose not; then there exist y′ ∈ Y and 0 ≠ c ∈ C such that y∗ = y′ + c. Then y∗ + z∗ = y′ + z∗ + c and y′ + z∗ ∈ Y + Z, which contradicts the supposition y∗ + z∗ ∈ E(Y + Z, C). Similarly we can show the claim for z∗. □

Theorem 3.1.1 Let C be a pointed convex cone. Then E(Y, C) = E(Y + C, C).


Figure 1.

Proof The result is trivial if Y = ∅, because then Y + C = ∅ and the nondominated subsets of both are empty, too. So let Y ≠ ∅. First, assume y ∈ E(Y + C, C) but y ∉ E(Y, C). There are two possibilities. If y ∉ Y, there are y′ ∈ Y and 0 ≠ c ∈ C such that y = y′ + c. Since y′ = y′ + 0 ∈ Y + C, we get y ∉ E(Y + C, C), which is a contradiction. If y ∈ Y, there is y′ ∈ Y such that y′ ≤ y. Let d = y − y′ ∈ C. Therefore y = y′ + d and y ∉ E(Y + C, C), again contradicting the assumption. Hence in either case y ∈ E(Y, C). Second, assume y ∈ E(Y, C) but y ∉ E(Y + C, C). Then there is some y′ ∈ Y + C with y − y′ = c′ ∈ C \ {0}. That is, y′ = y′′ + c′′ with y′′ ∈ Y and c′′ ∈ C, and therefore y = y′ + c′ = y′′ + (c′ + c′′) = y′′ + c with c = c′ + c′′ ∈ C. This implies y ∉ E(Y, C), contradicting the assumption. Hence y ∈ E(Y + C, C). □

Theorem 3.1.2 Let C be a nonempty ordering cone with C ≠ {0}. Then E(Y, C) ⊆ ∂Y, the boundary of Y.

Proof Let y ∈ E(Y, C) and suppose y ∉ ∂Y, i.e., y ∈ int Y. Then there exists an ε-neighborhood B(y, ε) of y with B(y, ε) := y + B(0, ε) ⊂ Y, where B(0, ε) is the open ball with radius ε centered at the origin. Let 0 ≠ c ∈ C. Then we can choose some α ∈ ℝ, 0 < α < ε/‖c‖, such that αc ∈ B(0, ε). Now y − αc ∈ Y with αc ∈ C ∖ {0}, i.e., y ∉ E(Y, C), a contradiction. □

Theorem 3.1.3 If Y is a closed and convex set and the section (y − C) ∩ Y is compact for all y ∈ Y, then E(Y, C) is connected. For the proof, see [5].

3.2 Existence of efficient solution

Definition 3.2.1 Let (Y, ≤) be a preordered set, i.e., ≤ is reflexive and transitive. (Y, ≤) is inductively ordered if every totally ordered subset of (Y, ≤) has a lower bound. A totally ordered subset of (Y, ≤) is also called a chain.

Definition 3.2.2 A set Y ⊂ ℝᵖ is called C-semicompact if every open cover of Y of the form {(yᵢ − cl C)ᶜ : yᵢ ∈ Y, i ∈ I} has a finite subcover, for some index set I. This means that whenever Y ⊂ ⋃_{i ∈ I} (yᵢ − cl C)ᶜ, there exist k and {i1, . . . , ik} ⊂ I such that Y ⊂ ⋃_{j=1}^{k} (y_{ij} − cl C)ᶜ. Here (y − cl C)ᶜ denotes the complement ℝᵖ \ (y − cl C) of (y − cl C). Note that these sets are always open.

Theorem 3.2.1 If C is an acute convex cone and Y is a nonempty C-semicompact set in ℝᵖ, then E(Y, C) ≠ ∅.

Figure 2.

Proof [5] As E(Y, cl C) ⊂ E(Y, C), it is enough to show the case in which C is a pointed closed convex cone. In this case, C defines a partial order ≤C on Y by y1 ≤C y2 if y2 − y1 ∈ C. An element of E(Y, C) is a minimal element with respect to ≤C; therefore we show that Y is inductively ordered and apply Zorn's lemma to establish the existence of a minimal element. Suppose to the contrary that Y is not inductively ordered. Then there exists a totally ordered set {y_α : α ∈ τ} in Y which has no lower bound in Y. Thus ⋂_{α ∈ τ} [(y_α − C) ∩ Y] = ∅; otherwise any element of this intersection would be a lower bound of {y_α} in Y. Now it follows that for any y ∈ Y there exists α ∈ τ such that y ∉ y_α − C. Since y_α − C is closed, the family {(y_α − C)ᶜ : α ∈ τ} forms an open cover of Y. Moreover, (y_β − C)ᶜ ⊂ (y_α − C)ᶜ if y_α ≤C y_β, and so these sets are totally ordered by inclusion. Since Y is C-semicompact, the cover admits a finite subcover, and hence there exists a single α ∈ τ such that Y ⊂ (y_α − C)ᶜ. However, this contradicts the fact that y_α ∈ Y. Therefore Y is inductively ordered by ≤C, and E(Y, C) ≠ ∅ by Zorn's lemma. □ Even though the existence of an efficient solution is mathematically proved under conditions such as the convexity, connectedness, and compactness of the feasible region and the lower semicontinuity of the objective functions, practical estimation of the efficient set is very


difficult. Therefore, in the next section a global search heuristic which finds true or approximately near-optimal solutions to problems is discussed. This method is less susceptible to the connectivity and convexity of the feasible region (search space). In addition, it does not require continuity and differentiability of the objective functions.

4.  Continuous Variable Genetic Algorithm.

4.1 Introduction
Genetic algorithms are a class of probabilistic optimization algorithms inspired by the biological evolution process, using the concepts of "natural selection" and "genetic inheritance" (Darwin 1859); they were originally developed by John Holland (1975), [12]. They are particularly well suited for hard problems where little is known about the underlying search space. The specific mechanics of the algorithms borrow the language of microbiology and develop new potential solutions through genetic operations. A population represents a group of potential solution points; a generation represents an algorithmic iteration. A chromosome is comparable to a design point, and a gene is comparable to a component of the design vector. Genetic algorithms are theoretically and empirically proven to provide robust search in complex spaces with the above features. They are capable of efficient and effective search in the problem domain, and their application is now widespread in business, science and engineering. These algorithms are computationally less complex yet powerful in their search for improvement. These features have motivated researchers to study different approaches of the genetic algorithm.

4.2 Genetic Algorithm
Many search techniques require auxiliary information in order to work properly. For example, gradient techniques need derivatives in order to climb the current peak, and other procedures like the greedy technique require access to most tabular parameters, whereas genetic algorithms do not require such auxiliary information. GAs are blind: to perform an effective search for better and better structures, they only require the objective function values associated with the individual strings. A genetic algorithm (GA) is categorized as a global search heuristic used in computing to find true or approximate solutions to optimization problems. Genetic algorithms are a particular class of evolutionary algorithms that use techniques inspired by evolutionary biology such as inheritance, mutation, selection, and crossover (also called recombination). Genetic algorithms are implemented as a computer simulation in which a population of abstract representations (called chromosomes, the genotype or the genome) of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem evolves toward better solutions [1]. Traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible. The evolution usually starts from a population of randomly generated individuals and proceeds in generations. In each generation, the fitness of every individual in the population is evaluated, and multiple individuals are selected from the current population (based on their fitness) and modified (recombined and possibly mutated) to form a new population. The new population is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced or a satisfactory fitness level has been reached for the population.

4.2.1 Initialization of GA with real valued variables
Initially many individual solutions are randomly generated to form an initial population. The population size depends on the nature of the problem, but typically contains several hundreds or thousands of possible solutions. Commonly, the population is generated randomly, covering the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found. To begin the GA, we define an initial population of Npop chromosomes. A matrix represents the population, with each row of the matrix being a 1 x Nvar array (chromosome) of continuous values. Given an initial population of Npop chromosomes, the full matrix of Npop x Nvar random values is generated by

Pop = rand(Npop, Nvar)

if the variables are normalized to have values between 0 and 1. If not, and the range of values is between Xlb and Xub, then the unnormalized values are given by [12]:

X = (Xub - Xlb)*Xnorm + Xlb

where
Xlb: lowest number in the variable range,
Xub: highest number in the variable range,
Xnorm: normalized value of the variable.

Alternatively, one can generate an initial feasible solution from the search space simply as

for i = 1:Npop
    pop(i,:) = (Xub - Xlb).*rand(1, Nvar) + Xlb;
end

This generates a real-valued Npop x Nvar population matrix. Generally, data encoding and generation of the initial solution are problem dependent; possible individual encodings can be:

• Bit strings
• Real numbers
• Permutations of elements
• ... any data structure ...
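As a cross-check, the same box-bounded initialization can be written in Python (a sketch; the bounds and sizes below are illustrative, and the paper's own snippets are MATLAB-style):

```python
import random

Npop, Nvar = 8, 3                  # population size, number of variables
Xlb = [0.0, -1.0, 2.0]             # lower bounds of the search box
Xub = [1.0,  1.0, 5.0]             # upper bounds of the search box

# Each row is one chromosome drawn uniformly inside the box [Xlb, Xub]
pop = [[(Xub[j] - Xlb[j]) * random.random() + Xlb[j] for j in range(Nvar)]
       for _ in range(Npop)]

assert len(pop) == Npop and all(len(row) == Nvar for row in pop)
assert all(Xlb[j] <= row[j] <= Xub[j] for row in pop for j in range(Nvar))
```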

4.2.2 Evaluation (fitness function)

Fitness values are derived from the objective function values through a scaling or ranking function. Note that for multiobjective functions, the fitness of a particular individual is a function of a vector of objective function values. Multiobjective problems are characterized by having no single unique solution, but a family of equally fit solutions with different values of the decision variables. Therefore, care should be taken to adopt some mechanism to ensure that the population is able to evolve the set of Pareto optimal solutions. The fitness function measures the goodness of the individual, expressed as the probability that the organism will live another cycle (generation). It is also the basis for the natural selection simulation: better solutions have a


better chance to be selected into the next generation. In many GA implementations the fitness function is modeled according to the problem, but in this paper we use the objective functions themselves as the fitness function.

Cost = f(chromosome), that is,

cost1 = f1(pop);
cost2 = f2(pop);
...

Sorting the costs according to cost1:

[cost1, ind] = sort(cost1);   % min cost in element 1
pop   = pop(ind,:);           % reorder chromosomes based on sorted cost1
cost2 = cost2(ind);
...
rank = 1;
assign the chromosomes on the Pareto front the value cost = rank;
remove all chromosomes assigned the value of rank from the population;
rank = rank + 1; repeat until every chromosome is ranked.
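The ranking loop above can be implemented by repeatedly extracting the nondominated front (a Python sketch of the same idea; the helper names `pareto_ranks` and `dominates` are ours):

```python
def pareto_ranks(costs):
    """Assign Pareto ranks (1 = nondominated front) to a list of
    objective vectors by repeatedly removing the nondominated set."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    ranks = [0] * len(costs)
    remaining = set(range(len(costs)))
    rank = 1
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(costs[j], costs[i]) for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return ranks

# The first three points are mutually nondominated; (4, 4) is dominated by (2, 3).
assert pareto_ranks([(1, 5), (2, 3), (3, 1), (4, 4)]) == [1, 1, 1, 2]
```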

4.2.3 Parent Selection
During each successive generation, a proportion of the existing population is selected to breed a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected. Certain selection methods rate the fitness of each solution and preferentially select the best solutions. Other methods rate only a random sample of the population, as the former process may be very time-consuming. Most selection functions are stochastic and designed so that a small proportion of less fit solutions is also selected. This helps keep the diversity of the population large, preventing premature convergence on poor solutions. Popular and well-studied selection methods include roulette wheel selection, tournament selection and rank-based selection.

4.2.3.1 Roulette wheel selection
In roulette wheel selection, individuals are given a probability of being selected that is directly proportional to their fitness. Two individuals are then chosen randomly based on these probabilities and produce offspring. Roulette wheel selection pseudocode:

for all members of population
    sum = sum + fitness of individual
end for

for all members of population
    probability = sum of probabilities + (fitness / sum)
    sum of probabilities = probability
end for

loop until new population is full
    do this twice:
        number = random between 0 and 1
        for all members of population
            if number > probability but less than next probability
                then you have been selected
        end for
    create offspring
end loop
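The pseudocode corresponds to the following Python sketch (fitness-proportionate selection; larger fitness values are assumed better here, and the function name `roulette_select` is ours):

```python
import random

def roulette_select(fitness):
    """Pick one individual index with probability proportional to its
    (nonnegative) fitness, by spinning a cumulative-sum wheel."""
    total = sum(fitness)
    spin = random.uniform(0, total)
    cumulative = 0.0
    for i, f in enumerate(fitness):
        cumulative += f
        if spin <= cumulative:
            return i
    return len(fitness) - 1  # guard against floating-point round-off

fitness = [4.0, 3.0, 2.0, 1.0]
parents = [roulette_select(fitness) for _ in range(2)]  # choose two parents
assert all(0 <= p < len(fitness) for p in parents)
```

Since this paper minimizes objective values, in practice the raw costs would first be converted to a "bigger is better" fitness (e.g. by ranking) before applying this operator.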

4.2.3.2 Tournament-based selection
For K less than or equal to the population size, extract K individuals from the population at random and make them play a "tournament", where the probability for an individual to win is generally proportional to its fitness.

M = ceil((popsize - keep)/2);   % number of matings
picks = ceil(keep*rand(K, M));

Repeat M times. The parameter of tournament selection is the tournament size Tour, which takes values ranging from 2 to Npop (the number of individuals in the population). Selection intensity increases with the tournament size.
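A minimal Python sketch of one tournament (since this paper minimizes, the contestant with the smallest cost wins; the function name is ours):

```python
import random

def tournament_select(costs, tour=2):
    """Tournament selection: draw `tour` distinct individuals at random
    and return the index of the best one (smallest cost)."""
    contestants = random.sample(range(len(costs)), tour)
    return min(contestants, key=lambda i: costs[i])

costs = [0.9, 0.1, 0.5, 0.7]
winner = tournament_select(costs, tour=len(costs))
assert winner == 1  # with a full-size tournament the best individual always wins
```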

4.2.4 Reproduction
The next step is to generate a second-generation population of solutions from those selected, through the genetic operators crossover (also called recombination) and mutation. For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each child, and the process continues until a new population of solutions of appropriate size is generated. These processes ultimately result in a next-generation population of chromosomes that is different from the initial generation. Generally, the average fitness will have increased by this procedure, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions for the reasons already mentioned above.

4.2.5 Crossover
The most common type is single-point crossover. In single-point crossover, you choose a locus at which you swap the remaining alleles from one parent to the other. The children take one section of the chromosome from each parent; the point at which the chromosome is broken depends on the randomly selected crossover point. This particular method is called single-point crossover because only one crossover point exists. Sometimes only child 1 or child 2 is created, but often both offspring are created and put into the new population. Crossover does not always occur, however. Sometimes, based on a set probability, no crossover occurs and the parents are copied directly to the new population.

4.2.6 Crossover (real valued recombination)
Recombination produces new individuals by combining the information contained in the parents. The simplest methods choose one or more points in the chromosome to mark as the crossover points. Some real valued recombination (crossover) methods are:

• Discrete recombination
• Intermediate recombination


• Line recombination

Real valued recombination is a method applicable only to real variables (and not binary variables). The variable values of the offspring are chosen somewhere around and between the variable values of the parents as:

Offspring = parent1 + λ(parent2 - parent1)

For example, consider the two parents to be:

parent1 = [Xm1 Xm2 Xm3 Xm4 ... XmNvar]
parent2 = [Xd1 Xd2 Xd3 Xd4 ... XdNvar]

Crossover points are randomly selected, and the variables in between are then exchanged:

Offspring1 = [Xm1 Xm2 Xd3 Xd4 ... Xd7 Xm8 ... XmNvar]
Offspring2 = [Xd1 Xd2 Xm3 Xm4 ... Xm7 Xd8 ... XdNvar]

The extreme case is selecting Nvar  points and randomlychoosing which of the two parents will contribute its variableat each position. Thus one goes down the line of thechromosomes and, at each variable, randomly chooseswhether or not to swap the information between the twoparents.

This method is called uniform crossover:

Offspring1 = [Xd1 Xm2 Xm3… Xd6…. XmNvar]Offspring2 = [Xm1 Xd2 Xd3… Xm6…. XdNvar]

The problem with these point crossover methods is that no new information is introduced: each continuous value that was randomly generated in the initial population is propagated to the next generation, only in different combinations. This strategy works fine for binary representations, but here there is a continuum of values, and in this continuum we are merely interchanging two data points. The blending methods remedy this problem by finding ways to combine variable values from the two parents into new variable values in the offspring. A single offspring variable value, Xnew, comes from a combination of the two corresponding parent variable values:

Xnew = λXmn + (1 - λ)Xdn

where
λ: random number on the interval [-d, 1 + d]; for d = 0, λ ∈ (0, 1),
Xmn: nth variable in the mother chromosome,
Xdn: nth variable in the father chromosome.

The same variable of the second offspring is merely the complement of the first. If λ = 1, Xmn propagates in its entirety and Xdn dies. In contrast, if λ = 0, then Xdn propagates in its entirety and Xmn dies. When λ = 0.5, the result is an average of the variables of the two parents. Choosing which variables to blend is the next issue. Sometimes this linear combination process is done for all variables to the right or to the left of some crossover point. Any number of points can be chosen to blend, up to Nvar values, where all variables are linear combinations of those of the two parents. The variables can be blended by using the same λ for each variable or by choosing a different λ for each variable. However, these methods do not allow introduction of values beyond the extremes already represented in the population. To do this requires an extrapolating method.
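The blend Xnew = λXmn + (1 - λ)Xdn is a single line of code (Python sketch; the function name is ours):

```python
import random

def blend(x_m, x_d, lam=None):
    """Blend one pair of parent genes: x_new = lam*x_m + (1 - lam)*x_d,
    with lam drawn from (0, 1) when not supplied (the d = 0 case)."""
    if lam is None:
        lam = random.random()
    return lam * x_m + (1 - lam) * x_d

# lam = 0.5 averages the parents; lam = 1 copies the mother gene unchanged.
assert blend(2.0, 4.0, lam=0.5) == 3.0
assert blend(2.0, 4.0, lam=1.0) == 2.0
```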

The simplest of these methods is linear crossover. In this case three offspring are generated from the two parents by:

Xnew1 = 0.5Xmn + 0.5Xdn
Xnew2 = 1.5Xmn - 0.5Xdn
Xnew3 = -0.5Xmn + 1.5Xdn

Any offspring variable outside the bounds is discarded in favor of the other two; then the best two offspring are chosen to propagate. Of course, the factor 0.5 is not the only one that can be used in such a method. Heuristic crossover is a variation where some random number λ is chosen on the interval [0, 1] and the variables of the offspring are defined by:

Xnew = λ(Xmn - Xdn) + Xmn

Variations on this method include choosing any number of variables to modify and generating a different λ for each variable. This method also allows generation of offspring outside of the values of the two parent variables. If this happens, the offspring is discarded and the algorithm tries another λ. The blend crossover method begins by choosing some parameter α that determines the distance outside the bounds of the two parent variables at which the offspring variable may lie. This method allows new values outside the range of the parents without letting the algorithm stray too far. The method used here is a combination of an extrapolation method with a crossover method. We want to find a way to closely mimic the advantages of the binary GA mating scheme. It begins by randomly selecting a variable in the first pair of parents to be the crossover point:

α  = roundup(random*Nvar)We’ll let  

 parent1 = [X m1 X m2 X m3… X mα …. X mNvar  ]

Parent 2 = [X d1 X d2 X d3 … X d α ….X dNvar  ]

Where the m and d subscripts discriminate between themom and the dad parent, then the selected variables arecombined to form new variables that will appear in thechildren:

 X new1 = X mα  −  λ [X mα − X d α  ] X new2 = X d α  +  λ [X mα − X d α  ]

Where λ is also a random value between 0 and 1, the finastep is to complete the crossover with the rest of thechromosome as before:

Offspring1 = [X m1 X m2 X m3… X new1…. X mNvar  ]Offspring 2  = [X d1 X d2 X d3… X new2 …. X dNvar  ]

If the first variable of the chromosomes is selected, then only the variables to the right of the selected variable are swapped. If the last variable of the chromosomes is selected, then only the variables to the left of the selected variable are swapped. This method does not allow offspring variables outside the bounds set by the parents unless λ > 1.
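The combined extrapolation/crossover scheme above can be sketched as follows (a Python translation of the MATLAB-style description; following the text, the blended value sits at the crossover variable and the tail after it is swapped between the parents):

```python
import random

def crossover(mom, dad):
    """Single-point blend crossover: pick a crossover variable alpha,
    blend it with a random lam in [0, 1], and swap the tail after alpha."""
    nvar = len(mom)
    alpha = random.randrange(nvar)      # crossover variable index
    lam = random.random()
    new1 = mom[alpha] - lam * (mom[alpha] - dad[alpha])
    new2 = dad[alpha] + lam * (mom[alpha] - dad[alpha])
    off1 = mom[:alpha] + [new1] + dad[alpha + 1:]
    off2 = dad[:alpha] + [new2] + mom[alpha + 1:]
    return off1, off2

mom, dad = [1.0, 2.0, 3.0], [7.0, 8.0, 9.0]
o1, o2 = crossover(mom, dad)
# With lam <= 1, every offspring gene lies within the bounds set by the parents.
assert all(min(a, b) <= c <= max(a, b) for a, b, c in zip(mom, dad, o1))
```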

E.g., intermediate recombination:
parent 1: 12 25 5
parent 2: 123 4 34


The lambdas are:
sample 1: 0.5 1.1 -0.1
sample 2: 0.1 0.8 0.5

The new individuals are calculated as offspring = parent1 + λ(parent2 - parent1):

offspring 1: 67.5 1.9 2.1
offspring 2: 23.1 8.2 19.5
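The worked numbers can be verified mechanically (Python):

```python
parent1 = [12, 25, 5]
parent2 = [123, 4, 34]
lam = [0.5, 1.1, -0.1]   # sample 1: one lambda per variable

# offspring = parent1 + lambda * (parent2 - parent1), variable by variable
offspring = [p1 + l * (p2 - p1) for p1, p2, l in zip(parent1, parent2, lam)]

expected = [67.5, 1.9, 2.1]
assert all(abs(o - e) < 1e-9 for o, e in zip(offspring, expected))
```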

4.2.7 Mutation
After selection and crossover, you now have a new population full of individuals. Some are directly copied, and others are produced by crossover. In order to ensure that the individuals are not all exactly the same, you allow for a small chance of mutation. You loop through all the alleles of all the individuals, and if an allele is selected for mutation, you can either change it by a small amount or replace it with a new value. The probability of mutation is usually small. Mutation is, however, vital to ensuring genetic diversity within the population. We force the routine to explore other areas of the cost surface by randomly introducing changes, or mutations, in some of the variables. We chose a mutation rate mutrate, giving

Nmut = ceil(popsize*Nvar*mutrate);   % total number of mutations

Next, random numbers are chosen to select the rows and the columns of the variables to be mutated:

mrow = ceil(rand(1, Nmut)*(popsize - 1)) + 1;
mcol = ceil(rand(1, Nmut)*Nvar);

A mutated variable is replaced by a new random variable. For example, suppose the following pairs were randomly selected:

mrow = [7 3 2 11]
mcol = [2 2 1 3]

The first random pair is (7, 2). Thus the value in row 7 and column 2 of the population matrix is replaced with a random number from the feasible region. To keep the solution in the feasible region after mutation, apply:

Pop = (ub(mcol) - lb(mcol)).*rand(1, Nmut) + lb(mcol);
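A Python sketch of this mutation step (the sizes and bounds are illustrative; row 0 plays the role of the elite chromosome that is excluded from mutation):

```python
import math
import random

popsize, nvar, mutrate = 10, 3, 0.2
lb = [0.0, -1.0, 2.0]
ub = [1.0,  1.0, 5.0]
pop = [[(ub[j] - lb[j]) * random.random() + lb[j] for j in range(nvar)]
       for _ in range(popsize)]

nmut = math.ceil(popsize * nvar * mutrate)   # total number of mutations
for _ in range(nmut):
    i = random.randrange(1, popsize)         # pick a row, skipping the elite row 0
    j = random.randrange(nvar)               # pick a column
    # replace the gene with a fresh uniform draw from the feasible box
    pop[i][j] = (ub[j] - lb[j]) * random.random() + lb[j]

# Mutation keeps every gene inside the feasible region
assert all(lb[j] <= pop[i][j] <= ub[j] for i in range(popsize) for j in range(nvar))
```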

3.2.8 Elitism
With crossover and mutation taking place, there is a high risk that the optimum solution could be lost, as there is no guarantee that these operators will preserve the fittest string. To counteract this, elitist preservation is used. Elitism is the name of the method that first copies the best chromosome (or a few best chromosomes) to the new population. In an elitist model, the best individual from a population is saved before any of these operations take place. After the new population is formed and evaluated, it is examined to see whether this best structure has been preserved. If not, the saved copy is reinserted back into the population, and the rest of the population is constructed as described above. Elitism can rapidly increase the performance of a GA, because it prevents the loss of the best found solution [10]. In this paper the chromosomes on the Pareto front are used as elite children and do not participate in the mutation operation.

4.2.8 Parameters of Genetic Algorithm
There are two basic parameters of a GA: crossover probability and mutation probability. Crossover probability determines how often crossover is performed. If there is no crossover, offspring are exact copies of the parents. If there is crossover, offspring are made from parts of both parents' chromosomes. If the crossover probability is 100%, then all offspring are made by crossover. If it is 0%, the whole new generation is made from exact copies of chromosomes from the old population (but this does not mean that the new generation is the same!). Crossover is made in the hope that the new chromosomes will contain the good parts of the old chromosomes and therefore be better. However, it is good to let some part of the old population survive to the next generation. The mutation probability is normally low, because a high mutation rate would destroy fit strings and degenerate the genetic algorithm into a random search. Mutation probability values around 0.1% to 0.01% are common in the literature; these values represent the probability that a certain string will be selected for mutation, i.e., for a probability of 0.1%, one string in one thousand will be selected for mutation. If there is no mutation, offspring are taken immediately after crossover (or directly copied) without any change. If mutation is performed, one or more parts of a chromosome are changed. If the mutation probability is 100%, the whole chromosome is changed; if it is 0%, nothing is changed. Mutation generally prevents the GA from falling into local extrema, but it should not occur very often, because then the GA would in effect change to a random search; with too high a mutation rate it becomes difficult to converge quickly to the global optimum. There are also other parameters of a GA. One particularly important parameter is the population size: how many chromosomes are in the population (in one generation). If there are too few chromosomes, the GA has few possibilities to perform crossover and only a small part of the search space is explored. On the other hand, if there are too many chromosomes, the GA slows down. Research shows that after some limit (which depends mainly on the encoding and the problem) it is not useful to use very large populations, because they do not solve the problem faster than moderately sized populations.

4.2.9 Termination conditions
Commonly, the algorithm terminates when either a maximum number of generations has been produced or a satisfactory fitness level has been reached for the population. If the algorithm has terminated due to a maximum number of generations, a satisfactory solution may or may not have been reached. Thus the generational process is repeated until a termination condition has been reached. Common terminating conditions are:

• A solution is found that satisfies minimum criteria
• A fixed number of generations is reached
• The allocated budget (computation time) is reached
• The highest-ranking solution's fitness is reaching or has reached a plateau such that successive iterations no longer produce better results


• Manual inspection
• Any combination of the above

Simple GA flow chart:
• Initialize P(t)
• Evaluate P(t): cost1 = f1(P(t)); cost2 = f2(P(t))
• [cost1, ind1] = sort(cost1)
• Reorder P(t) and cost2 based on sorted cost1
• rank = 1
• Assign the chromosomes on the Pareto front the value cost = rank
• Remove all chromosomes assigned the value of rank from the population
• rank = rank + 1
• Selection
• Apply crossover on P(t) to yield C(t)
• Apply mutation on C(t) to yield D(t)
• Select P(t+1) from P(t) and D(t) based on the fitness
• Check stopping criteria
• Stop
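The flow above can be assembled into a minimal end-to-end sketch (Python; the population size, generation count, mutation rate and bounds are illustrative assumptions, and test problem 1's objectives are used with a plain box instead of its linear constraints):

```python
import random

def evaluate(x):
    # Two linear objectives, as in test problem 1 (box constraints only here)
    return (3*x[0] + x[1], -x[0] - 2*x[1])

def dominates(a, b):
    return all(u <= v for u, v in zip(a, b)) and any(u < v for u, v in zip(a, b))

def pareto_ranks(costs):
    out, remaining, r = [0]*len(costs), set(range(len(costs))), 1
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(costs[j], costs[i]) for j in remaining if j != i)}
        for i in front:
            out[i] = r
        remaining -= front
        r += 1
    return out

lb, ub, npop, ngen = [0.0, 0.0], [3.0, 3.0], 20, 30
pop = [[random.uniform(l, u) for l, u in zip(lb, ub)] for _ in range(npop)]

for _ in range(ngen):
    rk = pareto_ranks([evaluate(x) for x in pop])
    children = [pop[i] for i in range(npop) if rk[i] == 1]   # elitism: keep front
    while len(children) < npop:
        # tournament selection on rank, then blend crossover and mutation
        i, j = (min(random.sample(range(npop), 2), key=lambda k: rk[k])
                for _ in range(2))
        lam = random.random()
        child = [lam*a + (1 - lam)*b for a, b in zip(pop[i], pop[j])]
        if random.random() < 0.1:                            # mutation
            g = random.randrange(len(child))
            child[g] = random.uniform(lb[g], ub[g])
        children.append(child)
    pop = children[:npop]

# All chromosomes remain inside the feasible box throughout
assert all(lb[k] <= x[k] <= ub[k] for x in pop for k in range(2))
```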

4.2.10 Test functions
1. A two-objective linear programming problem:

f(1) = 3*x(1) + x(2);
f(2) = -x(1) - 2*x(2);
s.t. 3*x(1) - x(2) ≤ 6; x(2) ≤ 3; x(1), x(2) ≥ 0

Figure 1a. Simulation Results

Figure 1b. Simulation Results

2. A bi-objective nonlinear problem (Rendon)

Minimize f1 = 1/((x1 − 3)² + (x2 − 3)² + 1)
Minimize f2 = (x1 − 3)² + 3(x2 − 3)² + 1
s.t. 0 ≤ x1, x2 ≤ 6

Figure 2a. Simulation Results

Figure 2b. Simulation Results

3. A two-objective warehouse location problem: minimize the total distance to the demand locations while keeping the distance from the fire station at (20,20) close to 10 units (using Euclidean distance).

Demand points (from the objective code below): [(0,45); (5,10); (20,45); (70,30); (80,5); (80,15)]; fire station: (20,20)

f(:,1) = sqrt((0-x(:,1)).^2+(45-x(:,2)).^2) + sqrt((5-x(:,1)).^2+(10-x(:,2)).^2) ...
       + sqrt((20-x(:,1)).^2+(45-x(:,2)).^2) + sqrt((70-x(:,1)).^2+(30-x(:,2)).^2) ...
       + sqrt((80-x(:,1)).^2+(5-x(:,2)).^2) + sqrt((80-x(:,1)).^2+(15-x(:,2)).^2);
f(:,2) = abs(sqrt((20-x(:,1)).^2+(20-x(:,2)).^2) - 10);
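A plain Python rendering of the same two objectives may be easier to read; the demand points are taken from the MATLAB listing above, and the function name is illustrative:

```python
import math

# demand points from the listing above; (20, 20) is the fire station
POINTS = [(0, 45), (5, 10), (20, 45), (70, 30), (80, 5), (80, 15)]
FIRE = (20, 20)

def warehouse_objectives(x1, x2):
    """f1: total Euclidean distance to the demand points;
    f2: deviation of the fire-station distance from 10 units."""
    f1 = sum(math.dist((x1, x2), p) for p in POINTS)
    f2 = abs(math.dist((x1, x2), FIRE) - 10.0)
    return f1, f2
```

Note that f2 is zero exactly on the circle of radius 10 around the fire station, which is what drives the second objective in the simulation.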

Figure 3a. Simulation Results



Figure 3b. Locations

4. A two-objective, two-variable minimization problem introduced by Deb (2001), chosen to illustrate the function of SPEA2.

Minimize f1(x) = x1
Minimize f2(x) = (x2 + 1)/x1
Subject to 0.1 ≤ x1 ≤ 1; 0 ≤ x2 ≤ 5

Figure 4a. Simulation Results

Figure 4b. Simulation Results

5. Constraint handling: a two-objective, two-variable minimization problem introduced by Deb (2001), chosen to illustrate the function of SPEA2.

Minimize f1(x) = x1
Minimize f2(x) = (x2 + 1)/x1
Subject to −x2 − 9x1 ≤ −6; x2 − 9x1 ≤ −1; 0.1 ≤ x1 ≤ 1; 0 ≤ x2 ≤ 5
(using an appropriate exterior penalty)

Figure 5a. Simulation Results

Figure 5b. Simulation Results
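The exterior penalty mentioned for problem 5 can be realized by adding a multiple of the squared constraint violation to each objective. Reading the constraints as x2 + 9x1 ≥ 6 and −x2 + 9x1 ≥ 1 (Deb's constrained test problem), a sketch follows; the penalty weight `mu` is an assumed value, not one reported in the paper:

```python
def penalized(f, x1, x2, mu=1e3):
    """Exterior penalty: add mu * (squared violation) to objective f.
    Constraints: x2 + 9*x1 >= 6 and -x2 + 9*x1 >= 1."""
    g1 = 6.0 - (x2 + 9.0*x1)      # positive means violated
    g2 = 1.0 - (-x2 + 9.0*x1)
    return f + mu*(max(g1, 0.0)**2 + max(g2, 0.0)**2)
```

Feasible points are left untouched, while infeasible ones receive a fitness that grows quadratically with the violation, steering the GA back into the feasible region.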

6. A three-objective test problem

Min f1 = x1
Min f2 = (x2 + 2)/x1
Min f3 = x1² + x2
ub = [4, 3]; lb = [0.1, 1]

Figure 6. Simulation Results
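With three objectives, as in this problem, the two-objective sorting trick no longer applies and a general dominance test is needed to extract the front. A small Python sketch (illustrative names):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b
    (no worse in every objective, strictly better in at least one)."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Indices of the nondominated members of a list of objective vectors."""
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p)
                       for j, q in enumerate(points) if j != i)]
```

This quadratic-time scan is adequate for moderate populations; faster schemes such as NSGA-II's fast nondominated sort exist for larger ones.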

7. Unconstrained linear minimization

Min f1 = x1 − 3x2
Min f2 = 3x1 + x2
Min f3 = x3 − 2x2
ub = [2, 2, 2]; lb = [1, 1, 0]

Figure 7. Simulation Results

8. Comet problem (Deb 2001)

Min f1 = (1 + x3)(x1³x2² − 10x1 − 4x2)


Min f2 = (1 + x3)(x1³x2² − 10x1 + 4x2)
Min f3 = 3(1 + x3)x1²
s.t. ub = [3.5, 2, 1]; lb = [1, −2, 0]

Figure 8. Simulation Results

4.2.11 Discussion of using genetic algorithms
Many search techniques require auxiliary information in order to work properly. For example, gradient techniques need derivatives in order to minimize or maximize a given objective, whereas genetic algorithms require none of this auxiliary information: the GA is blind, yet performs an effective search for better and better structures. This characteristic makes the GA a more suitable method than many search schemes. The GA uses probabilistic transition rules to guide its search towards regions of the search space with likely improvement. Because of these important characteristics, the GA is more robust than other commonly used techniques when the search space is large, complex, or loosely defined. It is also better than other optimization algorithms:

•  when domain knowledge is scarce or expert knowledge is difficult to encode to narrow the search space;
•  when no mathematical analysis is available;
•  when traditional search methods fail;
•  it can quickly scan a vast solution set;
•  bad proposals do not affect the end solution (it is independent of the initial feasible solution).

4.  Concluding remarks
In general, multi-objective optimization requires more computational effort than single-objective optimization. Genetic algorithms require several heuristic parameters, and choosing them is not necessarily straightforward; it may require significant experience. However, genetic multi-objective algorithms are relatively easy to use compared with other evolutionary algorithms. Classical methods require the user to specify preferences only in terms of the objective functions. Alternatively, genetic algorithm methods allow the user to view potential solutions in the criterion space or in the design space and to select an acceptable one, which helps the decision maker in tradeoff analysis. In this paper the algorithm is tested with linear, nonlinear, and constrained objective functions after an external penalty is carefully selected, and the performance is very good compared with classical methods and some evolutionary algorithms. It also simplifies the way the fitness function is represented relative to other evolutionary algorithms.

5.  Appendix A

%_______________________________________
% Some of the concepts of this m-file are taken from the MathWorks web site
% www.mathworks.com (genetic algorithm toolbox) and modified by the researcher.
% The algorithm can work for functions with more than two objectives with
% simple modification.

%_______________________________________
function f = ParetoGA   % for two objectives
% I. Parameter setup
warning off;
% ub = upper bound; lb = lower bound of the variables (variable limits)
A = [lb; ub];              % matrix of the variable bounds
nvar = length(A);          % number of optimization variables
Generations = 50;          % max number of iterations
selection = 0.5;           % fraction of population kept
Npop = 3000;               % population size
keep = selection*Npop;
M = ceil((Npop-keep)/2);   % number of matings
Pm = 0.2;                  % mutation rate
%_______________________________________

% II. Create the initial population
t = 0;                             % generation counter initialized
phen = intpop(lb,ub,Npop,nvar);    % random population of continuous values in matrix A
%_______________________________________
% III. Evaluation of the random population with the objective function
fout = feval('parto',phen);
% Iterate through generations
while t < Generations
t = t+1;                           % increment generation counter
[f1,ind] = sort(fout(:,1));        % sort the first objective values
phen = phen(ind,:);                % sort chromosomes
f2 = fout(ind,2); % f3 = fout(ind,3);

r = 0; rank = 1; elt = 0;
while r < Npop
    for j = 1:Npop
        if f2(j) <= min(f2(1:j))   % nondominated after sorting by f1
            r = r+1; elt(r) = j; value(r,1) = rank;
        end
        if rank == 1
            f11 = f1(elt); f21 = f2(elt); elite = length(elt);
        end
        if r == Npop, break; end
    end
    rank = rank+1;
end
phen = phen(elt,:);   % sort chromosomes by rank
value = value+1;
phen = phen(ind,:);
%_______________________________________
% iv. Selection
prob = flipud([1:keep]'/sum([1:keep]));   % weights of chromosomes
F = [0 cumsum(prob(1:keep))'];            % cumulative probability distribution
pick1 = rand(1,M);                        % mate #1
pick2 = rand(1,M);                        % mate #2
% mam and dad contain the indices of the chromosomes that will mate
ic = 1;
while ic <= M
    for id = 2:keep+1
        if pick1(ic) <= F(id) && pick1(ic) > F(id-1)
            mam(ic) = id-1;
        end
        if pick2(ic) <= F(id) && pick2(ic) > F(id-1)
            dad(ic) = id-1;
        end
    end


    ic = ic+1;
end
%_______________________________________
% v. Real-valued recombination
ii = 1:2:keep;                  % index of mate #1
pt = floor(rand(1,M)*nvar);     % crossover point
p = rand(1,M);
mix = phen(mam+Npop*pt) - phen(dad+Npop*pt);        % mix from mam and dad
phen(keep+ii+Npop*pt)   = phen(mam+Npop*pt) - p.*mix;  % 1st offspring
phen(keep+ii+1+Npop*pt) = phen(dad+Npop*pt) + p.*mix;  % 2nd offspring
if pt < nvar   % crossover when the last variable is selected
    phen(keep+ii,:)   = [phen(mam,1:pt) phen(dad,pt+1:nvar)];
    phen(keep+ii+1,:) = [phen(dad,1:pt) phen(mam,pt+1:nvar)];
end
%_______________________________________
% vi. Real-valued mutation
Nmut = ceil((Npop-elite)*nvar*Pm);   % total number of mutations
mrow = ceil(rand(1,Nmut)*(Npop-elite)) + elite;
mcol = ceil(rand(1,Nmut)*nvar);
mutindx = mrow + (mcol-1)*Npop;
phen(mutindx) = (ub(mcol)-lb(mcol)).*rand(1,Nmut) + lb(mcol);  % keep the random number within ub and lb
%_______________________________________
% vii. The new offspring and mutated chromosomes are evaluated for cost
row = sort(rem(mutindx,Npop)); iq = 1; rowmut(iq) = row(1);
for ic = 2:Nmut
    if row(ic) > rowmut(iq)
        iq = iq+1; rowmut(iq) = row(ic);
        if row(ic) > keep, break; end
    end
end
if rowmut(1) == 0
    rowmut = rowmut(2:length(rowmut));
end
fout(rowmut,:) = feval('parto',phen(rowmut,:));

fout(keep+ii:Npop,:)   = feval('parto',phen(keep+ii:Npop,:));
fout(keep+ii+1:Npop,:) = feval('parto',phen(keep+ii+1:Npop,:));
%_______________________________________
% viii. Stopping criteria
if t > Generations
    break;
end
%_______________________________________
% ix. Display the output
phen(1:10,:);
[phen(elt,:) fout];
figure(1);
plot(fout(:,1),fout(:,2),'k.',f11,f21,'r*','LineWidth',3);
xlabel('f1'); ylabel('f2'); grid on;
title('Pareto frontier in objective space');  % plot the Pareto frontier when there are two objectives
figure(2);
plot(phen(ind,1),phen(ind,2),'*');
xlabel('x1'); ylabel('x2'); grid on;
title('decision space');
pause(0.01);
end % while t
format short g
disp('Pareto front and objective value')
disp('   x1       x2      f(x1)     f(x2)')
disp(num2str(phen(elt,:)))
%_______________________________________
% Generating the matrix of the initial population
function phen = intpop(lb,ub,Npop,nvar)
for ii = 1:Npop
    phen(ii,:) = (ub-lb).*rand(1,nvar) + lb;
end

6.  Acknowledgment
First of all, I would like to express my deepest and special thanks to the Department of Mathematics, ASTU; all members of the department are gratefully acknowledged for their encouragement and support. I would like to extend my thanks to Adama Science and Technology University for sponsorship and financial support. Lastly, I would like to forward my grateful thanks to my family for their incredible support and encouragement.

7.  References
[1].  David A. Coley, An Introduction to Genetic Algorithms for Scientists and Engineers, World Scientific Publishing Co. Pte. Ltd., 1999.

[2].  R. I. Bot, S.-M. Grad, G. Wanka, Duality in Vector Optimization, Springer-Verlag, Berlin Heidelberg, 2009.

[3].  Deb, K., Multi-Objective Optimization Using Evolutionary Algorithms, Wiley, 2001.

[4].  Deb, K., Agrawal, S., Pratap, A., Meyarivan, T., A Fast Elitist Non-Dominated Sorting Genetic Algorithm for Multi-Objective Optimization: NSGA-II, 2000.

[5].  M. Ehrgott, Multicriteria Optimization, Springer, New York, 2005.

[6].  G. Eichfelder, Vector Optimization, Springer, Berlin Heidelberg, 2008.

[7].  Fonseca, C. M. and Fleming, P. J., An Overview of Evolutionary Algorithms in Multiobjective Optimization, 1995.

[8].  C. J. Goh and X. Q. Yang, Duality in Optimization and Variational Inequalities, Taylor and Francis, New York, 2002.

[9].  Knowles, J. D. and Corne, D. W., The Pareto Archived Evolution Strategy: A New Baseline Algorithm for Multiobjective Optimization, Proceedings of the 1999 Congress on Evolutionary Computation.

[10]. Kokolo, I., Kita, H., and Kobayashi, S., Failure of Pareto-Based MOEAs: Does Non-Dominated Really Mean Near to Optimal?, Proceedings of the Congress on Evolutionary Computation 2001.

[11]. John R. Koza, Genetic Programming, Massachusetts Institute of Technology, 1992.

[12]. Randy L. Haupt and Sue Ellen Haupt, Practical Genetic Algorithms, Second Edition, John Wiley & Sons, Inc., 2004.

[13]. Y. Sawaragi, H. Nakayama and T. Tanino, Theory of Multiobjective Optimization, Academic Press, 1985.


[14]. M. Semu, On Cone d.c. Optimization and Conjugate Duality, Chin. Ann. Math., 24B:521-528, 2003.

[15]. R. E. Steuer, Multiple Criteria Optimization: Theory, Computation and Application, John Wiley & Sons, New York, 1986.

[16]. S. N. Sivanandam and S. N. Deepa, Introduction to Genetic Algorithms, Springer-Verlag, Berlin Heidelberg, 2008.

[17]. http://www.mathworks.com.