
An Enhanced GA Technique for System Optimization

Dezhi Li, Student Member, IEEE, Wilson Wang*, Senior Member, IEEE, Fathy Ismail

Abstract-The commonly used genetic algorithms (GAs) have some shortcomings in applications, such as lengthy computations and slow convergence. A novel enhanced genetic algorithm (EGA) technique is developed in this paper to overcome these problems and to enhance the efficiency of system training and optimization. The proposed EGA technique involves two approaches: a) a novel group-based branch crossover operator is suggested to thoroughly explore the local space and to speed up convergence, and b) an enhanced MPT (Makinen-Periaux-Toivanen) mutation operator is proposed to promote the global search capability. The effectiveness of the developed EGA is verified by simulations using benchmark test problems. Test results show that the EGA technique can improve on the classical GA methods with respect to convergence speed and global search capability.

Keywords: branch crossover, enhanced MPT mutation, genetic algorithm, optimization.

I. INTRODUCTION

The genetic algorithm (GA) is a commonly used global search method, which has been employed in many optimization [1-3] and training [4-6] applications. The GA is a derivative-free stochastic optimization algorithm that has several advantages over related classical optimization methods such as gradient-based methods. For example, 1) it can be used for parallel-processing operations, 2) its search space (both continuous and discrete) is more flexible, and 3) its stochastic attributes can help it escape local minima [7]. Unlike the classical optimization algorithms, the GA operates on a set of points as a population, which is then evolved towards a higher fitness grade [8]. In each generation, the GA constructs a new population using genetic operators such as elitism, crossover and mutation. Individuals with higher fitness are more likely to be selected to participate in mating operations, and the entire population is upgraded to form a new generation with a higher fitness grade.

Usually, individuals are encoded as binary strings. In general applications, however, mapping real values to binary strings may degrade processing efficiency [9] and precision [10]. To solve these problems, several operators have been designed for real-coded GA-based optimization. For example, the Laplace crossover was suggested in [11] together with the Makinen-Periaux-Toivanen (MPT) mutation to speed up search operations. A parent-centric real-parameter crossover operator was proposed in [12] to select male and female parents separately, with the offspring close to the female parent. Furthermore, several approaches have been proposed in the literature to improve the convergence of the GA based on, for example, a nonlinear transformation of genetic operators [13], a feedback dynamic penalty function [14], an annealing function [15], and a differential operator [16]. However, GAs using these classical crossover and mutation operators still have some shortcomings in applications, such as slow convergence (time-consuming operations) and a relatively poor capability to attain the global minimum, especially when the search space is complex.

To tackle the aforementioned problems in the classical GA-based methods, the objective of this work is to develop an enhanced GA (EGA) technique for system training and optimization. It is new in the following aspects: 1) a new branch crossover (BC) operator is proposed to speed up the search process and expand the local search space; 2) an enhanced Makinen-Periaux-Toivanen mutation (EMM) operator is developed to improve the global search capability of the EGA technique; 3) the EGA technique is implemented for real-time machinery state prognostics. The effectiveness of the proposed EGA technique is verified by simulation tests. Test results show that the developed EGA is an efficient optimization technique in terms of success rate and execution time.

The remainder of this paper is organized as follows: the developed EGA technique is discussed in Section II. The effectiveness of the proposed EGA technique is verified by simulation in Section III. Some concluding remarks are summarized in Section IV.

II. THE ENHANCED GA TECHNIQUE

The developed EGA technique [17] aims to improve the convergence speed and global search capability. It consists of a novel branch crossover operator and an enhanced MPT mutation operator, which are discussed below.
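For orientation only, the following minimal Python sketch (ours, not from the paper) shows how the two operators would slot into a standard real-coded GA generation with elitism and fitness-biased mating; the names `evolve_one_generation`, `crossover`, `mutate` and the rank-based selection weights are illustrative assumptions, not the authors' implementation.

```python
import random

def evolve_one_generation(population, fitness_fn, crossover, mutate, n_elites=2):
    """One EGA-style generation: elitism, fitness-biased mating, crossover, mutation.

    `crossover` and `mutate` stand in for the BC and EMM operators described
    in the following subsections; their exact signatures here are illustrative.
    """
    ranked = sorted(population, key=fitness_fn, reverse=True)   # higher fitness first
    next_gen = ranked[:n_elites]                                # elitism: carry over the best members

    while len(next_gen) < len(population):
        # individuals with higher fitness are more likely to be selected for mating
        # (rank-based weights; sampled with replacement for simplicity)
        group = random.choices(ranked, weights=range(len(ranked), 0, -1), k=3)
        child = crossover(group, fitness_fn)                    # e.g., the group-based BC operator
        next_gen.append(mutate(child))                          # e.g., the EMM operator
    return next_gen
```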

A. The Proposed Branch Crossover Operator

The heuristic crossover (HC) operator is one of the commonly used classical crossover operators in GA methods. It generates a new member based on a linear combination of two parents, as illustrated in Fig. 1. Its search direction is determined by the fitness of the parents and its search step is selected according to the distance between the parents. For example, given a pair of parents $X^{(1)} = (x_1^{(1)}, x_2^{(1)}, \ldots, x_n^{(1)})$ and $X^{(2)} = (x_1^{(2)}, x_2^{(2)}, \ldots, x_n^{(2)})$ with fitness $\bar{x}^{(1)}$ and $\bar{x}^{(2)}$, where $\bar{x}^{(1)} > \bar{x}^{(2)}$, an offspring $Y = (y_1, y_2, \ldots, y_n)$ is generated by

$y_i = x_i^{(1)} + u\,(x_i^{(1)} - x_i^{(2)}), \quad i = 1, 2, \ldots, n \qquad (1)$

where u is a random number uniformly distributed in the range [0, 1].


Fig. 1. The crossover operation of the HC operator.

If the derived offspring lies beyond the feasible region, a new random number u is generated to produce another offspring using Eq. (1). If it fails to produce a feasible offspring after a certain number of attempts, the HC operator randomly selects a point over the feasible region in place of the infeasible offspring produced. Accordingly, an offspring generated by the HC will reduce the search dimension and lead to insufficient exploration of the local minima.
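As an illustration, here is a minimal sketch of the HC operator, assuming the conventional form $y_i = x_i^{(1)} + u\,(x_i^{(1)} - x_i^{(2)})$ for Eq. (1) together with the retry-then-random fallback described above; the helper name `heuristic_crossover`, the `bounds` representation and the `max_tries` limit are illustrative choices, not from the paper.

```python
import random

def heuristic_crossover(x1, x2, bounds, max_tries=10):
    """HC sketch: offspring on the ray from the worse parent x2 through the better parent x1.

    `bounds` is a list of (lower, upper) pairs; x1 is assumed to have the higher
    fitness. Retries with a new u if the offspring is infeasible, then falls back
    to a random feasible point.
    """
    for _ in range(max_tries):
        u = random.random()
        y = [a + u * (a - b) for a, b in zip(x1, x2)]          # Eq. (1), per component
        if all(lo <= yi <= hi for yi, (lo, hi) in zip(y, bounds)):
            return y
    # fallback: random point over the feasible region
    return [random.uniform(lo, hi) for lo, hi in bounds]
```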

The Laplace crossover (LC) operator is another classical crossover approach, whose offspring are placed symmetrically about the positions of the parents. However, it explores the local space in a stochastic fashion without the fitness guidance of the parents, which causes redundant calculations in local search operations and thus takes more execution time.

To speed up the search process and expand the local search space, the BC operator is proposed in this work for local search, as demonstrated in Fig. 2. Three members are selected as a group: $X^{(1)} = (x_1^{(1)}, x_2^{(1)}, \ldots, x_n^{(1)})$, $X^{(2)} = (x_1^{(2)}, x_2^{(2)}, \ldots, x_n^{(2)})$ and $X^{(3)} = (x_1^{(3)}, x_2^{(3)}, \ldots, x_n^{(3)})$, where $X^{(1)}$ and $X^{(3)}$ have the largest and smallest fitness among the three, respectively; $n$ is the dimension of the problem. The intermediate point $Y^{(1)} = (y_1^{(1)}, y_2^{(1)}, \ldots, y_n^{(1)})$ is formulated by

(2)

where $u_1$ is a random number uniformly distributed over [0, 1].

Fig. 2. The crossover operation of the proposed BC operator.

The intermediate point $Y^{(2)} = (y_1^{(2)}, y_2^{(2)}, \ldots, y_n^{(2)})$ is generated by

(3)

where $u_2 = \dfrac{\bar{x}_2 - \bar{x}_3}{(\bar{x}_1 - \bar{x}_3) + (\bar{x}_2 - \bar{x}_3)}$, and $\bar{x}_1$, $\bar{x}_2$ and $\bar{x}_3$ are the normalized fitness values of $X^{(1)}$, $X^{(2)}$ and $X^{(3)}$, respectively. Since $X^{(1)}$ has a larger fitness than $X^{(2)}$ (i.e., $\bar{x}_1 > \bar{x}_2$), as stated before, it is seen that $u_2 \in [0, 0.5]$. To facilitate implementation, let $u_2 = 0$ if $(\bar{x}_1 - \bar{x}_3) + (\bar{x}_2 - \bar{x}_3) = 0$. Thus the offspring $Y^{(3)} = (y_1^{(3)}, y_2^{(3)}, \ldots, y_n^{(3)})$ is formed as

(4)

where $u_3 \in [0, 1]$ is a constant.

The offspring generated by the proposed BC operator is located around the best individual in the group (i.e., among the three selected members), while the local search space can be further expanded. The method of deriving $Y$ in the HC, as illustrated in Fig. 1, may not yield the optimal offspring. Our solution in the proposed BC operator is to employ more variant factors to formulate the offspring. For the BC operator, the final offspring $Y^{(3)}$ is located according to position $Y^{(2)}$, along the line determined by $X^{(2)}$ and $Y^{(1)}$, and on the extension of the segment from $X^{(3)}$ to $Y^{(2)}$. With the help of these new variant factors, the local space can be explored more effectively. If the derived offspring lies beyond the feasible region, new random numbers $u_1$ and $u_3$ will be generated to produce another offspring using Equations (2)-(4). If it fails to produce a feasible offspring after some attempts, the BC operator will randomly select a point over the feasible region to substitute for the original infeasible offspring.
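A minimal sketch of the BC operator follows. Since the bodies of Eqs. (2)-(4) are not legible in this copy, the three update steps below are assumptions consistent with the description above ($Y^{(1)}$ stepping from $X^{(2)}$ through the best member, $Y^{(2)}$ on the line between $X^{(2)}$ and $Y^{(1)}$ at the fitness-driven fraction $u_2$, and $Y^{(3)}$ on the extension of the segment from $X^{(3)}$ through $Y^{(2)}$); all names and defaults are illustrative.

```python
import random

def branch_crossover(group, fitness_fn, bounds, u3=0.5, max_tries=10):
    """BC sketch for a group of three members (Eqs. (2)-(4)); assumed forms.

    `bounds` is a list of (lower, upper) pairs. The fraction u2 depends only on
    fitness differences, so raw fitness values give the same ratio as any
    affine-normalized fitness.
    """
    x1, x2, x3 = sorted(group, key=fitness_fn, reverse=True)    # largest fitness first
    f1, f2, f3 = fitness_fn(x1), fitness_fn(x2), fitness_fn(x3)

    for _ in range(max_tries):
        u1 = random.random()
        y1 = [a + u1 * (a - b) for a, b in zip(x1, x2)]         # assumed form of Eq. (2)

        denom = (f1 - f3) + (f2 - f3)
        u2 = 0.0 if denom == 0 else (f2 - f3) / denom           # u2 in [0, 0.5], per the text
        y2 = [b + u2 * (a - b) for a, b in zip(y1, x2)]         # assumed form of Eq. (3)

        y3 = [b + u3 * (b - c) for b, c in zip(y2, x3)]         # assumed form of Eq. (4)

        if all(lo <= yi <= hi for yi, (lo, hi) in zip(y3, bounds)):
            return y3
    return [random.uniform(lo, hi) for lo, hi in bounds]        # fallback: random feasible point
```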

B. The Proposed EMM Mutation Operator

The commonly used classical mutation operators in unconstrained optimization include the non-uniform mutation (NUM) and the MPT mutation, each having its own merits and limitations. For example, although the MPT mutation explores the search space more efficiently than the NUM, the NUM is more productive in evolving the search coverage than the MPT mutation. For instance, given a point $x^t = (x_1^t, x_2^t, \ldots, x_n^t)$, the NUM builds the mutated point $x^{t+1} = (x_1^{t+1}, x_2^{t+1}, \ldots, x_n^{t+1})$ by

$x_i^{t+1} = \begin{cases} x_i^t + \Delta(t,\, x_i^u - x_i^t), & \text{if } r \le 0.5 \\ x_i^t - \Delta(t,\, x_i^t - x_i^l), & \text{otherwise} \end{cases} \qquad (5)$

where $t$ is the generation number and $r$ is a value randomly selected over [0, 1]; $x_i^l$ and $x_i^u$ are the respective lower and


upper bounds of the ith component of the decision vector. In addition,

$\Delta(t, y) = y\left(1 - a^{(1 - t/T)^{b}}\right) \qquad (6)$

where $a$ is a random number uniformly distributed over [0, 1]; $T$ is the maximum number of generations; and $b$ is a parameter related to the strength of the mutation operator.
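As a quick illustration, here is a minimal sketch of the NUM as given by Eqs. (5)-(6); the function name, the default value of b and the `bounds` representation (a list of (lower, upper) pairs) are illustrative choices.

```python
import random

def non_uniform_mutation(x, t, T, bounds, b=2.0):
    """NUM sketch (Eqs. (5)-(6)): the mutation step shrinks as generation t -> T."""
    def delta(y):
        a = random.random()
        return y * (1.0 - a ** ((1.0 - t / T) ** b))            # Eq. (6)

    mutated = []
    for xi, (lo, hi) in zip(x, bounds):
        if random.random() <= 0.5:
            mutated.append(xi + delta(hi - xi))                 # move toward the upper bound
        else:
            mutated.append(xi - delta(xi - lo))                 # move toward the lower bound
    return mutated
```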

Different from the NUM, the MPT mutation keeps its search strength throughout the whole search process. As the generations increase, however, the fitness of the elite individuals increases accordingly; hence it becomes difficult for the MPT mutation to spot a new individual that beats the current elites once the population has evolved to a certain extent, and the global search speed slows down accordingly. For example, given a point $x = (x_1, x_2, \ldots, x_n)$, the MPT mutation formulates a mutated point $\hat{x} = (\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_n)$ by

$\hat{x}_i = (1 - \hat{t})\,x_i^l + \hat{t}\,x_i^u \qquad (7)$

where $x_i^l$ and $x_i^u$ are the respective lower and upper bounds of the $i$th decision variable; $\hat{t}$ is represented as

$\hat{t} = \begin{cases} t_i - t_i\left(\dfrac{t_i - r}{t_i}\right)^{b}, & \text{if } r < t_i \\ t_i, & \text{if } r = t_i \\ t_i + (1 - t_i)\left(\dfrac{r - t_i}{1 - t_i}\right)^{b}, & \text{if } r > t_i \end{cases} \qquad (8)$

where $r$ is a random number uniformly distributed over [0, 1], and $t_i = \dfrac{x_i - x_i^l}{x_i^u - x_i^l}$.
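A minimal sketch of the MPT mutation, assuming the standard form $\hat{x}_i = (1 - \hat{t})x_i^l + \hat{t}x_i^u$ for Eq. (7) with $\hat{t}$ from Eq. (8); the function name and the default b are illustrative.

```python
import random

def mpt_mutation(x, bounds, b=2.0):
    """MPT mutation sketch (Eqs. (7)-(8)): each component is remapped between its
    bounds through the perturbed ratio t_hat. `bounds` is a list of (lower, upper) pairs.
    """
    mutated = []
    for xi, (lo, hi) in zip(x, bounds):
        ti = (xi - lo) / (hi - lo)                              # relative position in [lo, hi]
        r = random.random()
        if r < ti:
            t_hat = ti - ti * ((ti - r) / ti) ** b
        elif r > ti:
            t_hat = ti + (1.0 - ti) * ((r - ti) / (1.0 - ti)) ** b
        else:
            t_hat = ti
        mutated.append((1.0 - t_hat) * lo + t_hat * hi)         # Eq. (7)
    return mutated
```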

The proposed EMM operator aims to maintain superior search ability in the first few generations to improve the global search capability, and to evolve the whole search space as the number of generations increases so as to fully explore the local space. If $x = (x_1, x_2, \ldots, x_n)$ and $\hat{x} = (\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_n)$ are the selected parent point and the mutated point, respectively, the operation of the proposed EMM operator is

(9)

where

$\hat{t}_i = \begin{cases} t_i - t_i\left(\dfrac{t_i - r_i}{t_i}\right)^{b}, & \text{if } r_i < t_i \\ t_i, & \text{if } r_i = t_i \\ t_i + (1 - t_i)\left(\dfrac{r_i - t_i}{1 - t_i}\right)^{b}, & \text{if } r_i > t_i \end{cases} \qquad (10)$

$t_i = \dfrac{x_i - x_i^l}{x_i^u - x_i^l} \qquad (11)$

(12)

where $r_i$ is a random number over [0, 1]; $x_i^l$ and $x_i^u$ are the respective lower and upper bounds of the $i$th decision variable; $v \in [0, 1]$ is a uniformly distributed random number; $T$ is the maximum number of generations; and $b$ and $p$ determine the strength of the mutation and the evolving speed of the search space, respectively.
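Because Eqs. (9) and (12) are not fully legible in this copy, the following sketch is an assumed variant rather than the authors' exact EMM: with a probability that decays as $(1 - t/T)^p$ it applies the MPT-style global remapping, and otherwise a NUM-style shrinking local step, mirroring the stated aim of strong global search in early generations and finer local exploration later. All names, defaults and the blending rule are assumptions.

```python
import random

def emm_mutation(x, t, T, bounds, b=2.0, p=2.0):
    """EMM sketch (assumed variant). `v` plays the role of the uniformly
    distributed random number mentioned in the text; b and p set the mutation
    strength and the evolving speed of the search space.
    """
    mutated = []
    for xi, (lo, hi) in zip(x, bounds):
        v = random.random()
        if v <= (1.0 - t / T) ** p:
            # global, MPT-like remapping between the bounds (dominant early on)
            ti = (xi - lo) / (hi - lo)
            r = random.random()
            if r < ti:
                t_hat = ti - ti * ((ti - r) / ti) ** b
            elif r > ti:
                t_hat = ti + (1.0 - ti) * ((r - ti) / (1.0 - ti)) ** b
            else:
                t_hat = ti
            mutated.append((1.0 - t_hat) * lo + t_hat * hi)
        else:
            # local, NUM-like step that shrinks as t -> T (dominant later on)
            a = random.random()
            step = 1.0 - a ** ((1.0 - t / T) ** b)
            if random.random() <= 0.5:
                mutated.append(xi + step * (hi - xi))
            else:
                mutated.append(xi - step * (xi - lo))
    return mutated
```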

III. PERFORMANCE EVALUATION

A. Overview

To verify the effectiveness of the developed EGA technique, a series of tests is conducted in this section based on some commonly used benchmark problems in the area of optimization. The performance of the EGA technique is compared with that of other related methods, summarized as follows:

BC operator:
Scheme-1: the GA with the LC operator and the MPT mutation operator;
Scheme-2: the GA with the HC operator and the MPT mutation operator;
Scheme-3: the GA with the proposed BC operator and the MPT mutation operator.

EMM operator:
Scheme-4: the GA with the LC operator and the NUM operator;
Scheme-5: the GA with the LC operator and the MPT mutation operator;
Scheme-6: the GA with the LC operator and the proposed EMM operator.

Comprehensive operation:
Scheme-7: the GA with the LC and MPT mutation operators;
Scheme-8: the EGA with the BC operator and the EMM operator.

The tests will be conducted based on the following 5 commonly used benchmark test functions [18]:

Problem #1: the Dixon and Price function:
$\min f(x) = (x_1 - 1)^2 + \sum_{i=2}^{n} i\,(2x_i^2 - x_{i-1})^2$,
$-10 \le x_j \le 10,\ j = 1, 2, 3, 4, 5$; $f(x^*) = 0$.


Problem #2: the Rosenbrock function:

$\min f(x) = \sum_{i=1}^{n-1}\left[100\,(x_i^2 - x_{i+1})^2 + (x_i - 1)^2\right]$,
$-5 \le x_j \le 10,\ j = 1, 2, 3$; $x^* = (1, 1, 1)$, $f(x^*) = 0$.

Problem #3: the Sum Squares function:

$\min f(x) = \sum_{i=1}^{n} i\,x_i^2$,
$-10 \le x_j \le 10,\ j = 1, 2, \ldots, n;\ n = 10$; $x^* = (0, \ldots, 0)$, $f(x^*) = 0$.

Problem #4: the Sphere function:
$\min f(x) = \sum_{i=1}^{n} x_i^2$,
$-5.12 \le x_j \le 5.12,\ j = 1, 2, \ldots, n;\ n = 20$; $x^* = (0, \ldots, 0)$, $f(x^*) = 0$.

Problem #5: the Shekel function:

$\min f(x) = -\sum_{j=1}^{m}\left[\sum_{i=1}^{4}(x_i - C_{ij})^2 + \beta_j\right]^{-1}$,
$\beta = \frac{1}{10}\,[1, 2, 2, 4, 4, 6, 3, 7, 5, 5]^{T}$, $m = 10$,

$C = \begin{bmatrix} 4 & 1 & 8 & 6 & 3 & 2 & 5 & 8 & 6 & 7 \\ 4 & 1 & 8 & 6 & 7 & 9 & 5 & 1 & 2 & 3 \\ 4 & 1 & 8 & 6 & 3 & 2 & 3 & 8 & 6 & 7 \\ 4 & 1 & 8 & 6 & 7 & 9 & 3 & 1 & 2 & 3 \end{bmatrix}$,

$0 \le x_j \le 10,\ j = 1, 2, 3, 4$; $x^* = (4, 4, 4, 4)$, $f(x^*) = -10.5364$.

The simulation tests are terminated once the error becomes less than $E_d = \{10^{-6}, 10^{-5}, 10^{-5}, 10^{-6}, 10^{-4}\}$ for problems #1-#5, respectively, or when the maximum generation number ($10^4$ in this case) is reached.
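For reproducibility, here is a minimal sketch of two of the benchmark functions and the termination check; the tolerances $E_d$ and the $10^4$ generation cap are taken from the text, while the function and parameter names are ours, and the remaining benchmark functions follow the same pattern.

```python
def sum_squares(x):
    """Problem #3: f(x) = sum_i i * x_i^2, global minimum 0 at the origin."""
    return sum((i + 1) * xi ** 2 for i, xi in enumerate(x))

def rosenbrock(x):
    """Problem #2: f(x) = sum_i [100 (x_i^2 - x_{i+1})^2 + (x_i - 1)^2], minimum 0 at (1, ..., 1)."""
    return sum(100.0 * (x[i] ** 2 - x[i + 1]) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def terminated(best_value, optimum, generation, tol, max_gen=10_000):
    """Stop when the error falls below the problem's tolerance Ed or the generation cap is hit."""
    return abs(best_value - optimum) < tol or generation >= max_gen

# example: Problem #3 uses a tolerance of 1e-5 and optimum 0
print(terminated(sum_squares([0.001] * 10), optimum=0.0, generation=200, tol=1e-5))
```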

B. Comparison of the Crossover Operators

The first test verifies the effectiveness of the proposed BC operator (Scheme-3). The comparison is with the classical LC (Scheme-1) and HC (Scheme-2) operators. All the crossover operators are combined with the MPT mutation for mutation operations. Figures 3-5 show the averaged test results of Scheme-1 to Scheme-3, corresponding to the different crossover operators (LC, HC, and the proposed BC operator). From Fig. 3, Scheme-3 provides the best success rates for these test problems. Although Scheme-3 could not reach a 100% success rate for Problem #5 under the employed termination criteria, it still performs better than Schemes 1 and 2. By examining Figures 4 and 5, it is clear that the developed BC operator (Scheme-3) outperforms the classical HC and LC operators in terms of the number of function evaluations (Fig. 4) and the execution time (Fig. 5). The proposed BC operator can explore the local space more thoroughly than the other crossover operators. It takes far fewer generations to reach the minima compared with the classical crossover operators, and thus its overall search speed is improved significantly. On the other hand, from Fig. 4 and Fig. 5, it is seen that there is no big difference between the performance of the HC operator (Scheme-2) and the LC operator (Scheme-1) in terms of execution time and number of evaluations; that is, each of these two classical crossover operators has its own advantages and disadvantages in optimization applications.

Fig. 3. The comparison of Scheme-1 (triangle line), Scheme-2 (square line) and Scheme-3 (circle line) in terms of success rate.

Fig. 4. The comparison of Scheme-1 (triangle line), Scheme-2 (square line) and Scheme-3 (circle line) in terms of number of function evaluations.

Fig. 5. The comparison of Scheme-1 (triangle line), Scheme-2 (square line) and Scheme-3 (circle line) in terms of execution time.

C. Mutation Comparison

Figures 6-8 illustrate the averaged test results using the proposed EMM mutation operator (Scheme-6) compared with the classical NUM (Scheme-4) and MPT mutation (Scheme-5) operators. The LC operator is utilized for crossover operations in all three schemes. It is seen from Fig. 6 that the suggested EMM method provides the highest success rate compared with the classical NUM (Scheme-4)


and MPT (Scheme-5) mutation operators under the applied termination criteria.

From Fig. 7 and Fig. 8, it is seen that the overall performance and execution time of the classical NUM (Scheme-4) and MPT mutation (Scheme-5) operators are similar for these test problems. However, the suggested EMM operator (Scheme-6) provides the best performance, taking not only the fewest function evaluations (Fig. 7) but also the shortest execution time (Fig. 8). The EMM operator can enhance the global search capability of the EGA technique by preventing possible trapping in local minima.

Fig. 6. The comparison of Scheme-4 (triangle line), Scheme-5 (square line) and Scheme-6 (circle line) in terms of success rate.

Fig. 7. The comparison of Scheme-4 (triangle line), Scheme-5 (square line) and Scheme-6 (circle line) in terms of number of function evaluations.

Fig. 8. The comparison of Scheme-4 (triangle line), Scheme-5 (square line) and Scheme-6 (circle line) in terms of execution time.

D. Comprehensive Comparison

More tests are conducted in this subsection to verify the effectiveness of the proposed EGA technique with both the BC and EMM operators (Scheme-8). Its performance is compared with a classical GA using the LC and MPT mutation methods (Scheme-7). Figures 9-11 demonstrate the comparison results using the same benchmark test functions. It is clear that Scheme-8 outperforms the classical GA method in terms of success rate, number of function evaluations and execution time. The EGA is the only scheme that achieves a 100% success rate for all of the tested examples (Fig. 9); it is even superior to Scheme-3 using the BC and MPT mutation operators (Fig. 3) and to Scheme-6 using the EMM and LC operators (Fig. 6), especially for problems with complex search spaces such as problems #1, #2 and #5. Therefore, it can be concluded that the developed EGA technique can effectively improve the convergence speed, success rate and global search capability of the classical GA methods.

Fig. 9. The comparison of Scheme-7 (triangle line) and Scheme-8 (circle line) in terms of success rate.

Fig. 10. The comparison of Scheme-7 (triangle line) and Scheme-8 (circle line) in terms of number of function evaluations.

Fig. 11. The comparison of Scheme-7 (triangle line) and Scheme-8 (circle line) in terms of execution time.

IV. CONCLUSION

A new enhanced GA (EGA) technique is developed in this work for system training and optimization. A novel BC operator is proposed to speed up the search process and expand the local search space. An EMM operator is suggested to prevent possible trapping in local minima. The effectiveness of the developed EGA and the related techniques has been verified by simulation tests corresponding to five benchmark test examples. Test results have shown that both the BC and EMM operators outperform their classical counterparts in terms of execution time and success rate. The EGA with the BC and EMM methods provides superior performance over the classical GA methods in terms of convergence and global search capability. In future work, the proposed EGA method will be applied to motor fault detection applications.

ACKNOWLEDGMENT

This work was financially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and MC Technologies Inc.

REFERENCES

[1] S. Venkatraman, G. G. Yen, "A generic framework for constrained optimization using genetic algorithms," IEEE Transactions on Evolutionary Computation, vol. 9, no. 4, pp. 424-435, 2005.
[2] R. Kumar, K. Izui, M. Yoshimura, S. Nishiwaki, "Multilevel redundancy allocation optimization using hierarchical genetic algorithm," IEEE Transactions on Reliability, vol. 57, no. 4, pp. 650-661, 2008.
[3] J. Tsai, T. Liu, J. Chou, "Hybrid Taguchi-genetic algorithm for global numerical optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 4, pp. 365-377, 2004.
[4] W. Wang, D. Kanneg, "An integrated classifier for condition monitoring in transmission systems," Mechanical Systems and Signal Processing, vol. 23, no. 4, pp. 1298-1312, 2009.
[5] C. Wang, H. Liu, C. Lin, "Dynamic optimal learning rates of a certain class of fuzzy neural networks and its applications with genetic algorithm," IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, vol. 31, no. 3, pp. 467-475, 2001.
[6] W. Wang, Y. Li, "Evolutionary learning of BMF fuzzy-neural networks using a reduced-form genetic algorithm," IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, vol. 33, no. 6, pp. 966-976, 2003.
[7] J.-S. R. Jang, C.-T. Sun, E. Mizutani, Neuro-Fuzzy and Soft Computing, New Jersey: Prentice Hall, 1997.
[8] W. Wang, F. Ismail, F. Golnaraghi, "A neuro-fuzzy approach to gear system monitoring," IEEE Transactions on Fuzzy Systems, vol. 12, pp. 710-723, 2004.
[9] L. Chen, "Real Coded Genetic Algorithm Optimization," Journal of the American Water Resources Association, vol. 39, no. 5, pp. 1157-1165, 2003.
[10] T. Srikanth, V. Kamala, "A Real Coded Genetic Algorithm for Optimization of Cutting Parameters in Turning," International Journal of Computer Science and Network Security, vol. 8, no. 6, pp. 189-193, 2008.
[11] K. Deep, M. Thakur, "A new crossover operator for real coded genetic algorithms," Applied Mathematics and Computation, vol. 188, pp. 895-911, 2007.
[12] C. Garcia-Martinez, M. Lozano, F. Herrera, D. Molina, A. M. Sanchez, "Global and local real-coded genetic algorithms based on parent-centric crossover operators," European Journal of Operational Research, vol. 185, pp. 1088-1113, 2008.
[13] Z. H. Cui, J. C. Zeng, Y. B. Xu, "A new nonlinear genetic algorithm for numerical optimization," IEEE International Conference on Systems, Man and Cybernetics, vol. 5, pp. 4660-4663, 2003.
[14] W. Paszkowicz, "Properties of a genetic algorithm equipped with a dynamic penalty function," Computational Materials Science, vol. 45, pp. 77-83, 2009.
[15] S. F. Hwang, R. S. He, "A hybrid real-parameter genetic algorithm for function optimization," Advanced Engineering Informatics, vol. 20, pp. 7-21, 2006.
[16] O. Hrstka, A. Kucerova, "Improvements of real coded genetic algorithms based on differential operators preventing premature convergence," Advances in Engineering Software, vol. 35, pp. 237-246, 2004.
[17] D. Li, W. Wang, "An enhanced GA technique for system training and prognostics," Advances in Engineering Software, vol. 42, no. 7, pp. 452-462, 2011.
[18] M. Molga, C. Smutnicki, "Test functions for optimization needs," 2005. Available at http://www.zsd.ict.pwr.wroc.pl/files/docs/functions.pdf.
