
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART A: SYSTEMS AND HUMANS, VOL. 42, NO. 2, MARCH 2012 475

Constrained Multiple-Swarm Particle Swarm Optimization Within a Cultural Framework

Moayed Daneshyari, Member, IEEE, and Gary G. Yen, Fellow, IEEE

Abstract—Particle swarm optimization (PSO) has been recently adopted to solve constrained optimization problems. In this paper, a cultural-based constrained PSO is proposed to incorporate the information of the objective function and constraint violation into four sections of the belief space, specifically normative knowledge, spatial knowledge, situational knowledge, and temporal knowledge. The archived information facilitates communication among swarms in the population space and assists in selecting the leading particles at three different levels: personal, swarm, and global. Comprehensive comparison of the proposed heuristic over a number of benchmark problems with selected state-of-the-art constraint-handling techniques demonstrates that the proposed cultural framework helps the multiple-swarm PSO perform competitively with respect to the selected designs.

Index Terms—Constrained optimization, constrained particle swarm optimization (CPSO), cultural algorithm (CA), PSO.

I. INTRODUCTION

POPULATION-BASED paradigms to solve constrained optimization problems have attracted much attention in recent years. Genetic-based algorithms and swarm-based paradigms are two popular population-based heuristics introduced for solving constrained optimization problems [1]–[3]. Particle swarm optimization (PSO) [4]–[10] is a swarm intelligence design based upon mimicking the behavior of social species such as flocking birds, schooling fish, swarming wasps, and so forth. Constrained PSO (CPSO) is a relatively new approach to tackle constrained optimization problems [11]–[24]. The challenges of constrained optimization problems arise from the various limits on decision variables, the types of constraints involved, the interference among constraints, and the interrelationship between the constraints and the objective functions. In general, a constrained optimization problem can be formulated as

Optimize f(x) = f(x1, x2, . . . , xn) (1)

subject to inequality constraints

gk(x) ≤ 0, k = 1, 2, . . . , L (2)

Manuscript received May 13, 2010; revised November 1, 2010 and March 14, 2011; accepted April 30, 2011. Date of publication August 12, 2011; date of current version February 17, 2012.

M. Daneshyari is with the Department of Technology, Elizabeth City State University, Elizabeth City, NC 27909 USA (e-mail: mdaneshyari@mail.ecsu.edu).

G. G. Yen is with the School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK 74078 USA (e-mail: gyen@okstate.edu).

Digital Object Identifier 10.1109/TSMCA.2011.2162498

and equality constraints

hk(x) = 0, k = L + 1, . . . , m. (3)

We should note that, in this paper, minimization problems are considered without loss of generality (due to the duality principle). Individuals that satisfy all of the constraints are called feasible individuals, while individuals that violate at least one constraint are called infeasible individuals. Active constraints are defined as inequality constraints that satisfy gk(x) = 0 (k = 1, 2, . . . , L) at the global optimum solution; hence, all equality constraints, i.e., hk(x) = 0 (k = L + 1, . . . , m), are active constraints.
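To make the feasibility terminology concrete, the following Python sketch (with helper names of our own choosing) evaluates per-constraint violations and tests feasibility; the tolerance delta for equality constraints mirrors the relaxation used later in (8):

```python
def constraint_violations(x, ineq, eq, delta=1e-4):
    """Per-constraint violations: max(0, g_k(x)) for inequality
    constraints g_k(x) <= 0, and max(0, |h_k(x)| - delta) for
    equality constraints h_k(x) = 0 relaxed by a tolerance delta."""
    viols = [max(0.0, g(x)) for g in ineq]
    viols += [max(0.0, abs(h(x)) - delta) for h in eq]
    return viols

def is_feasible(x, ineq, eq, delta=1e-4):
    """A particle is feasible iff it violates no constraint."""
    return all(v == 0.0 for v in constraint_violations(x, ineq, eq, delta))

# Toy problem: minimize f(x) = x0^2 + x1^2
# subject to g1(x) = 1 - x0 - x1 <= 0 and h1(x) = x0 - x1 = 0.
ineq = [lambda x: 1.0 - x[0] - x[1]]
eq = [lambda x: x[0] - x[1]]
print(is_feasible([0.5, 0.5], ineq, eq))  # True: both constraints hold
print(is_feasible([0.0, 0.0], ineq, eq))  # False: g1 is violated
```

Note that x = [0.5, 0.5] makes g1 active (g1(x) = 0), matching the definition of an active constraint above.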

Although there are a few research works on PSO proposed to solve constrained optimization problems, none of these studies fully explores the information from all particles in order to perform communication within PSO. Therefore, due to this lack of communication, the particles are not able to act synchronously. When particles share their information through communication with each other, they are able to efficiently handle the constraints and optimize the objective function. In order to construct the environment needed to share information, we need to build the groundwork that enables us to employ this information as needed. In this paper, that foundation is the belief space of the cultural framework [25], [26]. The cultural algorithm (CA) has shown its ability to solve engineering problems [26]–[42], particularly some constrained optimization ones [36], [39]–[41]. The information-sharing process can be improved by migration of data within swarms.

From a sociological point of view, a study has shown that human societies migrate from one place to another in order to counter their own life constraints and limitations as well as to reach a better economic, social, or political life [43]. People living in different societies migrate in spite of different value systems and cultural distinctions. Indeed, cultural belief is an important factor affecting the issues underlying migration phenomena [44]. As CA is one of the newly emerging nature-inspired computational paradigms, the inspiration from sociology and the migration analogy places the proposed design in perspective.

On the other hand, information sharing will effectively reduce the computational complexity. Indeed, finding the appropriate information for communication within swarms can be computationally expensive if it is not performed in a systematic and categorical manner. One computational aspect is the difficulty of finding the appropriate information to communicate within PSO in order to simultaneously handle the constraints and optimize the objective function. Using many



concepts inspired by CA, such as normative knowledge, situational knowledge, spatial knowledge, and temporal knowledge, we are able to efficiently and effectively organize the knowledge acquired from the evolutionary process to facilitate PSO's updating mechanism as well as swarm communications. Interswarm communication for constrained optimization problems using PSO is an important task that cannot be accomplished unless we have access to knowledge gathered throughout the search process, with CA as the computational framework.

In this paper, we propose a computational framework based on CA that adopts the knowledge stored in the belief space in order to assist the interswarm communication and to search for the leading particles at the personal, swarm, and global levels. Every particle in CPSO flies through a three-level flight. Then, particles cluster into several swarms, and interswarm communication takes place to share the information. The remaining sections complete the presentation of this paper as follows. In Section II, we briefly review the principles of CA and related works in CPSO. In Section III, the proposed cultural CPSO is elaborated. In Section IV, simulation results are evaluated on benchmark test problems in comparison with state-of-the-art constraint-handling models. Finally, Section V summarizes the concluding remarks and future directions of this paper.

II. LITERATURE SURVEY

A. Related Works in Constrained PSO

Relevant works on CPSO algorithms are briefly reviewed in this subsection to motivate the proposed ideas. PSO [4]–[10] has shown its promise in solving constrained optimization problems. Hu and Eberhart simply regenerated particles in PSO for constrained optimization problems until they were located in the feasible region and then used these particles in the feasible region for finding the best personal and global particles [11]. Parsopoulos and Vrahatis used a dynamic multistage penalty function for constraint handling [12]. The penalty function is the weighted sum of all constraint violations, with each constraint having a dynamic exponent and a multistage dynamic coefficient. Coath and Halgamuge presented a comparison of two constraint-handling methods, based upon preserving feasible solutions [11] and a dynamic penalty function [12], to solve constrained nonlinear optimization problems using PSO [13]. It demonstrated that the convergence rate of penalty-function-based PSO was faster than that of the feasible-solution method.

Paquet and Engelbrecht proposed a modified PSO to solve linearly constrained optimization problems [14]. An essential characteristic of their modified PSO is that the movement of the particles in the vector space is mathematically guaranteed by the velocity and position update mechanism of PSO. They proved that their modified PSO is always assured to find at least a local optimum for linearly constrained optimization problems. Takahama and Sakai, in their ε-constrained PSO, proposed an algorithm in which particles that satisfy the constraints move to optimize the objective function while particles that violate the constraints move to satisfy the constraints [15]. In order to adaptively control the maximum velocity of particles, particles are divided into groups, and their movement within those groups is compared.

Krohling and Coelho adopted a Gaussian distribution instead of a uniform distribution for the personal- and global-term random weights of the PSO mechanism to solve constrained optimization problems formulated as min–max problems [16]. They used two populations simultaneously: the first PSO focuses on evolving the variable vector while the vector of Lagrangian multipliers is kept frozen, and the second PSO concentrates on evolving the Lagrangian multipliers while the first population is kept frozen. The use of a normal distribution for the stochastic parameters of PSO seems to provide a good compromise between the high probability of small amplitudes around the current points and the small probability of large amplitudes, which may cause particles to move away from the current points and escape from local optima.

Yang et al. [17] proposed a master–slave PSO in which the master swarm is responsible for optimizing the objective function while the slave swarm is focused on constraint feasibility. Particles in the master swarm only fly toward the current better particles in the feasible region. The slave swarm is responsible for searching for feasible particles by scouting through the infeasible region. The feasible/infeasible leaders from one swarm then communicate to lead the other swarm. By exchanging flight information between swarms, the algorithm can explore a wider solution space.

Zheng et al. [18] adopted an approach that congregates neighboring particles in PSO to form multiple swarms in order to explore an isolated, long, and narrow feasible space. They also applied a mutation operator with a dynamic mutation rate to encourage flight of particles into the feasible region more frequently. For constraint handling, a penalty function based on how far the infeasible particle is located from the feasible region has been adopted. Saber et al. [19] introduced a version of PSO for constrained optimization problems. In their version of PSO, the velocity update mechanism uses a sufficient number of promising vectors to reduce randomness for better convergence. The velocity coefficient in the positional update equation is a dynamic rate depending on the error and the iteration. They also reinitialize idle particles if they are not improving for some iterations.

Li et al. [20] proposed a dual PSO with stochastic ranking to handle the constraints. One regular PSO evolves simultaneously along with a genetic PSO, which is a discrete version of PSO including a reproduction operator. The better of the two positions generated by these two PSOs is then selected as the updated position. Flores-Mendoza and Mezura-Montes [21] used the Pareto dominance concept for constraint handling in a biobjective space, with one objective being the sum of inequality constraint violations and the second being the sum of equality constraint violations, in order to promote a better approach to the feasible region. They also adopted a decaying parameter to control the constriction factor and global acceleration of PSO to prevent premature convergence and to advance the exploration of the search space. Ting et al. [22] introduced a hybrid heuristic consisting of PSO and a genetic algorithm to tackle the constrained optimization problem of load flow. They adopted two-point crossover, mutation,


and roulette-wheel selection from genetic algorithms along with the regular PSO to generate the new population space. Liu et al. [23] incorporated discrete genetic PSO with differential evolution (DE) to enhance the search process, in which both genetic PSO and DE update the position of the individual at every generation. The better position is then selected.

Yen and Leong [24] embedded constraint-handling techniques into the flight mechanism of PSO, including separate procedures to update the infeasible and feasible personal bests in order to guide the infeasible individuals toward the feasible regions while promoting the search for optimal solutions. Additionally, infeasible nondominated solutions are stored along with the best feasible solutions in the global best archive to assist the search for feasible regions and a better solution. The adjustment of the acceleration constants is based on the number of feasible personal bests and the constraint violations of the personal bests and the global best. The simulation study shows that the proposed design is able to obtain a quality solution in a very efficient manner.

In the context of information sharing, all existing algorithms lack a data feedback process, which we address in our proposed algorithm by adopting a cultural framework. In the proposed algorithm, the population and belief spaces communicate through feedback-like channels. This feedback-like procedure increases the efficiency and effectiveness of the PSO mechanism in the search process.

B. Related Work in CA on Constrained Optimization

For the completeness of the presentation, the basic principle of CA is briefly outlined hereinafter. CA, originated by Reynolds [25], [26], is a dual-inheritance system where information exists in two different spaces, namely, the population and belief spaces, and can be passed along to the next generation. CA, an adaptive evolutionary computation method derived from cultural evolution, consists of evolving agents whose experiences are gathered into the belief space, which consists of various forms of symbolic knowledge. CA has shown its ability to solve different types of problems [26]–[42].

Researchers have identified five basic sections of knowledge stored in the belief space based upon the literature in cognitive science and semiotics: situational knowledge, normative knowledge, spatial or topographical knowledge [29], domain knowledge, and temporal or history knowledge [34]. Situational knowledge is a set of exemplary individuals useful for the experiences of all individuals; it guides all individuals to move toward the exemplar individuals. Normative knowledge consists of a set of promising ranges; it provides a standard guiding principle within which individual adjustments can be made, and individuals jump into the good range using it. Topographical knowledge keeps track of the best individuals found so far in the promising region and leads all individuals toward the best performing areas in the search space. Domain knowledge incorporates information from the problem domain to lead the search; knowledge about the landscape contour and its related parameters guides the search process. History knowledge keeps track of the history of the search process and records key events in the search, which might be either a considerable move in the search space or a discovery of a landscape change; individuals use it for guidance in selecting a move direction. Domain knowledge and history knowledge are useful for dynamic landscape problems [45].

Becerra and Coello Coello proposed a cultured DE for constrained optimization [35]. The population space in their study was DE, while the belief space consisted of situational, topographical, normative, and history knowledge. The variation operator in DE was influenced by the knowledge sources of the belief space. Yuan et al. introduced a chaotic hybrid CA for constrained optimization in which the population space was DE and the belief space included normative and situational knowledge [40]. They incorporated a logistic map function for better convergence of DE. Tang and Li proposed a cultured genetic algorithm for constrained optimization problems by introducing a triple-space CA [41]. The triple space includes the belief and population spaces in addition to an anticulture population consisting of individuals that disobey the guidance of the belief space and move away from the belief-space-guided individuals. The effect of disobeying, enhanced by some mutation operations, appreciably makes the algorithm faster and less prone to premature convergence by rewarding the most successful individuals and punishing the unsuccessful population. Zhao and Gao [42] introduced a cultural-based PSO to design a low-pass finite-impulse response digital filter. The key idea behind their algorithm is to acquire problem-solving knowledge (i.e., beliefs) from the evolving population and, in turn, use that knowledge to guide the evolution process. They adopt the structure of CA, including the belief space, acceptance, and influence functions, along with a particle swarm for designing the filter.

III. CULTURAL CONSTRAINED OPTIMIZATION USING MULTIPLE-SWARM PSO

The pseudocode of the proposed design is shown in Fig. 1, and the block diagram depicting the operations of the proposed algorithm is shown in Fig. 2. It is based upon the general structure of CA, consisting of two spaces (belief and population spaces) and communication channels (an influence function from the belief space to the population space and an acceptance function from the population space to the belief space). The population space (PSO) is initialized and then clustered into several swarms based upon the proximity of the particles in the decision space. The corresponding belief space (BLF) is then initialized. We evaluate the population space using fitness values. The acceptance function is applied to select the particles which will be used for the belief space. The belief space consists of four sections: normative, spatial (topographical), situational, and temporal (history) knowledge. This cultural framework plays a key role in the algorithm. The influence function is then applied to the belief space to adjust the key parameters of PSO for the next iteration, i.e., the personal best, swarm best, and global best. After a predefined number of iterations, the influence function manipulates the belief space to perform communication among swarms, which is done by preparing two sets of particles for each swarm to share with


Fig. 1. Pseudocode of the cultural constrained optimization using multiple-swarm PSO.

Fig. 2. Schema of the cultural framework adopted, where the belief space consists of normative, spatial (topographical), situational, and temporal (history) knowledge, and the population space is a multiple-swarm PSO.

the other swarms. Afterward, particles in the population space fly using the newly computed personal, swarm, and global bests. This process continues until the stopping criteria are met.

In the remainder of this section, we explain the multiswarm population space, the acceptance function, the different parts of the belief space, the influence functions, and the interswarm communication strategy, respectively.

A. Multiswarm Population Space

The population space in this paper consists of multiple swarms, with each swarm performing a PSO paradigm. The particles are clustered into a predefined number of swarms using k-means clustering in the decision space. In this paper, the number of swarms, P, is chosen as roughly 10% of the population size, N:

P = round(0.1N) (4)

where round(·) denotes the rounding operator. This multiple-swarm PSO is a modified version of the algorithm introduced by Yen and Daneshyari [46], [47]. To overcome the premature convergence problem of PSO and to promote the particles in a swarm sharing information among themselves, a three-level flight mechanism has been adopted. At the personal level, the particle follows the best behavior experienced in its own history. At the swarm level, the particle simultaneously follows the best behaving particle in its swarm to achieve synchronous behavior among the neighboring particles. Finally, at the global level, the entire population follows the best known particle, seeking a global goal. This modified paradigm of PSO is formulated as

v_i^d(t+1) = w v_i^d(t) + c1 r1 (pbest_i^d(t) − x_i^d(t)) + c2 r2 (sbest_{i,j}^d(t) − x_i^d(t)) + c3 r3 (gbest^d(t) − x_i^d(t))

x_i^d(t+1) = x_i^d(t) + v_i^d(t+1) (5)

where v_i^d(t) is the dth dimension of the velocity of the ith particle at time t; x_i^d(t) is the dth dimension of the position of the ith particle at time t; pbest_i^d(t) is the dth dimension of the best past position of the ith particle at time t; sbest_{i,j}^d(t) is the dth dimension of the best particle of swarm j, to which particle i belongs; gbest^d(t) is the dth dimension of the best particle of the population at time t; r1, r2, and r3 are uniformly generated random numbers in the range (0, 1); c1, c2, and c3 are constant parameters representing the weights for the personal, swarm, and global behaviors; and w is the momentum for the previous velocity.
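The three-level flight of (5) can be sketched in Python as follows (a minimal illustration; the coefficient values are placeholders, not the paper's tuned settings):

```python
import random

def three_level_flight(x, v, pbest, sbest, gbest,
                       w=0.7, c1=1.4, c2=1.4, c3=1.4):
    """One application of the velocity/position update in (5).
    pbest, sbest, and gbest are the personal best of this particle,
    the best particle of its swarm, and the best particle of the
    whole population, respectively."""
    new_v, new_x = [], []
    for d in range(len(x)):
        # fresh uniform random weights per dimension, as in (5)
        r1, r2, r3 = random.random(), random.random(), random.random()
        vd = (w * v[d]
              + c1 * r1 * (pbest[d] - x[d])
              + c2 * r2 * (sbest[d] - x[d])
              + c3 * r3 * (gbest[d] - x[d]))
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```

If a particle already sits at its personal, swarm, and global bests with zero velocity, the update leaves it in place, as expected from (5).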

B. Acceptance Function

The belief space should be influenced by a selection of the best individuals. Therefore, we select all particles which are located in the feasible space, along with the p% of the infeasible particles that have the least violation of constraints, where p is a predefined value. This allows the infeasible individuals with minimum constraint violations to portray the feasibility landscape.
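One possible reading of this acceptance rule in Python (function and variable names are ours, not the paper's):

```python
def accept(particles, violations, p=20.0):
    """Select all feasible particles (violation 0) plus the p% of
    infeasible particles with the smallest constraint violation."""
    feasible = [i for i, v in enumerate(violations) if v == 0.0]
    # sort infeasible indices from least- to most-violating
    infeasible = sorted((i for i, v in enumerate(violations) if v > 0.0),
                        key=lambda i: violations[i])
    n_keep = int(len(infeasible) * p / 100.0)
    return [particles[i] for i in feasible + infeasible[:n_keep]]

# Five particles, two feasible; with p = 50, the single least-violating
# infeasible particle ('d') is also accepted.
print(accept(['a', 'b', 'c', 'd', 'e'], [0.0, 0.5, 0.0, 0.1, 0.9], p=50))
# → ['a', 'c', 'd']
```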

C. Belief Space

The belief space in this paradigm consists of four sections: normative, spatial, situational, and temporal knowledge. Since the constrained optimization problems of our interest have static landscapes, we implement only these four sections, because domain knowledge, the fifth element, is mainly useful when the fitness landscape is dynamic. For each section of the belief space, we will briefly explain the type of information, the way the knowledge is represented, and how the knowledge is updated.

1) Normative Knowledge: Normative knowledge represents the best area with respect to the objective function values. It is represented as in Fig. 3(a), where f(t) = [f_1(t) f_2(t) . . . f_N(t)] and V(t) = [v_1(t) v_2(t) . . . v_N(t)] (N is the number of particles). f_j(t) is a normalized objective function defined as follows:

f_j(t) = (f(x_j) − f_j^min(t)) / (f_j^max(t) − f_j^min(t)),  j = 1, 2, . . . , N (6)

where f(x_j) is the objective function value for particle x_j, f_j^min(t) = min_{x∈X} f(x_j) is the lower bound of the objective function value on the jth particle at time t, and


Fig. 3. Representation of (a) normative knowledge, (b) spatial knowledge, (c) situational knowledge, and (d) temporal knowledge.

f_j^max(t) = max_{x∈X} f(x_j) is the upper bound of the objective function value on the jth particle at time t. X refers to the current population at time t. At every iteration, the lower and upper bounds are recalculated from the current population; they vary by iteration and are not fixed. In (6), the range of objective function values is normalized so that the algorithm does not have to deal with scaling. For cases with large values of f_j^max(t), the algorithm still works properly: because f_j^max(t) and f_j^min(t) come from the current population (not some fixed values), the numerator f(x_j) − f_j^min(t) in (6) will not be statistically very small compared to the denominator f_j^max(t) − f_j^min(t). As a result, f_j(t) remains a proper measure for objective functions with very large maximum values.

v_j(t) is a measure of the violation of all constraints for particle

x_j, which is defined as follows:

v_j(t) = (1/m) Σ_{k=1}^{m} c_k(x_j)/c_k^max,  j = 1, 2, . . . , N (7)

where m is the number of constraints and c_k(x_j) is the kth constraint violation evaluated at particle x_j:

c_k(x_j) = max(0, g_k(x_j)) for k = 1, 2, . . . , L, and c_k(x_j) = max(0, |h_k(x_j)| − δ) for k = L + 1, . . . , m (8)

c_k^max = max_{x∈X} c_k(x_j). (9)

In order to update normative knowledge, new objective function values are normalized using (6), and constraint violation measures are updated from the new positions of the particles using (7). The information in normative knowledge is used to assemble the framework for spatial knowledge.
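As a sketch, the per-iteration normalization of (6) and the violation measure of (7) might look like this in Python (helper names are ours; the division-by-zero guards are our additions for degenerate populations):

```python
def normalize_objectives(fvals):
    """Normalize raw objective values with population-dependent
    bounds, per (6); the bounds change at every iteration."""
    fmin, fmax = min(fvals), max(fvals)
    if fmax == fmin:                  # degenerate: all values equal
        return [0.0] * len(fvals)
    return [(f - fmin) / (fmax - fmin) for f in fvals]

def violation_measures(cviols):
    """Mean of per-constraint violations normalized by the
    population maximum of each constraint, per (7).
    cviols[j][k] holds c_k(x_j) from (8)."""
    m = len(cviols[0])
    cmax = [max(row[k] for row in cviols) for k in range(m)]
    cmax = [c if c > 0.0 else 1.0 for c in cmax]  # guard: constraint unviolated by all
    return [sum(row[k] / cmax[k] for k in range(m)) / m for row in cviols]
```

Because the bounds and per-constraint maxima are recomputed from the current population, both outputs lie in [0, 1] regardless of the raw scales involved, which is exactly the scaling argument made above for (6).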

2) Spatial Knowledge: In order to represent spatial or topographical knowledge, we adopt normative knowledge. The method adopted in this section is similar to the penalty-function method to handle constraints introduced by Tessema and Yen [48]. The normalized objective functions, f, and violation measures, V, are set as the axes of a 2-D space, as shown in Fig. 4. Two particles are mapped into this space for visualization. Fig. 3(b) shows the spatial knowledge stored for every particle located in the f–V space, where D(t) = [D_1(t) D_2(t) . . . D_N(t)] and F(t) = [F_1(t) F_2(t) . . . F_N(t)] (N is the number of particles). D_j(t) is the Euclidean distance from the origin of the f–V space, which is defined as

D_j(t) = (v_j(t)^2 + f_j(t)^2)^{1/2},  j = 1, 2, . . . , N (10)

and F_j is the modified objective function value used to handle constraints, computed as a weighted sum of the three spatial quantities D, v, and f, i.e.,

F_j(t) = D_j(t) + (1 − r(t)) v_j(t) + r(t) f_j(t),  if r(t) ≠ 0 and x_j is infeasible
F_j(t) = v_j(t),  if r(t) = 0 and x_j is infeasible
F_j(t) = f_j(t),  if x_j is feasible,
for j = 1, 2, . . . , N (11)

where r(t) is the ratio of the number of feasible particles to the population size, and f_j(t) and v_j(t) are defined in (6) and (7), respectively. If r(t) is close to 0, then v_j(t) is weighted more heavily than f_j(t) in (11); consequently, F_1(t) > F_2(t) in the schema shown in Fig. 4, which means that particle 2 outperforms particle 1 for a minimization problem. However, if r(t) is close to 1, then f_j(t) is weighted more heavily than v_j(t) in (11); consequently, F_1(t) < F_2(t) in Fig. 4, which implies that particle 1 outperforms particle 2. For the sake of completeness and to demonstrate that the modified objective function works well, we elaborate all possible cases.

1) In case 1 of (11), i.e., r(t) ≠ 0 and an infeasible particle, there is at least one feasible particle. In this case, the modified objective function for the infeasible particle is formulated; it should incorporate both the violation factor and the normalized objective value. For two infeasible particles:

a) If r (the percentage of feasible particles) is large, then the objective function is more important than the constraint violation in computing the modified objective value. The particle with the higher (worse in a minimization problem) objective function will have a higher (worse) modified objective value.

b) If r is small, then the constraint violation is more important than the objective function value. The particle with the higher constraint violation will have a higher (worse) modified objective value.

2) In case 2 of (11), i.e., r(t) = 0 and an infeasible particle, there is no feasible particle. Therefore, the only important factor will be the constraint violation. The particle with the higher constraint violation will have a higher (worse) modified objective value.

3) In case 3 of (11), i.e., feasible particles, there is at least one feasible particle. For two feasible particles, the constraint violation factor does not play a role, and only the objective function value is important. The particle with the higher objective function value will have a higher (worse) modified objective value.

4) When comparing two particles, i.e., one feasible and another infeasible, we should adopt cases 1 and 3 (since there is at least one feasible particle, r cannot be zero). In this comparison, using the geometrical graphs in Fig. 4,


Fig. 4. Schema to represent how spatial knowledge is computed.

it can be observed that the infeasible particle will always have a higher (worse) modified objective value.

5) There is no comparison between cases 2 and 3. In addition, there will be no comparison between cases 1 and 2, because case 2 arises when there is no feasible particle at all, whereas cases 1 and 3 arise when there exists at least one feasible particle.
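The distance (10), the piecewise rule (11), and the case analysis above can be condensed into a short Python sketch (all function and argument names are ours, not the paper's):

```python
import math

def modified_objective(f_norm, v_norm, r):
    """Modified objective F_j(t) of one particle, per (10)-(11).

    f_norm: normalized objective value f_j(t) from (6)
    v_norm: constraint-violation measure v_j(t) from (7); 0 means feasible
    r:      ratio of feasible particles over the population size
    """
    if v_norm == 0.0:                  # case 3: feasible particle
        return f_norm
    if r == 0.0:                       # case 2: no feasible particle at all
        return v_norm
    d = math.hypot(v_norm, f_norm)     # Euclidean distance D_j(t) in (10)
    return d + (1.0 - r) * v_norm + r * f_norm   # case 1
```

Note how, for an infeasible particle, a small r weights the violation term more heavily, while r close to 1 emphasizes the normalized objective, exactly as discussed for Fig. 4.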

At every iteration, spatial knowledge will be updated. To do so, the updated normative knowledge will be used to rebuild the spatial distance for every particle using (10) and (11). Spatial knowledge will be used later to find the global best particle in the population space and to build a communication strategy among swarms.

3) Situational Knowledge: This part of the belief space is used to keep the good exemplar particles for each swarm. Its representation is shown in Fig. 3(c). X̂i(t) [i = 1, 2, . . . , P, where P is the number of swarms defined in (4)] is the best particle in the ith swarm based upon the information received from spatial knowledge, in accordance with both objective function value and constraint violation. Assume that, at an arbitrary iteration, the ith swarm consists of Ni particles as Ωi = {z1, z2, . . . , zNi} and that F = {F1, F2, . . . , FNi} is a set consisting of the modified objective values extracted from spatial knowledge corresponding to z1, z2, . . . , zNi. X̂i(t) ∈ Ωi is defined such that

F(X̂i(t)) = min_{1 ≤ l ≤ Ni} Fl,   i = 1, 2, . . . , P   (12)

where F(X̂i(t)) is the modified objective function value for the particle X̂i(t). In order to update situational knowledge, the updated position of the particles will be used to evaluate (6)–(11) to compute the updated modified objective function values; then, the particle corresponding to the least value in each swarm will be stored in situational knowledge. Situational knowledge will be used later to compute the swarm best particles and to facilitate communication among swarms.
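The update in (12) amounts to picking, for each swarm, the member with the least modified objective value. A minimal sketch (variable names are ours):

```python
def situational_best(swarm, modified_F):
    """Return X-hat_i: the swarm member with the minimal modified
    objective value, per (12). `swarm` is a list of positions and
    `modified_F` the corresponding values from spatial knowledge."""
    best = min(range(len(swarm)), key=lambda l: modified_F[l])
    return swarm[best]
```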

4) Temporal Knowledge: This part of the belief space is used to keep the history of the individual's behavior. Its representation is shown in Fig. 3(d), where T(t) = {T1(t), T2(t), . . . , TN(t)} and P(t) = {P1(t), P2(t), . . . , PN(t)} (N is the number of particles in the population space). Tj(t) is the set of past and current temporal patterns of the jth particle, which are collected at every time step from part of spatial knowledge, Fj(t), and is defined as follows:

Tj(t) = {Fj(1), Fj(2), . . . , Fj(t)},   j = 1, 2, . . . , N   (13)

where Fj(1), Fj(2), . . . , Fj(t) are the modified objective function values defined in (11) for time steps 1, 2, . . . , t, respectively. Pj(t) is the set of all past and current positions of the jth particle in the whole population, which is defined as Pj(t) = {xj(1), xj(2), . . . , xj(t)}, j = 1, 2, . . . , N. Temporal knowledge will be updated at every iteration. To do so, the updated spatial knowledge, the updated position of the particle, and the previously stored temporal knowledge will be adopted as follows:

Tj(t + 1) = Tj(t) ∪ {Fj(t + 1)},   j = 1, 2, . . . , N
Pj(t + 1) = Pj(t) ∪ {xj(t + 1)},   j = 1, 2, . . . , N.   (14)

Temporal knowledge will later be used to compute the personal best for every particle in the population space.

D. Influence Functions

After the belief space is updated, the corresponding knowledge should be used to influence the flight of particles in PSO. We propose to use the knowledge in the belief space to select the personal best, swarm best, and global best for the PSO flight mechanism. Furthermore, we propose to adopt the information in the belief space to perform a communication strategy among swarms.

1) pbest Selection: In order to select the personal best, we exploit the information in the temporal knowledge section of the belief space. The best behaving position in the particle's past history should be selected as follows:

pbesti(t) = {∃ xi(t̂) ∈ Pi(t) | Fi(t̂) = min_t (Ti(t))},   i = 1, 2, . . . , N   (15)

where Pi(t) = {xi(1), xi(2), . . . , xi(t)} is the set of all past positions of the ith particle and Ti(t) = {Fi(1), Fi(2), . . . , Fi(t)} denotes the corresponding modified objective values for the past history of the ith particle, both extracted from the temporal knowledge section of the belief space.
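Temporal knowledge (13)–(14) and the pbest rule (15) amount to keeping an append-only history per particle and returning the stored position with the smallest modified objective. A hypothetical sketch (class and attribute names are ours):

```python
class TemporalKnowledge:
    """History T_j and P_j of one particle, per (13)-(14)."""

    def __init__(self):
        self.F_history = []   # T_j: modified objective values F_j(1..t)
        self.x_history = []   # P_j: positions x_j(1..t)

    def update(self, F_new, x_new):
        # (14): append the newest modified objective and position
        self.F_history.append(F_new)
        self.x_history.append(list(x_new))

    def pbest(self):
        # (15): the stored position whose modified objective is minimal
        k = min(range(len(self.F_history)), key=lambda i: self.F_history[i])
        return self.x_history[k]
```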

2) sbest Selection: In order to select the swarm best particle, situational knowledge is adopted. The information stored in the situational knowledge section of the belief space is simply copied into the swarm best particles

sbesti(t) = X̂i(t), i = 1, 2, . . . , P (16)

where P is the number of swarms and X̂i(t) is the representation of the situational knowledge of the belief space.

3) gbest Selection: We use the spatial knowledge stored in the belief space to compute gbest(t) at each iteration. The


global best particle is found as follows:

gbest(t) = {∃ xj(t) ∈ P(t), 1 ≤ j ≤ N | Fj(t) = min (F(t))}   (17)

where P(t) = {x1(t), x2(t), . . . , xN(t)} is the entire population of particles at time t and F(t) = {F1(t), F2(t), . . . , FN(t)} is a set consisting of the modified objective function values for all particles at time t.

4) Interswarm Communication Strategy: After some predefined number of iterations, Tmigration, swarms will perform information exchange. Each swarm prepares a list of sending particles to be sent to the next swarm and also assembles a list of replacement particles to be replaced by particles coming from other swarms. This communication strategy is a modified version of the algorithm adopted in [46]. We use the information stored in the belief space to perform communication among swarms. To do so, each swarm prepares two lists of particles, i.e., Si and Ri, i = 1, 2, . . . , P, where P is the fixed number of swarms defined in (4). Si is the list of particles in the ith swarm to be sent to the next swarm, and Ri is the list of particles in the ith swarm to be replaced by particles coming from another swarm. The interswarm communication strategy is based upon the particles' locations in the swarm and their modified objective values, which are stored in the belief space. The sending list for the swarm is prepared in the following order.

1) The highest priority in the selection of particles is given to the particle that has the least average Hamming distance from the others. The average Hamming distance between every particle and all other particles will be computed. To do so, the Hamming distance between two particles is calculated as the sum of absolute differences between the positions of the two particles in different dimensions. The particle with the lowest value of the average Hamming distance will then be selected as the representative of the swarm.

2) The second priority is given to the particles closest to the representative particle whose modified objective value stored in the spatial knowledge of the belief space is greater than that of the representative.

3) The third priority is given to the particles closest to the representative particle whose modified objective value extracted from the belief space is less than that of the representative.

Note that, depending on the predefined fixed value for the allowable size of the sending list, Nmigration, the sending list will be filled in each swarm using the aforementioned priorities.
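The three sending-list priorities can be sketched as follows. All names are our own; the paper's "Hamming distance" is implemented as the stated sum of absolute coordinate differences, and particles whose modified objective equals the representative's are grouped with priority 3 here (an assumption, since the paper does not specify ties):

```python
def average_distance(positions, j):
    """Average 'Hamming' distance of particle j to all others:
    sum of absolute coordinate differences, averaged."""
    n = len(positions)
    total = sum(
        sum(abs(a - b) for a, b in zip(positions[j], positions[k]))
        for k in range(n) if k != j
    )
    return total / (n - 1)

def sending_list(positions, F, n_migration):
    """Assemble a swarm's sending list S_i per the three priorities."""
    n = len(positions)
    # priority 1: the representative with least average distance
    rep = min(range(n), key=lambda j: average_distance(positions, j))

    def dist_to_rep(j):
        return sum(abs(a - b) for a, b in zip(positions[j], positions[rep]))

    others = [j for j in range(n) if j != rep]
    # priority 2: closest particles with a worse modified objective
    worse = sorted((j for j in others if F[j] > F[rep]), key=dist_to_rep)
    # priority 3: closest particles with a better (or equal) one
    better = sorted((j for j in others if F[j] <= F[rep]), key=dist_to_rep)
    return ([rep] + worse + better)[:n_migration]
```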

There is also a replacement list that each swarm prepares, based upon similar positional information of particles in the swarm. When swarms are approaching local optima, many particles' locations are the same. Each swarm will remove this excess information through its replacement list. The replacement list in each swarm is assembled in the following order.

1) The first priority is given to the particles with identical decision space information, in the order of their modified objective values extracted from the belief space, with the least modified objective values being replaced first.

2) The second and last priority is given to the particles with the lowest modified objective values, if all particles of the first priority have already been placed in the replacement list.

This information exchange among swarms happens in a ring sequential order between each pair of swarms. Each swarm accepts the sending list from the other swarm and uses the incoming particles to replace the particles on its own replacement list.
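The two replacement-list priorities admit a similar sketch (names are ours; duplicate detection uses exact positional equality, which is an assumption):

```python
from collections import Counter

def replacement_list(positions, F, n_migration):
    """Assemble a swarm's replacement list R_i per the two priorities."""
    keys = [tuple(p) for p in positions]
    counts = Counter(keys)
    # priority 1: duplicated positions, least modified objective first
    dup = sorted((j for j in range(len(positions)) if counts[keys[j]] > 1),
                 key=lambda j: F[j])
    # priority 2: remaining particles with the lowest modified objectives
    rest = sorted((j for j in range(len(positions)) if counts[keys[j]] == 1),
                  key=lambda j: F[j])
    return (dup + rest)[:n_migration]
```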

IV. COMPARATIVE STUDY

In this section, the performance of the cultural CPSO is evaluated against those of selected state-of-the-art constrained optimization heuristics.

A. Parameter Settings

The parameters of the cultural CPSO are set as follows. The tolerance for equality constraints in (8), i.e., δ, is set as 0.0001, as suggested in the literature. In the flight mechanism, the momentum, w, is randomly selected from the uniform distribution over (0.5, 1), and the personal, swarm, and global accelerations, i.e., c1, c2, and c3, are all selected as 1.5. Please note that the selection of the parameters of the flight mechanism, such as momentum and accelerations, follows the same guidelines depicted in [38]. The population size is fixed at 100 particles. The maximum velocity for the particles in a specific dimension, vdmax, is set at half of the range of the particle's position in that dimension

vdmax = (xdmax − xdmin)/2.   (18)
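Per (18), the velocity cap is computed per dimension as half the positional range; a one-line sketch (function name is ours):

```python
def max_velocity(x_min, x_max):
    """v^d_max = (x^d_max - x^d_min) / 2 for each dimension d, per (18)."""
    return [(hi - lo) / 2.0 for lo, hi in zip(x_min, x_max)]
```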

The rate of information exchange among swarms affects how much the swarms communicate with each other. A higher rate corresponds to more communication and better overall performance of the algorithm, but it incurs higher computational complexity, while a lower rate imposes less computational complexity and results in relatively poorer performance. The heuristic choice is set at 30%. The allowable number of migrating particles among swarms is set as 5% of the population size, which is Nmigration = 0.05N = 5.

B. Benchmark Test Functions

The proposed cultural CPSO has been tested on 24 benchmark functions [49] to verify its performance. The characteristics of these test functions are summarized in Table I. These problems include various types of objective functions, such as linear, nonlinear, quadratic, cubic, and polynomial. The benchmark problems vary in the number of decision variables, n, i.e., between 2 and 24, and in the number of constraints, i.e., between 1 and 38. In this table, ρ is the estimated ratio of the feasible region over the search space, which varies from as low as 0.0000% to as high as 99.9971%. The numbers of different types of constraints are also shown for each test function: the number of linear inequality (LI), the number of nonlinear inequality


TABLE I: SUMMARY OF 24 BENCHMARK TEST FUNCTIONS

(NI), the number of linear equality, and the number of nonlinear equality constraints. In this table, a is the number of active constraints at the known optimal solution, x̃, and f(x̃) is the objective function value of the known optimal solution [49].

C. Simulation Results

The experiments reported in this paper are performed on a computer with a 1.66-GHz dual-core processor and 1-GB RAM running Windows XP Professional. The programs are written in Matlab. We performed extensive experiments on all 24 benchmark test functions based upon the comparison methods suggested in [49]. For three different numbers of function evaluations (FEs), namely 5000, 50 000, and 500 000, the objective function error values, f(x) − f(x̃), are found, where f(x̃) is the best known solution [49] presented in the rightmost column of Table I. Notice that when f(x) − f(x̃) < 1e−10, the final error is considered as zero. For each benchmark test problem, a total of 25 independent runs are performed.

The statistical measures, including the best, median, worst, mean, and standard deviations, are then computed. These results are tabulated in Table II. For the best, median, and worst solutions, the number of constraints that cannot satisfy the feasibility condition is found and shown as an integer inside the parentheses after the best, median, and worst solutions, respectively, in this table. The parameter c shows three different integers giving the number of constraints, including equality and inequality ones, that are violated by more than 1, 0.01, and 0.0001, respectively, for the median solution. The parameter v indicates the average value of the violations of all constraints at the median solution, as defined in [49].

For each independent run, the number of FEs to locate a solution satisfying f(x) − f(x̃) < 0.0001 is recorded. For each benchmark function, the statistical measures of these 25 runs, including the best, median, worst, mean, and standard deviations, are then computed. These results are shown in Table III. In the same table, the feasible rate, success rate, and success performance are also calculated for each test function. The feasible rate is the ratio of feasible runs over total runs, where a feasible run is defined as a run with a maximum of 500 000 FEs during which at least one feasible solution is found. The success rate is the ratio of successful runs over total runs, where a successful run is defined as a run during which the algorithm finds a feasible solution, x, satisfying f(x) − f(x̃) < 0.0001. Success performance is defined as [49]

success performance = mean{FEs for successful runs} × (no. of total runs) / (no. of successful runs).   (19)
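The success-performance measure in (19) can be computed as follows (a sketch; argument names are ours):

```python
def success_performance(fe_counts, n_total_runs):
    """Success performance per (19): the mean FEs of the successful runs,
    scaled by total runs over successful runs. `fe_counts` lists the FEs
    consumed by each successful run."""
    n_success = len(fe_counts)
    mean_fes = sum(fe_counts) / n_success
    return mean_fes * n_total_runs / n_success
```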


TABLE II: ERROR VALUES FOR DIFFERENT FES ON TEST PROBLEMS g01–g24


TABLE II (Continued.): ERROR VALUES FOR DIFFERENT FES ON TEST PROBLEMS g01–g24

TABLE III: NUMBER OF FES TO ACHIEVE THE FIXED ACCURACY LEVEL (f(x) − f(x̃) < 0.0001), SUCCESS RATE, FEASIBILITY RATE, AND SUCCESS PERFORMANCE


TABLE IV: SUMMARY OF STATISTICAL RESULTS FOUND BY CULTURAL CPSO (IS DENOTES INFEASIBLE SOLUTION)

These tables show that feasible solutions can be reliably found within the maximum FEs for all benchmark problems except function g20. The final solutions of all benchmark problems can be identified with an error of less than 0.0001 from the optimal solution within the maximum FEs, except for functions g20 and g22. For most benchmark functions, the optimal solution is found with an error of less than 0.0001 before 50 000 FEs, the exceptions being functions g02, g17, g20, g22, and g23. It can also be observed that the cultural CPSO has a 100% feasible rate for all benchmark problems except function g20, and a 100% success rate for all benchmark problems except functions g02, g10, g17, g21, and g22. However, we should note that for functions g10, g17, and g21, the success rates are fairly high at 96%, 92%, and 96%, respectively. A summary of the statistical results for the best, median, mean, worst, and standard deviations obtained by the cultural CPSO over 25 independent runs is shown in Table IV. As can be seen in this table, except for function g20, we have found feasible solutions for all other benchmark problems.

D. Convergence Graphs

For the median run of each test function with 500 000 FEs, two semilog graphs are plotted. The first graph is log10[f(x) − f(x̃)] versus FEs, where f(x̃) is given in the rightmost column of Table I and f(x) is the objective value of the best solution at the specific FEs. The second graph is log10(v) versus FEs, where v is the average value of the constraint violations at the specific FEs. For these two graphs, points which satisfy f(x) − f(x̃) ≤ 0 are not plotted, since the logarithm of zero or negative numbers cannot be computed. Figs. 5–8 show these two graphs for all 24 benchmark problems.

E. Complexity Analysis

In Table V, the algorithm's complexity on all 24 benchmark problems is shown. The computed quantities for complexity are T1, T2, and (T2 − T1)/T1 (in seconds), where T1 represents the average computing time of 10 000 evaluations for each test problem and T2 is the average computing time of 10 000 evaluations over all benchmark problems [49].

Furthermore, in this section, the average time for 1000 FEs was found for some test functions to examine the memory access and processing time and their relation to the problems. We selected six problems based on the FEs at which they reach the optimum with an error of less than 0.0001. From Table III, we can see that g08, g12, and g11 reach the optimum point fairly early, at 2302, 3289, and 4589 FEs, respectively, showing that g08, g12, and g11 are easier than the others in the set of 24 benchmark problems. On the other hand, g17, g23, and g02 reach the optimum point fairly late, at 158 367, 62 091, and 56 392 FEs, respectively, showing that g17, g23, and g02 are harder than the others in the set of 24 benchmark problems. For these six benchmark functions, we compare the average time for 1000 FEs. These values are summarized in Table VI. As shown, the processing time for 1000 FEs for the easier problems is, in general, less than that for the more difficult problems.

F. Performance Comparison

Furthermore, we have compared the performance of the cultural CPSO with ten state-of-the-art constrained optimization heuristics in terms of two performance indicators, namely, feasible rate and success rate. The selected high-performance algorithms are PSO [50], DMS-PSO [51], ε_DE [52], GDE


Fig. 5. Convergence graphs for problems g01–g06. (a) Function error values. (b) Mean constraint violations.

Fig. 6. Convergence graphs for problems g07–g12. (a) Function error values. (b) Mean constraint violations.

Fig. 7. Convergence graphs for problems g13–g18. (a) Function error values. (b) Mean constraint violations.

Fig. 8. Convergence graphs for problems g19–g24. (a) Function error values. (b) Mean constraint violations.


TABLE V: COMPUTATIONAL COMPLEXITY

TABLE VI: AVERAGE PROCESSING TIME (IN SECONDS) FOR DIFFERENT BENCHMARK TEST FUNCTIONS FOR 1000 EVALUATIONS

[53], jDE-2 [54], MDE [55], MPDE [56], PCX [57], PESO+ [58], and SaDE [59]. The comparative results are demonstrated in Table VII. The average performance of each algorithm is also computed. This table demonstrates that the cultural CPSO has an average feasible rate of 95.83% on the 24 benchmark problems, which places it as the top-performing algorithm along with DMS-PSO [51], ε_DE [52], and SaDE [59]. The results in the same table indicate that the proposed cultural CPSO has an average success rate of 90.00% on the 24 benchmark problems, making it the third best performing algorithm after ε_DE [52] and PCX [57], with 91.67% and 90.17% success rates, respectively. ε_DE [52] is a differential evolution algorithm adopted to solve constrained problems. Differential evolution is a stable population-based search algorithm that is robust to multimodal problems. The ε_DE improves the behavior of regular differential evolution by adopting a gradient-based mutation that finds feasible points using the gradient of the constraints at an infeasible point [52]. As a result, the ε_DE is suitable for multimodal problems. PCX [57] is a population-based algorithm generator that includes systematic plans, namely, selection, generation, replacement, and update plans. Each essential feature of an optimization task is independently designed via these four plans, which are suitable for optimization problems. The generation plan in PCX uses a planar version of the parent-centric recombination operator to produce an offspring [57].

G. Sensitivity Analysis

In this subsection, the sensitivity of algorithm performance with respect to some parameters is briefly assessed. The parameters to be tuned in the proposed algorithm are the personal acceleration (c1), swarm acceleration (c2), global acceleration (c3), and the rate of information exchange among swarms (r). Notice that the allowable number of particles to migrate (Nmigration) is a fraction of the population size and does not need to be tuned. The tolerance for equality constraints is set to a fixed value of 0.0001 to allow a fair comparison of our results with those of other algorithms. The flight momentum is randomly selected from a uniform distribution and does not have a tuning issue, and the maximum velocity of particles in a specific dimension depends on the particle's positional range and will consequently not be adjusted either.

We have applied sensitivity analysis to a selected set of benchmark problems by varying one parameter at a time while the other parameters are set to the values discussed in the earlier

sections. Table VIII shows the results of the sensitivity analysis. For every set of parameters, 25 independent runs are performed. We have recorded the mean statistical results for feasible solutions, the feasible rate, and the success rate, as defined earlier, for every set of parameters. This table shows that the effect of varying the accelerations on the algorithm's performance is to some extent problem dependent. This makes it difficult to identify the optimum parameters to achieve the best performance. We suggest further analysis of this issue and the implementation of an adaptive dynamic law based upon the need for exploration or exploitation in the f−V space discussed in the spatial knowledge of the belief space. This approach is similar to the one introduced in [38]. This table also shows that by increasing the rate of information exchange, the success rate is greatly improved for all selected benchmark problems. On the other hand, by decreasing this rate, the success rate deteriorates.

H. Real-World Application Problem

In this subsection, the proposed algorithm is applied to a real-world application problem. Spring design is a mechanical design problem [60] whose goal is to minimize the weight of a tension/compression spring, as shown in Fig. 9. There are nonlinear inequality constraints on the minimum deflection, shear stress, and surge frequency, as well as limits on the outside diameter and on the design variables. The design variables are the mean coil diameter (x1), the wire diameter (x2), and the number of active coils (x3), along with four inequality constraints. The mathematical formulation of the problem is as follows:

Minimize: f(x) = (x3 + 2) x1 x2^2   (20)

Subject to:

1 − x1^3 x3 / (71785 x2^4) ≤ 0

(4 x1^2 − x1 x2) / (12566 (x1 x2^3 − x2^4)) + 1/(5108 x2^2) − 1 ≤ 0

1 − 140.45 x2 / (x1^2 x3) ≤ 0

(x1 + x2)/1.5 − 1 ≤ 0   (21)

with the following limits on the variables: 0.25 ≤ x1 ≤ 1.3, 0.05 ≤ x2 ≤ 2.0, and 2 ≤ x3 ≤ 15.
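Assuming the standard tension/compression-spring formulation of this benchmark, the objective and the four constraints can be evaluated as follows (a sketch with our own names; g ≤ 0 means the constraint is satisfied):

```python
def spring_objective(x):
    """Weight of the spring, per (20): f = (x3 + 2) * x1 * x2^2."""
    x1, x2, x3 = x
    return (x3 + 2) * x1 * x2 ** 2

def spring_constraints(x):
    """The four inequality constraints in (21), each as g(x) <= 0.
    Fractions follow the standard spring-design benchmark (an
    assumption, since the printed equations are partly garbled)."""
    x1, x2, x3 = x
    g1 = 1 - x1 ** 3 * x3 / (71785 * x2 ** 4)            # deflection
    g2 = ((4 * x1 ** 2 - x1 * x2)
          / (12566 * (x1 * x2 ** 3 - x2 ** 4))
          + 1 / (5108 * x2 ** 2) - 1)                    # shear stress
    g3 = 1 - 140.45 * x2 / (x1 ** 2 * x3)                # surge frequency
    g4 = (x1 + x2) / 1.5 - 1                             # outside diameter
    return [g1, g2, g3, g4]
```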

Table IX demonstrates the simulation results for the spring design problem using the proposed cultural PSO. The setting of the algorithm is as discussed in Section IV-A. These results are computed after 30 independent runs have been performed. The decision variable values and the optimized solution are summarized in the table. As can be observed from the table, the proposed algorithm finds a better optimum solution than the other algorithms.

V. CONCLUSION

In this paper, we have proposed the cultural CPSO, a novel heuristic to solve constrained optimization problems, which incorporates the information of the objective function and constraint violation to construct a cultural framework consisting


TABLE VII: COMPARISON OF CULTURAL CPSO WITH STATE-OF-THE-ART METHODS IN TERMS OF FEASIBLE RATE AND SUCCESS RATE

TABLE VIII: SENSITIVITY ANALYSIS WITH RESPECT TO PERSONAL ACCELERATION (c1), SWARM ACCELERATION (c2), GLOBAL ACCELERATION (c3), AND RATE OF INFORMATION EXCHANGE (r). THE MEAN RESULTS OF FEASIBLE SOLUTIONS, FEASIBLE RATE, AND SUCCESS RATE ARE COMPUTED OVER 25 INDEPENDENT RUNS

of two sections: a multiple-swarm PSO with the ability of interswarm communication as the population space, and a belief space including four elements, namely, normative knowledge, spatial

knowledge, situational knowledge, and temporal knowledge. Each swarm assembles two lists of particles to share with other swarms based upon the cultural information retrieved from


Fig. 9. Schema for the spring design problem.

TABLE IX: BEST SOLUTION FOR THE SPRING DESIGN APPLICATION PROBLEM

different sections of the belief space. This cultural-information-based communication improves the algorithm's ability to handle the constraints while simultaneously optimizing the objective function. In the proposed algorithm, feedback-like communication channels connect the population and belief spaces to increase the efficiency of the PSO mechanism in the search process. In system science, a negative feedback loop is used to stabilize a system and to tune its output to the desired values, resulting in better behavior of the closed-loop system compared to that of the open-loop one. If we view the optimization problem as the output of a system to be optimized and consider the population of particles and its associated knowledge as the overall system, the behavior of this closed-loop system should be improved by applying the feedback-type communication channels compared to that of the open-loop one, i.e., the population space only.

The cultural CPSO shows competitive results in extensive experiments on the 24 benchmark test functions. The comparison study with the chosen state-of-the-art constrained optimization techniques indicates that the cultural CPSO performs competitively in terms of the commonly used performance metrics, namely, feasible rate and success rate. Furthermore, sensitivity analysis was performed on the parameters of the paradigm, which shows that by increasing the rate of information exchange, the success rate is greatly improved. We have also applied our proposed algorithm to a real-world optimization problem, where it demonstrates better performance than the other algorithms applied to the problem. As future work, the proposed framework for single-objective optimization will be extended into a cultural-based multiobjective PSO (Cultural-MOPSO) [63]–[65] to exploit its robust performance in a dynamic environment where the fitness landscape and constraints change periodically or sporadically. Additionally, applications of Cultural-MOPSO to medical diagnosis [66] and structural control [67] will be explored.

REFERENCES

[1] S. Venkatraman and G. G. Yen, "A generic framework for constrained optimization using genetic algorithms," IEEE Trans. Evol. Comput., vol. 9, no. 4, pp. 424–435, Aug. 2005.

[2] Y. Wang, Y. C. Jiao, and H. Li, "An evolutionary algorithm for solving nonlinear bilevel programming based on a new constraint-handling scheme," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 35, no. 2, pp. 221–232, May 2005.

[3] J. Grefenstette, "Optimization of control parameters for genetic algorithms," IEEE Trans. Syst., Man, Cybern., vol. SMC-16, no. 1, pp. 122–128, Jan. 1986.

[4] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proc. Int. Joint Conf. Neural Netw., Perth, Australia, 1995, pp. 1942–1948.

[5] M. Clerc and J. Kennedy, "The particle swarm—Explosion, stability, and convergence in a multidimensional complex space," IEEE Trans. Evol. Comput., vol. 6, no. 1, pp. 58–73, Feb. 2002.

[6] F. van den Bergh and A. P. Engelbrecht, "A cooperative approach to particle swarm optimization," IEEE Trans. Evol. Comput., vol. 8, no. 3, pp. 225–239, Jun. 2004.

[7] N. R. Samal, A. Konar, S. Das, and A. Abraham, "A closed loop stability analysis and parameter selection of the particle swarm optimization dynamics for faster convergence," in Proc. IEEE Congr. Evol. Comput., Singapore, 2007, pp. 1769–1776.

[8] S. Bhattacharya, A. Konar, and A. Nagar, "A Lyapunov-based extension to PSO dynamics for continuous function optimization," in Proc. Eur. Symp. Comput. Model. Simul., Liverpool, U.K., 2008, pp. 28–33.

[9] N. R. Samal, A. Konar, and A. Nagar, "Stability analysis and parameter selection of a particle swarm optimizer in a dynamic environment," in Proc. Eur. Symp. Comput. Model. Simul., Liverpool, U.K., 2008, pp. 21–27.

[10] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Trans. Evol. Comput., vol. 8, no. 3, pp. 240–255, Jun. 2004.

[11] X. Hu and R. C. Eberhart, "Solving constrained nonlinear optimization problems with particle swarm optimization," in Proc. World Multiconf. Syst., Cybern. Informat., Orlando, FL, 2002, pp. 122–127.

[12] K. E. Parsopoulos and M. N. Vrahatis, "Particle swarm optimization method for constrained optimization problems," in Intelligent Technologies—Theory and Application: New Trends in Intelligent Technologies, vol. 76, P. Sincak, J. Vascak, V. Kvasnicka, and J. Pospichal, Eds. Amsterdam, The Netherlands: IOS Press, 2002, pp. 214–220.

[13] G. Coath and S. K. Halgamuge, "A comparison of constraint handling methods for the application of particle swarm optimization to constrained nonlinear optimization problems," in Proc. IEEE Congr. Evol. Comput., Canberra, Australia, 2003, pp. 2419–2425.

[14] U. Paquet and A. P. Engelbrecht, "A new particle swarm optimizer for linearly constrained optimization," in Proc. IEEE Congr. Evol. Comput., Canberra, Australia, 2003, pp. 227–233.

[15] T. Takahama and S. Sakai, "Solving constrained optimization problems by the constrained particle swarm optimizer with adaptive velocity limit control," in Proc. IEEE Int. Conf. Cybern. Intell. Syst., Bangkok, Thailand, 2006, pp. 1–7.

[16] R. A. Krohling and L. S. Coelho, "Coevolutionary particle swarm optimization using Gaussian distribution for solving constrained optimization problems," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 36, no. 6, pp. 1407–1416, Dec. 2006.

[17] B. Yang, Y. Chen, Z. Zhao, and Q. Han, "A master–slave particle swarm optimization algorithm for solving constrained optimization problems," in Proc. World Congr. Intell. Control Autom., Dalian, China, 2006, pp. 3208–3212.

[18] J. Zheng, Q. Wu, and W. Song, "An improved particle swarm algorithm for solving nonlinear constrained optimization problems," in Proc. IEEE Int. Conf. Natural Comput., Haikou, China, 2007, pp. 112–117.

[19] A. Y. Saber, S. Ahmmed, A. Alshareef, A. Abdulwhab, and K. Abdullah-Al-Mamun, "Constrained non-linear optimization by modified particle swarm optimization," in Proc. Int. Conf. Comput. Inf. Technol., Gyeongju, South Korea, 2008, pp. 1–7.

[20] J. Li, Z. Liu, and C. Peng, "Solving constrained optimization via dual particle swarm optimization with stochastic ranking," in Proc. Int. Conf. Comput. Sci. Softw. Eng., Wuhan, China, 2008, pp. 1215–1218.

[21] J. I. Flores-Mendoza and E. Mezura-Montes, "Dynamic adaptation and multiobjective concepts in a particle swarm optimizer for constrained optimization," in Proc. IEEE Congr. Evol. Comput., Hong Kong, 2008, pp. 3427–3434.

[22] T. O. Ting, K. P. Wong, and C. Y. Chung, "Hybrid constrained genetic algorithm/particle swarm optimization load flow algorithm," IET Gener., Transm., Distrib., vol. 2, no. 6, pp. 800–812, Nov. 2008.

[23] Z. Liu, C. Wang, and J. Li, "Solving constrained optimization via a modified genetic particle swarm optimization," in Proc. Int. Workshop Forensic Appl. Tech. Telecommun., Inf., Multimedia, Adelaide, Australia, 2008, pp. 217–220.

[24] G. G. Yen and W. Leong, "Constraint handling in particle swarm optimization," Int. J. Swarm Intell. Res., vol. 1, no. 1, pp. 42–63, 2010.


[25] R. G. Reynolds, “An introduction to cultural algorithms,” in Proc. Annu. Conf. Evol. Program., River Edge, NJ, 1994, pp. 131–139.

[26] R. G. Reynolds, “Cultural algorithms: Theory and applications,” in Advanced Topics in Computer Science Series: New Ideas in Optimization, D. Corne, M. Dorigo, and F. Glover, Eds. New York: McGraw-Hill, 1999, pp. 367–377.

[27] R. G. Reynolds and W. Sverdlik, “Problem solving using cultural algorithms,” in Proc. IEEE Congr. Evol. Comput., Orlando, FL, 1994, pp. 645–650.

[28] C. J. Chung and R. G. Reynolds, “A testbed for solving optimization problems using cultural algorithms,” in Proc. Annu. Conf. Evol. Program., San Diego, CA, 1996, pp. 225–236.

[29] C. J. Chung and R. G. Reynolds, “Function optimization using evolutionary programming with self-adaptive cultural algorithms,” in Proc. Asia-Pacific Conf. Simul. Evol. Learn., Taejon, South Korea, 1996, pp. 17–26.

[30] Z. Kobti, R. G. Reynolds, and T. Kohler, “A multi-agent simulation using cultural algorithms: The effect of culture on the resilience of social systems,” in Proc. IEEE Congr. Evol. Comput., Canberra, Australia, 2003, pp. 1988–1995.

[31] R. G. Reynolds, B. Peng, and J. Brewster, “Cultural swarms II: Virtual algorithm emergence,” in Proc. IEEE Congr. Evol. Comput., Canberra, Australia, 2003, pp. 1972–1979.

[32] R. Iacoban, R. G. Reynolds, and J. Brewster, “Cultural swarms: Modeling the impact of culture on social interaction and problem solving,” in Proc. IEEE Swarm Intell. Symp., Indianapolis, IN, 2003, pp. 205–211.

[33] B. Peng, R. G. Reynolds, and J. Brewster, “Cultural swarms,” in Proc. IEEE Congr. Evol. Comput., Canberra, Australia, 2003, pp. 1965–1971.

[34] B. Peng and R. G. Reynolds, “Cultural algorithms: Knowledge learning in dynamic environments,” in Proc. IEEE Congr. Evol. Comput., Portland, OR, 2004, pp. 1751–1758.

[35] R. L. Becerra and C. A. C. Coello, “Culturizing differential evolution for constrained optimization,” in Proc. Mexican Int. Conf. Comput. Sci., Colima, Mexico, 2004, pp. 304–311.

[36] R. L. Becerra and C. A. C. Coello, “Optimization with constraints using a cultured differential evolution approach,” in Proc. Genetic Evol. Comput. Conf., Washington, DC, 2005, pp. 27–34.

[37] R. G. Reynolds and B. Peng, “Cultural algorithms: Computational modeling of how cultures learn to solve problems: An engineering example,” Cybern. Syst., vol. 36, no. 8, pp. 753–771, 2005.

[38] M. Daneshyari and G. G. Yen, “Cultural-based multiobjective particle swarm optimization,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 41, no. 2, pp. 553–567, Apr. 2011.

[39] X. Jin and R. G. Reynolds, “Using knowledge-based system with hierarchical architecture to guide the search of evolutionary computation,” in Proc. IEEE Int. Conf. Tools Artif. Intell., Chicago, IL, 1999, pp. 29–36.

[40] X. Yuan, A. Su, Y. Yuan, X. Zhang, B. Cao, and B. Yang, “A chaotic hybrid cultural algorithm for constrained optimization,” in Proc. IEEE Int. Conf. Genetic Evol. Comput., Jingzhou, China, 2008, pp. 307–310.

[41] W. Tang and Y. Li, “Constrained optimization using triple spaces cultured genetic algorithm,” in Proc. IEEE Int. Conf. Nat. Comput., Jinan, China, 2008, pp. 589–593.

[42] Z. Zhao and H. Gao, “FIR digital filters based on cultural particle swarm optimization,” in Proc. Int. Workshop Inf. Secur. Appl., Qingdao, China, 2009, pp. 252–255.

[43] L. A. Sjaastad, “The costs and returns of human migration,” J. Political Econ., vol. 70, no. 5, pp. 80–93, Oct. 1962.

[44] G. Bottomley, From Another Place: Migration and the Politics of Culture. Cambridge, U.K.: Cambridge Univ. Press, 1992.

[45] Y. G. Woldesenbet and G. G. Yen, “Dynamic evolutionary algorithm with variable relocation,” IEEE Trans. Evol. Comput., vol. 13, no. 3, pp. 500–513, Jun. 2009.

[46] G. G. Yen and M. Daneshyari, “Diversity-based information exchange among multiple swarms in particle swarm optimization,” Int. J. Comput. Intell. Appl., vol. 7, no. 1, pp. 57–75, 2008.

[47] G. G. Yen and W. F. Leong, “Dynamic multiple swarms in multiobjective particle swarm optimization,” IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 39, no. 4, pp. 890–911, Jul. 2009.

[48] B. Tessema and G. G. Yen, “An adaptive penalty formulation for constrained evolutionary optimization,” IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 39, no. 3, pp. 565–578, May 2009.

[49] J. J. Liang, T. P. Runarsson, E. Mezura-Montes, M. Clerc, P. N. Suganthan, C. A. C. Coello, and K. Deb, “Problem definitions and evaluation criteria for the CEC 2006 special session on constrained real-parameter optimization,” Nanyang Technological Univ., Singapore, Tech. Rep., 2006.

[50] K. Zielinski and R. Laur, “Constrained single-objective optimization using particle swarm optimization,” in Proc. IEEE Congr. Evol. Comput., Vancouver, BC, Canada, 2006, pp. 443–450.

[51] J. J. Liang and P. N. Suganthan, “Dynamic multi-swarm particle swarm optimizer with a novel constraint-handling mechanism,” in Proc. IEEE Congr. Evol. Comput., Vancouver, BC, Canada, 2006, pp. 9–16.

[52] T. Takahama and S. Sakai, “Constrained optimization by the ε constrained differential evolution with gradient-based mutation and feasible elites,” in Proc. IEEE Congr. Evol. Comput., Vancouver, BC, Canada, 2006, pp. 1–8.

[53] S. Kukkonen and J. Lampinen, “Constrained real-parameter optimization with generalized differential evolution,” in Proc. IEEE Congr. Evol. Comput., Vancouver, BC, Canada, 2006, pp. 207–214.

[54] J. Brest, V. Zumer, and M. S. Maucec, “Self-adaptive differential evolution algorithm in constrained real-parameter optimization,” in Proc. IEEE Congr. Evol. Comput., Vancouver, BC, Canada, 2006, pp. 215–222.

[55] E. Mezura-Montes, J. Velazquez-Reyes, and C. A. C. Coello, “Modified differential evolution for constrained optimization,” in Proc. IEEE Congr. Evol. Comput., Vancouver, BC, Canada, 2006, pp. 25–32.

[56] M. F. Tasgetiren and P. N. Suganthan, “A multi-populated differential evolution algorithm for solving constrained optimization problem,” in Proc. IEEE Congr. Evol. Comput., Vancouver, BC, Canada, 2006, pp. 33–40.

[57] A. Sinha, A. Srinivasan, and K. Deb, “A population-based parent centric procedure for constrained real parameter optimization,” in Proc. IEEE Congr. Evol. Comput., Vancouver, BC, Canada, 2006, pp. 239–245.

[58] A. E. Munoz-Zavala, A. Hernandez-Aguirre, E. R. Villa-Diharce, and S. Botello-Rionda, “PESO+ for constrained optimization,” in Proc. IEEE Congr. Evol. Comput., Vancouver, BC, Canada, 2006, pp. 231–238.

[59] V. L. Huang, A. K. Qin, and P. N. Suganthan, “Self-adaptive differential evolution algorithm for constrained real-parameter optimization,” in Proc. IEEE Congr. Evol. Comput., Vancouver, BC, Canada, 2006, pp. 17–24.

[60] M. Daneshyari and G. G. Yen, “Talent-based social algorithm for optimization,” in Proc. IEEE Congr. Evol. Comput., Portland, OR, 2005, pp. 786–791.

[61] J. S. Arora, Introduction to Optimum Design. New York: McGraw-Hill, 1989.

[62] T. Ray and P. Saini, “Engineering design optimization using a swarm with an intelligent information sharing among individuals,” Eng. Optim., vol. 33, no. 6, pp. 735–748, 2001.

[63] H. Lu and G. G. Yen, “Rank-density-based multiobjective genetic algorithm and benchmark test function study,” IEEE Trans. Evol. Comput., vol. 7, no. 4, pp. 325–343, Aug. 2003.

[64] Y. G. Woldesenbet, G. G. Yen, and B. G. Tessema, “Constraint handling in multi-objective evolutionary optimization,” IEEE Trans. Evol. Comput., vol. 13, no. 3, pp. 514–525, Jun. 2009.

[65] W. F. Leong and G. G. Yen, “PSO-based multiobjective optimization with dynamic population size and adaptive local archives,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 38, no. 5, pp. 1270–1293, Oct. 2008.

[66] P. Meesad and G. G. Yen, “Combined numerical and linguistic knowledge representation and its application to medical diagnosis,” IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 33, no. 2, pp. 206–222, Mar. 2003.

[67] L. Davis, D. Hyland, and G. G. Yen, “Adaptive neural control for space vibration suppression,” Smart Mater. Struct., vol. 8, no. 6, pp. 753–766, Dec. 1999.

Moayed Daneshyari (S’96–M’08) received the B.S. degree in electrical engineering from the Sharif University of Technology, Tehran, Iran, in 1995 and the M.S. degree in physics and the Ph.D. degree in electrical and computer engineering from Oklahoma State University, Stillwater, in 2007 and 2010, respectively.

He has been an Assistant Professor with the Department of Technology, Elizabeth City State University, Elizabeth City, NC.

Gary G. Yen (S’87–M’88–SM’97–F’09) received the Ph.D. degree in electrical and computer engineering from the University of Notre Dame, Notre Dame, IN, in 1992.

He is currently a Professor with the School of Electrical and Computer Engineering, Oklahoma State University, Stillwater.