
Soft Computing
https://doi.org/10.1007/s00500-018-3098-9

METHODOLOGIES AND APPLICATION

Particle swarm optimization with convergence speed controller for large-scale numerical optimization

Han Huang1 · Liang Lv1 · Shujin Ye2 · Zhifeng Hao3

© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Abstract
Particle swarm optimization (PSO) has a high convergence speed, yet its major drawback is premature convergence when solving large-scale optimization problems. We argue that it can be empowered by adaptively adjusting its convergence speed for these problems. In this paper, a convergence speed controller is proposed to improve the performance of PSO for large-scale optimization. As an additional operator of PSO, the controller is applied periodically and independently. It has two conditions and rules for adjusting the convergence speed of PSO, one for premature convergence and the other for slow convergence. The effectiveness of the PSO with convergence speed controller is evaluated on the benchmark functions of CEC'2010. The numerical results indicate that the proposed controller helps PSO keep a balance between convergence speed and swarm diversity during the optimization process. The results also support our argument that, when working with the convergence speed controller, PSO can on average outperform other PSOs and cooperative coevolution methods for large-scale optimization.

Keywords Large-scale optimization · Particle swarm optimization · Convergence speed controller · Numerical optimization

Communicated by V. Loia.

Corresponding author: Shujin Ye, [email protected]
Han Huang, [email protected] · Liang Lv, [email protected] · Zhifeng Hao, [email protected]

1 School of Software Engineering, South China University of Technology, Guangzhou 510006, China
2 Hong Kong Baptist University, Kowloon Tong, Hong Kong, China
3 Foshan University, Foshan 528000, China

1 Introduction

Many real-world optimization problems (Afshar 2012; Akay and Karaboga 2012) contain a great number of variables and can be abstracted into large-scale optimization problems (LOPs), such as engineering design optimization (Akay and Karaboga 2012), airfoil design (Vicini and Quagliarella 1999), classification (Gu et al. 2018), image matting (Cai et al. 2017), resource allocation (Zhou and Zhang 2016) and capacitated arc routing problems (Mei et al. 2014). Most of the existing heuristic algorithms, such as particle swarm optimization (Clerc and Kennedy 2002; Huang et al. 2012), have difficulty in tackling these complex problems. PSO is one of the effective meta-heuristic algorithms for continuous optimization owing to its fast-convergence property. However, it loses its efficiency when solving large and complex problems, such as optimization instances with high dimensions. Therefore, our research focuses on how to improve PSO for large-scale optimization.

Recently, owing to the "curse of dimensionality" (Omidvar et al. 2015), more and more researchers of meta-heuristic algorithms have paid attention to solving large-scale optimization problems. The difficulty lies in the fact that the complexity of a problem usually increases with its size and that the solution space grows exponentially with the problem size (Tang et al. 2009).

For the research of LOPs, there are three frequently tested benchmark suites of large-scale optimization problems, provided by the competitions of CEC'2008 (Tang et al. 2007), CEC'2010 (Tang et al. 2009) and CEC'2013 (Li et al. 2013). Among them, the problems of CEC'2010 mainly include various kinds of partially separable problems between the separable and fully nonseparable cases.


Real-world optimization problems will most likely consist of different groups of parameters with strong dependence within but little interaction between the groups (Tang et al. 2009). Furthermore, a problem becomes more complex when there are nonseparable dimensions. Therefore, we choose the benchmark problems of CEC'2010 to test the algorithms under comparison in our research.

PSO easily suffers from premature convergence when solving multimodal and nonseparable optimization problems (Liang et al. 2006). Moreover, PSO was shown to perform poorly when the dimensionality of the problem increases (Li and Yao 2012). Compared with other heuristic algorithms such as differential evolution (DE) and memetic algorithms (MA), this perception of PSO's inability to handle high-dimensional problems seems to be widely held (Van Den Bergh 2006; Yang et al. 2008a). To improve the performance of PSO, we attempt to help PSO keep its rapid-convergence advantage while guarding against premature convergence.

We propose a convergence speed controller (CSC) to achieve this goal. Premature convergence can be considered a sick status of PSO, and CSC plays a diagnostic role for the PSO. Different from other strategies (Chen et al. 2013; Li and Yao 2012; Liang et al. 2006) that prevent premature convergence by revising the PSO operators directly, CSC is applied independently of the host PSO and acts on it conditionally, according to indices (e.g., lbest and gbest) that reflect the degree of premature convergence. The proposed CSC can give the host PSO an early warning on large-scale optimization problems before it falls into premature convergence.

To verify our hypothesis, we take the particle swarm optimization algorithm of Shi and Eberhart (1998) as the host of CSC. It is the first version of PSO with an inertia weight, extending the original PSO (Eberhart and Kennedy 1995), and it has strongly influenced subsequent PSOs. It is usually shown to be a weaker algorithm for complex optimization problems in published comparisons (Chen et al. 2013; Clerc and Kennedy 2002; Huang et al. 2012; Li and Yao 2012; Liang et al. 2006). We argue that PSO can be empowered by the proposed CSC through adaptively adjusting its convergence speed. Therefore, PSO is modified into a PSO with the proposed CSC (PSO–CSC). The argument is supported by the finding that PSO–CSC greatly improves the performance of PSO for LOPs.

The rest of this paper is organized as follows. Section 2 reviews related work on meta-heuristic algorithms for large-scale optimization. Section 3 introduces the proposed CSC and PSO-CSC in detail. Section 4 presents an investigation of parameter settings and an analysis of the search behavior of CSC. Section 5 presents the experimental results and comparisons. Conclusions are drawn in Sect. 6.

2 Related work

Since several real-world problems (Mei et al. 2014; Vicini and Quagliarella 1999) are formulated as optimization over a large number of variables, various meta-heuristic algorithms have been proposed to handle large-scale optimization problems. The existing algorithms for LOPs can be mainly classified into the following three categories (Mahdavi et al. 2015).

2.1 PSO hybridized with other algorithmic techniques for LOPs

The first category is PSO hybridized with other algorithmic techniques to enhance its performance.

Zhao et al. (2008) introduced a dynamic multi-swarm particle swarm optimizer (DMS-PSO) with a local search technique for LOPs. DMS-PSO was tested on seven benchmark problems of CEC'2008 (Tang et al. 2007), and their work (Zhao et al. 2008) shows that DMS-PSO can find reasonable solutions for all of the problems. To improve the results, Oca Montes et al. (2011) proposed an incremental particle swarm for large-scale continuous optimization problems. Their results also indicate how to tune the parameters of PSO for LOPs. Li and Yao (2012) proposed a new cooperative coevolving particle swarm optimization algorithm (CCPSO2) to scale up PSO algorithms for LOPs. CCPSO2 adopts a new updating rule of the PSO position that relies on Cauchy and Gaussian distributions to sample new points in the search space, and a scheme to dynamically determine the coevolving subcomponent sizes of the variables (Li and Yao 2012). CCPSO2 was shown to be more effective than DMS-PSO and three other algorithms for LOPs in experiments on eleven benchmark functions of 100D, 500D and 1000D and three functions of 2000D, all introduced from CEC'2008 (Tang et al. 2007).

Since the benchmark functions provided by CEC'2010 (Tang et al. 2009) were presented, two hybrid PSOs have been proposed for solving them. One is a hybrid swarm intelligence optimizer (Vicini and Quagliarella 1999) based on particles and artificial bees for high-dimensional search spaces. It was shown to achieve better performance than PSO (Eberhart and Kennedy 1995) and the artificial bee colony algorithm (ABC) (Basturk and Karaboga 2006) in most cases on the 20 benchmark functions of CEC'2010 (Tang et al. 2009). The other is a hybrid of PSO with the imperialist competitive algorithm (ICA) (Ghodrati et al. 2012), which was shown to be superior to PSO (Eberhart and Kennedy 1995) and ICA (Atashpaz-Gargari and Lucas 2007) in solving LOPs. Furthermore, new swarm intelligence algorithms have also been proposed for LOPs.


For example, the competitive swarm optimizer (CSO) (Cheng and Jin 2015) was shown to be efficient for large-scale optimization problems with more than 1000 dimensions.

2.2 Improved DE for LOPs

The second category introduces additional strategies to improve the performance of DE for LOPs.

Brest et al. (2010) modified the original DE (Storn and Price 1997) with cooperative coevolution as a dimension decomposition mechanism. Their DE (Brest et al. 2010) was shown to be robust on seven benchmark functions of CEC'2008 (Tang et al. 2007). They later improved this work by using a small and varying population size (Brest et al. 2012) to solve the 20 benchmark functions of CEC'2010 (Tang et al. 2009). This DE (Brest et al. 2012) was shown to be highly competitive with the algorithms presented at the CEC'2010 (Tang et al. 2009) competition. Similarly, Wang et al. (2011) employed an adaptive population size mechanism to enhance the performance of DE for LOPs, as demonstrated by experimental results on the benchmark functions of CEC'2008. Takahama and Sakai (2012) developed a differential evolution with landscape modality detection and a diversity archive (LMDEa) for LOPs. LMDEa was shown to improve search efficiency compared with two cooperative coevolution methods (Yang et al. 2008a, b) on 15 of the 20 problems provided by CEC'2010 on average. In general, this category of DEs has several tuning parameters that can help DE robustly avoid premature convergence when solving LOPs, but the tuning is largely ad hoc. Yang et al. (2008a) put forward a differential evolution with cooperative coevolution (DECC-G). Omidvar et al. (2014) proposed a cooperative coevolution with differential grouping (DECC-DG), which was shown to perform well for large-scale optimization.

2.3 Combination of multiple algorithms for LOPs

The third category combines multiple algorithms whose strengths can jointly prevent premature convergence when solving LOPs.

The memetic algorithm based on local search chains (MA-SW-Chains) (Molina et al. 2010) is a combined method of global and local operators for LOPs. MA-SW-Chains was shown to obtain superior results on the benchmark problems (Tang et al. 2009) defined in the special session of CEC'2010. Besides meta-heuristic algorithms, different local search methods can also be combined to tackle LOPs, like the multiple trajectory search (MTS) (Tseng and Chen 2008). Yang et al. (2008b) subsequently put forward a framework of multilevel cooperative coevolution (MLCC) for large-scale optimization. Their proposed methods significantly outperformed several existing algorithms on the benchmark functions of CEC'2010. After that, the CC framework was extended with supplementary strategies to improve the solution of LOPs, like cooperative coevolution with delta grouping (Omidvar et al. 2010) and cooperative coevolution with global search (Zhang and Li 2012). The cooperative coevolution orthogonal artificial bee colony (CCOABC) (Ren and Wu 2013) was demonstrated to perform well for LOPs by experimental results on the benchmark functions of CEC'2008 (Tang et al. 2007) and CEC'2010 (Tang et al. 2009).

From the discussion above, preventing premature convergence is not only a challenging task in low-dimensional optimization (Chen et al. 2013) but also an important issue for LOPs, especially for PSOs.

Following the review of the three categories, the proposed strategy needs to be clearly different from previous studies and to perform better for LOPs. Therefore, CSC is designed to be nonhybrid, of low complexity, independent of local search and easy to implement, while keeping the fast-converging property of PSO.

3 PSO based on convergence speed controller

The terminology convergence for PSO was defined in several theoretical studies (Bergh and Engelbrecht 2010; Schmitt 2015; Schmitt and Wanka 2015), two types of which are discussed in this paper. The first is swarm convergence (Bergh and Engelbrecht 2010), which means that all of the particles in the swarm have nearly the same position. It can be measured by checking whether the distance between two arbitrary particles is near zero. The second is global attractor convergence (Schmitt and Wanka 2015), which means that the fitness of the best-so-far particle approximates that of the optimal solution. It can be measured by the fitness variation (becoming smaller for minimization problems and larger for maximization problems) of the best-so-far particle. The convergence speed discussed in this paper accordingly has two types. The first is the speed of swarm convergence (Bergh and Engelbrecht 2010), which measures how fast the positions of the particles become very similar; it also reflects the reduction of swarm diversity. The second is the speed of global attractor convergence (Schmitt and Wanka 2015), which measures how fast the fitness of the best-so-far particle approaches that of the optimum. Usually, the speed of swarm convergence should not be too fast, because it works against swarm diversity, which many researchers consider to greatly impact the performance of PSO (Clerc and Kennedy 2002; Huang et al. 2012; Li and Yao 2012; Liang et al. 2006; Shi and Eberhart 1998). The speed of global attractor convergence is expected to be fast under the condition of good swarm diversity.


Therefore, this paper proposes a convergence speed controller of PSO that adjusts the convergence speeds of the swarm and the global attractor, keeping the balance between convergence speed and swarm diversity (Li and Yao 2012; Liang et al. 2006; Shi and Eberhart 1998).
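As a reading aid (our illustration, not part of the original paper), the two convergence measures can be sketched in Python as follows, assuming the swarm is stored as a NumPy array of shape (K, n) and the best-so-far fitness values are recorded per iteration:

import numpy as np

def swarm_spread(positions: np.ndarray) -> float:
    """Mean pairwise distance between particles; a value near zero indicates
    swarm convergence (all particles occupy nearly the same position)."""
    diffs = positions[:, None, :] - positions[None, :, :]
    return float(np.mean(np.linalg.norm(diffs, axis=-1)))

def attractor_progress(best_history) -> float:
    """Most recent decrease of the best-so-far fitness (minimization);
    a small value indicates slow global attractor convergence."""
    if len(best_history) < 2:
        return float("inf")
    return best_history[-2] - best_history[-1]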

3.1 Procedure of PSO-CSC

The proposed CSC contains two conditions and two rules. First, Condition I and Rule I are designed by considering the property of swarm convergence (Bergh and Engelbrecht 2010): Condition I examines whether the host PSO is about to fall into premature convergence, and Rule I correspondingly regenerates the swarm to slow down the speed of swarm convergence. Second, Condition II and Rule II are designed to adjust global attractor convergence (Schmitt and Wanka 2015): Condition II checks whether the host PSO converges at a slow speed, and Rule II renews the swarm to accelerate the speed of global attractor convergence. From the viewpoint of the algorithm description, the proposed CSC is an additional block of pseudocode in the procedure of the host PSO (PSO in this study). The pseudocode of PSO-CSC is illustrated by Algorithm 1, and the corresponding steps of Algorithm 1 are introduced as follows.

Step 1 and Step 2 directly follow the procedure of PSO (Shi and Eberhart 1998) without any modification. According to Shi and Eberhart (1998), X_i = (x_i^1, x_i^2, ..., x_i^n) and V_i = (v_i^1, v_i^2, ..., v_i^n) denote the i-th particle's position vector and velocity vector, where n is the dimension of the objective function, i = 1, 2, ..., K, and K is the swarm size (number of particles). pBest_i is the historically best position of the i-th particle, and gBest is the historically best position of the entire swarm, where i = 1, 2, ..., K. According to the parameter setting of PSO (Shi and Eberhart 1998), χ is the inertia weight, c1 and c2 are the acceleration constants, and r_1^j and r_2^j are random numbers uniformly distributed in [0, 1], where j = 1, 2, ..., n. f is the evaluation function of the particles.

With the CSC, we aim to keep the rapid-convergence advantage of PSO (Shi and Eberhart 1998) by using the parameter setting χ = 0.4, c1 = 1.5 and c2 = 2. This setting helps PSO converge at fully rapid speed according to the analysis of Shi and Eberhart (1998). However, rapid convergence can easily lead to convergence to a local optimum. Hence, during the search process, Step 3 adjusts the convergence speed to help PSO avoid premature convergence.
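For concreteness, the Step 2 update with the quoted parameter values can be written out as a short Python sketch (our own naming, not the authors' code):

import numpy as np

def update_particle(x, v, pbest, gbest, chi=0.4, c1=1.5, c2=2.0, rng=np.random):
    """One velocity/position update for a single particle; all arrays have length n."""
    r1 = rng.random(x.shape)   # r1^j ~ U[0, 1], one draw per dimension
    r2 = rng.random(x.shape)   # r2^j ~ U[0, 1]
    v_new = chi * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x_new = x + v_new
    return x_new, v_new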

Step 3 is the procedure of the proposed CSC. τ is a positive integer, and the CSC examines two conditions of the host PSO every τ iterations. Condition I checks whether PSO has fallen into, or is about to fall into, premature convergence, and Condition II checks whether PSO converges slowly. Each condition triggers a rule that refreshes the entire swarm and the other index items like pBest_1, ..., pBest_K and gBest. D is an adaptive parameter designed for controlling the range of the local search of the swarm. The next two subsections present the specific contents of the conditions and rules.

Algorithm 1: PSO-CSC
Input: an optimization problem of n dimensions
Output: gBest as the best-so-far solution for the problem
// Step 1: Initialization
Randomly initialize the positions and velocities of the entire swarm, t = 0, D = 0 and gBest = X_1;
for each particle i ∈ {1, 2, ..., K} do
  pBest_i = X_i;
  if f(pBest_i) < f(gBest) then gBest = pBest_i end
end
// End of Step 1
while the termination criterion is not met do
  t = t + 1;
  // Step 2: Particle updating
  for each particle i ∈ {1, 2, ..., K} do
    for each dimension j ∈ {1, 2, ..., n} do
      v_i^j ← χ·v_i^j + c1·r_1^j·(pBest_i^j − x_i^j) + c2·r_2^j·(gBest^j − x_i^j);
      x_i^j ← x_i^j + v_i^j;
    end
    if f(X_i) < f(pBest_i) then pBest_i = X_i end
    if f(pBest_i) < f(gBest) then gBest = pBest_i end
  end
  // End of Step 2
  // Step 3: Convergence speed controller
  if t mod τ = 0 then
    D = D + τ;
    if Condition I is met then
      run Rule I to slow down the speed of swarm convergence;
    end
    if Condition II is met then
      run Rule II to accelerate the speed of global attractor convergence;
      D = 0;
    end
  end
  // End of Step 3
end

The CSC is adaptively applied according to the investigated vectors pBest_1, ..., pBest_K and gBest. When the vectors imply that PSO may converge prematurely (Condition I), the swarm is directly abandoned and a new one is generated randomly (Rule I), which is considered a strategy of slowing swarm convergence. When the values reflect that PSO may converge too slowly (Condition II), the swarm is renewed based on gBest and a stochastic variance to lead the particles to move quickly toward positions of better evaluation value (Rule II).
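As a reading aid (not the authors' code), the outer loop of Algorithm 1 can be sketched in Python as follows; the callables step, condition_1, rule_1, condition_2 and rule_2 are hypothetical placeholders for the Step 2 update and the conditions and rules defined in Sects. 3.2 and 3.3:

def pso_csc_loop(step, condition_1, rule_1, condition_2, rule_2,
                 n_iterations, tau=150):
    """Skeleton of the PSO-CSC outer loop: the controller (Step 3) runs every
    tau iterations, independently of the Step 2 particle update."""
    D = 0                                  # adaptive range parameter used by Rule II
    for t in range(1, n_iterations + 1):
        step()                             # Step 2: update velocities, positions, pBest, gBest
        if t % tau == 0:                   # Step 3: controller check every tau iterations
            D += tau
            if condition_1():              # particles too similar -> premature convergence
                rule_1()                   # regenerate swarm, keep pBest and gBest
            if condition_2():              # f(gBest) improved too little over tau iterations
                rule_2(D)                  # resample around gBest; sigma shrinks with D
                D = 0

The sketch only fixes where the controller intervenes; the host PSO itself runs unmodified between checks.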


3.2 Condition I and Rule I

Condition I is designed to check whether the convergence speed of the host PSO needs to be slowed down, and Rule I is a procedure of swarm regeneration that slows down the convergence speed. The problem of premature convergence can thus be alleviated by adaptively slowing down the convergence speed of PSO.

Condition I is a detection logic for swarm convergence (Bergh and Engelbrecht 2010). One of the most widely accepted ideas for detecting premature convergence is that the particles of the entire swarm become very similar in their positions. Therefore, we use the cosine similarity of two randomly selected particles' positions to express Condition I. Given two particles' positions X_a and X_b, their cosine similarity is calculated by expression (1), where a and b are random integers from 1 to K.

cos(X_a, X_b) = (Σ_{j=1}^{n} x_a^j · x_b^j) / ( √(Σ_{j=1}^{n} (x_a^j)²) · √(Σ_{j=1}^{n} (x_b^j)²) )    (1)

Hence, the condition for premature convergence detection (Condition I in Algorithm 1) is designed as expression (2):

Condition I: cos(X_a, X_b) > δ1    (2)

where 0 < δ1 < 1 is a threshold value of the particle similarity. If PSO has not yet converged to the optimal solution, convergence is possibly premature when the condition of expression (2) holds during the PSO search process. Therefore, the condition can be considered a detection condition for swarm convergence in most cases.

The procedure of Rule I in Algorithm 1 is described in Algorithm 2. Rule I is similar to an initialization of particle positions and velocities, but it differs from the initialization of Step 1 in Algorithm 1 since pBest_1, ..., pBest_K and gBest remain unchanged. First, Rule I produces a particle swarm with new positions for the remaining generations to increase the diversity of the swarm; hence, the convergence speed of the host PSO is slowed after Rule I is applied. Second, the velocities of the particles are set to zero, while the current pBest_1, ..., pBest_K and gBest are kept. These operators help the particles move faster to the promising area. Generally, Rule I may give the host PSO another chance to reach the global optimum by producing differently positioned particles toward the potential neighborhood of the optimal solution.

Algorithm 2: sub-algorithm Rule I
Input: X_1, ..., X_K; V_1, ..., V_K; pBest_1, ..., pBest_K and gBest
Output: new X_1, ..., X_K; new V_1, ..., V_K; unchanged pBest_1, ..., pBest_K and gBest
for each particle i ∈ {1, 2, ..., K} do
  for each dimension j ∈ {1, 2, ..., n} do
    x_i^j ← random(x_min^j, x_max^j); v_i^j ← 0;
  end
end

Figures 1 and 2 demonstrate two cases of Condition I and Rule I. Figure 1 illustrates that the swarm remains the same when two randomly selected particles are not similar to each other in position. Figure 2 displays that the swarm is regenerated but gBest remains unchanged when Condition I is met.
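A minimal Python sketch of Condition I and Rule I, under our own naming and assuming the swarm positions are held in a NumPy array of shape (K, n), could look as follows:

import numpy as np

def condition_1(positions: np.ndarray, delta_1: float = 0.9, rng=np.random) -> bool:
    """Cosine similarity of two randomly chosen particles, tested against delta_1."""
    K = positions.shape[0]
    a, b = rng.choice(K, size=2, replace=False)      # two random particle indices
    xa, xb = positions[a], positions[b]
    cos_sim = xa @ xb / (np.linalg.norm(xa) * np.linalg.norm(xb))   # expression (1)
    return cos_sim > delta_1                          # expression (2)

def rule_1(positions: np.ndarray, velocities: np.ndarray, x_min, x_max, rng=np.random):
    """Regenerate positions uniformly in the box and zero the velocities;
    pBest and gBest are left untouched, as in Algorithm 2."""
    positions[:] = rng.uniform(x_min, x_max, size=positions.shape)
    velocities[:] = 0.0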

3.3 Condition II and Rule II

In addition to Rule I's role of slowing convergence, an acceleration of the global attractor convergence speed is also necessary to make PSO-CSC converge rapidly toward better solutions. Rapid convergence toward better solutions is regarded as a strength of PSO; hence, Condition II and Rule II are designed to make use of this strength.

Rule II is applied when Condition II is satisfied. Condition II differs from Condition I because the evaluation function value is investigated instead of the position values. To save computational time, gBest is the only investigated index, recorded as a time sequence gBest(t) over generations t, where t = Nτ and N = 1, 2, .... Therefore, Condition II is presented as expression (3):

Condition II: f(gBest(t − τ)) − f(gBest(t)) < δ2 · |f(gBest(t − τ))|    (3)

Fig. 1 An example of the case where Condition I is not met; Rule I is not applied

Fig. 2 An example of the case where Condition I is met; Rule I is applied


where δ2 > 0 is a threshold value for the difference in evaluation function value. Because gBest(t) is the best-so-far particle after t iterations, for a minimization problem it always holds that f(gBest(t − τ)) ≥ f(gBest(t)); therefore, the left side of expression (3) is greater than or equal to zero. Expression (3) states that Rule II runs if the improvement of the evaluation function value is not significant enough within a cycle of τ generations. In fact, it is necessary to push the particles to move quickly toward gBest when the host PSO cannot make obvious progress in τ generations.

When Condition II is satisfied, Rule II is applied for convergence speed acceleration. The procedure of Rule II in Algorithm 1 is introduced by Algorithm 3.

Algorithm 3: sub-algorithm Rule II
Input: X_1, ..., X_K; V_1, ..., V_K; pBest_1, ..., pBest_K and gBest
Output: new X_1, ..., X_K; new pBest_1, ..., pBest_K; updated gBest; unchanged V_1, ..., V_K
for each particle i ∈ {1, 2, ..., K} do
  for each dimension j ∈ {1, 2, ..., n} do
    x_i^j ← gBest^j + N(0, σ_j);
  end
  pBest_i = X_i;
  if f(pBest_i) < f(gBest) then gBest = pBest_i end
end

Rule II has the same input and output items as Rule I, but all items are renewed except for the particle velocity vectors V_1, ..., V_K. The particle position vectors X_1, ..., X_K are updated as the sum of the j-th dimension component of gBest and a normally distributed random number. The standard deviation of the normal distribution is calculated adaptively by expression (4):

σ_j = (x_max^j − x_min^j) / D    (4)

where [x_min^j, x_max^j] is the feasible interval of the particle position in the j-th dimension. According to Step 3 of Algorithm 1, the parameter D in expression (4) is always a multiple of τ, and it is reset to zero after Rule II is applied. Therefore, D reflects for how many generations the evaluation value of gBest has kept improving significantly. On the assumption that a higher-quality solution is, with higher probability, closer to the global optimum than a lower-quality one, Rule II generates the new swarm of particles in the neighborhood of gBest. Expression (4) adaptively controls the scale of this neighborhood: the larger D is, the smaller the neighborhood.
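For illustration (ours, not the authors' implementation), Condition II and Rule II for a minimization problem can be sketched as follows, with σ computed per dimension according to expression (4):

import numpy as np

def condition_2(f_gbest_prev: float, f_gbest_now: float, delta_2: float = 1e-3) -> bool:
    """Expression (3): improvement over the last tau iterations is too small."""
    return (f_gbest_prev - f_gbest_now) < delta_2 * abs(f_gbest_prev)

def rule_2(gbest, x_min, x_max, K, D, evaluate, rng=np.random):
    """Resample K particles around gBest (Algorithm 3); pBest_i is reset to X_i
    and gBest is replaced only if a better position is found."""
    sigma = (np.asarray(x_max, dtype=float) - np.asarray(x_min, dtype=float)) / D   # expression (4)
    positions = gbest + rng.normal(0.0, sigma, size=(K, len(gbest)))
    pbest = positions.copy()                                   # pBest_i = X_i
    fitness = np.array([evaluate(p) for p in positions])
    best = int(np.argmin(fitness))
    new_gbest = positions[best] if fitness[best] < evaluate(gbest) else gbest
    return positions, pbest, new_gbest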

Fig. 3 An example of the case where Condition II is not met; Rule II is not applied

Fig. 4 An example of the case where Condition II is met; Rule II is applied

Figure 3 presents that the swarm and gBest remain unchanged when the evaluation function value of gBest decreases (for the case of minimization) by a value larger than the threshold δ2. Figure 4 indicates that most particles of the swarm are generated in the neighborhood of the original gBest when Condition II is met. After that, gBest is replaced with the best particle position of the new swarm.

4 Empirical analysis of the proposed CSC

This section presents two empirical analyses of the CSC. The first explains how the recommended parameters of the proposed CSC are determined. The second shows how the CSC acts on PSO for large-scale optimization.

4.1 Recommended parameter setting

The proposed CSC has three parameters: the cycle parameter τ and the threshold parameters δ1 and δ2. Because CSC is not applied when its conditions are unsatisfied, PSO may run without CSC in that case. Therefore, we use the recommended parameters (χ, c1 and c2) of PSO (Shi and Eberhart 1998) for PSO–CSC. Under this setting, we introduce a recommended setting of the three parameters (τ, δ1, δ2) of PSO–CSC in this subsection.

The effect of the parameters is investigated by testing five selected 1000-dimension benchmark functions: F2 (a separable function), F6 (a single-group m-nonseparable function), F9 (a D/2m-group m-nonseparable function), F14 (a D/m-group m-nonseparable function) and F20 (a nonseparable function) from the special session of CEC'2010 (Tang et al. 2009). There are five groups in the benchmark functions of CEC'2010, and one function of each group is selected for testing.


Table 1 Comparison of three CC methods and PSO–CSC in terms of "mean (standard deviation) of minimal evaluation function value"

Algorithms F1 F2 F3 F4 F5 F6 F7

CCDML 1.27E−11 4.97E−01 9.33E−10 2.85E+12 4.97E+08 1.81E+07 1.96E+06

(2.52E−12) (1.05E−01) (1.58E−10) (5.20E+11) (1.10E+08) (3.03E+06) (3.75E+05)

CCGS 1.74E−22 3.90E−02 2.21E−01 2.07E+12 1.12E+07 2.84E+06 1.26E+02

(2.84E−23) (7.23E−03) (4.05E−02) (4.04E+11) (1.42E+06) (6.64E+05) (2.48E+01)

DECC-DG 8.57E+05 4.33E+03 1.11E+01 4.63E+11 6.69E+07 1.54E+01 2.71E+04

(2.50E+06) (4.95E+02) (9.51E−01) (1.91E+11) (1.33E+07) (1.59E+00) (2.11E+04)

PSO-CSC 3.16E−04 5.53E+00 4.53E−06 9.79E+10 5.52E+07 4.45E−02 1.87E−02

(9.93E−04) (2.26E+00) (1.96E−05) (2.88E+10) (7.71E+06) (1.94E−02) (2.39E−02)

Algorithms F8 F9 F10 F11 F12 F13 F14

CCDML 5.89E+07 5.21E+07 5.06E+03 1.98E+02 1.27E+04 1.17E+03 1.70E+08

(1.27E+07) (8.47E+06) (8.82E+02) (3.29E+01) (2.34E+03) (2.87E+02) (3.80E+07)

CCGS 3.69E+07 8.55E+07 7.37E+03 1.70E+01 5.56E+03 5.93E+02 5.98E+07

(6.16E+06) (1.40E+07) (1.33E+03) (5.05E+00) (8.53E+02) (1.49E+02) (1.09E+07)

DECC-DG 2.91E+07 3.45E+07 3.16E+03 2.68E+01 2.64E+04 2.75E+07 1.99E+07

(2.64E+07) (1.55E+07) (1.50E+02) (3.42E+00) (8.93E+03) (1.54E+07) (1.97E+06)

PSO-CSC 9.24E+00 3.74E+07 4.38E+03 1.43E+02 1.73E+02 9.94E+02 1.19E+08

(1.96E+01) (3.89E+06) (3.07E+02) (6.63E+00) (1.80E+01) (5.16E+02) (8.40E+06)

Algorithms F15 F16 F17 F18 F19 F20

CCDML 8.84E+03 4.10E+02 8.13E+04 3.27E+03 1.38E+06 1.53E+03

(2.36E+03) (7.59E+01) (1.47E+04) (6.68E+02) (2.38E+05) (3.63E+02)

CCGS 1.73E+03 5.49E+01 9.10E+03 3.71E+03 5.26E+05 2.04E+03

(3.01E+02) (1.12E+01) (1.93E+03) (1.09E+03) (1.06E+05) (3.37E+02)

DECC-DG 2.82E+03 1.94E+01 6.53E+00 1.87E+10 9.26E+05 7.51E+08

(3.05E+02) (3.27E+00) (1.40E+00) (4.74E+09) (5.93E+04) (5.49E+08)

PSO-CSC 5.35E+03 3.70E+02 6.39E+03 2.15E+03 5.89E+05 1.21E+03

(4.52E+02) (6.55E+00) (4.97E+02) (6.40E+02) (2.28E+04) (5.55E+02)

For each function, the result that is significantly (by the Wilcoxon test) better than others is marked in boldface

Table 2 reports the results obtained by PSO–CSC with different settings of the cycle and threshold parameters.

The investigated cycle parameter τ is selected from the interval [1, 500]. According to the pseudocode of Algorithm 1, a small τ causes frequent computation of the similarity of two randomly selected particles via expression (1), so expression (2) may hold with high probability and the particle swarm can easily be regenerated whether or not this is necessary. Thus, a small τ (e.g., 0 < τ < 100) does not facilitate the performance of PSO–CSC. On the other hand, a large τ may delay the checking of Condition I and Condition II; this delayed checking may cause PSO–CSC to fall into premature convergence before Condition I is even examined. Hence, a large τ (e.g., τ > 200) is not recommended for PSO–CSC either. As Table 2 shows, τ = 150 is the best setting within the interval [1, 500].

The threshold parameter δ1 is used to evaluate the cosine similarity of two selected particles. The value of δ1 should be large and close to 1, since a higher similarity of the particles is more representative of the convergence of the particle swarm. Therefore, the candidate set of the investigated δ1 is {0.9, 0.99, 0.999, 0.9999}. The experimental results indicate that the high-precision δ1 values like 0.999 and 0.9999 are slightly worse for F6, F9, F14 and F20 than the other δ1 values. On the whole, δ1 = 0.9 is the best setting among all the investigated candidate values.

The threshold parameter δ2 reflects the required degree of improvement of gBest in evaluation function value. When δ2 is a large real number, gBest is also required to improve to a large extent in evaluation function value; if the condition of expression (3) is satisfied, the swarm and gBest are updated by Rule II. Hence, a larger δ2 also gives a higher probability that Rule II runs. It is necessary to run Rule II to accelerate the convergence speed when the evaluation function value of gBest has not improved enough within τ generations. Therefore, the investigated values of δ2 are set to small real numbers of different precision {1e−02, 1e−03, 1e−04, 1e−05}. Finally, the experimental results demonstrate that δ2 = 1e−03 is better than the other investigated values for PSO–CSC on the benchmark functions.


Table 2 Parameter tuning for PSO–CSC

τ/δ1/δ2  F2  F6  F9  F14  F20  Recommended choice

50/0.9/0.01 2.39E+03 49.2493 1.15E+08 3.44E+08 4.82E+03

50/0.9/0.001 88.0727 44.9161 5.35E+07 1.81E+08 1.89E+03

50/0.9/0.0001 5.0519 48.4303 4.33E+07 1.29E+08 2.09E+03

50/0.9/1E−05 5.0662 47.1774 4.82E+07 1.59E+08 1.55E+03

50/0.99/0.01 2.38E+03 42.7023 1.15E+08 2.90E+08 1.97E+03

50/0.99/0.001 57.3435 47.0907 5.37E+07 1.66E+08 1.72E+03

50/0.99/0.0001 4.0732 49.2815 4.73E+07 1.37E+08 1.49E+03

50/0.99/1E−05 2.0455 45.9415 4.65E+07 1.35E+08 1.85E+03

50/0.999/0.01 2.40E+03 22.0105 1.29E+08 3.05E+08 5.58E+03

50/0.999/0.001 52.9968 32.4937 5.62E+07 1.61E+08 1.62E+03

50/0.999/0.0001 2.0787 21.5836 4.07E+07 1.38E+08 1.52E+03

50/0.999/1E−05 5.1085 21.4262 4.88E+07 1.31E+08 1.70E+03

50/0.9999/0.01 1.27E+03 19.7749 1.74E+08 3.25E+08 6.01E+03

50/0.9999/0.001 52.3133 16.6187 5.87E+07 1.92E+08 1.81E+03

50/0.9999/0.0001 1.4418 19.5271 4.51E+07 1.40E+08 1.82E+03

50/0.9999/1E−05 0.0012 19.7038 4.05E+07 1.37E+08 1.88E+03

100/0.9/0.01 834.7091 21.2221 9.95E+07 3.18E+08 2.76E+03

100/0.9/0.001 5.0212 21.3353 4.10E+07 1.18E+08 1.27E+03

100/0.9/0.0001 8.9675 5.6818 4.22E+07 1.04E+08 1.58E+03

100/0.9/1E−05 1.0166 5.4375 3.43E+07 1.20E+08 2.12E+03

100/0.99/0.01 1.20E+03 21.2957 8.72E+07 3.25E+08 1.54E+03

100/0.99/0.001 3.9363 21.2859 4.09E+07 1.21E+08 1.62E+03

100/0.99/0.0001 3.9855 4.7761 3.35E+07 1.13E+08 2.00E+03

100/0.99/1E−05 3.9822 8.5336 3.89E+07 1.10E+08 1.91E+03

100/0.999/0.01 721.0549 20.9718 1.24E+08 2.97E+08 2.09E+03

100/0.999/0.001 6.0162 20.7679 4.75E+07 1.56E+08 1.75E+03

100/0.999/0.0001 3.9868 3.6796 3.67E+07 1.12E+08 1.72E+03

100/0.999/1E−05 1.9964 8.194 3.37E+07 1.23E+08 1.94E+03

100/0.9999/0.01 407.9736 7.5996 1.40E+08 3.16E+08 2.75E+03

100/0.9999/0.001 1.4235 2.6196 5.23E+07 1.41E+08 1.29E+03

100/0.9999/0.0001 0.9981 2.5579 4.99E+07 1.33E+08 1.80E+03

100/0.9999/1E−05 3.28E−05 1.1989 3.85E+07 1.28E+08 1.91E+03

150/0.9/0.01 1.23E+03 20.8886 7.82E+07 2.70E+08 1.21E+03

150/0.9/0.001 1.0015 0.0509 3.67E+07 1.21E+08 1.17E+03  √

150/0.9/0.0001 5.0646 0.0357 3.85E+07 1.25E+08 1.71E+03

150/0.9/1E−05 4.9762 0.048 3.76E+07 1.08E+08 2.02E+03

150/0.99/0.01 882.5251 21.0078 6.12E+07 2.66E+08 1.05E+03

150/0.99/0.001 5.9906 0.0367 3.91E+07 1.21E+08 1.73E+03

150/0.99/0.0001 4.9849 0.0474 3.75E+07 1.40E+08 1.73E+03

150/0.99/1E−05 6.9748 0.0344 3.27E+07 1.23E+08 1.78E+03

150/0.999/0.01 927.5306 19.9201 8.99E+07 2.96E+08 1.85E+03

150/0.999/0.001 6.1102 0.083 3.51E+07 1.23E+08 1.24E+03

150/0.999/0.0001 3.9825 0.0871 3.99E+07 1.27E+08 1.89E+03

150/0.999/1E−05 0.9996 0.0302 3.87E+07 1.21E+08 1.86E+03

150/0.9999/0.01 198.2415 11.6044 9.72E+07 2.87E+08 1.94E+03

150/0.9999/0.001 7.45E−05 0.0499 5.77E+07 1.75E+08 1.80E+03

150/0.9999/0.0001 2.9862 0.0098 4.67E+07 1.34E+08 1.82E+03


150/0.9999/1E−05 1.40E−05 0.0158 5.57E+07 1.43E+08 1.78E+03

200/0.9/0.01 711.6059 20.8025 6.76E+07 2.50E+08 1.31E+03

200/0.9/0.001 9.021 0.0559 3.89E+07 1.22E+08 1.62E+03

200/0.9/0.0001 5.972 0.0998 3.89E+07 1.18E+08 1.91E+03

200/0.9/1E−05 5.9741 0.0208 3.68E+07 1.19E+08 1.89E+03

200/0.99/0.01 894.2787 20.9198 7.43E+07 1.85E+08 2.19E+03

200/0.99/0.001 6.0247 0.0555 3.70E+07 1.53E+08 1.82E+03

200/0.99/0.0001 2.986 0.046 3.60E+07 1.25E+08 1.83E+03

200/0.99/1E−05 8.9572 0.0653 4.59E+07 1.35E+08 1.74E+03

200/0.999/0.01 416.0857 20.3685 6.77E+07 2.22E+08 1.07E+03

200/0.999/0.001 7.9874 0.0285 3.91E+07 1.38E+08 1.80E+03

200/0.999/0.0001 1.991 0.0766 4.21E+07 1.24E+08 1.75E+03

200/0.999/1E−05 4.975 0.0298 4.04E+07 1.32E+08 2.12E+03

200/0.9999/0.01 14.9637 2.491 1.31E+08 2.14E+08 1.89E+03

200/0.9999/0.001 2.0025 0.2658 4.40E+07 1.64E+08 1.70E+03

200/0.9999/0.0001 0.9959 4.3033 4.55E+07 1.45E+08 1.80E+03

200/0.9999/1E−05 1.08E−04 4.3567 5.93E+07 1.57E+08 1.69E+03

500/0.9/0.01 9.9909 0.2418 6.77E+07 2.12E+08 1.88E+03

500/0.9/0.001 7.0069 0.7304 5.17E+07 1.63E+08 1.70E+03

500/0.9/0.0001 6.9714 1.3326 5.86E+07 2.03E+08 1.69E+03

500/0.9/1E−05 12.9448 0.2211 5.38E+07 1.83E+08 1.51E+03

500/0.99/0.01 15.2872 0.7936 6.87E+07 2.50E+08 1.90E+03

500/0.99/0.001 8.9777 0.4314 5.51E+07 1.76E+08 1.86E+03

500/0.99/0.0001 7.9701 0.0642 5.25E+07 1.86E+08 1.87E+03

500/0.99/1E−05 5.9723 0.3957 5.35E+07 1.81E+08 1.92E+03

500/0.999/0.01 8.9903 2.167 6.56E+07 2.12E+08 1.77E+03

500/0.999/0.001 10.3261 3.0817 6.00E+07 1.84E+08 1.87E+03

500/0.999/0.0001 11.0157 1.4215 5.89E+07 1.84E+08 1.70E+03

500/0.999/1E−05 9.9786 1.7728 6.15E+07 1.90E+08 1.37E+03

500/0.9999/0.01 3.9949 1.16E+06 9.06E+07 2.49E+08 1.96E+03

500/0.9999/0.001 11.0076 1.03E+06 7.23E+07 1.92E+08 1.55E+03

500/0.9999/0.0001 5.9757 8.79E+05 7.30E+07 2.12E+08 1.44E+03

500/0.9999/1E−05 6.9672 1.56E+06 7.75E+07 2.08E+08 1.75E+03

Coefficient of Variation 2.45E+00 4.50E+00 4.75E−01 3.62E−01 4.06E−01


Table 2 shows the results of parameter tuning for PSO–CSC. We use the coefficient of variation (CV) to investigate how strongly the different settings of the three parameters affect the result for each tested function. CV is the ratio of the standard deviation to the mean; a large CV means that the parameters have a great influence on the result. As Table 2 indicates, the CVs of F2, F6, F9, F14 and F20 are, respectively, 2.45E+00, 4.50E+00, 4.75E−01, 3.62E−01 and 4.06E−01. The parameters exert a significantly greater influence on the results for F2 and F6 than on those for F9, F14 and F20, so we give preference to the best parameter setting for F2 and F6. Therefore, the recommended parameters for PSO-CSC are τ = 150, δ1 = 0.9 and δ2 = 1E−03.
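For reference, the CV statistic can be computed with a few lines of Python (our illustration; the variable name in the comment is hypothetical):

import numpy as np

def coefficient_of_variation(results) -> float:
    """CV = standard deviation / mean over all tested parameter settings."""
    results = np.asarray(results, dtype=float)
    return float(np.std(results) / np.mean(results))

# Applied to the F2 column of Table 2 (e.g., coefficient_of_variation(f2_results)),
# this corresponds to the reported CV of 2.45E+00.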


4.2 The impact of the convergence speed controller on PSO

This subsection presents numerical results that show how the CSC works during the search process of the host PSO. We use the same selected benchmark functions F2, F6, F9, F14 and F20 to investigate the results obtained by PSO (Shi and Eberhart 1998) and the proposed PSO–CSC. In Fig. 5, the dotted line represents the results obtained by PSO, and the solid line represents the results obtained by PSO–CSC when Rule I and Rule II are not applied; a circle is marked on the solid line when Rule I is applied, and a triangle when Rule II is applied. Rule I plays a part in slowing down the convergence speed of the host PSO in all of the cases, but Rule II only works to accelerate the convergence speed on F2, F6 and F20.

In the case of F2, only Rule I acts in the first half of the generation process, while both rules act in the remaining generations. F2 is multimodal, shifted, separable and scalable (Tang et al. 2009), which makes PSO fall easily into premature convergence. Thus, Rule I helps the host PSO avoid premature convergence, and Rule II obviously improves the solution of the host PSO.

In contrast, for F6, Rule II is only applied in the first half of the generations, but Rule I runs frequently throughout the PSO generations. F6 is a multimodal, shifted, m-rotated and m-nonseparable function. According to the results of Fig. 5, Rule II is effective for the nonseparable function, since it promotes the host PSO to obtain better solutions in the first half of the generations. Furthermore, Rule I is valid for the multimodal function, because the solution improves after it is applied.

The situation is very different in the results for F9 and F14. During the search process for F9 and F14, Rule I usually runs while Rule II is seldom applied. F9 and F14 are both unimodal, shifted, m-rotated and m-nonseparable. The rotated and nonseparable properties give these functions many traps, which in turn cause the solution of PSO (Shi and Eberhart 1998) to fall easily into a local optimum. To tackle the traps, Rule I is required to run for convergence speed deceleration and swarm diversity. The results of Fig. 5 indicate that Rule I helps the host PSO escape from the traps.

The case of F20 is another special instance. Rule I runs frequently over the generations, and the applications of Rule II are distributed at some discrete points of the curve. F20 is a fully nonseparable, multimodal and shifted function. Rule I is positive for PSO in dealing with the multimodal and fully nonseparable complexity. Furthermore, some runs of Rule II are also necessary for multimodal problems, as in the cases of F2 and F20.

Fig. 5 Investigating the running of Rule I and Rule II during the PSO generations for five selected benchmark functions. The vertical axis is the evaluation function value, and the horizontal axis is the number of FEs

From the above it is clear that Rule I is effective for all five styles of selected functions, whereas Rule II is valid mainly for the multimodal functions like F2, F6 and F20. Rule II runs more often for nonseparable functions than for separable ones. Generally, the two rules of the CSC help the host PSO control its convergence speed according to the properties of the tackled functions.

5 Experimental results and comparisons

In order to verify the effectiveness of the proposed CSC for PSO (Shi and Eberhart 1998), we present the results of two experiments on the 20 1000-dimension benchmark functions (Tang et al. 2009) of the CEC'2010 special session. The first experiment (introduced in Sect. 5.2) is a comparison of PSO–CSC with PSO algorithms for large-scale optimization problems; this comparison shows how PSO (Shi and Eberhart 1998), as the host PSO, is empowered by the proposed CSC. The second experiment (presented in Sect. 5.3) compares the proposed PSO-CSC with several CC methods on the 1000D benchmark functions (Tang et al. 2009). CC methods have been shown to be a class of high-performance algorithms for large-scale continuous optimization.


Therefore, the motivation of the second comparison is to examine whether CSC can make PSO–CSC an effective PSO method for large-scale continuous optimization. The source codes of the compared algorithms were obtained from the corresponding authors, and the results coincide completely with the corresponding references.

5.1 Experimental settings

The tested 1000D benchmark functions (Tang et al. 2009) can be classified into five categories. The first category has three separable functions F1–F3. The second has five single-group m-nonseparable functions F4–F8. The third has five D/2m-group m-nonseparable functions F9–F13. The fourth has five D/m-group m-nonseparable functions F14–F18. The last has two fully nonseparable functions F19–F20. In general, separable problems are considered the easiest of the five categories, while the fully nonseparable ones are usually the most difficult (Tang et al. 2009). The five categories of functions are used to test whether the performance of PSO–CSC is steadily efficient across functions ranging from separable to fully nonseparable. The benchmark function serves as the evaluation function for each compared algorithm.

In our experiments, the maximum number of function evaluations (FEs) for solving the benchmark functions is set uniformly to 3.0E+06 for all of the compared algorithms. In order to obtain statistical results, each test of an algorithm on a benchmark function is carried out repeatedly and independently 30 times. Furthermore, we adopt the Wilcoxon rank-sum test (García et al. 2009) to analyze the numerical results.
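For illustration only (not the authors' script), such a pairwise comparison of the 30 runs of two algorithms on one function could be carried out with SciPy's Wilcoxon rank-sum implementation:

from scipy.stats import ranksums

def significantly_better(runs_a, runs_b, alpha=0.05) -> bool:
    """True if algorithm A's final values are significantly smaller (minimization)."""
    _stat, p_value = ranksums(runs_a, runs_b, alternative="less")
    return p_value < alpha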

In the experiments, the parameters of PSO–CSC are set as follows: the swarm size is K = 30, the acceleration coefficients of Algorithm 1 are c1 = 1.5 and c2 = 2, and the inertia weight is χ = 0.4, according to the results of Bratton and Kennedy (2007). Based on the recommended parameter results of Sect. 4.1, the three parameters of the proposed CSC are set to τ = 150, δ1 = 0.9 and δ2 = 1e−03.

5.2 Comparing PSO–CSC with five PSOs for LOPs

This subsection introduces the comparison results of six PSO methods including the proposed PSO–CSC. The first one is the PSO proposed by Shi and Eberhart (1998), denoted as PSO, whose parameter setting strongly influenced later PSO research; the parameter setting of PSO follows the suggestion of Shi and Eberhart (1998). Because PSO was designed for small-scale benchmark problems, it was shown to perform weakly on large-scale continuous optimization problems (Cai et al. 2017; Eberhart and Kennedy 1995; García et al. 2009; Ghodrati et al. 2012; Gu et al. 2018; Huang et al. 2012; Li et al. 2013; Li and Yao 2012; Liang et al. 2006; Mahdavi et al. 2015; Mei et al. 2014; Molina et al. 2010; Oca Montes et al. 2011). The second to fifth compared PSOs are the hybrid ICAPSO (Ghodrati et al. 2012), CCPSO2 (Li and Yao 2012), DESR-PSO (Zhao et al. 2010) and DMS-PSO-SHS (Cheng et al. 2012); they have recently been verified to be efficient in solving large-scale optimization problems on the 1000D benchmark functions of CEC'2010. The last PSO is PSO–CSC, which is PSO (Shi and Eberhart 1998) with the proposed CSC. The experimental results of this subsection demonstrate that the improvement brought by the CSC to PSO for large-scale problems is efficient and significant.

The parameter settings of the compared PSOs follow their original designs (Bratton and Kennedy 2007; Cheng et al. 2012; Ghodrati et al. 2012; Li and Yao 2012; Zhao et al. 2010). All six PSO algorithms are run 30 times, and the means and standard deviations of their evaluation function values are shown in Table 3.

The first category is a set of separable functions (F1–F3), which is the easiest of all to tackle, since the variable of every dimension can be optimized independently to reach the global optimum (Tang et al. 2009). As Table 3 shows, DMS-PSO-SHS and ICAPSO perform the best among all of the PSOs on this first class. In the ranking of the obtained evaluation function values, PSO–CSC is the second strongest, while DESR-PSO and CCPSO2 are the third strongest. The results on the separable functions (F1–F3) indicate that DMS-PSO-SHS and ICAPSO have a higher convergence speed than PSO–CSC in each independent dimension of the optimized vector. Furthermore, the results of PSO–CSC show that the proposed CSC obviously empowers PSO with higher performance than DESR-PSO and CCPSO2; this improvement is based on the convergence speed acceleration of Rule II.

The single-group m-nonseparable functions (F4–F8) are relatively more difficult than the first-category functions, because an m-sized group (m = 50) of variables is rotated to be nonseparable. Table 3 indicates that DMS-PSO-SHS and PSO–CSC perform significantly better than the other compared PSOs, since they have adaptive strategies (subregional harmony search, Cheng et al. 2012) for the nonseparable property. The results on the second-class functions also show that the CSC enhances the performance of PSO for LOPs. The solution of PSO–CSC is the best of all for F4, F5, F7 and F8 according to the means and standard deviations, and only DMS-PSO-SHS performs better than PSO–CSC in solving F6.

The third to fifth categories of functions are increasingly nonseparable in numerical order. Table 3 shows that the advantage of CCPSO2 is distinct for the nonseparable functions, because the CC strategy (Li and Yao 2012; Potter and De Jong 1994) is helpful in solving the problems caused by the nonseparable property.


Table 3 Comparing PSO–CSC with other PSOs in terms of “mean (standard deviation) of minimal evaluation function value”

Algorithms F1 F2 F3 F4 F5 F6 F7

DMS-PSO-SHS 1.16E−12 3.38E+02 5.46E−07 4.99E+11 8.72E+07 2.30E−06 2.12E+03

(2.92E−12) (5.16E+00) (6.03E−08) (2.48E+10) (1.88E+07) (5.48E−06) (4.39E+03)

ICAPSO 2.80E−03 6.54E−14 3.23E−10 2.34E+12 4.07E+08 1.90E+07 4.57E+05

(2.60E−03) (2.72E−13) (1.56E−10) (7.27E+11) (4.52E+07) (2.04E+05) (2.32E+05)

PSO 1.51E+11 2.00E+04 2.14E+01 4.67E+13 4.20E+08 1.73E+07 7.46E+10

(1.48E+10) (3.73E+02) (2.80E−02) (2.22E+13) (6.04E+07) (3.02E+06) (2.55E+10)

DESR-PSO 4.86E+06 8.62E+03 1.95E+01 1.62E+12 3.57E+08 1.55E+07 7.47E+04

(6.95E+05) (1.59E+02) (1.36E−01) (5.26E+11) (3.27E+07) (3.83E+06) (1.42E+04)

CCPSO2 3.95E+05 6.84E+02 3.07E−02 4.27E+12 5.19E+08 1.74E+07 1.34E+09

(1.37E+06) (3.16E+02) (8.96E−02) (4.26E+12) (2.74E+08) (5.65E+06) (2.41E+09)

PSO–CSC 3.16E−04 5.53E+00 4.53E−06 9.79E+10 5.52E+07 4.45E−02 1.87E−02

(9.93E−04) (2.26E+00) (1.96E−05) (2.88E+10) (7.71E+06) (1.94E−02) (2.39E−02)

Algorithms F8 F9 F10 F11 F12 F13 F14

DMS-PSO-SHS 9.55E+07 6.80E+06 5.48E+03 3.89E+00 1.21E+04 7.74E+02 1.72E+07

(4.93E+07) (1.20E+06) (3.80E+02) (3.80E−01) (1.68E+03) (1.12E+02) (2.77E+07)

ICAPSO 8.40E+07 5.01E+07 4.19E+03 1.94E+02 1.79E+04 4.79E+03 1.62E+08

(3.95E+07) (5.91E+06) (1.26E+02) (2.59E−01) (2.00E+03) (3.83E+03) (1.52E+07)

PSO 1.85E+14 1.55E+11 2.04E+04 2.34E+02 9.90E+06 1.03E+12 1.52E+11

(3.05E+14) (1.48E+10) (4.97E+02) (1.81E−01) (1.17E+06) (1.41E+11) (1.88E+10)

DESR-PSO 1.47E+07 6.97E+08 8.86E+03 2.14E+02 3.13E+05 3.53E+05 2.17E+09

(2.92E+07) (6.80E+07) (1.74E+02) (6.75E−01) (3.11E+04) (2.30E+04) (1.23E+08)

CCPSO2 6.17E+07 8.60E+07 5.00E+03 1.98E+02 9.13E+04 1.33E+03 3.12E+08

(4.49E+07) (1.93E+07) (1.09E+03) (3.52E−01) (4.21E+04) (6.09E+02) (4.90E+07)

PSO–CSC 9.24E+00 3.74E+07 4.38E+03 1.43E+02 1.73E+02 9.94E+02 1.19E+08

(1.96E+01) (3.89E+06) (3.07E+02) (6.63E+00) (1.80E+01) (5.16E+02) (8.40E+06)

Algorithms F15 F16 F17 F18 F19 F20

DMS-PSO-SHS 6.31E+03 3.35E+01 1.07E+03 2.53E+03 1.91E+06 2.18E+03

(7.42E+02) (1.73E+01) (2.29E+02) (1.67E+03) (2.10E+05) (3.09E+02)

ICAPSO 7.72E+03 3.88E+02 7.11E+04 2.30E+04 9.20E+05 1.91E+03

(1.69E+02) (1.18E+00) (5.00E+03) (4.98E+03) (7.09E+04) (8.41E+02)

PSO 2.02E+04 4.27E+02 1.82E+07 3.67E+12 3.43E+07 4.00E+12

(3.97E+02) (2.56E−01) (2.37E+06) (2.17E+11) (6.87E+06) (2.22E+11)

DESR-PSO 8.93E+03 3.90E+02 9.17E+05 2.85E+06 2.24E+06 3.38E+06

(1.96E+02) (1.65E+00) (6.21E+04) (7.00E+05) (2.14E+05) (6.93E+05)

CCPSO2 1.14E+04 3.97E+02 2.24E+05 6.80E+05 2.46E+06 1.77E+03

(2.09E+03) (7.11E−01) (4.67E+04) (3.31E+06) (3.59E+05) (2.08E+02)

PSO–CSC 5.35E+03 3.70E+02 6.39E+03 2.15E+03 5.89E+05 1.21E+03

(4.52E+02) (6.55E+00) (4.97E+02) (6.40E+02) (2.28E+04) (5.55E+02)

For each function, the result that is significantly (by the Wilcoxon test) better than others is marked in boldface

Moreover, the performance of DMS-PSO-SHS is superior to that of the other PSOs for F9, F11, F13, F14 and F16. The proposed PSO–CSC performs the best for F12, F15, F18, F19 and F20. Therefore, CSC is able to dynamically optimize each group of nonseparable variables in the benchmark functions, no matter how nonseparable the variables are.

Generally, the effectiveness of the proposed CSC has been proved by its significant improvement to PSO. According


Fig. 6 Comparison of DMS-PSO-SHS, ICAPSO, PSO, DESR-PSO, CCPSO2 and PSO–CSC in solving 20 benchmark functions. The results were averaged over 30 runs. The vertical axis is the function value and the horizontal axis is the number of FEs

to Table 3, PSO–CSC has nine best-performance functions, DMS-PSO-SHS has eight, and ICAPSO has two. The high performance of PSO–CSC is obvious for the benchmark functions of the second and fifth categories. For the first, third and fourth categories, the performance of PSO–CSC is close to that of DMS-PSO-SHS and ICAPSO. However, as the host PSO of the CSC, PSO performs the worst of all the PSO algorithms under comparison. Hence, we can draw the conclusion that the proposed CSC strongly empowers PSO for large-scale optimization.
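For readers who wish to reproduce the significance marking of Table 3, the comparison can be sketched as a pairwise Wilcoxon signed-rank test over the 30 final objective values of two algorithms on one function. The arrays below are synthetic placeholders rather than the data behind Table 3, and SciPy is assumed to be available; the paper does not prescribe a particular implementation of the test.

```python
# Illustrative sketch: pairwise Wilcoxon signed-rank test over 30 runs.
# The arrays are placeholders, NOT the recorded values behind Table 3.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
runs_pso_csc = rng.lognormal(mean=-8, sigma=1, size=30)  # final values of one algorithm
runs_other   = rng.lognormal(mean=-6, sigma=1, size=30)  # final values of a competitor

# alternative="less": test whether PSO-CSC's values tend to be smaller (minimization).
stat, p_value = wilcoxon(runs_pso_csc, runs_other, alternative="less")
print(f"W={stat:.1f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Significantly better at the 0.05 level (would be boldfaced in the table).")
else:
    print("No significant difference at the 0.05 level.")
```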

5.3 Comparing PSO–CSC with CC methods for LOPs

This subsection presents comparison results on the same benchmark functions (Tang et al. 2009) to show the performance difference between the proposed PSO–CSC and the CC methods (CCDML, Omidvar et al. 2010; CCGS, Zhang and Li 2012; DECC-DG, Omidvar et al. 2014). The CC framework (Potter and De Jong 1994) is widely used to design meta-heuristic algorithms for large-scale optimization because of its strength in dealing with nonseparable variables. All of the CC methods (Omidvar et al. 2010;


Fig. 7 Comparison of CCDML, CCGS, DECC-DG, CCPSO2 and PSO–CSC in solving 20 benchmark functions. The results were averaged over 30 runs. The vertical axis is the function value, and the horizontal axis is the number of FEs

Ren and Wu 2013; Zhang and Li 2012) are implemented 30 times, and the results are presented in Table 1. The parameter settings of the CC methods are taken directly from the corresponding references (Omidvar et al. 2010; Ren and Wu 2013; Zhang and Li 2012).

Table 1 indicates that none of the four algorithms performs the best for all of the benchmark functions. CCDML (Omidvar et al. 2010) performs the best on one first-category function (F3). CCGS (Zhang and Li 2012) is the best for four instances, which include two first-category functions (F1 and F2), one third-category function (F13) and one fourth-category function (F15). The proposed PSO–CSC performs the best among all for eight instances, including five second-category functions (F4–F8), one third-category function (F12), one fourth-category function (F18) and one fifth-category function (F20). In general, PSO–CSC is the most effective among the algorithms under comparison for the 20 1000D benchmark functions on average.


5.4 Analyzing the tendency charts

For further investigation, Figs. 6 and 7 provide 20 tendency charts of the algorithms under comparison for F1–F20. The horizontal axis of each chart is the number of evaluation-function calls, from 0 to 3E+06. The vertical axis is the value of the objective function, plotted at different accuracies for different functions.
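As an aside, a tendency chart of this kind can be produced from logged best-so-far values roughly as follows; the array shapes, checkpoint spacing and data here are illustrative assumptions, not the authors' actual logging code.

```python
# Illustrative sketch: plotting a tendency chart averaged over 30 runs.
# `history` is assumed to hold the best-so-far objective value recorded at
# fixed FE checkpoints for each run; the data below are random placeholders.
import numpy as np
import matplotlib.pyplot as plt

n_runs, n_checkpoints = 30, 100
fes = np.linspace(0, 3e6, n_checkpoints)                  # 0 .. 3E+06 FEs
history = np.random.lognormal(mean=5, sigma=1, size=(n_runs, n_checkpoints))
history = np.minimum.accumulate(history, axis=1)          # best-so-far is non-increasing

mean_curve = history.mean(axis=0)                         # average over the 30 runs
plt.plot(fes, mean_curve, label="PSO-CSC (placeholder data)")
plt.yscale("log")                                         # accuracies differ per function
plt.xlabel("Number of FEs")
plt.ylabel("Function value")
plt.legend()
plt.show()
```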

Figures 6 and 7 indicate that the tendency curve of PSO–CSC is smooth for nearly all of the benchmark functions, except for some jumping points in its curves for F5 and F6. Furthermore, most curves of PSO–CSC are not only smooth but also decline markedly. The proposed CSC uses Condition I and Rule I to increase the diversity of the swarm, and implements Condition II and Rule II to help the swarm converge fast. The combined effect of these conditions and rules helps the host PSO converge steadily, which is demonstrated by the smooth curves of Figs. 6 and 7. Therefore, the smooth curves verify the steady performance of PSO–CSC. Furthermore, compared with the flat curves produced by PSO, the smooth and declining curves produced by PSO–CSC also demonstrate the steady improvement that CSC brings to its host PSO.
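Purely to visualize this control pattern (a check applied every few generations, independently of the PSO update, with one rule injecting diversity and the other accelerating convergence toward the global attractor), a simplified skeleton is given below. The diversity measure, the thresholds low_div and high_div, and the concrete updates are illustrative placeholders; they are not the paper's actual Conditions I–II and Rules I–II.

```python
# Simplified skeleton of a periodically applied convergence speed controller.
# Thresholds and update rules are illustrative placeholders only.
import numpy as np

def swarm_diversity(positions):
    """Mean distance of the particles to the swarm centroid."""
    centroid = positions.mean(axis=0)
    return np.linalg.norm(positions - centroid, axis=1).mean()

def convergence_speed_controller(positions, gbest, bounds,
                                 low_div=1e-3, high_div=1e2, rng=np.random):
    """Called every few generations, independently of the PSO velocity/position update."""
    div = swarm_diversity(positions)
    if div < low_div:
        # Rule I (illustrative): premature convergence detected -> re-scatter part of
        # the swarm inside the search bounds to restore diversity.
        k = max(1, len(positions) // 5)
        idx = rng.choice(len(positions), size=k, replace=False)
        positions[idx] = rng.uniform(bounds[0], bounds[1], size=positions[idx].shape)
    elif div > high_div:
        # Rule II (illustrative): convergence is slow -> pull particles toward the
        # global attractor to speed it up.
        positions += 0.5 * (gbest - positions)
    return positions
```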

The tendency charts therefore clearly show the advantage of PSO–CSC for all of the few-group functions and some multi-group functions (F17, F18). Moreover, the weakness of PSO–CSC for the multi-group functions is also indicated by the charts. We use the performance ranking of PSO–CSC among the six PSOs and three CC methods to analyze this strength and weakness, as shown in Table 4. All of the benchmark functions (Tang et al. 2009) are built from five base functions: the shifted elliptic function (F1, F4, F9 and F14), the shifted Rastrigin's function (F2, F5, F10 and F15), the shifted Ackley's function (F3, F6, F11 and F16), the shifted Schwefel's problem (F7, F12, F17 and F19) and the shifted Rosenbrock's function (F8, F13, F18 and F20). Table 4 shows that the performance of PSO–CSC is almost the best for the shifted elliptic, Rastrigin's and Ackley's functions when the number of separable variables is small. However, as the number of groups increases, the performance of PSO–CSC degrades. This means that PSO–CSC has an advantage in solving nonseparable and almost nonseparable functions. In our view, this strength lies in the property that CSC can achieve a balance between swarm diversity and convergence speed through its conditions and rules. On the other hand, because no information about variable groups is used in the conditions and rules of the proposed CSC, PSO–CSC performs a little worse than DMS-PSO-SHS (Cheng et al. 2012) and CCGS (Zhang and Li 2012) for the three functions of D/m groups.

Table 4 Investigation of performance ranking of PSO–CSC for five styles of benchmark functions from separable to nonseparable

Name of function                Only one group/nonseparable    One group and separable variables    D/2m groups    D/m groups    D groups/separable
Shifted elliptic function       Null                           F4 (1st)                             F9 (2nd)       F14 (3rd)     F1 (4th)
Shifted Rastrigin's function    Null                           F5 (2nd)                             F10 (1st)      F15 (3rd)     F2 (4th)
Shifted Ackley's function       Null                           F6 (2nd)                             F11 (3rd)      F16 (3rd)     F3 (3rd)
Shifted Schwefel's problem      F19 (2nd)                      F7 (1st)                             F12 (1st)      F17 (1st)     Null
Shifted Rosenbrock's function   F20 (1st)                      F8 (1st)                             F13 (2nd)      F18 (1st)     Null
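The rankings in Table 4 simply record the position of PSO–CSC when the mean final values of all compared algorithms on a function are sorted in ascending order. A minimal sketch of that computation is shown below, using the F4 means of the PSO variants from Table 3 as example input; the actual ranking in Table 4 additionally includes the three CC methods.

```python
# Illustrative sketch: ranking one algorithm among the compared algorithms on a
# single function, based on mean final values (lower is better). Input values are
# the F4 means of the PSO variants from Table 3, used here only as an example.
means = {
    "DMS-PSO-SHS": 4.99e11, "ICAPSO": 2.34e12, "PSO": 4.67e13,
    "DESR-PSO": 1.62e12, "CCPSO2": 4.27e12, "PSO-CSC": 9.79e10,
}
ordered = sorted(means, key=means.get)      # best (smallest mean) first
rank = ordered.index("PSO-CSC") + 1
print(f"PSO-CSC ranks {rank} of {len(means)} on this function")
```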


Generally, the results of the tendency charts demonstrate that PSO–CSC really empowers PSO for different types of benchmark functions.

6 Conclusion and discussion

Considering PSO's poor performance in solving large-scale optimization problems, we tried to improve its performance by automatically controlling the convergence speed. Hence, we proposed a convergence speed controller embedded in the procedure of PSO. The proposed CSC contains one rule to slow down the swarm convergence speed of PSO and another to accelerate its convergence toward the global attractor. The CSC can help the host PSO adaptively achieve a balance between convergence speed and diversity. The experimental results show that CSC has three positive impacts on PSO, which is taken as the host PSO in our study. First, CSC can increase PSO's effectiveness for large-scale optimization within a limited number of generations. Second, CSC can help PSO avoid premature convergence by keeping the diversity of the swarm. Third, CSC is able to maintain the high quality of the particles while keeping the diversity. Furthermore, the results of the comparison experiments support our argument that the performance of PSO for large-scale optimization can be greatly empowered by the proposed CSC. Moreover, PSO–CSC on average performs better than the other PSO algorithms and the CC methods in solving the benchmark functions. The numerical results also indicate an obvious strength of PSO–CSC for the few-group functions, but only a marginal one for the multi-group functions.

The successful case in which PSO is improved by CSC boosts our confidence in applying CSC to improve the effectiveness of other meta-heuristic algorithms in future studies. Since CSC is an additional framework, it can be added to other heuristic algorithms. In our future studies, we will design other CSCs for differential evolution and evolutionary programming and investigate the changes brought about by controlling their convergence speed.

Acknowledgements This work is supported by the National Natural Science Foundation of China (61370102), Guangdong Natural Science Funds for Distinguished Young Scholar (2014A030306050), the Ministry of Education–China Mobile Research Funds (MCM20160206) and the Guangdong High-level Personnel of Special Support Program (2014TQ01X664).

Compliance with ethical standards

Conflict of interest All authors of this paper declare that we have no conflict of interest.

Human and animal rights This paper does not contain any studies with human participants or animals. This paper has not been submitted to more than one journal, and it has not been published previously.

References

Afshar M (2012) Large scale reservoir operation by constrained particle swarm optimization algorithms. J Hydro Environ Res 6(1):75–87

Akay B, Karaboga D (2012) Artificial bee colony algorithm for large-scale problems and engineering design optimization. J Intell Manuf 23(4):1001–1014

Atashpaz-Gargari E, Lucas C (2007) Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition. In: IEEE congress on evolutionary computation, 2007. CEC 2007. IEEE, pp 4661–4667

Basturk B, Karaboga D (2006) An artificial bee colony (ABC) algorithm for numeric function optimization. In: IEEE swarm intelligence symposium, pp 12–14

Bratton D, Kennedy J (2007) Defining a standard for particle swarm optimization. In: Swarm intelligence symposium, 2007. SIS 2007. IEEE, pp 120–127

Brest J, Boskovic B, Zamuda A, Fister I, Maucec MS (2012) Self-adaptive differential evolution algorithm with a small and varying population size. In: 2012 IEEE congress on evolutionary computation (CEC). IEEE, pp 1–8

Brest J, Zamuda A, Fister I, Maucec MS (2010) Large scale global optimization using self-adaptive differential evolution algorithm. In: 2010 IEEE congress on evolutionary computation (CEC). IEEE, pp 1–8

Cai Z, Lv L, Huang H, Hu H, Liang Y (2017) Improving sampling-based image matting with cooperative coevolution differential evolution algorithm. Soft Comput 21(15):4417–4430

Chen WN, Zhang J, Lin Y, Chen N, Zhan ZH, Chung HSH, Li Y, Shi YH (2013) Particle swarm optimization with an aging leader and challengers. IEEE Trans Evolut Comput 17(2):241–258

Cheng R, Jin Y (2015) A competitive swarm optimizer for large scale optimization. IEEE Trans Cybern 45(2):191–204

Cheng S, Shi Y, Qin Q (2012) Dynamical exploitation space reduction in particle swarm optimization for solving large scale problems. In: 2012 IEEE congress on evolutionary computation (CEC). IEEE, pp 1–8

Clerc M, Kennedy J (2002) The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Trans Evolut Comput 6(1):58–73

Montes de Oca MA, Aydın D, Stützle T (2011) An incremental particle swarm for large-scale continuous optimization problems: an example of tuning-in-the-loop (re)design of optimization algorithms. Soft Comput 15(11):2233–2255

Eberhart RC, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science, vol 1, pp 39–43. New York, NY

García S, Molina D, Lozano M, Herrera F (2009) A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 special session on real parameter optimization. J Heuristics 15(6):617–644

Ghodrati A, Malakooti MV, Soleimani M (2012) A hybrid ICA/PSO algorithm by adding independent countries for large scale global optimization. In: Intelligent information and database systems. Springer, pp 99–108

Gu S, Cheng R, Jin Y (2018) Feature selection for high-dimensional classification using a competitive swarm optimizer. Soft Comput 22(3):811–822


Huang H, Qin H, Hao Z, Lim A (2012) Example-based learning particle swarm optimization for continuous optimization. Inf Sci 182(1):125–138

Li X, Yao X (2012) Cooperatively coevolving particle swarms for large scale optimization. IEEE Trans Evolut Comput 16(2):210–224

Li X, Tang K, Omidvar MN, Yang Z, Qin K (2013) Benchmark functions for the CEC 2013 special session and competition on large-scale global optimization. Gene 7:33

Liang JJ, Qin AK, Suganthan PN, Baskar S (2006) Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans Evolut Comput 10(3):281–295

Mahdavi S, Shiri ME, Rahnamayan S (2015) Metaheuristics in large-scale global continues optimization: a survey. Inf Sci 295:407–428

Mei Y, Li X, Yao X (2014) Cooperative coevolution with route distance grouping for large-scale capacitated arc routing problems. IEEE Trans Evolut Comput 18(3):435–449

Molina D, Lozano M, Herrera F (2010) MA-SW-Chains: memetic algorithm based on local search chains for large scale continuous global optimization. In: 2010 IEEE congress on evolutionary computation (CEC). IEEE, pp 1–8

Omidvar MN, Li X, Yao X (2010) Cooperative co-evolution with delta grouping for large scale non-separable function optimization. In: 2010 IEEE congress on evolutionary computation (CEC). IEEE, pp 1–8

Omidvar MN, Li X, Mei Y, Yao X (2014) Cooperative co-evolution with differential grouping for large scale optimization. IEEE Trans Evolut Comput 18(3):378–393

Omidvar MN, Li X, Tang K (2015) Designing benchmark problems for large-scale continuous optimization. Inf Sci 316:419–436

Potter MA, De Jong KA (1994) A cooperative coevolutionary approach to function optimization. In: Parallel problem solving from nature PPSN III. Springer, pp 249–257

Ren Y, Wu Y (2013) An efficient algorithm for high-dimensional function optimization. Soft Comput 17(6):995–1004

Schmitt BI (2015) Convergence analysis for particle swarm optimization. FAU University Press, Boca Raton

Schmitt M, Wanka R (2015) Particle swarm optimization almost surely finds local optima. Theor Comput Sci 561:57–72

Shi Y, Eberhart R (1998) A modified particle swarm optimizer. In: The 1998 IEEE international conference on evolutionary computation proceedings, 1998. IEEE World Congress on Computational Intelligence. IEEE, pp 69–73

Storn R, Price K (1997) Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 11(4):341–359

Takahama T, Sakai S (2012) Large scale optimization by differential evolution with landscape modality detection and a diversity archive. In: 2012 IEEE congress on evolutionary computation (CEC). IEEE, pp 1–8

Tang K, Li X, Suganthan PN, Yang Z, Weise T (2009) Benchmark functions for the CEC 2010 special session and competition on large-scale global optimization. Technical report, University of Science and Technology of China

Tang K, Yao X, Suganthan PN, MacNish C, Chen YP, Chen CM, Yang Z (2007) Benchmark functions for the CEC 2008 special session and competition on large scale global optimization. Nature Inspired Computation and Applications Laboratory, USTC, China

Tseng LY, Chen C (2008) Multiple trajectory search for large scale global optimization. In: IEEE congress on evolutionary computation, 2008. CEC 2008 (IEEE World Congress on Computational Intelligence). IEEE, pp 3052–3059

Van den Bergh F, Engelbrecht AP (2010) A convergence proof for the particle swarm optimiser. Fundam Inform 105(4):341–374

Van den Bergh F (2006) An analysis of particle swarm optimizers. Ph.D. thesis, University of Pretoria

Vicini A, Quagliarella D (1999) Airfoil and wing design through hybrid optimization strategies. AIAA J 37(5):634–641

Wang H, Rahnamayan S, Wu Z (2011) Adaptive differential evolution with variable population size for solving high-dimensional problems. In: 2011 IEEE congress on evolutionary computation (CEC). IEEE, pp 2626–2632

Yang Z, Tang K, Yao X (2008) Large scale evolutionary optimization using cooperative coevolution. Inf Sci 178(15):2985–2999

Yang Z, Tang K, Yao X (2008) Multilevel cooperative coevolution for large scale optimization. In: IEEE congress on evolutionary computation, 2008. CEC 2008 (IEEE World Congress on Computational Intelligence). IEEE, pp 1663–1670

Zhang K, Li B (2012) Cooperative coevolution with global search for large scale global optimization. In: 2012 IEEE congress on evolutionary computation (CEC). IEEE, pp 1–7

Zhao SZ, Liang JJ, Suganthan PN, Tasgetiren MF (2008) Dynamic multi-swarm particle swarm optimizer with local search for large scale global optimization. In: IEEE congress on evolutionary computation, 2008. CEC 2008 (IEEE World Congress on Computational Intelligence). IEEE, pp 3845–3852

Zhao SZ, Suganthan PN, Das S (2010) Dynamic multi-swarm particle swarm optimizer with sub-regional harmony search. In: 2010 IEEE congress on evolutionary computation (CEC). IEEE, pp 1–8

Zhou A, Zhang Q (2016) Are all the subproblems equally important? Resource allocation in decomposition-based multiobjective evolutionary algorithms. IEEE Trans Evol Comput 20(1):52–64
