Information Sciences 334–335 (2016) 219–249
A novel hybrid Cultural Algorithms framework with
trajectory-based search for global numerical optimization
Mostafa Z. Ali a,b,∗, Noor H. Awad c, Ponnuthurai N. Suganthan c, Rehab M. Duwairi a, Robert G. Reynolds d,e
a Jordan University of Science & Technology, Irbid 22110, Jordan
b Princess Sumayya University for Technology, Amman, Jordan
c Nanyang Technological University, Singapore 639798, Singapore
d Wayne State University, Detroit, MI 48202, USA
e The University of Michigan-Ann Arbor, Ann Arbor, MI 48109-1079, USA
Article info
Article history:
Received 29 October 2014
Revised 30 October 2015
Accepted 13 November 2015
Available online 7 December 2015
Keywords:
Cultural Algorithms
Global numerical optimization
Hybrid algorithm
Knowledge source
Multiple trajectory search
Abstract
In recent years, Cultural Algorithms (CAs) have attracted substantial research interest. When
applied to highly multimodal and high dimensional problems, Cultural Algorithms suffer
from fast convergence followed by stagnation. This research proposes a novel hybridization
between Cultural Algorithms and a modified multiple trajectory search (MTS). In this hy-
bridization, a modified version of Cultural Algorithms is applied to generate solutions us-
ing three knowledge sources, namely situational knowledge, normative knowledge, and topographic
knowledge. From these solutions, several are selected to be used by the modified
multi-trajectory search. All solutions generated by both component algorithms are used to
update the three knowledge sources in the belief space of Cultural Algorithms. In addition, an
adaptive quality function is used to control the number of function evaluations assigned to
each component algorithm according to their success rates in the recent past iterations. The
function evaluations assigned to Cultural Algorithms are also divided among the three knowl-
edge sources according to their success rates in recent generations of the search. Moreover,
the quality function is used to tune the number of offspring these component algorithms are
allowed to contribute during the search. The proposed hybridization between Cultural Algo-
rithms and the modified trajectory-based search is employed to solve a test suite of 25 large-
scale benchmark functions. The paper also investigates the application of the new algorithm
to a set of real-life problems. Comparative studies show that the proposed algorithm can have
superior performance on more complex higher dimensional multimodal optimization prob-
lems when compared with several other hybrid and single population optimizers.
© 2015 Elsevier Inc. All rights reserved.
1. Introduction
Many real-life problems can be formulated as single-objective global optimization problems. In single-objective global optimization, the objective is to determine a set of state variables or model parameters that yields the globally optimal solution of
an objective or cost function. The cost function usually involves D decision variables: X = [x₁, x₂, x₃, …, x_D]^T. The optimization
task is essentially a search for a parameter vector X* that minimizes the cost function f(X) (f : Ω ⊆ ℝ^D → ℝ), where Ω is a
∗ Corresponding author. Tel.: +962 79 743 3080; fax: +962 2 7095046/22783.
E-mail address: [email protected] (M.Z. Ali).
http://dx.doi.org/10.1016/j.ins.2015.11.032
0020-0255/© 2015 Elsevier Inc. All rights reserved.
non-empty, large but bounded set that represents the domain of the decision-variable space. In other words, f(X*) ≤ f(X), ∀X ∈ Ω. The focus on minimization does not result in a loss of generality, since max{f(X)} = −min{−f(X)}.
Recently there has been a growing interest in utilizing population-based stochastic optimization algorithms for the solution
of global optimization problems, due to the emergence of important real-world problems of high complexity [27,35,47]. These
stochastic search algorithms do not require that the fitness landscape be differentiable.
While many population-based stochastic algorithms have been developed for the solution of real-valued optimization prob-
lems, they can often get trapped in locally optimal basins of attraction when solving problems with complex landscapes. One
approach to the solution of this problem is through the combination, hybridization, of algorithms with complementary proper-
ties in a synergistic fashion. Hybridization offers a great potential for developing stochastic optimization algorithms with search
properties that are superior in performance to their constituent algorithms in terms of both resiliency and robustness [6,7,16,32].
For example, a new optimization algorithm called the Big Bang-Big Crunch (BB-BC) algorithm was introduced which is based
on both the Big Bang and Big Crunch theories [12]. The algorithm generates random points in the Big Bang phase
that are then replaced by a single representative point, computed via a center of mass, in the Big Crunch phase. The algorithm exhibited an
enhanced performance over a modified Genetic Algorithm that was also developed by the same authors. More recently, a novel
heuristic optimization method namely, Charged System Search (CSS), was proposed [25]. The algorithm is based on principles
from statistical mechanics and physics, especially Coulomb's law from electrostatics and Newton's laws of mechanics. The theory behind this algorithm makes it suitable for non-smooth or non-convex domains, as it requires information about neither
the continuity nor the gradient of the search space.
Other evolutionary algorithms embrace the concept of hybridization in different ways. In these approaches multiple opti-
mization algorithms are run concurrently, as in AMALGAM-SO [43]. This algorithm merges the strengths of Covariance Matrix
Adaptation [4], Genetic Algorithms, and Particle Swarm Optimization. It employs a self-adaptive learning strategy that deter-
mines the number of individuals for each algorithm to use in each generation of the search process. The algorithm was tested on
the IEEE CEC2005 real parameter optimization [40] and was shown to generate promising results on complex high dimensional
multimodal problems.
Still other hybrid algorithms demonstrate competitive performance when one of the algorithms is used to tune the parameters for the other [46]. In [46], CoBiDE utilizes covariance matrix adaptation in order to establish an appropriate coordinate
system for the crossover operator used by the Differential Evolution (DE) component. This helps to relieve the dependency of DE on the coordinate system to a certain extent. Moreover, a bimodal distribution parameter setting was proposed
to control the crossover and mutation parameters of DE. The algorithm demonstrated improved results on a set of standard
functions and a wide range of engineering optimization problems.
The authors of PSO6-Mtsls [17] utilized an improved version of PSO where a multiple trajectory search algorithm was used to
coordinate the search of individuals in the population. Each particle received local information from 6 neighbors and was guided
by a trajectory.
Data intensive hybrid approaches frequently use Cultural Algorithms. Nguyen and Yao [32] proposed a hybrid framework
consisting of Cultural Algorithms and iterated local search in which they used a shared knowledge space that is responsible
for integrating the knowledge produced from pre-defined multi-populations. Knowledge migration in this context was used to
guide the search in new directions with less communication cost. Another technique that hybridized Cultural Algorithms with
an improved local search is presented in [5]. Coelho and Mariani [7] suggested using PSO as a population space in the cultural
framework for numerical optimization over continuous spaces in order to increase the efficiency of the search. Another approach,
which used an improved particle swarm algorithm with Cultural Algorithms, was introduced by Wang et al. [44]. Becerra and Coello [6]
proposed an enhanced version of Cultural Algorithms with differential evolution so as to enhance diversity in the population of
problem solvers during the optimization process. Although the results obtained by their algorithm were similar (in quality) to
other approaches to which it was compared, it was able to achieve such results with fewer function evaluations. Xue
and Guo [50] introduced a hybrid of Cultural Algorithms with Genetic Algorithms in order to solve multi-modal functions.
Another hybrid approach employs Cultural Algorithms to extract useful knowledge from a Genetic Algorithm population
space for the solution of job shop scheduling problems [45]. A similar hybridization was used in [21] for the optimization of real
world applications. In [3], the authors introduced a hybrid approach that combines Cultural Algorithms with a niching method
for solving engineering applications. An improved Cultural Algorithms based on balancing the search direction is also presented
in [2]. Other significant examples of hybridizing Cultural Algorithms with other techniques can be found in [16].
While hybridization has its advantages it also comes with a potential cost. First, there needs to be a way to balance the compo-
nent algorithms in terms of exploration and exploitation [18,35]. Second, with more algorithms under the hood, the optimization
engine may require more computational resources in the worst case. Thus, keeping the hybrid algorithm simple
can help to limit the number of Function Evaluations (FE) needed to solve a problem [1,6].
In this paper, we propose a simple yet powerful hybrid evolutionary algorithm that synergistically combines the features of
two global optimizers: Cultural Algorithms (CA) and multiple trajectory search (MTS) for multimodal optimization. The Cultural
Algorithms (CA) is an evolutionary algorithm (EA) that provides a powerful tool for solving data intensive problems [1,37] and
has successfully handled many optimization problems and applications [1,20,34,37]. It can be defined as an evolutionary model
that consists of both a belief space and a population space, with a set of communication protocols that coordinate the interaction of the
two spaces.
Fig. 1. Framework and pseudo-code of the Cultural Algorithms.
Multiple trajectory search (MTS) [42] is a metaheuristic that guides the search process toward near-optimal solutions. MTS
was proposed to solve unconstrained real-parameter optimization problems and has been shown to be successful on
large-scale single-objective global optimization problems when tested on a variety of benchmarks [42]. It presents a suitable
choice to merge with Cultural Algorithms for many reasons. Multiple trajectory search has been proven to be an efficient opti-
mizer on non-separable and large-scale optimization problems [42]. MTS has been successfully used to hybridize other swarm
intelligent approaches [17]. MTS, with its local searches, seems to complement the functionality of CAs during the exploration and
exploitation stages of the search, requiring fewer extra computations to enhance recently found solutions.
As a result, the knowledge sources with multiple local searches can be used as one compatible engine to guide the evolutionary
search towards promising regions, and to refine obtained solutions. This should help in producing a basin-hopping-like algo-
rithm to make global jumps between local basins [28]. The coherency between the two algorithms, using the nature of the local
searches in both of them, will help knowledge sources apply some type of extrapolation to the previously found optima to pre-
dict the shape of the landscape. This will help in guiding the search towards more successful solutions. Individuals can then
exchange a richer repository of information as stored in the belief space of the cultural adaptation paradigm of CAs, in a manner
that facilitates evolving behavior instead of merely evolving solutions.
A successful optimizer should always exhibit exploitative power in addition to explorative power, especially during later
stages of the search. To preserve such characteristics, we present here a two-stage optimization algorithm. This involves a mod-
ified version of Cultural Algorithms utilizing a shared knowledge component with a quality function. This quality function is
used to update the membership of the knowledge sources. The modified CA is hybridized with an improved version of multiple
trajectory search optimizer. The hybridized algorithm, which we call CA-MMTS, consists of different search stages. In the first
stage, the CA uses the three knowledge sources (KSs) to find the initial set of points that will be used as the start points for MMTS. Then MMTS will
generate a new set of solutions for the next stage of the search. The influence function serves as a way to switch to the most
appropriate optimizer based on its success during the search process. In addition, it determines the percentage of offspring of
each component algorithm for later generations based on their success in previous generations. In order to empirically validate
the effectiveness of CA-MMTS, we selected a benchmark set of 25 functions with a diverse range of complexity and a set of
real-life problems. The algorithm will be compared with CA hybrid variants and other significant state-of-the-art optimization
techniques. The remainder of this paper is organized as follows. In Section 2, we briefly introduce Cultural Algorithms and
multiple trajectory search. In Section 3, the proposed method is elaborated. Section 4 describes the optimization problems, parameter settings, and the simulation results, along with comparisons with state-of-the-art algorithms. Finally, Section 5 summarizes
the conclusions of this paper.
2. Preliminaries
2.1. Cultural Algorithms
Reynolds [27,37] introduced Cultural Algorithms (CA) as an evolutionary model that is derived from the cultural evolution
process in nature. It consists of belief and population spaces and a set of communication channels between these spaces to
control the quality of the shared knowledge and its type. The basic pseudo-code of the CA framework is shown in Fig. 1. The
figure shows how the main steps of CA are performed in each generation. The Obj() function evaluates the individuals in the
population space, and the Accept() function selects the best individuals, which are used to update the belief-space knowledge via
the function Update(). The Influence() function uses roulette wheel selection to choose one knowledge source to drive the
evolution of the next generation.
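The generational loop just described can be sketched in Python as follows. This is a minimal illustrative skeleton, not the authors' implementation: the toy `accept`, `update`, and `influence` bodies and all numeric settings in `demo` are assumptions made for the example.

```python
import random

def cultural_algorithm(obj, accept, update, influence, init_pop, max_gens):
    """Generic CA skeleton mirroring Fig. 1: evaluate, accept the best
    individuals, update the belief space, then let the belief space
    influence the creation of the next generation."""
    population = init_pop()
    beliefs = {}                                      # belief-space knowledge
    for _ in range(max_gens):
        fitness = [obj(ind) for ind in population]
        elites = accept(population, fitness)          # best individuals
        beliefs = update(beliefs, elites)             # refresh knowledge sources
        population = influence(beliefs, population)   # generate next generation
    return min(population, key=obj)

def demo():
    """Toy usage: minimize the 2-D sphere with a single situational exemplar."""
    dim, npop = 2, 20
    obj = lambda x: sum(v * v for v in x)
    init_pop = lambda: [[random.uniform(-5, 5) for _ in range(dim)]
                        for _ in range(npop)]
    accept = lambda pop, fit: [min(zip(fit, pop))[1]]      # keep the best only
    update = lambda b, elites: {"situational": elites[0]}
    def influence(b, pop):  # mutate around the situational exemplar
        g = b["situational"]
        return [[gi + random.gauss(0, 0.5) for gi in g] for _ in pop]
    return cultural_algorithm(obj, accept, update, influence, init_pop, 50)
```

Here a single situational exemplar drives the influence step; the full CA replaces these placeholders with multiple knowledge sources and roulette-wheel selection.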
Residing in the belief space, five cultural knowledge sources are responsible for collecting information about the search space
and the problem domain in order to guide the individuals in the search landscape. These knowledge sources include situational
knowledge (SK), topographic knowledge (TK), domain knowledge (DK), normative knowledge (NK) and history knowledge (HK).
Situational knowledge represents a structure that contains a list of exemplars from the population of problem solvers, and
each contains the decision variables’ values for the objective function and their corresponding fitness value. This knowledge
source is updated either by obtaining a new individual that is more fit than the current best during the search, or by reinitializing
it upon a change event in the search landscape.
Topographic knowledge is represented as a multi-dimensional grid. Each cell in the grid is described in terms of its size and
the number of dimensions. This knowledge is created by first sampling a solution in each cell. When an individual is found in
a cell with a better fitness value than that in the previous generation, the structure is updated by dividing that cell into smaller
cells. The roulette wheel is then used to select the influence functions for these new individuals, again based on
past performance.
Domain knowledge is characterized by the domain ranges of all parameters and the best examples from the population along
with any constraints on their relationships. The update mechanism for DK is similar to that of SK, except that re-initialization of
DK does not happen after every change in a dynamic search landscape. The difference between the fitness value of the current
best and the fitness of the best found so far is considered as the generator for the mutation step size that will be comparable to
the magnitude of the landscape change. This will then be mapped into the variable range.
Normative knowledge is represented as a set of intervals. These intervals correspond to the range that is currently believed
to be the best for each parameter. Each parameter has a performance value and upper and lower bounds for its values. These
ranges can be adjusted as more information on individual performance is collected.
Historic or temporal knowledge monitors shifts in the distance and direction of the optimum in the environment and records
all such environmental changes as averages. The directional shift between the current best solution and the previously recorded
one is used to determine an environmental change. This can take values of −1, 0, or 1, depending on a decrease, no difference, or
an increase in the parameter values.
More information on Cultural Algorithms, the knowledge sources and how they are integrated into the belief space to perform
the search will be discussed in Section 3.
2.2. Multiple trajectory search
The multiple trajectory search (MTS) algorithm was previously used to solve large-scale optimization problems [42]. The idea
behind this technique is to search for improved solutions by moving in the parameter space based on different step sizes that
are applied to the original parameters. These step sizes are used to move in the parameter space from the original positions in
each dimension. Each step size is applied according to a proper local search method. MTS utilizes simulated orthogonal arrays
SOAM×N to generate M initial solutions within the lower and upper bounds of decision variables (lb, ub) [49].
The number of factors corresponds to the number of dimensions D and the number of levels of each factor is M. These M
initial solutions are taken to be uniformly distributed over the feasible search landscape. Local search methods have an initial
search range (SR) that is equal to (b − a)/2, where a and b are the lower and upper bounds, respectively. Each local search
has a test grade (TG) parameter, which is used to choose the local search predicted to perform best in the next iteration.
The MTS starts by conducting repetitive local searches, until a pre-determined number of function evaluations is reached. The
major idea behind search in this algorithm is based on the sequence of step sizes that are applied to the original parameters in
order to generate new backward and forward movements in the search space.
The pseudo-code for MTS is shown in Fig. 2. MTS starts by using simulated orthogonal arrays to generate M initial solutions
where the number of dimensions D corresponds to the factors and M is the number of levels for each factor as shown in line
5. Next, it defines the search range for each initial solution to be half of the difference between the upper and lower bounds as
shown in line 11. Afterwards, the local search methods are used to change the search range in every iteration.
The original MTS uses three local search methods. Local search 1 tries to search from the first dimension to the last as shown
in lines 21–26. Local search 2 mimics the same idea of local search 1 but its search is focused on one quarter of the dimensions
according to the equations shown in lines 32 and 35. The first two local search methods re-start the search range if it falls below
1 × 10⁻¹⁵, as shown in lines 16 and 17. Local search 3 works in a different manner from the first two. It considers three moves for
each dimension to determine the best move for each dimension as shown in lines 39–41.
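As a rough illustration, the first local search can be sketched as follows. The move pattern (a backward step of SR per dimension, then a forward step of 0.5·SR if the first fails) and the restart rule follow the description above, but the function name, the restart fraction `sr_restart`, and the omitted grading/bonus bookkeeping of [42] are simplifications and assumptions.

```python
def mts_local_search_1(f, x, sr, lb, ub, sr_restart=0.4):
    """Sketch of MTS local search 1: sweep dimensions first to last,
    trying a backward step of sr, then a forward step of 0.5 * sr."""
    if sr < 1e-15:                     # restart a collapsed search range
        sr = (ub - lb) * sr_restart
    x = list(x)
    best = f(x)
    improved = False
    for d in range(len(x)):
        orig = x[d]
        x[d] = orig - sr               # backward move along dimension d
        val = f(x)
        if val < best:
            best, improved = val, True
            continue
        x[d] = orig + 0.5 * sr         # forward move along dimension d
        val = f(x)
        if val < best:
            best, improved = val, True
        else:
            x[d] = orig                # neither move helped: restore
    if not improved:
        sr *= 0.5                      # shrink the search range
    return x, best, sr

# One pass on the sphere function from (3, -2) with sr = 1.0:
sphere = lambda v: sum(t * t for t in v)
x1, f1, sr1 = mts_local_search_1(sphere, [3.0, -2.0], 1.0, -5.0, 5.0)
```

In this example the backward step improves the first dimension (3 → 2) and the forward half-step improves the second (−2 → −1.5), so the search range is left unchanged.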
In this paper, a modified version of MTS is integrated with an improved version of the CA in order to enhance the CA's knowledge-update, step-size, and influence functions. The MTS complements Cultural Algorithms, as both algorithms use different
search moves that can complement each other’s work when called appropriately to select the appropriate search scale. This
helps to produce more promising solutions and to escape local optima and stagnation during the search. As a result, the hybrid
search algorithm utilizes the power of both component algorithms in order to increase the diversity of the population and guide
the search for better solutions. The details are presented in the next section.
3. A novel hybridization of Cultural Algorithms with a modified multiple trajectory search
3.1. The belief space
In the proposed approach, the modified belief space uses three of the five knowledge sources described previously. These
knowledge sources represent the repository of the best-acquired knowledge during the entire optimization process. In what
Fig. 2. Pseudo-code for the original MTS.
Fig. 3. Situational knowledge as implemented in CA-MMTS.
follows, mutate(v) denotes a Gaussian random-number generator with mean v, and Rnd(r1, r2) is a function that generates a
uniformly distributed random number in the range (r1, r2).
(1) Situational Knowledge (SK): As mentioned previously, SK is a structure that contains a set of the best exemplars that
were found during the evolutionary process so far. In this manner, individuals will always follow exemplars of the popu-
lation. Situational knowledge is responsible for guiding the search toward the exemplars by generating new offspring. It is
Fig. 4. Modified topographic knowledge as implemented in CA-MMTS.
Fig. 5. Normative knowledge as implemented in CA-MMTS.
implemented as given in Fig. 3, where D is the dimensionality of the problem. The global best decision vector found so far is
represented as <gbest_1, gbest_2, …, gbest_D>.
(2) Topographic Knowledge (TK): topographical (spatial) knowledge uses spatial characteristics to divide the problem landscape
into cells or regions, where each cell keeps track of the best individual in it. Topographic knowledge reasons about cell-
based functional patterns in the search space [34]. Individuals influenced by topographic knowledge will imitate the cell-
best in future generations. For the sake of efficiently managing memory for complex optimization problems with higher
dimensions, the k-d tree (k-dimensional binary tree) is used to modify the implementation of this structure where each
node can only have two children. This space-partitioning data structure should simplify the process of utilizing spatial
characteristics to divide any of the dimensions in half during the optimization process. The topographical knowledge
source uses an update methodology as given in Fig. 4, where p_cbh is the parent cell of the best agent.
(3) Normative Knowledge (NK): Normative knowledge deals with the guidelines for individual behaviors through a set of
promising parameter ranges [34]. This will lead individuals to remain in or move on to better ranges throughout the
search space. The Normative knowledge source consists of a memory structure to store acceptable behavior of individu-
als and their ranges in the feasible search regions. Offspring individuals are generated as shown in Fig. 5, assuming D is
the dimensionality of the problem, xi is the current solution, yi is the generated offspring, and lbi, ubi are the lower and
upper bounds, respectively. Rnd(a, b) is a uniform random number generated within the interval (a, b) and mutate(xi) is
generated from a Gaussian distribution with a mean of xi.
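The normative influence can be sketched as follows. Since the exact expressions of Fig. 5 are not reproduced in this text, the branch structure below follows the classic normative rule (pull a variable lying outside the believed interval back toward it, mutate locally inside it), and the `scale` parameter is an assumption.

```python
import random

def normative_offspring(x, lo, hi, scale=0.1):
    """Sketch of NK influence: lo[j], hi[j] are the believed promising
    bounds l_j, u_j for variable j; mutation strength is taken to be
    proportional to the interval width (scale is an assumed parameter)."""
    y = []
    for j, xj in enumerate(x):
        width = hi[j] - lo[j]
        if xj < lo[j]:                        # below the interval: push up
            y.append(xj + abs(random.gauss(0.0, scale * width)))
        elif xj > hi[j]:                      # above the interval: push down
            y.append(xj - abs(random.gauss(0.0, scale * width)))
        else:                                 # inside: local Gaussian move
            y.append(xj + random.gauss(0.0, scale * width))
    return y
```

A variable below its believed interval is always nudged upward, one above it downward, so the population drifts into the currently promising ranges.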
3.2. Acceptance function
The acceptance function regulates the number of accepted individuals into the belief space. The design of the acceptance
function is based on the one presented by Peng [34]. The number of accepted individuals into the belief space decreases as
time elapses. The fraction of accepted individuals from the population is normally taken from the interval [0, 1). The number
of accepted individuals is derived from this percentage as given in Eq. (1). In the equation G represents the total number of
generations, NP represents the total number of individuals, and p%ind represents the percentage of individuals recruited into the
belief space at time (iteration) t. The total number of accepted individuals at any time in the modified CA is given as:
N^t_{%accep} = ⌊NP · (p_{%ind} · t + (1 − p_{%ind})) / t⌋    (1)
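Reading Eq. (1) as N = ⌊NP · (p·t + (1 − p)) / t⌋, the accepted count starts at NP and decays toward NP · p as t grows, matching the description above. A small sketch (the function name is ours):

```python
import math

def accepted_count(NP, p_ind, t):
    """Eq. (1): number of individuals accepted into the belief space at
    iteration t. Equals NP at t = 1 and approaches NP * p_ind for large t."""
    return math.floor(NP * (p_ind * t + (1.0 - p_ind)) / t)
```

For NP = 100 and p_ind = 0.2 this yields 100 at t = 1, 40 at t = 4, and tends to 20 for large t.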
[Fig. 6 depicts the CA-MMTS framework: a belief space containing the situational, normative, and spatial (topographic) knowledge sources interacts with the population space through Accept(), Influence(), and Update_pop(). New agents are generated using MMTS, which creates a neighborhood region N(s) of M points, evaluates these solutions, chooses the best solution s′, and sets s′ = MTS_new if f(MTS_new) < f(s′); a quality function governs the exchange.]
Fig. 6. Framework of the CA-MMTS algorithm.
3.3. Modified trajectory-based search in the context of CA
A modified version of multiple trajectory search (MMTS) is introduced to complement the exploration and exploitation capa-
bilities of the Cultural Algorithms. Instead of using the simulated orthogonal arrays to generate M initial solutions as in the basic
multiple trajectory model, the knowledge sources of Cultural Algorithms are used. Those initial solutions represent the neigh-
borhood region N(s), and they are used as the starting points for the multiple trajectory search. Using knowledge sources for this
task enables us to generate better solutions by benefiting from the knowledge of previous generations. After generating those
initial solutions, the best solution Sbest is then selected and a new solution is generated according to the following equation:
X^i_{MTS,j} = X^i_{best,j} + δ    (2)
where δ is the MTS step size, defined as follows:
δ = ED × LRF    (3)
LRF is the linear reducing factor [42] that is normally chosen to be in the interval [0.02, 1]. D is the dimension of the problem
and 1 ≤ j ≤ D.
ED is the Euclidean distance between the current best solution of the population space, S_{best−i}, at generation i, and the best
solution among those generated by the knowledge sources, S_best:
ED = √( Σ_{j=1}^{D} (S^j_{best} − S^j_{best−i})² )    (4)
The MTS step size δ is thus derived from the difference between the two best solutions S_{best−i} and
S_best, and is then applied as needed in each dimension.
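Eqs. (2)–(4) combine into a short computation: the step size δ is the Euclidean distance between the two best solutions scaled by LRF, and the same δ is added in every dimension of S_best. A sketch (the function name is ours):

```python
import math

def mmts_step(s_best, s_best_i, lrf):
    """s_best: best solution generated by the knowledge sources;
    s_best_i: best solution in the population space at generation i;
    lrf: linear reducing factor, chosen in [0.02, 1]."""
    ed = math.sqrt(sum((a - b) ** 2 for a, b in zip(s_best, s_best_i)))  # Eq. (4)
    delta = ed * lrf                                                     # Eq. (3)
    return [x + delta for x in s_best]                                   # Eq. (2)

# ED = 5 for these two points, so delta = 2.5:
new_sol = mmts_step([1.0, 2.0], [4.0, 6.0], 0.5)
```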
3.4. Hybridization of enhanced cultural learning using a modified multiple trajectory search
The performance of an optimization algorithm normally deteriorates as the complexity of the problem landscape increases,
and as the solution space of the problem increases. In order to solve problems with increased complexity, the modified cultural
framework will be used with three knowledge structures in the belief space to guide the individuals in their search. The three
knowledge sources include situational knowledge, normative knowledge, and topographic knowledge. These three knowledge
sources were chosen for their well-known performance [34,37].
The modified MTS is used periodically for a certain number of function evaluations of the objective function. The success
rate of MMTS is calculated at the end of each such search phase in order to determine how successful the algorithm was in terms of
generating improved solutions. That determines when it will be used again.
The hybrid framework of the Cultural Algorithms with Modified Multiple Trajectory Algorithm (CA-MMTS) is shown in Fig. 6,
and the detailed pseudo-code is given in Fig. 7. Our version of Cultural Algorithms consists of the population and belief spaces.
The modified belief space structure combines the three knowledge sources described earlier. As will be shown in later sections,
our technique requires minimal configuration and tweaks to work efficiently. Moreover, it uses the same basic parameters used
in Cultural Algorithms as reported in [37].
Fig. 7. Pseudo-code of the CA-MMTS algorithm.
The CA-MMTS commences by initializing a population space of fixed size according to the exploration and exploitation ca-
pabilities of the knowledge sources. After the knowledge sources generate a new set of individuals, the best individual will be
chosen and the modified version of the multiple trajectory search begins. The algorithm benefits from the self-adaptation applied
by the knowledge sources to enhance the quality of solutions, and the new search directions. This process helps in determining
the most suitable knowledge beacons for guiding the search of individuals in future generations. This will help in extracting
different characteristics of the problem landscape and the optimization process requirements of the evolution phases.
The influence function, which specifies the manner in which the knowledge sources work together, depends on a modified version of
the roulette-wheel selection process. The probability of selecting each knowledge source equals 1/nKS, where nKS is the number of
knowledge sources used. Hence, they all have equal probabilities of being selected at the beginning of the search. The number of
individuals generated by the ith knowledge source that are accepted (successful individuals) into the next generation (t + 1)
is denoted n^{t+1}_s(ks_i), and the number of discarded individuals for that particular knowledge source is denoted n^{t+1}_f(ks_i).
The archive of experiences that stores these updates has a fixed size, denoted EA, which is the number of
previous generations used to modify the probability of selection of each knowledge source. In case of experience overflow in the
archive, the oldest memory will be removed in order to allow space for the newest successful experience.
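The fixed-size, oldest-first archive described above maps directly onto a bounded FIFO queue; a minimal sketch (EA = 5 is an assumed archive size):

```python
from collections import deque

EA = 5                         # assumed archive size (generations kept)
archive = deque(maxlen=EA)     # appending to a full deque evicts the oldest entry
for n_success in [3, 7, 2, 9, 4, 6, 1]:
    archive.append(n_success)  # per-generation success count for one KS

# Only the most recent EA experiences remain:
assert list(archive) == [2, 9, 4, 6, 1]
```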
The probabilities of selecting the different knowledge sources for a particular individual are updated for subsequent genera-
tions based on the sizes of the archives for successful and discarded individuals at generation t. The probability of selecting the
ith knowledge source at generation t is calculated as follows:
p^t(ks_i) = SR^t(ks_i) / Σ_{i=1}^{nKS} SR^t(ks_i),    (5)
where,
SR^t(ks_i) = ( Σ_{t=G−EA}^{G−1} n^t_s(ks_i) ) / n^t_{sf}(ks_i) + λ    (6)
and,
n^t_{sf}(ks_i) = Σ_{t=G−EA}^{G−1} n^t_s(ks_i) + Σ_{t=G−EA}^{G−1} n^t_f(ks_i)    (7)
Here n^t_{sf}(ks_i) is the number of individuals guided by ks_i that are selected to enter the next generation, plus the discarded
ones for that knowledge source. SR^t(ks_i) represents the success rate of the individuals guided by the ith knowledge source
that will be accepted into the next generation. This information is obtained from the last EA generations. A quantity λ with a
value of 0.05 is added to avoid a null success rate. At the beginning of the search, both the modified CA and MMTS are assigned
an equal number of function evaluations (FEs). Subsequently, Eqs. (5)–(7)
are used to divide up the FEs between the algorithms based on their relative success rate. An external archive is employed to
record the best-found solution, which will be used when the system goes into a stagnation state for a certain period wi.
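One reading of Eqs. (5)–(7), with λ added to the success ratio as the text indicates, can be sketched as follows; the function name and the sample counts are illustrative.

```python
LAMBDA = 0.05  # avoids a null success rate (Eq. 6)

def ks_selection_probs(successes, failures):
    """successes[i], failures[i]: accepted / discarded offspring counts for
    knowledge source i over the last EA generations."""
    rates = []
    for s_hist, f_hist in zip(successes, failures):
        n_sf = sum(s_hist) + sum(f_hist)           # Eq. (7)
        rates.append(sum(s_hist) / n_sf + LAMBDA)  # Eq. (6)
    total = sum(rates)
    return [r / total for r in rates]              # Eq. (5)

# Three knowledge sources over EA = 2 generations:
probs = ks_selection_probs([[8, 6], [2, 1], [5, 5]],
                           [[2, 4], [8, 9], [5, 5]])
```

The most successful source (the first, with 14 of 20 offspring accepted) receives the largest selection probability.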
After that, the best solution out of these M solutions, $S_{best}$, is chosen and compared against the best solution in the current population space, $S_{i\text{-}best}$. If $S_{best}$ is better than the current best, it replaces it. Otherwise, a new solution $S_{MTS}^{i}$ is generated using the Euclidean distance between $S_{best}$ and $S_{i\text{-}best}$ multiplied by a linearly reducing factor.
The fitness of the individual newly created via MMTS, $S_{MTS}^{i}$, is then compared with that of the current best individual in the population. If the performance is improved, $S_{MTS}^{i}$ replaces $S_{best}$; otherwise, the current best is retained for the following generation.
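One plausible reading of this replacement step, sketched under our own assumptions (minimisation, list-based solutions, `lrf` standing for the linearly reducing factor of Eq. (3) sampled from [0.02, 1]):

```python
def mmts_step(candidates, pop_best, fitness, lrf):
    """Compare the best of the M local-search solutions with the population
    best and, if it does not win outright, take a scaled step toward it."""
    s_best = min(candidates, key=fitness)      # best of the M MMTS solutions
    if fitness(s_best) < fitness(pop_best):    # S_best beats S_i-best
        return s_best
    # Otherwise build S^i_MTS from the difference between the two, scaled by lrf.
    s_mts = [p + lrf * (s - p) for s, p in zip(s_best, pop_best)]
    return s_mts if fitness(s_mts) < fitness(pop_best) else pop_best
```

This is a sketch of the control flow only; the exact construction of $S_{MTS}^{i}$ follows the paper's Eq. (3).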
4. Experiments and analysis
In the field of stochastic optimization, it is important to compare the performance of different algorithms using established
benchmarks. In this work, the IEEE CEC 2005 special session on real-parameter optimization [40] benchmarks in 30D, 50D and in
a scalability study with 100D are used. These benchmarks have different characteristics such as regularity, non-separability, ill-conditioning and multimodality. The suite also includes hybrid composition functions, each formed by combining several basic functions. The benchmarks are built on classical functions such as the well-known Schwefel, Rosenbrock, Rastrigin, Ackley, and Griewank functions. More details on these functions can be found in [40] and will not be repeated here. In the experiments described in this section, the performance of CA-MMTS is compared with the results of several other well-known CA hybridizations and other state-of-the-art algorithms.
4.1. Experimental setup
The experiments reported in this paper were performed on an Intel (R) Core i7 2720QM processor @ 2.20 GHz, and 8 GB RAM
operating on Windows 7 professional. All of the programs were written in Java 1.7.0_05-b05.
For each problem, a total of 50 independent runs were performed. All of the functions were tested in 30D and 50D; in the scalability study, the performance of the algorithm on functions f1 to f14 was tested in 100D.

Fig. 8. The average number of agents controlled by each knowledge source during the runs, averaged over 50 runs, for 30D.

We exclude the last ten hybrid composition functions because they incur excessive
computational time in 100D. The initial percentage of individuals whose selected experiences are accepted into the belief space (p%ind) is 25%, and the population size (NP) is 50 individuals. The size of the archive of experiences (EA) is 25. The influence and acceptance
functions were adaptively used and adjusted as specified in Sections 3.1–3.3. The number of the new individuals generated by the
local search is set to five and the LRF in Eq. (3) is randomly selected from the interval [0.02, 1]. The maximum number of fitness evaluations (FEs) is set at 3 × 10^5 for 30D and 5 × 10^5 for 50D, as specified for the IEEE CEC 2005 special session and competition on real-parameter optimization [40]. A maximum of 1 × 10^6 FEs was used for 100D in the scalability study. After several trials, the stagnation count (st_c) mentioned with Eqs. (5)–(7) and used in CA-MMTS was set at 50.
More details on the choice of some parameters and the assessment of the algorithm’s performance based on the choices
of these parameters are presented in Section 4.2. All of the optimization algorithms employed the same initial population of
randomly generated solutions from a uniform distribution as specified by the CEC 2005 rules. An exception to this was made for
problems 7 and 25 (as specified in the competition), for which initialization ranges are given in the technical report and associated code for the benchmark problems.
4.2. Sensitivity analysis
In this subsection, the sensitivity of the performance of the algorithm is assessed with respect to related parameters, and
the choice of major search structures and knowledge sources. This will help save computation time and obtain the best results
during the search. We first test the performance of the algorithm with respect to the choice of knowledge sources. Fig. 8 shows the average number of agents controlled by each knowledge source during the runs, averaged over 30 runs, for 30D and 50D. The functions were randomly selected from each category of functions in the benchmark suite. The numbers indicate the area occupied by each knowledge source on the roulette wheel. The numbers of followers of DK and HK are much smaller than those of the other knowledge sources, and hence the computation used to check their influence after each iteration can be neglected. This saves computation and influence-checking overhead, which can instead be spent on more productive search directions. Table 1 shows the average number of individuals in the population controlled by each knowledge source (ACA)
and the average number of individuals produced by a knowledge source that made it into the next generation (ARAA).
It is apparent that the exploiter knowledge source (SK) controls most of these individuals in the basic and expanded versions
of the multimodal functions (a total of 9 functions). On the other hand, explorer knowledge sources become dominant in the
unimodal and hybrid composition functions (a total of 16 functions from the original benchmark suite). This guarantees a better
search radius during the search for such complex categories of functions.
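The follower counts above act as slice areas on the roulette wheel. A minimal sketch of spinning the wheel while neglecting near-zero sources such as DK and HK (function name and `min_share` threshold are ours, not the paper's):

```python
import random

def spin_roulette(followers, rng=random.random, min_share=0.05):
    """Pick a knowledge source index proportionally to its follower count,
    skipping sources whose share of the wheel is negligible."""
    total = sum(followers)
    shares = [f / total for f in followers]
    active = [i for i, s in enumerate(shares) if s >= min_share]
    weights = [shares[i] for i in active]
    r = rng() * sum(weights)
    acc = 0.0
    for i, w in zip(active, weights):
        acc += w
        if r <= acc:
            return i
    return active[-1]
```

With the f1 counts of Table 1 ([23, 11, 13, 2, 1]), DK and HK fall below the threshold and are never checked, matching the overhead saving described above.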
The parameters to be tuned in the hybrid algorithm include the percentage of individuals with selected experiences that
will be accepted into the belief space (p%ind), the number of the new individuals generated by the local search (M), and the
population size (NP). The rest of the parameters, such as δ, HD, and CSelectionRate (the proportion of individuals to be retained in
the following generation [37]), are either specified in Section 4.1 or specified in the canonical algorithm [37] and/or calculated
as specified in earlier subsections. Sensitivity tests on 30D have been used to fine-tune the values of one parameter at a time
while fixing the values of the rest of the parameters to values as discussed in earlier subsections. Table 2 shows the results of the
sensitivity analysis. The mean statistical results and standard deviation (in parentheses) over 30 independent runs, for every set
of parameters were recorded here.
Table 1
Sensitivity analysis with respect to average number of controlled
agents and average rate of accepted agents for the knowledge source
types averaged over 30 independent runs for 30D and 50D problems.
Knowledge sources
Function SK NK TK DK HK
f1 ACA 23 11 13 2 1
ARAA (48%) (19%) (25%) (5%) (3%)
f2 ACA 19 16 13 1 1
ARAA (43%) (28%) (22%) (4%) (3%)
f6 ACA 15 13 18 2 2
ARAA (31%) (28%) (39%) (1%) (1%)
f7 ACA 14 14 17 3 2
ARAA (29%) (31%) (38%) (1%) (1%)
f13 ACA 17 12 18 2 1
ARAA (40%) (18%) (40%) (1%) (1%)
f15 ACA 19 9 18 2 2
ARAA (43%) (16%) (38%) (2%) (1%)
f16 ACA 17 13 17 2 1
ARAA (40%) (19%) (39%) (1%) (1%)
f21 ACA 19 13 15 1 2
ARAA (48%) (23%) (29%) (0%) (0%)
f25 ACA 20 10 16 2 2
ARAA (48%) (14%) (38%) (0%) (0%)
The results show that performance is best when the percentage of accepted individuals is set at 25%. It can also be noted that the algorithm exhibits its best behavior when M = 5 for most of the problems.
4.3. Comparison of numerical results of different Cultural Algorithms variants
The performance of CA-MMTS was compared with variations of the enhanced CA and several state-of-the-art Cultural Algorithms from the literature. We compare the proposed CA-MMTS algorithm with the following Cultural Algorithms:
1. The canonical CA algorithm [37].
2. Improved CA algorithm (part of the proposed work).
3. Multi-population Cultural Algorithms adopting knowledge migration (MCAKM) [20].
4. Multi-population cultural differential evolution (MCDE) [49].
5. Harmony search with CA (HS-CA) [16], which was shown by its authors to perform better than homomorphous mappings (HM) [27] and self-adaptive differential evolution (SaDE) [35] on similar benchmarks.
6. CA with iterated local search (CA-ILS) [32].
The last three algorithms are representative hybrid algorithms. For these algorithms, almost all of the parameters are the
same as those in the original papers. For the improved CA, the percentage of individuals with selected experiences into the belief
space is 25%. In CA-ILS [32], the population size is set to 3, the reduction rate of the global temperature (β) is set as 0.838, the
maximum moves that an individual can make per generation (μ) is 5, the initial move length proportion (τ ) is 0.5/0.1/0.02, and
the global temperature (T) is 100. The population size for the canonical CA and the improved CA was set to 50 individuals. The
rest of the algorithm’s parameters were set as in the canonical CA [37]. For HS-CA [16], the population size is set to 150 based
upon several preliminary experiments. The harmony memory size (HMS) is set to 100 and the harmony memory considering rate
(HMCR) is set as 0.8. In MCAKM [20], the population size is set to 200, the number of subpopulations (M) is set as 3, while the
rest were used as specified by the original authors.
Tables 3–6 show the mean and standard deviations of the best-of-run errors for 50 independent runs of the aforementioned Cultural Algorithms for 30D and 50D, respectively. The error is the absolute value of the difference between the actual optimum value of the objective function, $f_{opt}$, and the best result $f(\vec{X}_{best})$, i.e., $|f_{opt} - f(\vec{X}_{best})|$. Table 7 reports the same values for 100D.
In order to evaluate the statistical significance of the observed performance differences between the algorithms, a two-sided Wilcoxon rank sum test [48] was applied between the CA-MMTS algorithm and the other state-of-the-art CA algorithms. The null hypothesis (H0) in each test was that the compared samples are independent samples from identical continuous distributions. At the 5% significance level, a "+" marks the cases where the compared-with algorithm exhibits superior performance, and a "–" marks inferior performance; in both cases the null hypothesis is rejected. The cases with "=" indicate that the performance difference is not statistically significant. The total number of each of these cases is displayed at the end of the second table of each dimension, for each competitor algorithm, as (+/=/–).
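A stdlib-only sketch of this marking scheme (a normal-approximation rank-sum without tie correction, so only indicative; the paper uses the exact test of [48], and the error samples below are synthetic):

```python
from statistics import NormalDist

def rank_sum_mark(errors_a, errors_b, alpha=0.05):
    """Return '+' if A has significantly smaller errors than B, '-' if
    significantly larger, '=' otherwise (two-sided, normal approximation)."""
    n1, n2 = len(errors_a), len(errors_b)
    combined = [(v, 0) for v in errors_a] + [(v, 1) for v in errors_b]
    combined.sort(key=lambda t: t[0])
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(combined):                    # average ranks over ties
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + j + 1) / 2          # mean of 1-based ranks i+1..j
        i = j
    w = sum(r for r, (_, g) in zip(ranks, combined) if g == 0)  # A's rank sum
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (w - mu) / sigma
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    if p >= alpha:
        return '='
    return '+' if z < 0 else '-'                # negative z: A ranks lower
```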
Tables 3 and 4 indicate that, in terms of the mean of the error values for 30D problems, CA-MMTS outperformed all the
contestant algorithms in a statistically significant manner (as indicated by the Wilcoxon test) over all 25 functions. An inspection of Tables 5 and 6 reveals that CA-MMTS obtained the smallest best-of-run errors over all 25 functions
Table 2
Sensitivity analysis with respect to percentage of selected experiences (p%ind), size of
the new individuals generated by the local search (M), and population size (NP) over 30
independent runs for 30D problems.
p% ind Prob. Optimal results over different parameter values
20% 25% 30% 35%
f1 5.6338E−38 7.9221E−59 1.4761E−46 5.4800E−32
(1.4583E−38) (9.5949E−58) (3.7180E−45) (4.9844E−32)
f5 1.5050E+02 1.4746E+02 1.3917E+02 1.4587E+02
(5.8427E+01) (3.1325E+01) (4.4457E+01) (6.6964E+01)
f6 1.0018E+00 4.6939E−01 9.7530E−01 1.2473E+00
(8.7722E−01) (1.2991E−02) (6.7743E−01) (4.7542E−01)
f12 1.8803E+02 1.8798E+02 1.8910E+02 1.9369E+02
(9.4161E+01) (7.7739E+01) (6.8287E+01) (8.2083E+01)
f13 2.7372E+00 2.0672E+00 1.3416E+00 2.3814E+00
(1.2071E−00) (4.7195E−01) (6.9875E−01) (6.7987E−01)
f25 2.1879E+02 2.0952E+02 2.2981E+02 2.4173E+02
(3.2873E+00) (5.1730E−01) (6.2747E+00) (1.9242E+00)
M Prob. 3 5 7 9
f1 2.8322E−42 7.9221E−59 4.4011E−45 3.6930E−38
(3.8220E−43) (9.5949E−58) (5.2240E−45) (2.0936E−38)
f5 1.4382E+02 1.3917E+02 1.4676E+02 1.5731E+02
(2.1722E+01) (4.4457E+01) (5.0253E+01) (3.9851E+01)
f6 8.1205E−01 6.8569E−01 6.1432E−01 4.6939E−01
(7.3232E−01) (6.7991E−02) (6.9818E−02) (1.2991E−02)
f12 1.9235E+02 1.8798E+02 1.8887E+02 1.9072E+02
(8.3024E+01) (7.7739E+01) (5.8204E+01) (8.3918E+01)
f13 1.9372E+00 1.3416E+00 1.8487E+00 2.2569E+00
(6.8848E−01) (6.9875E−01) (9.4795E−01) (7.1873E−01)
f25 2.2200E+02 2.0952E+02 2.1748E+02 2.2076E+02
(1.7980E+00) (5.1730E−01) (4.7245E+00) (6.8670E+00)
NP Prob. 10 30 50 70
f1 7.2406E−39 1.4105E−49 7.9221E−59 5.7665E−42
(5.6917E−40) (5.4777E−49) (9.5949E−58) (1.1376E−42)
f5 1.5354E+02 1.4428E+02 1.3917E+02 1.4139E+02
(2.3232E+01) (2.7247E+01) (4.4457E+01) (3.1779E+01)
f6 1.2816E+00 1.1704E+00 4.6939E−01 1.2420E+00
(3.3816E−02) (1.1224E−01) (1.2991E−02) (2.2690E−01)
f12 1.9213E+02 1.9002E+02 1.8798E+02 1.9518E+02
(5.2082E+01) (7.2116E+01) (7.7739E+01) (6.9939E+01)
f13 2.2516E+00 1.3416E+00 2.0104E+00 2.3902E+00
(7.1946E−01) (6.9875E−01) (8.9471E−01) (8.7803E−01)
f25 2.2317E+02 2.1488E+02 2.0952E+02 2.1646E+02
(2.4164E+00) (2.7074E+00) (5.1730E−01) (1.9822E+00)
in 50D. The proposed algorithm achieved statistically superior performance compared to all of the other selected state-of-the-art CA algorithms for all of the functions in 50D, as shown by the Wilcoxon rank sum test (signs in the tables). Table 7 indicates that the performance of CA-MMTS was not substantially degraded when the search dimensionality was increased to 100. It was able to outperform all of the other CA algorithms in a statistically meaningful way.
In addition, an extensive statistical analysis was performed to evaluate the statistical significance of the observed performance differences [11]. Given a set of k algorithms, the first step in this analysis is to apply a statistical test procedure that ranks the performance of the algorithms. Such a test answers whether there is a statistically significant difference in the performance ranking of at least two of these algorithms. If there is a significant difference, a post hoc analysis (with different abilities and characteristics [11]) is used to identify the cases in which the best performing algorithm (the control method) exhibits a significant variation.
In particular, the Friedman test [7] was used to test the differences between k related samples. The Friedman test is a non-
parametric multiple comparisons test, which is able to decide on significant differences between the behaviors of multiple sam-
ples. A statistical analysis is presented in Table 8, which depicts the rankings for the Friedman test. At the bottom of each column in the table, the test statistic for the Friedman test and its corresponding p-value are reported. These computed p-values strongly suggest that there are significant differences among the selected algorithms for all dimensions at the α = 0.05 level of significance.
Table 8 also highlights the ranking of all algorithms. In this table, CA-MMTS was able to obtain the highest rank (higher rank is
better) for all dimensions.
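To make the ranking step concrete, here is a stdlib-only Friedman statistic over a synthetic error matrix (no tie correction; the full analysis in [11] also requires the post hoc procedures discussed next):

```python
def friedman_statistic(error_matrix):
    """Friedman chi-square statistic over an n x k matrix of best-of-run
    errors (n functions, k algorithms; smaller error = better rank)."""
    n, k = len(error_matrix), len(error_matrix[0])
    rank_sums = [0.0] * k
    for row in error_matrix:
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    return (12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums)
            - 3.0 * n * (k + 1))

# Synthetic errors: algorithm 0 always best, algorithm 2 always worst.
errors = [[0.1, 1.1, 2.2], [0.2, 1.5, 2.8], [0.05, 0.9, 1.9],
          [0.3, 1.3, 2.5], [0.15, 1.2, 2.4], [0.25, 1.4, 2.6]]
stat = friedman_statistic(errors)
# With k - 1 = 2 degrees of freedom, p = exp(-stat / 2) here; stat = 12,
# so p is about 2.5e-3 < 0.05: the ranking differences are significant.
```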
Next, the post hoc analysis was used to detect the cases in which the best performing algorithm exhibited a significant
performance difference from the others. The results of the post hoc tests for Friedman are shown in Tables 9–11 for 30D, 50D and 100D, respectively. These tables show the tests for all pairs of algorithms. Statistically significant entries are marked in
Table 3
Mean and standard deviation of the error values for functions f1–f21 @ 30D. Best entries are marked in boldface. Wilcoxon’s rank sum test at a 0.05 significance
level is performed between CA-MMTS and each of other algorithms.
Fun./Algorithms f1 f2 f3 f4 f5 f6 f7
Original CA 1.0231E−15 8.6254E−03 2.5031E+05 5.6754E−01 3.7132E+02 5.2362E+01 1.1625E−1
(3.4267E−15) (5.7734E−03) (1.4448E+05) (3.1241E−01) (5.1271E+01) (2.7729E+01) (2.3025E−01)
− − − − − − −
Improved CA 2.5347E−26 4.7354E−13 6.3829E+04 7.0121E+00 2.7323E+02 8.3722E+01 8.3539E−07
(1.1736E−26) (1.1168E−13) (3.6235E+03) (3.0709E−01) (8.1349E+01) (6.5283E+00) (1.4155E−07)
− − − − − − −
MCAKM [20] 2.0362E−25 3.6521E−12 3.6103E+04 9.5681E−03 2.8036E+02 5.2048E+01 3.5218E−01
(1.1314E−25) (3.0042E−12) (1.8649E+04) (5.2784E−03) (1.2679E+02) (2.4297E+01) (2.8341E−01)
− − − − − − −
MCDE [49] 3.6531E−24 4.3081E−06 2.5687E+05 4.2712E−02 2.1654E+02 8.5619E+01 8.0846E−01
(8.8164E−23) (5.3389E−06) (1.2976E+05) (8.7362E−01) (1.3132E+02) (3.5872E+01) (2.0060E−01)
− − − − − − −
HS-CA [16] 1.2171E−25 3.1524E−06 2.1534E+05 3.5124E−02 3.1564E+02 8.6283E+01 6.5385E−01
(2.3205E−25) (1.5428E−05) (1.0207E+05) (1.2619E−02) (1.3812E+02) (3.2482E+01) (2.4038E−01)
− − − − − − −
CA-ILS [32] 2.6628E−30 7.2749E−16 2.7609E+04 2.5652E−09 2.4956E+02 9.0804E+00 6.0263E−02
(4.1142E−30) (7.2451E−15) (1.0981E+04) (1.9835E−09) (1.7763E+02) (4.1638E+00) (3.6354E−03)
− − − − − − −
CA-MMTS 4.2880E−57 8.4964E−29 3.6052E+03 4.5042E−12 1.4572E+02 6.0008E−01 1.1813E−21
(1.3628E−58) (1.0274E−31) (3.6027E+02) (3.8202E−13) (4.8111E+01) (2.1217E−02) (1.3382E−22)
Fun./Algorithms f8 f9 f10 f11 f12 f13 f14
Original CA 2.5830E+01 2.9911E+02 4.4419E+02 2.1735E+01 2.7220E+04 8.9003E+00 1.5802E+01
(9.4969E−02) (1.2004E+01) (2.1630E+01) (9.6791E+00) (6.9892E+03) (3.4231E+00) (3.1582E−01)
− − − − − − −
Improved CA 2.2058E+01 8.3174E+00 2.3747E+01 1.9937E+01 3.4528E+03 5.9644E+00 1.4682E+01
(6.2139E−02) (1.6253E+00) (5.0062E+00) (5.2713E+00) (3.0528E+02) (6.6408E−01) (1.0837E+00)
− − − − − − −
MCAKM [20] 2.2114E+01 1.1539E+02 7.2437E+01 1.9053E+01 5.7624E+03 4.7118E+00 1.5231E+01
(4.2782E−02) (4.2504E+01) (2.4186E+01) (5.2815E+00) (2.1179E+03) (1.8195E+00) (1.1853E+01)
− − − − − − −
MCDE [49] 2.6291E+01 1.5699E+02 1.5162E+02 2.6068E+01 6.5887E+03 6.9306E+00 1.6820E+01
(7.5316E−03) (7.8725E+01) (6.3718E+01) (3.8629E+00) (3.6310E+03) (2.3808E+00) (3.2539E+00)
− − − − − − −
HS-CA [16] 2.4399E+01 1.4389E+02 1.9147E+02 2.0058E+01 5.4531E+03 6.8428E+00 1.2004E+01
(4.2305E−02) (5.2576E+01) (2.4386E+01) (1.1488E+00) (5.1225E+03) (8.4260E−01) (5.6355E−01)
− − − − − − −
CA-ILS [32] 2.1582E+01 6.2493E−01 2.2371E+00 1.8908E+01 2.3161E+03 5.7307E+00 1.3715E+01
(2.3073E−02) (5.6115E−01) (9.6388E−01) (4.8108E+00) (1.0018E+03) (3.7784E+00) (3.7213E−01)
− − − − − − −
CA-MMTS 2.0007E+01 6.2573E−07 1.9172E+00 3.5031E+00 1.8521E+02 1.4113E+00 1.0302E+01
(9.0999E−02) (2.5581E−09) (7.6372E−01) (1.6078E−01) (7.6382E+01) (7.1100E−01) (9.7899E−02)
Fun./Algorithms f15 f16 f17 f18 f19 f20 f21
Original CA 3.7962E+02 2.0385E+02 2.8903E+02 9.4552E+02 9.2256E+02 9.2882E+02 5.6584E+02
(4.7681E+01) (1.6721E+01) (4.1922E+01) (5.3492E+00) (8.9344E+00) (2.9351E+01) (4.1625E+01)
− − − − − − −
Improved CA 3.0065E+02 9.6376E+01 1.1539E+02 8.9939E+02 9.0365E+02 8.9048E+02 5.0000E+02
(1.3391E+01) (3.8417E+01) (2.7832E+01) (4.5926E+00) (1.8794E+01) (8.5283E+01) (4.2634E−02)
− − − − − − =
MCAKM [20] 2.9013E+02 9.1822E+01 1.0373E+02 9.1837E+02 9.0143E+02 5.9052E+03 7.4666E+02
(2.3521E+01) (3.2814E+01) (4.1518E+01) (4.2681E+00) (2.4377E−01) (5.7652E+02) (2.4627E+01)
− − − − − − −
MCDE [49] 3.0528E+02 1.2466E+02 1.0792E+03 9.1867E+02 9.1036E+02 6.0878E+03 6.3258E+02
(1.6304E+01) (5.5031E+01) (4.8865E+02) (5.9122E+00) (8.0739E+01) (8.0762E+02) (4.0369E+00)
− − − − − − −
HS-CA [16] 3.0589E+02 1.3486E+02 1.3106E+02 9.1153E+02 8.9410E+02 9.1114E+02 7.7153E+02
(2.2111E+01) (3.6172E+01) (2.5424E+01) (9.7638E−01) (5.4621E+01) (7.4629E−01) (9.4287E+00)
− − − − − − −
CA-ILS [32] 2.8164E+02 7.9075E+01 1.0301E+02 8.5664E+02 8.8298E+02 8.8898E+02 5.0003E+02
(1.2455E+02) (3.0982E+01) (1.6558E+00) (8.1122E−01) (4.6418E−01) (8.3252E−01) (9.3893E−03)
− − − − − − =
CA-MMTS 2.2099E+02 5.2974E+01 8.1185E+01 7.9579E+02 8.0096E+02 7.3705E+02 5.0000E+02
(8.0670E+01) (5.0076E+00) (3.1911E+01) (6.0821E−01) (2.3929E−01) (5.2084E−01) (3.6603E−10)
Table 4
Mean and standard deviation of the error values for functions f22–f25 @ 30D. Best entries are marked
in boldface. Wilcoxon’s rank sum test at a 0.05 significance level is performed between CA-MMTS and
each of other algorithms.
Fun./Algorithms f22 f23 f24 f25
Original CA 9.5430E+02 7.1856E+02 3.18905E+02 8.6648E+02 − 25
(1.7804E+01) (4.6141E+01) (3.8137E+01) (2.5002E+01) + 0
− − − − = 0
Improved CA 6.0452E+02 5.8832E+02 2.4183E+02 5.0148E+02 − 24
(4.5618E+01) (2.7351E+01) (1.9385E+01) (1.2679E+01) + 0
− − − − = 1
MCAKM [20] 7.1268E+02 7.6839E+02 2.6155E+02 3.8549E+02 − 25
(8.1642E+00) (5.2743E+01) (2.8934E+01) (2.4285E+01) + 0
− − − − = 0
MCDE [49] 1.4697E+03 8.1428E+02 5.3288E+02 6.2523E+02 − 25
(6.8237E+02) (3.5797E+01) (7.1125E+01) (2.4305E+01) + 0
− − − − = 0
HS-CA [16] 9.1385E+02 5.7394E+02 2.3077E+02 4.8975E+02 − 25
(1.4631E+01) (1.2645E+02) (2.8166E+01) (3.5437E+01) + 0
− − − − = 0
CA-ILS [32] 5.7091E+02 5.8318E+02 2.2908E+02 3.5052E+02 − 24
(9.9118E+01) (1.2837E+02) (9.8614E−02) (1.2717E+01) + 0
− − − − = 1
CA-MMTS 5.26038E+02 5.0000E+02 2.0048E+02 2.0866E+02
(8.0395E−02) (0.0000E+00) (6.5028E−06) (8.4376E−01)
Table 5
Mean and standard deviation of the error values for functions f1–f14 @ 50D. Best entries are marked in boldface. Wilcoxon’s rank sum test at
a 0.05 significance level is performed between CA-MMTS and each of other algorithms.
Fun./Algorithms f1 f2 f3 f4 f5 f6 f7
Original CA 3.6524E−11 1.1042E+01 2.1712E+06 1.8894E+03 5.0891E+03 7.2676E+01 2.5574E+06
(2.8680E−11) (1.2715E+01) (1.8783E+06) (8.1168E+2) (6.1400E+02) (2.5280E+00) (3.5712E+05)
− − − − − − −
Improved CA 4.0101E−17 4.2630E−01 8.2516E+04 4.2787E−01 3.1685E+03 3.2689E+01 4.0504E−10
(1.1883E−17) (9.6435E−02) (3.0153E+04) (6.8836E−01) (6.6549E+02) (7.3649E+00) (2.7352E−10)
− − − − − − −
MCAKM [20] 1.8624E−17 1.6324E−03 8.4668E+05 5.2905E+04 5.6134E+03 7.8443E+01 6.7865E+05
(3.7457E−17) (2.6320E−04) (2.0101E+06) (8.6210E+03) (5.7683E+02) (7.8652E+00) (1.5762E+05)
− − − − − − −
MCDE [49] 5.7624E−15 6.9437E−02 2.2145E+06 1.4096E+04 6.6429E+03 7.2937E+01 7.4313E+05
(1.7356E−15) (3.6667E−03) (5.6093E+05) (4.7953E+03) (9.9452E+02) (1.2324E+01) (3.9223E+04)
− − − − − − −
HS-CA [16] 1.1067E−16 4.5384E−03 1.7960E+06 1.5571E+04 3.3413E+03 5.9461E+01 1.5547E+06
(5.6324E−17) (2.1311E−03) (5.7600E+05) (6.3382E+01) (6.8412E+02) (9.3754E+00) (1.0117E+06)
− − − − − − −
CA-ILS [32] 5.1938E−19 4.0358E−05 9.2997E+05 1.2556E+03 2.8670E+03 4.8604E+01 6.0606E+04
(3.2394E−18) (6.8096e−06) (1.5541E+05) (7.8252E+01) (3.5988E+02) (3.8275E+00) (3.1835E+04)
− − − − = − −
CA-MMTS 9.3517E−38 6.5168E−16 5.5479E+04 2.8309E−03 2.8602E+03 4.0832E−01 6.3624E−15
(3.1836E−39) (4.9020E−17) (1.5662E+04) (9.0355E−05) (3.6502E+02) (1.0035E+00) (5.7351E−16)
Fun./Algorithms f8 f9 f10 f11 f12 f13 f14
Original CA 4.4026E+04 2.7309E+01 5.3078E+02 2.7888E+02 3.9666E+02 9.9715E+02 9.4734E+02
(1.4448E+04) (4.7253E−01) (7.9272E+01) (5.5682E+01) (7.9235E+01) (4.4762E+01) (2.7809E+01)
− − − − − − −
Improved CA 2.2436E+01 2.5183E+01 8.9917E+01 9.8315E+01 1.3625E+02 9.5729E+02 2.6384E+02
(8.9264E+00) (5.8254E+00) (5.6394E+01) (2.3497E+01) (9.3637E+01) (1.8837E+02) (6.7292E+01)
− − − − − − −
MCAKM [20] 7.1954E+01 1.5279E+02 4.0388E+02 1.6217E+02 2.2015E+02 9.3489E+02 9.3486E+02
(1.1508E+01) (4.1852E+01) (1.8312E+02) (5.0552E+01) (2.9534E+01) (5.6682E+01) (3.6437E+01)
− − − − − − −
MCDE [49] 1.8926E+02 2.4720E+01 3.5387E+02 2.4026E+02 2.5441E+02 9.8378E+02 9.3826E+02
(8.9327E+01) (8.1839E−01) (8.6836E+01) (4.7491E+01) (4.3668E+01) (3.1745E+01) (2.5307E+01)
− − − − − − −
HS-CA [16] 5.8432E+01 1.6257E+02 4.9158E+02 2.3527E+02 2.7055E+02 9.4254E+02 9.7168E+02
(7.9437E+00) (3.8329E+01) (8.8596E+01) (7.2834E+01) (5.2937E+01) (2.0468E+01) (3.2638E+01)
− − − − − − −
CA-ILS [32] 2.0988E+01 2.4002E+01 3.7830E+02 1.6281E+02 1.2185E+02 9.2367E+02 9.3387E+02
(7.8309E+00) (4.8352E−01) (8.3845E+01) (8.3747E+01) (4.7345E+01) (5.8609E+01) (3.2645E+01)
− − − − = − −
CA-MMTS 2.0007E+00 1.9778E+01 5.0856E+01 1.2678E+01 1.2265E+02 8.3251E+02 2.0008E+01
(7.1739E−02) (3.9818E−02) (2.8356E+01) (2.7380E+00) (8.6452E+01) (5.2647E+01) (9.2364E−01)
Table 6
Mean and standard deviation of the error values for functions f15–f25 @ 50D. Best entries are marked in boldface. Wilcoxon’s rank sum test at a 0.05
significance level is performed between CA-MMTS and each of other algorithms.
Fun./Algorithms f15 f16 f17 f18 f19 f20 f21
Original CA 4.2360E+02 2.0713E+02 3.5793E+02 9.7764E+02 9.4163E+02 9.7782E+02 8.9431E+02
(6.4248E+01) (2.2635E+01) (1.6935E+01) (7.7949E+01) (2.6708E+01) (4.2741E+01) (1.6672E+02)
− − − − − − −
Improved CA 3.5922E+02 1.6165E+02 2.0953E+02 9.1057E+02 8.9213E+02 9.5327E+02 6.9825E+02
(9.2634E+01) (9.2749E+01) (6.8324E+01) (3.6236E+01) (7.3651E+01) (9.1067E+01) (2.0078E+02)
− − − − − − −
MCAKM [20] 4.0204E+02 1.8327E+02 2.8629E+02 9.2012E+02 9.3028E+02 9.8245E+02 8.5044E+02
(4.3424E+01) (4.0054E+01) (2.0385E+01) (5.3504E+01) (4.0540E+01) (1.8345E+02) (2.4305E+02)
− − − − − − −
MCDE [49] 3.9312E+02 2.3200E+02 3.1183E+02 9.6867E+02 9.5362E+02 9.5472E+02 8.8278E+02
(4.5746E+01) (1.9462E+01) (4.2986E+01) (5.8643E+01) (1.9089E+01) (1.0036E+02) (2.4471E+02)
− − − − − − −
HS-CA [16] 4.1997E+02 1.5861E+02 1.8824E+02 9.2418E+02 9.3714E+02 9.5185E+02 8.2943E+02
(6.7054E+01) (3.4186E+01) (1.9846E+01) (8.2534E+00) (5.6724E+01) (7.5327E+01) (1.5248E+02)
− − − − − − −
CA-ILS [32] 3.7505E+02 1.7409E+02 2.1217E+02 9.0949E+02 9.2816E+02 9.3074E+02 8.0173E+02
(8.3892E+01) (2.5442E+01) (4.5208E+01) (2.6381E+01) (1.8993E+01) (1.8631E+01) (3.2491E+02)
− − − − − − −
CA-MMTS 2.7279E+02 1.1832E+02 1.1536E+02 8.3336E+02 8.1258E+02 8.3215E+02 5.5875E+02
(8.2607E+01) (9.7698E+01) (3.1744E+01) (9.8834E−04) (3.7213E+00) (1.4945E−02) (1.1953E+02)
Fun./Algorithms f22 f23 f24 f25
Original CA 9.4163E+02 8.8163E+02 5.7289E+02 1.7456E+03 − 25
(2.0362E+01) (1.6682E+02) (6.3738E+01) (8.8887E+00) + 0
− − − − = 0
Improved CA 9.0502E+02 7.6824E+02 2.0000E+02 5.9985E+02 − 24
(5.9235E+01) (2.0362E+01) (0.0000E+00) (9.3627E+01) + 0
− − = − = 1
MCAKM [20] 9.5143E+02 8.2257E+02 5.3628E+02 1.5953E+03 − 25
(3.4327E+01) (2.0943E+02) (2.9105E+01) (2.1350E+01) + 0
− − − − = 0
MCDE [49] 9.4920E+02 8.6002E+02 2.0000E+02 1.7420E+03 − 24
(5.4213E+01) (1.5406E+02) (0.0000E+00) (8.0032E+00) + 0
− − = − = 1
HS-CA [16] 9.4046E+02 8.1945E+02 6.4964E+02 1.7152E+03 − 25
(2.6127E+01) (2.1004E+02) (6.0053E+01) (2.8624E+01) + 0
− − − − = 0
CA-ILS [32] 9.2488E+02 8.2239E+02 2.0000E+02 1.4281E+03 − 23
(3.4645E+01) (1.4892E+02) (0.0000E+00) (6.0191E+00) + 0
− − = − = 2
CA-MMTS 8.6318E+02 5.0000E+02 2.0000E+02 2.0629E+02
(1.8697E+01) (0.0000E+00) (0.0000E+00) (2.9346E−01)
boldface. As the adjusted p-values in Table 9 (30D) suggest, for α = 0.05, Nemenyi's, Holm's, and Shaffer's procedures reject hypotheses 1–12, while Bergmann's procedure rejects hypotheses 1–13. A comparison of the adjusted p-values in Table 10 (50D) shows that all test procedures rejected hypotheses 1–14. On the other hand, for the scalability test, Holm's, Shaffer's, and Bergmann's procedures rejected hypotheses 1–11, while Nemenyi's procedure rejected hypotheses 1–9.
4.4. Comparison of the time complexity of the algorithms
The computational complexity of the hybrid algorithm is tested for each dimension and is calculated using the methodology proposed in [40]. The resulting running times of the proposed algorithm are compared with other state-of-the-art variants and hybrids of CA. Results are reported in Tables 12–14 for dimensions D = 30, D = 50 and D = 100, respectively. The CPU time necessary to evaluate the mathematical operations declared in [40] is denoted as T0. The CPU time required to perform 2 × 10^5 evaluations of a certain dimension D without executing the algorithm is denoted as T1. The complete computing time is denoted as T2, and the mean complete CPU time for the algorithm using 2 × 10^5 evaluations of the same dimension on the same benchmark optimization problem is denoted as T̂2. A more complete discussion of the computational methodology for T0, T1 and T̂2 can be found in [40]. All values in the tables are measured in CPU seconds.
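A hedged sketch of this timing protocol (the arithmetic loop used for T0 below is an assumption; [40] prescribes the exact mix of operations and the reporting format):

```python
import math
import time

def timer(fn):
    """CPU time, in seconds, taken by calling fn()."""
    start = time.process_time()
    fn()
    return time.process_time() - start

def t0(n=1_000_000):
    # T0: time for a loop of standard arithmetic operations.
    def loop():
        x = 0.55
        for _ in range(n):
            x = x + x; x = x / 2; x = x * x
            x = math.sqrt(x); x = math.log(x + 1.0); x = math.exp(x)
    return timer(loop)

def t1(objective, dim, n_evals=200_000):
    # T1: time for 2 x 10^5 evaluations of the benchmark function alone,
    # without running the optimization algorithm.
    point = [0.5] * dim
    return timer(lambda: [objective(point) for _ in range(n_evals)])

# T2 is the complete run time of the algorithm for the same evaluation
# budget, and T-hat-2 its mean over several runs; [40] reports the
# normalized complexity (T-hat-2 - T1) / T0.
```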
As can be seen from Tables 12–14, CA-MMTS takes the least time at D = 30, D = 50 and D = 100, respectively. This shows the potential of the algorithm when applied to large-scale optimization problems, and is due to several aspects of the hybrid algorithm's design. First, the enhanced implementation of CA, with its reduced and modified knowledge sources, may have helped to reduce overhead in the data-processing aspect. Second, the use of knowledge from the CA to seed the trajectory generation process may have helped to explore the search space more efficiently. As a result, it appears reasonable to conclude
Table 7
Mean and standard deviation of the error values for functions f1–f14 @ 100D. Best entries are marked in bold-
face. Wilcoxon’s rank sum test at a 0.05 significance level is performed between CA-MMTS and each of other
algorithms.
Fun./Algorithms f1 f2 f3 f4 f5
Original CA 4.9969E−09 1.0246E+03 4.3675E+07 8.6083E+04 6.3263E+05
(2.0896E−09) (3.6240E+02) (9.2153E+06) (4.6566E+04) (3.9251E+03)
− − − − −
Improved CA 5.2625E−19 7.3641E−07 8.9253E+04 1.2745E+00 1.1932E+05
(4.6955E−22) (8.2249E−08) (3.2678E+04) (6.6892E−01) (3.9754E+03)
− − − − −
MCAKM [20] 1.2045E−09 8.7416E+02 2.0682E+06 8.3834E+04 5.5281E+05
(2.5124E−09) (5.0582E+01) (4.5197E+06) (2.9634E+04) (4.1003E+03)
− − − − −
MCDE [49] 6.4834E−09 1.0037E+03 9.1368E+06 1.1538E+05 7.4236E+05
(1.2805E−09) (5.2718E−03) (4.5342E+05) (6.6280E+04) (6.0791E+03)
− − − − −
HS-CA [16] 5.4527E−09 9.5739E+02 1.2679E+07 7.1294E+04 5.9836E+05
(2.8437E−09) (3.8939E+01) (4.3701E+06) (2.8637E+04) (3.2431E+03)
− − − − −
CA-ILS [32] 3.4968E−12 6.2884E−04 6.5102E+06 7.4837E+04 5.0462E+05
(1.0034E−12) (6.2763E−06) (8.2453E+05) (3.3621E+04) (4.1127E+03)
− − − − −
CA-MMTS 2.6348E−32 2.1934E−15 8.0350E+04 3.1836E−02 8.2724E+04
(1.7668E−33) (8.2733E−16) (2.2394E+04) (3.2222E−04) (8.3162E+02)
Fun./Algorithms f6 f7 f8 f9 f10
Original CA 9.6842E+03 7.3625E+06 4.4037E+04 1.1863E+02 8.6635E+02
(1.5318E+01) (4.2751E+05) (1.4562E+04) (7.3720E+00) (7.5271E+01)
− − − − −
Improved CA 5.2780E+01 3.8319E−01 2.5269E+01 1.1248E+02 2.8293E+02
(1.0362E+01) (1.2915E−01) (5.9825E+00) (1.9375E+01) (7.3243E+01)
− − − − −
MCAKM [20] 7.7433E+03 2.6378E+06 8.6345E+01 9.0057E+02 7.1043E+02
(2.9583E+01) (6.7523E+05) (1.1537E+00) (3.1305E+01) (5.9106E+01)
− − − − −
MCDE [49] 8.9224E+03 3.2087E+06 1.9367E+02 1.1662E+02 6.7356E+02
(7.2518E+01) (5.5271E+05) (9.4778E+01) (1.7099E+01) (8.0965E+01)
− − − − −
HS-CA [16] 8.0596E+03 8.2637E+06 4.2815E+01 1.1032E+02 1.7224E+03
(7.3752E+01) (6.5388E+06) (1.1085E+00) (1.5637E+01) (1.1763E+02)
− − − − −
CA-ILS [32] 7.1638E+03 2.3515E+05 2.0889E+01 1.1064E+02 5.2578E+02
(9.4617E+00) (6.3715E+04) (6.2087E+00) (1.0266E+01) (7.2671E+01)
− − − − −
CA-MMTS 1.1482E+00 4.4634E−10 1.2157E+01 9.4862E+01 1.1983E+02
(6.0365E−01) (8.3524E−10) (3.5621E−02) (1.0738E+01) (4.5782E+01)
Fun./Algorithms f11 f12 f13 f14
Original CA 2.8688E+02 2.8301E+05 1.2611E+03 9.5337E+02 − 25
(6.3928E+01) (6.8296E+04) (1.2548E+02) (3.4634E+01) + 0
− − − − = 0
Improved CA 7.0653E+01 1.8376E+03 9.5629E+02 1.0638E+02 − 25
(1.7491E+01) (2.6187E+02) (5.9851E+01) (4.2728E+01) + 0
− − − − = 0
MCAKM [20] 1.9007E+02 2.3067E+05 9.5100E+02 9.8055E+02 − 25
(6.5918E+01) (6.8395E+04) (1.1193E+02) (1.2352E−01) + 0
− − − − = 0
MCDE [49] 2.4930E+02 1.2674E+05 9.9624E+02 9.5270E+02 − 25
(7.6527E+01) (7.0045E+04) (7.7218E−01) (5.6317E+00) + 0
− − − − = 0
HS-CA [16] 2.6627E+02 1.2846E+05 9.6666E+02 9.8776E+02 − 25
(7.6035E+01) (8.0511E+04) (2.3928E+02) (5.1462E+01) + 0
− − − − = 0
CA-ILS [32] 1.6900E+02 2.7261E+05 9.3172E+02 9.4582E+02 − 25
(6.4093E+01) (8.6208E+04) (1.2815E−01) (9.6372E+00) + 0
− − − − = 0
CA-MMTS 1.4361E+01 9.7992E+02 8.4234E+02 2.2835E+01
(6.1365E+00) (7.8731E+01) (7.3959E−03) (9.6790E−04)
Table 8
Ranking of competitor algorithms, achieved by the Friedman test at dimensions
D = 30, D = 50 and D = 100.
Algorithm Ranking (D = 30) Ranking (D = 50) Ranking (D = 100)
Original CA 1.8400 1.6800 1.6429
Improved CA 4.3000 5.2200 5.6429
MCAKM 3.8800 3.2400 3.5000
MCDE 1.9600 2.5400 2.5714
HS-CA 3.2800 3.2400 2.8571
CA-ILS 5.7600 5.1800 4.7857
CA-MMTS 6.9800 6.9000 7.0000
Statistic 1.1479E+02 1.0693E+02 6.4408E+01
p-value 6.0101E−11 7.7271E−11 4.3046E−11
Table 9
Adjusted p-values when D = 30.
i Hypothesis Unadjusted p pNeme pHolm pShaf pBerg
1 Original CA vs CA-MMTS 4.02E−17 8.44E−16 8.44E−16 8.44E−16 8.44E−16
2 MCDE vs CA-MMTS 2.11E−16 4.42E−15 4.21E−15 3.16E−15 3.16E−15
3 Original CA vs CA-ILS 1.40E−10 2.95E−09 2.67E−09 2.10E−09 2.10E−09
4 MCDE vs CA-ILS 5.00E−10 1.05E−08 8.99E−09 7.49E−09 5.00E−09
5 HS-CA vs CA-MMTS 1.40E−09 2.94E−08 2.38E−08 2.10E−08 1.54E−08
6 MCAKM vs CA-MMTS 3.90E−07 8.20E−06 6.25E−06 5.86E−06 3.51E−06
7 Improved CA vs CA-MMTS 1.15E−05 2.42E−04 1.73E−04 1.73E−04 1.04E−04
8 HS-CA vs CA-ILS 4.93E−05 0.001036 6.90E−04 5.42E−04 3.45E−04
9 Original CA vs Improved CA 5.67E−05 0.001191 7.37E−04 6.24E−04 6.24E−04
10 Improved CA vs MCDE 1.28E−04 0.002694 0.001539 0.001411 8.98E−04
11 Original CA vs MCAKM 8.42E−04 0.017674 0.009258 0.009258 0.005891
12 MCAKM vs MCDE 0.001676 0.035197 0.01676 0.01676 0.006704
13 MCAKM vs CA-ILS 0.002092 0.043929 0.018827 0.018827 0.012551
14 Improved CA vs CA-ILS 0.016872 0.354311 0.134976 0.118104 0.067488
15 Original CA vs HS-CA 0.018435 0.387145 0.134976 0.129048 0.092177
16 MCDE vs HS-CA 0.030745 0.645646 0.18447 0.18447 0.092235
17 CA-ILS vs CA-MMTS 0.045858 0.963028 0.229292 0.229292 0.229292
18 Improved CA vs HS-CA 0.095045 1.995939 0.380179 0.380179 0.380179
19 MCAKM vs HS-CA 0.326109 6.848298 0.978328 0.978328 0.652219
20 Improved CA vs MCAKM 0.491839 10.32863 0.983679 0.983679 0.983679
21 Original CA vs MCDE 0.8443 17.7303 0.983679 0.983679 0.983679
Table 10
Adjusted p-values when D = 50.
i Hypothesis Unadjusted p pNeme pHolm pShaf pBerg
1 Original CA vs CA-MMTS 1.31E−17 2.74E−16 2.74E−16 2.74E−16 2.74E−16
2 MCDE vs CA-MMTS 9.63E−13 2.02E−11 1.93E−11 1.44E−11 1.44E−11
3 MCAKM vs CA-MMTS 2.10E−09 4.40E−08 3.98E−08 3.15E−08 2.31E−08
4 HS-CA vs CA-MMTS 2.10E−09 4.40E−08 3.98E−08 3.15E−08 2.31E−08
5 Original CA vs Improved CA 6.89E−09 1.45E−07 1.17E−07 1.03E−07 1.03E−07
6 Original CA vs CA-ILS 1.01E−08 2.13E−07 1.62E−07 1.52E−07 1.12E−07
7 Improved CA vs MCDE 1.15E−05 2.42E−04 1.73E−04 1.73E−04 1.15E−04
8 MCDE vs CA-ILS 1.56E−05 3.27E−04 2.18E−04 1.73E−04 1.15E−04
9 Improved CA vs MCAKM 0.001193 0.025054 0.01551 0.013124 0.008351
10 Improved CA vs HS-CA 0.001193 0.025054 0.01551 0.013124 0.008351
11 MCAKM vs CA-ILS 0.001498 0.031458 0.016478 0.016478 0.008351
12 HS-CA vs CA-ILS 0.001498 0.031458 0.016478 0.016478 0.008351
13 CA-ILS vs CA-MMTS 0.004878 0.102429 0.043898 0.043898 0.043898
14 Improved CA vs CA-MMTS 0.005968 0.125324 0.047742 0.043898 0.043898
15 Original CA vs MCAKM 0.010675 0.224183 0.074728 0.074728 0.074728
16 Original CA vs HS-CA 0.010675 0.224183 0.074728 0.074728 0.074728
17 Original CA vs MCDE 0.159278 3.344829 0.796388 0.796388 0.477833
18 MCAKM vs MCDE 0.251943 5.290793 1.00777 1.00777 1.00777
19 MCDE vs HS-CA 0.251943 5.290793 1.00777 1.00777 1.00777
20 Improved CA vs CA-ILS 0.947803 19.90387 1.895607 1.895607 1.895607
21 MCAKM vs HS-CA 1 21 1.895607 1.895607 1.895607
236 M.Z. Ali et al. / Information Sciences 334–335 (2016) 219–249
Table 11
Adjusted p-values when D = 100.
i Hypothesis Unadjusted p pNeme pHolm pShaf pBerg
1 Original CA vs CA-MMTS 5.34E−11 1.12E−09 1.12E−09 1.12E−09 1.12E−09
2 MCDE vs CA-MMTS 5.83E−08 1.22E−06 1.17E−06 8.75E−07 8.75E−07
3 HS-CA vs CA-MMTS 3.90E−07 8.18E−06 7.40E−06 5.84E−06 4.29E−06
4 Original CA vs Improved CA 9.63E−07 2.02E−05 1.73E−05 1.45E−05 1.45E−05
5 MCAKM vs CA-MMTS 1.81E−05 3.81E−04 3.08E−04 2.72E−04 1.63E−04
6 Original CA vs CA-ILS 1.19E−04 0.002489 0.001896 0.001778 0.001304
7 Improved CA vs MCDE 1.69E−04 0.003544 0.002531 0.002531 0.001688
8 Improved CA vs HS-CA 6.45E−04 0.013553 0.009035 0.007099 0.004518
9 MCDE vs CA-ILS 0.006689 0.140473 0.086959 0.073581 0.046824
10 CA-ILS vs CA-MMTS 0.006689 0.140473 0.086959 0.073581 0.060203
11 Improved CA vs MCAKM 0.008679 0.182255 0.095467 0.095467 0.060203
12 HS-CA vs CA-ILS 0.018176 0.381701 0.181763 0.181763 0.090881
13 Original CA vs MCAKM 0.022934 0.481622 0.206409 0.206409 0.160541
14 Improved CA vs CA-MMTS 0.096482 2.026121 0.771856 0.675374 0.48241
15 MCAKM vs CA-ILS 0.115332 2.421976 0.807325 0.807325 0.48241
16 Original CA vs HS-CA 0.136965 2.876256 0.821788 0.821788 0.547858
17 MCAKM vs MCDE 0.255428 5.363995 1.277142 1.277142 1.021713
18 Original CA vs MCDE 0.255428 5.363995 1.277142 1.277142 1.021713
19 Improved CA vs CA-ILS 0.293819 6.170192 1.277142 1.277142 1.021713
20 MCAKM vs HS-CA 0.431085 9.052789 1.277142 1.277142 1.021713
21 MCDE vs HS-CA 0.726393 15.25426 1.277142 1.277142 1.021713
Table 12
Computational complexity results for D = 30 dimensions.
Algorithm T0 T1 T̂2 (T̂2 − T1)/T0
CA 5.3421E−01 9.2719E+00 1.6973E+01 1.4416E+01
Improved CA 4.6341E−01 8.0809E+00 1.3272E+01 1.1202E+01
MCAKM 4.4261E−01 1.0021E+01 1.8452E+01 1.9049E+01
CA-ILS 5.5638E−01 9.8215E+00 1.7118E+01 1.3114E+01
HS-CA 6.1538E−01 8.9103E+00 1.7275E+01 1.3593E+01
MCDE 4.2168E−01 9.1074E+00 1.6308E+01 1.7076E+01
CA-MMTS 4.0261E−01 8.2719E+00 1.3172E+01 1.2172E+01
Table 13
Computational complexity results for D = 50 dimensions.
Algorithm T0 T1 T̂2 (T̂2 − T1)/T0
CA 5.3421E−01 2.1548E+01 3.2785E+01 2.1034E+01
Improved CA 4.6341E−01 1.1406E+01 1.8382E+01 1.5054E+01
MCAKM 4.4261E−01 1.9343E+01 2.7317E+01 1.8018E+01
CA-ILS 5.5638E−01 2.0639E+01 2.8983E+01 1.4997E+01
HS-CA 6.1538E−01 2.4785E+01 3.4404E+01 1.5631E+01
MCDE 4.2168E−01 2.5354E+01 3.3361E+01 1.8988E+01
CA-MMTS 4.4261E−01 1.1703E+01 1.8189E+01 1.4654E+01
that the proposed hybrid algorithm achieved a good balance between performance improvement and computational complexity, compared to the other state-of-the-art CA-based algorithms, which consumed more computational resources yet did not deliver better results.
4.5. Comparison with other state-of-the-art evolutionary algorithms
In this section, the performance of CA-MMTS is compared with other algorithms from the literature. These are recent and well-known algorithms. The following algorithms are used for comparison:
1. Multiple trajectory search (MTS) [42].
2. Memetic PSO algorithm (MPSO) [47].
3. Differential covariance matrix adaptation evolutionary algorithm (DCMA-EA) [18].
4. Multi-algorithm genetically adaptive method for single objective optimization (AMALGAM-SO) [43].
5. Differential evolution based on covariance matrix learning and bimodal distribution parameter settings (CoBiDE) [46].
6. Repairing the crossover rate in adaptive differential evolution (Rcr-JADE-s4) [19].
7. Improving adaptive differential evolution with controlled mutation strategy (ADE-CM) [38].
Table 14
Computational complexity results for D = 100 dimensions.
Algorithm T0 T1 T̂2 (T̂2 − T1)/T0
CA 5.3421E−01 2.8205E+01 4.2065E+02 7.3463E+02
Improved CA 4.6341E−01 1.8008E+01 6.6513E+01 1.0467E+02
MCAKM 4.4261E−01 2.4452E+01 8.8082E+01 1.4376E+02
CA-ILS 5.5638E−01 2.1947E+01 8.7205E+01 1.1729E+02
HS-CA 6.1538E−01 2.1479E+01 8.3619E+01 1.0098E+02
MCDE 4.2168E−01 2.3217E+01 7.3475E+01 1.1919E+02
CA-MMTS 4.4261E−01 1.9422E+01 6.1215E+01 9.4424E+01
8. Differential evolution with controlled annihilation and regeneration of individuals and a novel mutation scheme (CAR-DE) [31].
For the modified MTS, M is set to 5 and the number of foreground solutions is set to 3. For MPSO [47], the cognitive and social learning factors c1 and c2 were both set to 1.4962, the inertia weight ω was set to 0.72984, β = 0.5, p_ls^max = 1.0, p_ls^min = 0.1, rs = 2, ls_num = 5, r1 = 0.01, and θ = 1.0E−06. The parametric set-up for all these algorithms matches their respective sources. The population size was set to 50 for DCMA-EA, 60 for CoBiDE, and 100 for Rcr-JADE-s4, ADE-CM, and CAR-DE.
In these comparisons, it is worth mentioning that DCMA-EA [18] reported performance superior to that of CMA-ES, the winner of the CEC 2005 competition [40]. Therefore, we do not compare against the results reported in that competition [40].
A careful scrutiny of the mean error values in Tables 15 and 16 indicates that, over all of the problems at 30D, CA-MMTS performs at least as well as the other state-of-the-art contestant algorithms on 21 functions. Its performance equals that of the other algorithms on function f21. CA-MMTS ranked third on functions f3, f5, f6 and f9, where it was outperformed by CoBiDE and Rcr-JADE-s4, and it outperformed all contestant algorithms in a statistically significant manner on the remaining 21 functions. A careful inspection of Tables 17 and 18 reveals that this performance was enhanced when the search space dimensionality was increased to 50D: CA-MMTS performs at least as well as the other competing algorithms on 16 benchmark instances, was second best on function f2, outperformed only by Rcr-JADE-s4, and achieved statistically superior performance compared to all other competing algorithms on 16 functions. Table 19, in turn, presents the results of the scalability study with D = 100, where CA-MMTS outperformed the other state-of-the-art algorithms on 8 of the 14 functions considered. The ranking of the proposed algorithm was not affected by increasing the dimensionality to 100D, and it remained very competitive.
Fig. 9 shows the convergence graphs for the median run of the algorithms on four benchmarks in 30D. It is apparent from these figures that the overall convergence speed of CA-MMTS is the best among the contestant algorithms. We refrain from showing all the graphs in order to save space.
The overall rankings of all algorithms are shown in Table 20. The Friedman test suggests that there are statistically significant differences among the selected algorithms. CA-MMTS obtained the highest rank among all competitor algorithms. The p-values for all pairwise tests are not included, as they would have produced a very large table covering all possible pairs of algorithms. Such tests are most helpful when comparing against a control algorithm selected as a baseline, to see how far the other algorithms are from it. This is illustrated in Tables 21–23, which, using the post-hoc tests specified in [11], show how useful it is to compare performances against a base control method and measure the significance of the differences. The control method was MTS in Tables 21 (30D) and 22 (50D), and MPSO in Table 23 (100D); each was used as the baseline for comparison in the corresponding dimension because it obtained the worst rank there. It is apparent that CA-MMTS obtained statistically significant differences with the best p-value in all cases. From the Wilcoxon tests and the Friedman test of Table 20, it is CA-MMTS that demonstrates the most competitive performance over these benchmark functions for all dimensions employed.
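The Friedman ranking and the Holm-adjusted p-values reported above can be reproduced in outline as follows. This is a hedged sketch: `average_ranks` assumes no ties among algorithms on a problem, and only the Holm adjustment is shown (the tables also report Bonferroni, Hochberg, and Hommel variants):

```python
def average_ranks(error_table):
    """error_table[p][a] = mean error of algorithm a on problem p.
    Per problem, ranks run 1 (worst) .. k (best), matching Table 20,
    where the strongest method ends with the largest average rank."""
    n, k = len(error_table), len(error_table[0])
    totals = [0.0] * k
    for row in error_table:
        worst_first = sorted(range(k), key=lambda a: row[a], reverse=True)
        for pos, a in enumerate(worst_first):
            totals[a] += pos + 1
    return [t / n for t in totals]

def friedman_statistic(avg_ranks, n):
    """Friedman chi-square statistic from average ranks over n problems."""
    k = len(avg_ranks)
    return 12.0 * n / (k * (k + 1)) * sum(
        (r - (k + 1) / 2.0) ** 2 for r in avg_ranks)

def holm_adjust(pvals):
    """Holm step-down adjusted p-values, returned in the input order."""
    k = len(pvals)
    order = sorted(range(k), key=lambda i: pvals[i])
    adjusted, running = [0.0] * k, 0.0
    for step, i in enumerate(order):
        running = max(running, (k - step) * pvals[i])
        adjusted[i] = min(1.0, running)
    return adjusted
```

Holm multiplies the i-th smallest p-value by (k − i + 1), enforcing monotonicity, which is why the pHolm column is never smaller than the unadjusted column.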
4.6. Comparative performance over real-life optimization problems
In this sub-section, the proposed algorithm is applied to a set of real-world engineering optimization problems. A crucial issue when dealing with such problems is how to handle constraints. Constraint-handling techniques include penalty-based methods, techniques that preserve feasibility, and hybrid methods, among others [30]. The proposed algorithm uses an adaptive penalty-based technique, as described in previous work [1].
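The adaptive scheme of [1] is not restated here; as a generic illustration of the penalty-based idea, infeasibility can be folded into the objective as a weighted sum of constraint violations (the fixed `weight` below is an illustrative stand-in for the adaptive coefficient):

```python
def penalized(objective, constraints, x, weight=1.0e3):
    """Objective plus `weight` times the total violation of the
    inequality constraints g_i(x) <= 0. A static-penalty sketch;
    `weight` stands in for the adaptive penalty coefficient."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) + weight * violation
```

A feasible point incurs no penalty, while an infeasible point is pushed away from the minimizer in proportion to how badly it violates the constraints.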
4.6.1. Tension/compression spring
The tension/compression spring is a challenging mechanical design problem that consists of minimizing the weight of a tension/compression spring, subject to several constraints on minimum deflection, surge frequency, shear stress, restrictions on
Table 15
Mean and standard deviation of the error values for functions f1–f14 @ 30D. Best entries are marked in boldface. Wilcoxon’s rank sum test at a 0.05 significance level is performed between CA-MMTS and each
of other algorithms.
Fun./ Algorithms f1 f2 f3 f4 f5 f6 f7
MTS [42] 8.1663E−27 9.5410E−08 2.3876E+05 7.5482E+02 5.0853E+02 8.0736E+01 2.3768E−01
(9.1900E−26) (4.6341E−08) (1.1157E+05) (4.4726E+00) (1.1963E+02) (3.2984E+01) (4.6354E−02)
− − − − − − −MPSO [47] 6.3315E−22 9.2696E−09 6.5517E+04 2.1328E−01 4.6042E+02 1.0122E+01 1.1628E+00
(2.7467E−22) (6.9253E−10) (2.6282E+04) (6.3627E−01) (4.5317E+02) (9.3720E−01) (5.7382E−01)
− − − − − − −DCMA-EA [18] 2.5465E−34 8.1423E−11 8.3528E−02 4.3481E−09 3.5816E+01 5.7180E+00 7.3542E−19
(8.7262E−35) (6.6629E−11) (5.9261E−04) (2.4169E−09) (2.3735E+00) (2.8361E+00) (6.6822E−19)
− − + − + − −AMALGAM-SO [43] 1.5572E−15 6.1215E−15 8.8784E−14 1.0400E+03 2.4965E−05 9.7451E−01 2.8473E−03
(8.9895E−16) (2.7945E−15) (2.7317E−14) (2.1606E+03) (6.4416E−05) (1.7326E+00) (4.9263E−03)
− − + − + − −COBiDE [46] 0.0000E+00 1.7823E−12 7.4797E+04 1.3191E−03 1.0223E+02 3.0178E−02 3.6946E−03
(0.0000E+00) (2.7808E−12) (4.5801E+04) (2.0641E−03) (1.4098E+02) (4.9360E−02) (6.8239E−03)
+ − − − + + −Rcr-JADE-s4 [19] 0.00E+00 (0.00E+00) 3.78E−28 (1.98E−28) 1.50E+04 (1.29E+04) 6.37E−11 (3.17E−10) 2.04E−01 (8.02E−01) 1.59E−01 (7.89E−01) 5.12E−03 (6.94E−03)
+ − − − + + −CA-MMTS 4.2880E−57 8.4964E−29 3.6052E+03 4.5042E−12 1.4572E+02 6.0008E−01 1.1813E−21
(1.3628E−58) (1.0274E−31) (3.6027E+02) (3.8202E−13) (4.8111E+01) (2.1217E−02) (1.3382E−22)
Fun./ Algorithms f8 f9 f10 f11 f12 f13 f14
MTS [42] 2.4638E+01 3.9079E+01 2.3218E+02 1.9129E+01 2.9033E+04 6.1562E+00 1.6582E+01
(1.3499E−01) (1.9347E+01) (8.5370E+01) (7.9036E+00) (1.2861E+04) (9.9367E−01) (5.3175E+00)
− − − − − − −MPSO [47] 2.2139E+01 5.9240E+00 1.9274E+01 1.9897E+01 4.8890E+04 5.8113E+00 1.3020E+01
(4.1011E−04) (8.4698E−01) (5.2387E+00) (1.5816E+00) (1.7914E+04) (8.6508E−01) (8.1259E−02)
− − − − − − −DCMA-EA [18] 2.1339E+01 1.7004E+01 4.9336E+01 4.6168E+00 3.9552E+04 2.3066E+00 1.2804E+01
(1.3301E+00) (8.6356E+00) (1.1603E+01) (6.5997E−02) (3.6744E+04) (1.9878E+00) (5.3608E−01)
AMALGAM-SO [43] 2.0088E+01 2.3790E+01 5.8167E+01 1.0940E+01 1.4416E+04 2.4729E+00 1.3001E+01
(2.4070E−01) (6.1652E+00) (1.6608E+01) (2.8866E+00) (2.1517E+04) (5.5824E−01) (3.9583E−01)
= − − − − − −
COBiDE [46] 2.0805E+01 0.0000E+00 4.2300E+01 6.0499E+00 3.4115E+03 2.4401E+00 1.2248E+01
(3.3126E−01) (0.0000E+00) (1.2593E+01) (2.5307E+00) (4.3109E+03) (1.0108E+00) (5.3684E−01)
− + − − − − −
Rcr-JADE-s4 [19] 2.04E+01 0.00E+00 2.47E+01 1.60E+01 1.51E+03 1.69E+00 1.12E+01
(4.56E−01) (0.00E+00) (9.35E+00) (3.25E+00) (2.77E+03) (1.11E−01) (1.02E+00)
= − − + − − −
CA-MMTS 2.0007E+01 6.2573E−07 1.9172E+00 3.5031E+00 1.8521E+02 1.4113E+00 1.0302E+01
(9.0999E−02) (2.5581E−09) (7.6372E−01) (1.6078E−01) (7.6382E+01) (7.1100E−01) (9.7899E−02)
Table 16
Mean and standard deviation of the error values for functions f15–f25 @ 30D. Best entries are marked in boldface. Wilcoxon’s rank sum test at a 0.05 significance level is performed between CA-MMTS and each
of other algorithms.
Fun./Algorithms f15 f16 f17 f18 f19 f20 f21
MTS [42] 3.2876E+02 1.3264E+02 1.2537E+02 9.4592E+02 9.1126E+02 8.9964E+02 5.1108E+02
(6.3606E+01) (8.6865E+01) (1.5560E+01) (1.6523E+01) (1.1302E+01) (4.8502E+00) (4.0889E+00)
− − − − − − −MPSO [47] 3.1893E+02 1.7443E+02 1.9789E+02 8.9927E+02 8.8670E+02 8.6725E+02 1.2819E+03
(1.0683E+02) (5.4798E+01) (4.2088E+01) (3.4278E+01) (6.9281E+01) (5.4641E+01) (5.5913E+02)
− − − − − − −DCMA-EA [18] 3.2436E+02 9.2891E+01 1.0752E+02 8.8329E+02 9.0463E+02 9.0077E+02 5.9507E+02
(4.2542E+01) (4.3097E+01) (7.4873E+01) (7.4825E−01) (2.3777E+00) (2.8004E+01) (3.2934E+00)
AMALGAM-SO [43] 3.1164E+02 2.3610E+02 2.9866E+02 9.0849E+02 9.0984E+02 9.0928E+02 5.1333E+02
(1.3502E+02) (1.7673E+02) (1.9223E+02) (2.0083E+00) (3.3734E+00) (3.4276E+00) (6.2523E+01)
− − − − − − −COBiDE [46] 4.0600E+02 7.1442E+01 7.1353E+01 9.0404E+02 9.0421E+02 9.0435E+02 5.0000E+02
(4.2426E+01) (1.8023E+01) (2.0485E+01) (3.7496E−01) (8.1042E−01) (8.8396E−01) (3.3786E−13)
− − + − − − =Rcr-JADE-s4 [19] 3.48E+02 (6.46E+01) 5.60E+01 (5.53E+01) 8.75E+01 (1.12E+02) 9.10E+02 (2.20E+00) 9.10E+02 (2.49E+00) 9.10E+02 (2.49E+00) 5.00E+02 (0.00E+00)
− − − − − − =CA-MMTS 2.2099E+02 5.2974E+01 8.1185E+01 7.9579E+02 8.0096E+02 7.3705E+02 5.0000E+02
(8.0670E+01) (5.0076E+00) (3.1911E+01) (6.0821E−01) (2.3929E−01) (5.2084E−01) (3.6603E−10)
Fun./ Algorithms f22 f23 f24 f25
MTS [42] 8.1902E+02 5.9158E+02 2.9574E+02 6.1816E+02 − 25
(1.0819E+02) (1.7246E+01) (5.8134E+01) (4.7691E+01) + 0
− − − − = 0
MPSO [47] 9.1955E+02 6.4207E+02 5.2118E+02 5.5737E+02 − 25
(1.5781E+02) (4.5780E+01) (6.1421E−01) (4.7431E+01) + 0
− − − − = 0
DCMA-EA [18] 8.0415E+02 5.0133E+02 2.0979E+02 2.1730E+02 − 23
(5.3283E+01) (6.6195E−02) (6.6606E+00) (6.3458E+00) + 2
− − − − = 0
AMALGAM-SO [43] 8.6173E+02 5.8348E+02 5.1093E+02 2.1140E+02 − 22
(2.0990E+01) (1.4380E+02) (3.7397E+02) (1.1663E+00) + 2
− − − − = 1
COBiDE [46] 8.5965E+02 5.3416E+02 2.0000E+02 2.1005E+02 − 19
(2.9738E+01) (9.8949E−05) (2.8710E−14) (7.8921E−01) + 4
− − = − = 2
Rcr-JADE-s4 [19] 8.63E+02 5.34E+02 2.00E+02 2.09E+02 − 18
(1.47E+01) (8.89E−03) (0.00E+00) (2.51E−01) + 4
− − = = = 3
CA-MMTS 5.26038E+02 5.0000E+02 2.0048E+02 2.0866E+02
Table 17
Mean and standard deviation of the error values for functions f1–f14 @ 50D. Best entries are marked in boldface. Wilcoxon’s rank sum test at a 0.05
significance level is performed between CA-MMTS and each of other algorithms.
Fun./Algorithms f1 f2 f3 f4 f5 f6 f7
MTS [42] 9.3487E−16 3.1512E+00 9.0050E+05 3.3692E+04 5.7822E+03 7.1562E+01 3.1913E+06
(5.3267E−17) (1.2637E+01) (7.5548E+06) (5.8483E+03) (8.7113E+02) (9.0036E+00) (2.3399E+05)
− − − − − − −MPSO [47] 4.0683E−12 4.6237E−03 4.0588E+06 2.0098E+03 4.0557E+03 6.1063E+01 7.7695E+04
(7.8328E−12) (3.5474E−04) (6.6740E+05) (1.8419E+02) (7.1201E+02) (3.8127E+00) (1.6178E+04)
− − − − − − −DCMA-EA [18] 6.7689E−22 6.3841E−04 1.6540E+05 6.8699E−02 3.4979E+03 1.2999E+01 7.3623E−14
(4.1735E−22) (5.3644E−05) (4.2988E+04) (4.0351E−03) (7.6881E+02) (6.7282E+00) (3.4718E−14)
− − − − − − −AMALGAM-SO [43] 3.7296E−15 1.4291E−14 1.5395E−13 1.0182E+04 1.4488E−03 4.3080E−01 9.8559E−04
(1.6375E−15) (5.8891E−15) (4.0523E−14) (9.8121E+03) (6.6931E−03) (1.2186E+00) (3.1082E−03)
− − + − + = −ADE-CM [38] 5.6843E−14 3.6815E−08 9.4416E+05 2.4608E+00 1.9463E+03 2.2437E+01 5.2295E−09
(0.0000E+00) (3.8240E−08) (2.7796E+05) (1.3786E+00) (9.9583E+01) (1.2226E+01) (6.3359E−09)
− − − − − − −CAR-DE [31] 5.8927E−36 5.1189E−13 8.5215E+04 1.5795E−02 4.0927E+02 1.0653E+01 1.1084E−12
(0.0000E+00) (1.4956E−14) (1.6874E+04) (2.2227E−02) (1.6839E+02) (3.9319E+00) (4.0164E−14)
− − − − − − −COBiDE [46] 0.0000E+00 1.7254E−06 2.4058E+05 2.1236E+02 2.6874E+03 2.9483E+01 2.8057E−03
(0.0000E+00) (2.1576E−06) (1.0693E+05) (1.9873E+02) (5.6828E+02) (2.4319E+01) (6.2850E−03)
+ − − − = − −Rcr-JADE-s4 [19] 0.00E+00 2.14E−26 2.46E+04 8.21E+02 1.74E+03 5.58E−01 1.87E−03
(0.00E+00) (1.64E−26) (1.35E+04) (5.80E+03) (3.74E+02) (1.40E+00) (5.36E−03)
+ + + − + − −CA-MMTS 9.3517E−38 6.5168E−16 5.5479E+04 2.8309E−03 2.8602E+03 4.0832E−01 6.3624E−15
(3.1836E−39) (4.9020E−17) (1.5662E+04) (9.0355E−05) (3.6502E+02) (1.0035E+00) (5.7351E−16)
Fun./Algorithms f8 f9 f10 f11 f12 f13 f14
MTS [42] 4.3876E+04 2.4725E+01 4.9721E+02 2.4103E+02 4.5289E+02 1.1999E+03 9.3346E+02
(2.8159E+04) (4.7255E−01) (5.6389E+01) (7.9627E+01) (8.8078E+01) (8.5734E+01) (3.7520E+01)
− − − − − − −MPSO [47] 4.7977E+01 1.5386E+01 4.6008E+02 2.0278E+02 1.7939E+02 9.5303E+02 9.4230E+02
(7.9320E+00) (6.7382E−01) (1.5684E+02) (9.8459E+01) (4.2561E+01) (6.7304E+01) (2.3766E+01)
− + − − − − −DCMA-EA [18] 2.9362E+00 1.9833E+02 1.9953E+02 3.1893E+01 2.3467E+02 9.1069E+02 2.7397E+01
(9.0362E−01) (2.9402E+01) (4.4272E+01) (2.0198E+01) (1.1848E+02) (9.8369E−01) (8.3628E−01)
− − − − − − −AMALGAM-SO [43] 2.0285E+01 3.9828E+01 8.7108E+01 1.7422E+01 4.1320E+04 4.4298E+00 2.2410E+01
(4.6312E−01) (8.8792E+00) (1.7834E+01) (4.0038E+00) (3.7896E+04) (6.5626E−01) (5.2845E−01)
= − − − − + −ADE-CM [38] 2.0648E+01 4.9748E+01 7.3627E+01 4.9014E+01 1.0643E+06 4.7727E+00 2.1897E+01
(6.2600E−02) (7.9907E+00) (1.1667E+01) (6.8108E+00) (3.8026E+05) (8.2710E−01) (3.7870E−01)
= − − − − + −CAR-DE [31] 2.1131E+01 1.5422E+02 1.7213E+02 4.1575E+01 1.4989E+06 1.4648E+01 2.1956E+01
(2.0287E−02) (2.1849E+01) (4.3521E+01) (1.4847E+00) (2.8857E+05) (4.0440E+00) (1.0006E+00)
− − − − − + −COBiDE [46] 2.0791E+01 4.4867E−13 8.6149E+01 1.9346E+01 1.5818E+04 4.3209E+00 2.1828E+01
(5.1209E−01) (1.7777E−12) (1.7459E+01) (4.0434E+00) (1.5052E+04) (9.1517E−01) (5.7241E−01)
= + − − − + −Rcr-JADE-s4 [19] 2.07E+01 0.00E+00 5.12E+01 4.32E+01 6.89E+03 3.04E+00 2.08E+01
(5.51E−01) (0.00E+00) (1.18E+01) (1.15E+01) (1.15E+04) (2.05E−01) (1.24E+00)
= + − − − + =CA-MMTS 2.0007E+00 1.9778E+01 5.0856E+01 1.2678E+01 1.2265E+02 8.3251E+02 2.0008E+01
(7.1739E−02) (3.9818E−02) (2.8356E+01) (2.7380E+00) (8.6452E+01) (5.2647E+01) (9.2364E−01)
design variables, and limits on the outside diameter of the spring [41]. The design variables are the wire diameter d (= x1), the mean coil diameter D (= x2), and the number of active coils N (= x3). This problem can be described mathematically as follows:
\[
f(x) = x_1^2 x_2 x_3 + 2x_1^2 x_2,
\]
subject to
\[
g_1(x) = 1 - \frac{x_2^3 x_3}{71785\,x_1^4} \le 0,
\]
\[
g_2(x) = \frac{4x_2^2 - x_1 x_2}{12566\,(x_2 x_1^3 - x_1^4)} + \frac{1}{5108\,x_1^2} - 1 \le 0,
\]
Table 18
Mean and standard deviation of the error values for functions f15–f25 @ 50D. Best entries are marked in boldface. Wilcoxon’s rank sum test at a 0.05
significance level is performed between CA-MMTS and each of other algorithms.
Fun./Algorithms f15 f16 f17 f18 f19 f20 f21
MTS [42] 3.6789E+02 1.6962E+02 3.5083E+02 9.3007E+02 9.2742E+02 9.6281E+02 8.9031E+02
(6.3863E+01) (3.1773E+01) (2.7589E+01) (4.8683E+01) (4.5376E+01) (6.3677E+01) (2.3768E+02)
− − − − − − −MPSO [47] 4.0816E+02 2.1902E+02 2.0133E+02 9.3595E+02 9.3871E+02 9.4660E+02 8.2379E+02
(5.9787E+01) (4.2106E+01) (5.8617E+01) (1.4725E+01) (1.4736E+01) (2.2078E+01) (2.8588E+02)
− − − − − − −DCMA-EA [18] 4.3387E+02 1.3005E+02 1.9997E+02 9.2918E+02 8.58350E+02 9.2052E+02 7.5234E+02
(8.7189E+01) (2.5127E+01) (1.7951E+02) (8.6358E−01) (3.2807E−01) (9.5276E−01) (1.4368E+02)
− − − − − − −AMALGAM-SO [43] 3.0232E+02 1.4853E+02 2.0435E+02 9.2250E+02 9.2409E+02 9.2292E+02 9.8074E+02
(9.6878E+01) (1.2952E+02) (1.5698E+02) (1.3121E+01) (3.6931E+00) (7.9616E+00) (1.2273E+02)
− − − − − − −ADE-CM [38] 2.7719E+02 4.7949E+01 8.1306E+01 8.3667E+02 8.3713E+02 8.3589E+02 7.2413E+02
(3.6051E+01) (2.0325E+00) (6.4336E+01) (5.7629E−03) (1.453E−01) (3.659E−02) (9.6580E−01)
= + − = − − −CAR-DE [31] 3.6446E+02 1.2552E+02 1.2393E+02 8.4017E+02 8.4094E+02 8.3885E+02 7.3459E+02
(1.2395E+01) (1.2970E+00) (1.3762E+00) (1.0514E+00) (3.1157E+00) (1.8098E+00) (8.9992E+00)
− − − − = −COBiDE [46] 3.8400E+02 7.4110E+01 8.2848E+01 9.1766E+02 9.1311E+02 9.1606E+02 5.5658E+02
(5.4810E+01) (2.0693E+01) (5.4770E+01) (2.8416E+00) (2.3775E+01) (1.7059E+01) (1.5710E+02)
− + − − − − =Rcr-JADE-s4 [19] 3.10E+02 5.02E+01 6.33E+01 9.30E+02 9.35E+02 9.35E+02 5.00E+02
(1.04E+02) (2.47E+01) (7.27E+01) (2.78E+01) (2.29E+01) (2.24E+01) (0.00E+00)
− + + − − − +CA-MMTS 2.7279E+02 1.1832E+02 1.1536E+02 8.3336E+02 8.1258E+02 8.3215E+02 5.5875E+02
(8.2607E+01) (9.7698E+01) (3.1744E+01) (9.8834E−04) (3.7213E+00) (1.4945E−02) (1.1953E+02)
Fun./Algorithms f22 f23 f24 f25
MTS [42] 9.3832E+02 8.1628E+02 4.2064E+02 1.7272E+03 − 25
(1.4173E+01) (1.1837E+02) (1.5079E+01) (9.7493E+00) + 0
− − − − = 0
MPSO [47] 9.6512E+02 8.5395E+02 2.0000E+02 1.5382E+03 − 23
(3.4755E+01) (2.2383E+02) (0.0000E+02) (1.0366E+01) + 1
− − = − = 1
DCMA-EA [18] 9.3606E+02 8.3115E+02 2.1620E+02 2.1437E+02 − 25
(4.8465E+01) (1.0634E+02) (4.0027E+00) (6.5873E+00) + 0
− − − − = 0
AMALGAM-SO [43] 8.6411E+02 9.7631E+02 4.7245E+02 2.1578E+02 − 19
(2.0823E+01) (1.3239E+02) (3.7571E+02) (1.2040E+00) + 3
= − − − = 3
ADE-CM [38] 5.0007E+02 7.2826E+02 2.1583E+02 2.1425E+02 − 18
(3.2000E−03) (1.3640E−01) (6.3602E−02) (1.6429E+00) + 4
+ − − − = 3
CAR-DE [31] 5.001E+02 7.0374E+02 2.000E+02 2.3126E+02 − 21
(8.2260E+00) (7.8156E+01) (0.0000E+00) (6.4983E+00) + 2
+ − = − = 2
COBiDE [46] 8.8125E+02 6.1481E+02 2.0000E+02 2.1547E+02 − 17
(2.1894E+01) (1.7518E+02) (8.9795E−13) (1.0417E+00) + 4
− − = − = 4
Rcr-JADE-s4 [19] 9.05E+02 5.39E+02 2.00E+02 2.14E+02 − 14
(1.33E+01) (8.89E−03) (0.00E+00) (5.07E−01) + 9
− − = − = 2
CA-MMTS 8.6318E+02 5.0000E+02 2.0000E+02 2.0629E+02
(1.8697E+01) (0.0000E+00) (0.0000E+00) (2.9346E−01)
\[
g_3(x) = 1 - \frac{140.45\,x_1}{x_2^2 x_3} \le 0,
\]
\[
g_4(x) = \frac{x_1 + x_2}{1.5} - 1 \le 0,
\]
\[
0.05 \le x_1 \le 2, \quad 0.25 \le x_2 \le 1.3, \quad 2 \le x_3 \le 15. \tag{12}
\]
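A direct transcription of this formulation (the objective and constraints of Eq. (12)) makes it easy to verify candidate designs; the variable names follow the text (d = x1, D = x2, N = x3), and the test point below is a near-optimal design from the literature:

```python
def spring_weight(x):
    """Objective of Eq. (12): f(x) = x1^2 * x2 * x3 + 2 * x1^2 * x2."""
    d, D, N = x
    return d ** 2 * D * N + 2.0 * d ** 2 * D

def spring_constraints(x):
    """Inequality constraints g1..g4 of Eq. (12); feasible iff all <= 0."""
    d, D, N = x
    g1 = 1.0 - D ** 3 * N / (71785.0 * d ** 4)
    g2 = ((4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
          + 1.0 / (5108.0 * d ** 2) - 1.0)
    g3 = 1.0 - 140.45 * d / (D ** 2 * N)
    g4 = (d + D) / 1.5 - 1.0
    return [g1, g2, g3, g4]
```

At a design close to the reported optimum, the weight evaluates to roughly 0.012665 and constraints g1 and g2 are nearly active (close to zero), which is characteristic of this problem.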
Table 24 presents the optimal design variables, constraints, optimal value, and the number of function evaluations (FEs) required to reach the optimal value, for CA-MMTS and other values reported in the literature. It also shows the statistical simulation results over 50 independent runs. The results of applying CA-MMTS to this problem, shown in Table 24, indicate that although CA-MMTS obtained the second-best result for this problem, it was able to obtain the best mean value with the least variation between the
Table 19
Mean and standard deviation of the error values for functions f1–f14 @ 100D. Best entries are marked in boldface.
Wilcoxon’s rank sum test at a 0.05 significance level is performed between CA-MMTS and each of other algorithms.
Fun./Algorithms f1 f2 f3 f4 f5
MTS [42] 5.3687E−11 8.9261E+02 2.6883E+06 8.3626E+04 6.6429E+05
(2.0839E−11) (2.6341E−02) (8.3519E+06) (2.9418E+04) (4.0235E+03)
− − − − −MPSO [47] 9.0063E−10 4.2571E−01 1.4528E+07 9.7666E+04 1.1117E+06
(7.6003E−10) (3.2609E−01) (4.4293E+06) (5.2716E+04) (7.5963E+03)
− − − − −DCMA-EA [18] 1.0773E−14 5.2517E−03 2.3622E+06 3.4732E−01 5.1004E+05
(2.2078E−15) (7.9751E−04) (6.6277E+05) (4.8461E−02) (3.6178E+03)
− − − − −AMALGAM-SO [43] 1.7012E−14 6.7001E−14 1.7421E−11 8.2423E+04 4.6445E+00
(9.5019E−15) (2.6106E−14) (3.1742E−11) (3.0925E+04) (1.0199E+01)
− − + − +ADE-CM [38] 1.1937E−13 8.4900E−01 6.2710E+06 3.4455E+03 7.4510E+03
(1.7980E−14) (6.1090E−01) (2.7720E+5) (2.0570E+2) (7.1330E+2)
− − − − +CAR-DE [31] 9.0990E−13 1.5207E+01 4.5832E+06 9.7403E+04 1.7284E+04
(7.1901E−14) (2.8769E+01) (2.9612E+04) (2.6157E+04) (2.7346E+03)
− − − − +COBiDE [46] 0.0000E+00 6.0835E−02 1.5171E+06 2.4288E+04 6.0781E+03
(0.0000E+00) (2.7718E−02) (3.4843E+05) (6.8821E+0) (1.0667E+03)
+ − − − +Rcr-JADE-s4 [19] 0.00E+00 3.09E+04 2.87E−06 5.73E+04 2.13E+03
(0.00E+00) (6.25E+04) (5.90E−06) (9.50E+04) (4.13E+02)
+ − + − +CA-MMTS 2.6348E−32 2.1934E−15 8.0350E+04 3.1836E−02 8.2724E+04
(1.7668E−33) (8.2733E−16) (2.2394E+04) (3.2222E−04) (8.3162E+02)
Fun./Algorithms f6 f7 f8 f9 f10
MTS [42] 8.7382E+03 8.0482E+06 4.3883E+04 1.0989E+02 5.6917E+03
(6.3724E+01) (1.6177E+05) (2.8159E+04) (1.4738E+01) (1.2748E+02)
− − − − −MPSO [47] 1.1526E+04 6.7592E+05 4.9365E+01 1.1730E+02 1.7443E+03
(6.3718E+02) (5.2716E+04) (9.0362E+00) (5.0356E+00) (2.4039E+02)
− − − − −DCMA-EA [18] 8.7395E+03 3.2764E−06 1.9306E+01 9.2161E+02 1.9108E+02
(6.1573E+00) (4.8299E−06) (8.0943E−01) (1.6350E+01) (1.9304E+01)
− − − − −AMALGAM-SO [43] 6.0808E+00 1.2324E−03 2.0538E+01 1.0775E+02 2.0134E+02
(2.3648E+01) (3.1591E−03) (6.2800E−01) (1.7505E+01) (2.5729E+01)
− − − − −ADE-CM [38] 1.5380E+02 9.9000E−03 2.0943E+01 1.8128E+02 2.4038E+02
(5.5430E+1) (1.1800E−3) (4.2900E−2) (7.8780E+0) (1.3000E+1)
− − − − −CAR-DE [31] 1.8396E+02 1.8914E+04 2.1674E+01 5.9232E+02 4.9012E+02
(9.8705E+01) (1.5657E+03) (3.0845E+01) (2.2333E+01) (8.4560E+01)
− − − − −COBiDE [46] 7.8143E+01 7.7323E−03 2.0767E+01 1.6825E+00 2.3491E+02
(1.1115E+01) (7.9362E−03) (6.0115E−01) (2.2137E+00) (3.8935E+01)
− − − + −Rcr-JADE-s4 [19] 1.91E+01 1.61E+00 2.13E+01 4.55E−02 8.34E+01
(6.82E+00) (1.71E−01) (3.74E−02) (2.84E−02) (1.32E+01)
− − − + +CA-MMTS 1.1482E+00 4.4634E−10 1.2157E+01 9.4862E+01 1.1983E+02
(6.0365E−01) (8.3524E−10) (3.5621E−02) (1.0738E+01) (4.5782E+01)
Fun./Algorithms f11 f12 f13 f14
MTS [42] 2.6381E+02 2.5536E+05 1.3625E+03 9.4176E+02 − 14
(8.9355E+01) (7.9039E+04) (4.6382E+02) (6.2679E+01) + 0
− − − − = 0
MPSO [47] 2.1135E+02 5.7372E+05 9.6154E+02 9.5537E+02 − 14
(5.8260E+01) (1.1407E+05) (1.9526E+01) (1.6243E+01)
− − − − = 0
Table 19 (continued)
Fun./Algorithms f11 f12 f13 f14
DCMA-EA [18] 3.4739E+01 1.9083E+05 9.1788E+02 4.0841E+01 − 14
(1.6482E+01) (6.3802E+04) (8.2763E+01) (2.1116E+00) + 0
− − − − = 0
AMALGAM-SO [43] 3.6113E+01 2.0718E+05 1.0253E+01 4.6210E+01 − 11
(5.6064E+00) (1.1620E+05) (1.1853E+00) (7.0787E−01) + 3
− − + − = 0
ADE-CM [38] 1.4533E+02 4.0002E+06 1.4151E+01 4.3444E+01 − 11
(4.2650E−1) (5.2260E+5) (1.6860E+0) (5.7130E−1) + 2
= − + − = 1
CAR-DE [31] 6.2910E+01 4.4677E+05 3.3179E+01 4.0225E+01 − 12
(4.1313E+00) (1.8670E+04) (4.9355E+00) (2.4122E−01) + 2
− − + − = 0
COBiDE [46] 5.9652E+01 9.1416E+04 8.8941E+00 4.5609E+01 − 10
(7.7862E+00) (5.0190E+04) (1.6792E+00) (5.3275E−01) + 4
− − + − = 0
Rcr-JADE-s4 [19] 7.65E+01 1.83E+04 1.32E+01 4.63E+01 − 8
(2.47E+01) (6.76E+04) (5.03E−01) (6.73E−01) + 6
− − + − = 0
CA-MMTS 1.4361E+01 9.7992E+02 8.4234E+02 2.2835E+01
(6.1365E+00) (7.8731E+01) (7.3959E−03) (9.6790E−04)
[Fig. 9: four convergence plots, panels (a)–(d), each showing error values (log scale) versus FEs (×10^5) for CA, CAmod, CMA-ES, MCDE, HS-CA, MCAKM, CA-ILS, and CA-MMTS.]
Fig. 9. Advancement toward the optimum for the median run of eight algorithms over four selected optimization benchmarks @ 30D. (a) Shifted Schwefel's function (f4). (b) Shifted Rotated Griewank's function (f7). (c) Rotated Hybrid Composition function with Noise (f17). (d) Rotated Hybrid Composition function without Bounds (f25).
runs, and used a relatively small number of FEs. Enforcing diversity among the population in cases of stagnation throughout the run is key to escaping scenarios where individuals are stuck at local optima; it also helps generate new, previously unobserved solutions. CA-MMTS requires only 4,809 FEs to reach the best optimum of 0.012665 reported by Gandomi et al. [14], who used 5,000 FEs. Tomassetti [41] reported the same optimal value but required a larger number of evaluations (10,000) to reach it.
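The diversity-enforcement argument above can be made concrete with a generic restart-on-stagnation rule (our own sketch, not the exact mechanism of CA-MMTS): when the best-so-far error stops improving over a window of iterations, the worst fraction of the population is re-initialized uniformly within the bounds.

```python
import random

def stagnated(best_history, window=20, tol=1e-12):
    """True if the best-so-far error (a non-increasing sequence) improved
    by less than `tol` over the last `window` iterations."""
    if len(best_history) <= window:
        return False
    return best_history[-window - 1] - best_history[-1] < tol

def reseed_worst(population, errors, bounds, frac=0.3, rng=None):
    """Replace the worst `frac` of individuals with fresh uniform samples,
    injecting unexplored points into the population. Window and fraction
    values are illustrative."""
    rng = rng or random.Random(0)
    n_new = max(1, int(frac * len(population)))
    worst = sorted(range(len(population)), key=lambda i: errors[i])[-n_new:]
    for i in worst:
        population[i] = [rng.uniform(lo, hi) for lo, hi in bounds]
    return population
```

Only the worst individuals are replaced, so the knowledge accumulated in the best solutions (and in the belief space that tracks them) is preserved across the restart.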
Table 20
Ranking of all algorithms based on Friedman test for dimensions D = 30, D = 50 and
D = 100.
Algorithm Ranking (D = 30) Ranking (D = 50) Ranking (D = 100)
MTS 2.0800 2.1600 2.2857
MPSO 2.6000 2.5200 1.9286
DCMA-EA 4.2000 4.4800 5.6429
AMALGAM-SO 3.4000 4.8000 6.6429
ADE-CM —— 5.8400 4.3571
CAR-DE —— 5.5600 3.7143
CoBiDE 4.6200 5.8200 6.5357
Rcr-JADE-s4 4.7400 5.9800 6.1071
CA-MMTS 6.3600 7.8400 7.7857
Statistic 6.7221E+01 8.4146E+01 6.2204E+01
p-value 4.6406E−11 4.4036E−11 2.1857E−10
Table 21
A comparison of adjusted p-values @ 30D for state-of-the-art-algorithms. (Control method: MTS).
i Algorithm Unadjusted p PBonf PHolm PHoch PHomm
1 CA-MMTS 2.473490E−12 1.484094E−11 1.484094E−11 1.484094E−11 1.484094E−11
2 Rcr-JADE-s4 1.340136E−05 8.040814E−05 6.700678E−05 6.700678E−05 6.700678E−05
3 CoBiDE 3.223823E−05 1.934294E−04 1.289529E−04 1.289529E−04 1.289529E−04
4 DCMA-EA 5.211089E−04 3.126654E−03 1.563327E−03 1.563327E−03 1.563327E−03
5 AMALGAM-SO 3.074503E−02 1.844702E−01 6.149007E−02 6.149007E−02 6.149007E−02
6 MPSO 3.947417E−01 2.368450E+00 3.947417E−01 3.947417E−01 3.947417E−01
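The Holm columns in Tables 21–23 can be reproduced from the unadjusted p-values by the usual step-down rule: sort ascending, multiply the i-th smallest by (k − i + 1), and enforce monotonicity. A minimal sketch, using the 30D values from Table 21:

```python
def holm_adjust(pvals):
    """Holm step-down adjusted p-values, capped at 1."""
    k = len(pvals)
    order = sorted(range(k), key=lambda i: pvals[i])
    adjusted = [0.0] * k
    running = 0.0
    for rank, idx in enumerate(order):
        # (k - rank) copies the step-down multiplier; running max keeps
        # the adjusted values monotone in the ordered p-values.
        running = max(running, (k - rank) * pvals[idx])
        adjusted[idx] = min(running, 1.0)
    return adjusted

# Unadjusted p-values at 30D (Table 21, control method MTS)
p = [2.473490e-12,  # CA-MMTS
     1.340136e-05,  # Rcr-JADE-s4
     3.223823e-05,  # CoBiDE
     5.211089e-04,  # DCMA-EA
     3.074503e-02,  # AMALGAM-SO
     3.947417e-01]  # MPSO
for raw, adj in zip(p, holm_adjust(p)):
    print(f"{raw:.6e} -> {adj:.6e}")
```

The printed Holm values match the PHolm column of Table 21 up to rounding of the unadjusted inputs.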
Table 22
A comparison of adjusted p-values @ 50D for state-of-the-art algorithms (control method: MTS).
i Algorithm Unadjusted p PBonf PHolm PHoch PHomm
1 CA-MMTS 2.253116E−13 1.802493E−12 1.802493E−12 1.802493E−12 1.802493E−12
2 Rcr-JADE-s4 8.155930E−07 6.524744E−06 5.709151E−06 5.709151E−06 5.368310E−06
3 ADE-CM 2.025538E−06 1.620430E−05 1.215323E−05 1.150352E−05 1.012769E−05
4 CoBiDE 2.300704E−06 1.840563E−05 1.215323E−05 1.150352E−05 1.150352E−05
5 CAR-DE 1.136737E−05 9.093896E−05 4.546948E−05 4.546948E−05 4.546948E−05
6 AMALGAM-SO 6.538687E−04 5.230950E−03 1.961606E−03 1.961606E−03 1.961606E−03
7 DCMA-EA 2.743485E−03 2.194788E−02 5.486969E−03 5.486969E−03 5.486969E−03
8 MPSO 6.421048E−01 5.136838E+00 6.421048E−01 6.421048E−01 6.421048E−01
Table 23
A comparison of adjusted p-values @ 100D for state-of-the-art algorithms (control method: MPSO).
i Algorithm Unadjusted p PBonf PHolm PHoch PHomm
1 CA-MMTS 1.53E−08 1.22E−07 1.22E−07 1.22E−07 1.22E−07
2 AMALGAM-SO 5.25E−06 4.20E−05 3.68E−05 3.68E−05 3.15E−05
3 CoBiDE 8.55E−06 6.84E−05 5.13E−05 5.13E−05 5.13E−05
4 Rcr-JADE-s4 5.42E−05 4.33E−04 2.71E−04 2.71E−04 2.71E−04
5 DCMA-EA 3.33E−04 0.002662 0.001331 0.001331 0.001331
6 ADE-CM 0.018965 0.151718 0.056894 0.056894 0.056894
7 CAR-DE 0.084498 0.675984 0.168996 0.168996 0.168996
8 MTS 0.73007 5.840558 0.73007 0.73007 0.73007
Table 24
Comparison of the solution quality for the tension/compression spring design problem.
Methods
Optimal design variables
x1 x2 x3 fcost FEs Worst Mean Std
Gao et al. [16] 0.055071 0.445656 7.913870 0.012989 10,000 N.A N.A N.A
Jaberipour & Khorram IPHS [24] 0.051860 0.360857 11.050339 0.012665798 200,000 N.A N.A N.A
Tomassetti [41] 0.051644 0.355632 11.35304 0.012665 10,000 N.A N.A N.A
He and Wang [22] 0.051728 0.357644 11.244543 0.0126747 N.A 0.012730 0.012924 5.1985E−05
Kaveh & Talatahari 2011 [26] 0.051432 0.35106 11.60979 0.0126385 4,000 0.0130125 0.0127504 3.948E−05
Kaveh & Talatahari 2010 [25] 0.051744 0.35832 11.165704 0.0126384 N.A 0.013626 0.012852 8.3564E−05
Gandomi et al. [14] 0.05169 0.35673 11.2885 0.01266522 5,000 0.0168954 0.01350052 0.001420272
CA-MMTS (present study) 0.051795 0.359264 11.14128 0.012665 4809 0.0129900 0.0127110 2.9097E−05
∗N.A: Not Available
Table 25
Comparison of the solution quality for the welded beam design problem.
Methods
Optimal design variables Statistical results
x1 x2 x3 x4 fcost FEs Worst Mean Std
Gao et al. [16] 0.299005 2.744191 7.502979 0.311244 2.0932 10,000 N.A N.A N.A
Tomassetti [41] 0.205729 3.470489 9.036624 0.205730 1.7248 10,000 N.A N.A N.A
Jaberipour & Khorram IPHS [24] 0.205730 3.470490 9.036620 0.205730 1.7248 65,300 N.A N.A N.A
Ragsdell and Phillips [36] 0.2444 6.2189 8.2915 0.2444 2.3815 N.A N.A N.A N.A
Deb [9] 0.248900 6.173000 8.178900 0.253300 2.433116 N.A N.A N.A N.A
He and Wang [22] 0.202369 3.544214 9.047210 0.205723 1.728024 N.A 1.748831 1.782143 0.012926
Kaveh & Talatahari 2011 [26] 0.207301 3.435699 9.041934 0.205714 1.723377 4,000 1.762567 1.743454 0.007356
Kaveh & Talatahari 2010 [25] 0.205820 3.468109 9.038024 0.205723 1.724866 N.A 1.759479 1.739654 0.008064
Gandomi et al. [14] 0.2015 3.562 9.0414 0.2057 1.7312065 50,000 2.3455793 1.8786560 0.2677989
CA-MMTS (present study) 0.205673 3.471676 9.036692 0.205729 1.7249 3,721 1.7596205 1.7368892 3.328E−04
∗N.A: Not Available
4.6.2. Welded beam design
This engineering optimization benchmark is a popular practical design problem [41]. The problem has four
design variables: h(= x1), l(= x2), t(= x3), and b(= x4). The structure consists of beam A and the weld that is needed
to clamp the beam to member B. The objective is to find a feasible solution vector of dimensions h, l, t, and b that carries
a given load (P) at the minimum total fabrication cost. The objective function for this problem is the total fabrication
cost, which comprises the welding labor, set-up, and material costs. This problem can be formulated mathematically as follows:
f(x) = 1.10471 x1² x2 + 0.04811 x3 x4 (14.0 + x2)

subject to,

g1(x) = τ(x) − τmax ≤ 0,
g2(x) = σ(x) − σmax ≤ 0,
g3(x) = x1 − x4 ≤ 0,
g4(x) = 0.10471 x1² + 0.04811 x3 x4 (14.0 + x2) − 5.0 ≤ 0,
g5(x) = 0.125 − x1 ≤ 0,
g6(x) = δ(x) − δmax ≤ 0,
g7(x) = P − Pc(x) ≤ 0

where:

τ(x) = √((τ′)² + 2 τ′ τ′′ x2/(2R) + (τ′′)²),
τ′ = P/(√2 x1 x2),  τ′′ = M R/J,
M = P (L + x2/2),  R = √(x2²/4 + ((x1 + x3)/2)²),
J = 2 {√2 x1 x2 [x2²/12 + ((x1 + x3)/2)²]},
σ(x) = 6 P L/(x4 x3²),  δ(x) = 4 P L³/(E x3³ x4),
Pc(x) = (4.013 E √(x3² x4⁶/36)/L²) (1 − (x3/(2L)) √(E/(4G))),
P = 6,000 lb, L = 14 in, E = 30 × 10⁶ psi, G = 12 × 10⁶ psi,
τmax = 13,600 psi, σmax = 30,000 psi, δmax = 0.25 in. (13)
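As a quick check of the formulation, the cost and the stress, deflection and buckling quantities of Eq. (13) can be evaluated directly at a reported design. A minimal sketch; function and variable names are ours, and the design vector is the CA-MMTS solution from Table 25:

```python
import math

# Constants from Eq. (13)
P, L, E, G = 6000.0, 14.0, 30.0e6, 12.0e6

def welded_beam_cost(x):
    """Fabrication cost plus the quantities used by constraints
    g1 (shear), g2 (bending), g6 (deflection) and g7 (buckling)."""
    x1, x2, x3, x4 = x
    cost = 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

    tau_p = P / (math.sqrt(2.0) * x1 * x2)              # primary shear tau'
    M = P * (L + x2 / 2.0)                              # bending moment
    R = math.sqrt(x2**2 / 4.0 + ((x1 + x3) / 2.0)**2)
    J = 2.0 * (math.sqrt(2.0) * x1 * x2
               * (x2**2 / 12.0 + ((x1 + x3) / 2.0)**2))
    tau_pp = M * R / J                                  # secondary shear tau''
    tau = math.sqrt(tau_p**2
                    + 2.0 * tau_p * tau_pp * x2 / (2.0 * R)
                    + tau_pp**2)
    sigma = 6.0 * P * L / (x4 * x3**2)                  # bending stress
    delta = 4.0 * P * L**3 / (E * x3**3 * x4)           # end deflection
    Pc = (4.013 * E * math.sqrt(x3**2 * x4**6 / 36.0) / L**2
          * (1.0 - x3 / (2.0 * L) * math.sqrt(E / (4.0 * G))))
    return cost, tau, sigma, delta, Pc

# CA-MMTS design from Table 25
cost, tau, sigma, delta, Pc = welded_beam_cost(
    [0.205673, 3.471676, 9.036692, 0.205729])
print(round(cost, 4))  # ≈ 1.7249, the value reported in Table 25
```

At this design the shear, bending and buckling constraints are nearly active (tau, sigma and Pc sit close to their limits), which is consistent with it being a constrained optimum.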
The comparison of solution quality is shown in Table 25, which also summarizes the statistical simulation results over 50 independent runs. As Table 25 shows, although CA-MMTS scored second best among the results reported in the literature in terms of final objective value, it obtained the best average value with the smallest standard deviation when compared to the other algorithms. Moreover, the number of function evaluations needed by CA-MMTS is much less
Table 26
Mean and standard deviation (in parentheses) of the best-of-run results for 30 independent runs over the spread spectrum radar poly-phase
code design problem, for dimensions D = 19 and D = 20. The maximum number of FEs was set at 1 × 10⁵.
D CA MCDE HS-CA MCAKM CA-ILS CA-MMTS Statistical significance
19 8.2739E−01 (5.2172E−02) 6.9242E−01 (4.2733E−03) 7.5819E−01 (3.1666E−03) 6.6728E−01 (7.2932E−04) 6.1042E−01 (5.2722E−03) 4.1591E−01 (2.1283E−05) +
20 9.7724E−01 (2.1832E−02) 8.0891E−01 (4.7829E−02) 8.9495E−01 (4.8170E−02) 7.9415E−01 (9.6713E−03) 7.3579E−01 (4.4429E−02) 5.3827E−01 (3.6298E−04) +
than those of the other state-of-the-art algorithms. The optimum found using CA-MMTS is 1.7249, compared to the best reported in the
literature of 1.7248 [26,41,24], a difference of only 1 × 10⁻⁴. CA-MMTS needed 3,721 FEs, which is much less than
the 10,000 FEs needed by Tomassetti [41], the 65,300 needed by Jaberipour and Khorram [24] and the 4,000 needed by Kaveh
& Talatahari [26]. It is worth mentioning that CA-MMTS considered all of the constraints (g1–g7), and obtained such an optimal
value using a smaller number of FEs than Tomassetti [41], who ignored constraint g7 in order to obtain their reported objective
function value.
4.6.3. Spread spectrum radar poly-phase code design
In this experiment, the proposed algorithm is applied to solve a well-known benchmark optimal design problem in the field of
spread spectrum radar poly-phase codes [8]. The spread spectrum radar poly-phase code design problem was selected according
to the level of difficulty that it presents to the hybrid algorithm in terms of dimensionality and enforced constraints. This problem
can be mathematically formulated as follows:
min_{X∈𝕏} f(X) = max{ϕ1(X), . . . , ϕ2m(X)},

where

𝕏 = {(x1, x2, . . . , xD) ∈ ℝᴰ | 0 ≤ xj ≤ 2π, j = 1, 2, . . . , D}, m = 2D − 1, (14)

and

ϕ2i−1(X) = Σ_{j=i}^{D} cos(Σ_{k=|2i−j−1|+1}^{j} xk),  i = 1, 2, . . . , D,

ϕ2i(X) = 0.5 + Σ_{j=i+1}^{D} cos(Σ_{k=|2i−j−1|+1}^{j} xk),  i = 1, 2, . . . , D − 1,

ϕm+i(X) = −ϕi(X),  i = 1, 2, . . . , m.
In this problem, the variables xk represent the symmetric phase differences. The objective is to minimize the modulus of the
largest among the samples of the auto-correlation function ϕ. As stated in [8], this problem has no polynomial-time solution.
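A direct implementation makes the min–max structure concrete: since ϕm+i = −ϕi, the objective reduces to the largest absolute value among the first m auto-correlation samples. A minimal sketch following the standard formulation from [8]; the 1-based indices of the formulation are mapped onto Python's 0-based lists:

```python
import math

def radar_polyphase(x):
    """Objective f(X) = max{phi_1, ..., phi_2m} for the radar
    poly-phase code design problem.

    Because phi_{m+i} = -phi_i, this equals max_i |phi_i| over the
    first m = 2D - 1 auto-correlation samples.
    """
    D = len(x)
    phi = []
    for i in range(1, D + 1):             # phi_{2i-1}, i = 1..D
        s = 0.0
        for j in range(i, D + 1):
            inner = sum(x[k - 1] for k in range(abs(2 * i - j - 1) + 1, j + 1))
            s += math.cos(inner)
        phi.append(s)
    for i in range(1, D):                 # phi_{2i}, i = 1..D-1
        s = 0.5
        for j in range(i + 1, D + 1):
            inner = sum(x[k - 1] for k in range(abs(2 * i - j - 1) + 1, j + 1))
            s += math.cos(inner)
        phi.append(s)
    return max(abs(v) for v in phi)

# At the all-zeros point every cosine is 1, so f equals D
print(radar_polyphase([0.0] * 5))  # 5.0
```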
In Table 26, the mean and standard deviation (within parentheses) of the best-of-run results for 30 independent runs
of CA-MMTS against five state-of-the-art algorithms over two of the most difficult instances of the radar poly-phase code design
problem (dimensions D = 19 and D = 20) are presented. The 8th column in Table 26 indicates the statistical significance level
obtained from a paired t-test between the best and next-to-best performing algorithms for each dimension. Fig. 10
graphically presents the rate of convergence of CA-MMTS against the canonical CA [37], MCDE [49], HS-CA [16], MCAKM [20] and
CA-ILS [32] for this problem, with the parametric set-up described in Section 5. Fig. 10 shows that CA-MMTS outperforms all the
other competing algorithms in terms of final accuracy and rate of convergence over both instances of the radar poly-phase code
design problem.
4.6.4. Gear train design
The gear train problem was introduced by Sandgren [39]. It is a discrete optimization problem whose goal is to minimize the
deviation of the gear ratio of a compound gear train from a target value. The gear ratio can be expressed as follows:

Gear ratio = (angular velocity of the output shaft)/(angular velocity of the input shaft)

This problem has four integer variables, denoted Ti, each defining the number of teeth of the ith gear wheel. The objective
function, as expressed in Eq. (15), seeks tooth counts that produce a gear ratio as close as possible to 1/6.931.
f(Ta, Tb, Td, Tf) = (1/6.931 − (Tb · Td)/(Ta · Tf))² (15)
[Fig. 10 plot: fitness (log scale) vs. function evaluations (FEs) for CA-MMTS, MCDE, HS-CA, CA-ILS, MCAKM and CA.]
Fig. 10. Convergence rate comparison for the radar polyphase code design problem.
Table 27
Optimal results of different methods for the gear train design problem.
Methods Ta Tb Td Tf Gear ratio fmin FEs
Deb & Goyal [10] 33 14 17 50 0.1442 1.362E−09 N.A
Loh & Papalambros [29] 42 16 19 50 0.1447 0.23E−06 N.A
Parsopoulos & Vrahatis [33] 43 16 19 49 0.1442 2.701E−12 100,000
Gandomi et al. [15] 43 16 19 49 0.1442 2.701E−12 5,000
Gandomi et al. [13] 49 19 16 43 0.1442 2.701E−12 2,000
CA-MMTS (present study) 43 19 16 49 0.1442 2.701E−12 1,500
∗N.A: Not Available
where Ta, Tb, Td and Tf are integer variables between 12 and 60.
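Eq. (15) can be checked directly against the entries of Table 27; the squared-deviation form is the standard one from the literature and is consistent with the fmin values reported there (the function name below is ours):

```python
def gear_train_cost(Ta, Tb, Td, Tf):
    """Squared deviation of the compound gear ratio Tb*Td/(Ta*Tf)
    from the 1/6.931 target (Eq. (15), standard squared form)."""
    return (1.0 / 6.931 - (Tb * Td) / (Ta * Tf)) ** 2

# CA-MMTS design from Table 27: Ta=43, Tb=19, Td=16, Tf=49
cost = gear_train_cost(43, 19, 16, 49)
print(f"{cost:.3e}")  # 2.701e-12, matching the reported fmin
```

Note that swapping Tb with Td (or Ta with Tf) leaves the ratio unchanged, which is why the Gandomi et al. [13] design (49, 19, 16, 43) attains the same cost.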
Table 27 shows the best results obtained by the CA-MMTS algorithm over 50 independent runs, along with those of other
algorithms from the literature. The table shows that CA-MMTS obtained an optimal gear ratio equal to 0.14428 using 1,500
function evaluations, with a cost equal to 2.701E−12. The best solution obtained by CA-MMTS is as good as the best
solutions reported in the literature, and it is the best in terms of required function evaluations.
5. Conclusions
In this paper, we have proposed the cultural multiple trajectory algorithm, a novel hybridization of a modified ver-
sion of Cultural Algorithms (CA) and a modified multiple trajectory search (MMTS), to solve global numerical optimization problems.
The CA integrates information from the objective function to construct a two-part cultural framework. The first part
is a modified version of the classical CA with the ability to support inter-knowledge-group communica-
tion between the population space and the belief space. The belief space includes three knowledge sources, namely situational
knowledge, normative knowledge and topographic knowledge. The design of the topographic knowledge source was modified to
require less computation time and to save space during the search process. That adjustment made the engine of the CA suitably
scalable to higher dimensions. The work of the knowledge sources in the belief space is supported by the search process of the
modified multiple trajectory search, as demonstrated by the results of the statistical tests over numerous benchmark
functions.
The modified multiple trajectory search is intended to enhance the hybrid algorithm’s ability to locate better solutions around
the last-found solution after the three knowledge sources have finished their search. The newly acquired knowledge about the
search is based upon the cultural information retrieved from the different beacons of knowledge in the belief space. In the
proposed technique, the modified multiple trajectory search uses a local search method to find foreground solutions accord-
ing to a linearly reducing factor (LRF). The sub-local search is capable of generating new search agents with better fitness than the
current best individual’s fitness and of forking new search directions toward promising regions. The feedback between communication
channels connects the population space with the belief space to increase the efficiency of the search process, compared
with an open-loop design that uses only the population space. This increases the probability of producing fruitful new search di-
rections, enhances solution quality and reduces the required computational overhead. A participation function is used
to determine how the knowledge sources affect the individuals, and also to set the number of evaluations
for each component algorithm of the proposed hybrid. This can highlight the best features of each of the component
algorithms and facilitate escape from local optima during the search process.
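The participation-function idea of allocating evaluations by recent success can be sketched as follows. This is an illustrative sketch only: the proportional rule and the 10% floor are our assumptions, not the paper's exact quality function.

```python
def split_budget(succ_ca, succ_mts, cycle_fes, floor=0.1):
    """Split a cycle's FE budget between the CA and MMTS components
    in proportion to their recent success counts.

    Illustrative assumption: proportional shares clamped to
    [floor, 1 - floor] so neither component starves.
    """
    total = succ_ca + succ_mts
    share_ca = 0.5 if total == 0 else succ_ca / total
    share_ca = min(max(share_ca, floor), 1.0 - floor)
    fes_ca = int(round(share_ca * cycle_fes))
    return fes_ca, cycle_fes - fes_ca

print(split_budget(30, 10, 1000))  # (750, 250)
print(split_budget(0, 0, 1000))    # (500, 500), even split when no history
```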
Various challenging numerical benchmark problems in 30D and 50D, a scalability study in 100D, and a set of real-world problems
were used to test the performance of the proposed approach. Simulation results confirm that the new hybrid algorithm signifi-
cantly improves efficiency over the CA, MMTS and other state-of-the-art algorithms, in terms of the quality of the solutions found
at reduced computational cost. The algorithm proved especially promising when solving complex (non-separable) continuous
hybrid composition functions, compared with simple unimodal functions, when tested at different dimensionalities. Future
work will include the following: incorporating other types of searches into the algorithm; extending the selection criterion in
the quality function; and investigating other success-based parameter adaptation methods.
References
[1] M.Z. Ali, N.H. Awad, A novel class of niche hybrid cultural algorithms for continuous engineering optimization, Inf. Sci. 267 (2014) 158–190.
[2] M.Z. Ali, N.H. Awad, R.G. Reynolds, Balancing search direction in cultural algorithm for enhanced global numerical optimization, in: Proceedings of the IEEE Symposium on Swarm Intelligence (SIS), Orlando, Florida, Dec. 2014, pp. 336–342.
[3] M.Z. Ali, N.H. Awad, R.G. Reynolds, Hybrid niche cultural algorithm for numerical global optimization, in: Proceedings of the IEEE Congress of Evolutionary Computations, Cancun, Mexico, June 2013, pp. 309–316.
[4] A. Auger, N. Hansen, A restart CMA evolution strategy with increasing population size, in: Proceedings of the IEEE Congress of Evolutionary Computations,2005, pp. 1769–1776.
[5] N.H. Awad, M.Z. Ali, R.M. Duwairi, Cultural algorithm with improved local search for optimization problems, in: Proceedings of the IEEE Congress of Evolutionary Computations, Cancun, Mexico, June 2013, pp. 284–291.
[6] R.L. Becerra, C.A.C. Coello, Cultured differential evolution for constrained optimization, Comput. Meth. Appl. Mech. Eng. 195 (2006) 4303–4322.
[7] L.S. Coelho, V.C. Mariani, An efficient particle swarm optimization approach based on cultural algorithm applied to mechanical design, in: Proceedings ofthe IEEE Congress of Evolutionary Computations, Canada, 2006, pp. 1099–1104.
[8] S. Das, A. Abraham, U.K. Chakraborty, A. Konar, Differential evolution using a neighborhood-based mutation operator, IEEE Trans. Evol. Comput. 13 (3)(2009) 526–553.
[9] K. Deb, Optimal design of a welded beam via genetic algorithms, Am. Inst. Aeronaut. Astronaut. J. 29 (11) (1991) 2013–2015.
[10] K. Deb, M. Goyal, A combined genetic adaptive search (GeneAS) for engineering design, Comput. Sci. Inform. 26 (1996) 30–45.[11] J. Demsar, Statistical comparisons of classifiers over multiple data sets, J. Mach. Learn. Res. 7 (2006) 1–30.
[12] O.K. Erol, I. Eksin, New optimization method: Big Bang–Big Crunch, Adv. Eng. Softw. 37 (2006) 106–111.
[13] A.H. Gandomi, G.J. Yun, X.-S. Yang, S. Talatahari, Chaos-enhanced accelerated particle swarm optimization, Commun. Nonlinear. Sci. Numer. Simulat. 18
(2013) 327–340.[14] A.H. Gandomi, X.-S. Yang, A.H. Alavi, S. Talatahari, Bat algorithm for constrained optimization tasks, Neural Comput. Appl. 22 (2013) 1239–1255.
[15] A.H. Gandomi, X.S. Yang, A.H. Alavi, Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems, Eng. Comput. 29 (2013)17–35.
[16] X.Z. Gao, X. Wang, T. Jokinen, S.J. Ovaska, A. Arkkio, K. Zenger, A hybrid optimization method for wind generator design, Int. J Innov. Comput. 8 (6) (2012)
4347–4373.[17] J.M. García-Nieto, E. Alba, Hybrid PSO6 for hard continuous optimization, Soft Comput. (2014).
[18] S. Ghosh, S. Das, S. Roy, S.K. Minhazul Islam, P.N. Suganthan, A differential covariance matrix adaptation evolutionary algorithm for real parameter optimization, Inf. Sci. 182 (1) (2012) 199–219.
[19] W. Gong, Z. Cai, Y. Wang, Repairing the crossover rate in adaptive differential evolution, Appl. Soft Comput. 15 (2014) 149–168.[20] Y.-N. Guo, J. Cheng, Y.-y. Cao, Y. Lin, A novel multi-population cultural algorithm adopting knowledge migration, Soft. Comput. 15 (2011) 897–905.
[21] A. Haikal, M. El-Hosseni, Modified cultural-based genetic algorithm for process optimization, Ain. Shams. Eng. J. 2 (2011) 173–182.
[22] Q. He, L. Wang, An effective co-evolutionary particle swarm optimization for constrained engineering design problems, Eng. Appl. Artif. Intell. 20 (2007)89–99.
[23] J.H. Holland, Adaption in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, MI, 1975.[24] M. Jaberipour, E. Khorram, Two improved harmony search algorithms for solving engineering optimization problems, Commun. Nonlinear. Sci. Numer.
Simulat. 15 (2010) 3316–3331.[25] A. Kaveh, S. Talatahari, A novel heuristic optimization method: charged system search, Acta Mechanica 213 (2010) 267–289.
[26] A. Kaveh, S. Talatahari, Hybrid charged system search and particle swarm optimization for engineering design problems, Int. J. Comput. Aided Eng. Softw.
28 (2011) 423–440.[27] Z. Kobti, R.G. Reynolds, T. Kohler, A multi-agent simulation using cultural algorithms: the effect of culture on the resilience of social systems, in: Proceedings
of the IEEE Congress of Evolutionary Computations, 2003, pp. 1988–1995.[28] V. Kovacevic-Vujcic, M. Cangalovic, M. Drazic, N. Mladenovic, VNS-based heuristics for continuous global optimization, in: L.T.H. An, P.D. Tao (Eds.), Mod-
elling, Computation and Optimization in Information Systems and Management Sciences, Hermes Science Publishing Ltd, London, 2004, pp. 215–222.[29] H.T. Loh, P.Y. Papalambros, A sequential linearization approach for solving mixed-discrete nonlinear design optimization problems, J. Mech. Des. 113 (1991)
325–334.
[30] Z. Michalewicz, M. Schoenauer, Evolutionary algorithms for constrained parameter optimization problems, Evol. Comput. 4 (1996) 1–32.[31] S. Mukherjee, S. Chatterjee, D. Goswami, S. Das, Differential evolution with controlled annihilation and regeneration of individuals and a novel mutation
scheme, Swarm Evol. Memet. Comput. 8297 (2013) 286–297.[32] T.T. Nguyen, X. Yao, An experimental study of hybridizing cultural algorithms and local search, Int. J. Neural Syst. 18 (2008) 1–18.
[33] K.E. Parsopoulos, M.N. Vrahatis, Unified particle swarm optimization for solving constrained engineering optimization problems, Advances in natural com-putation, Springer-Verlag, 2005, pp. 582–591.
[34] B. Peng, Knowledge and Population Swarms in Cultural Algorithms for Dynamic Environments, Ph.D. thesis, Detroit, MI, USA, 2005.
[35] A.K. Qin, V.L. Huang, P.N. Suganthan, Differential evolution algorithm with strategy adaptation for global numerical optimization, IEEE Trans. Evol. Comput. 13 (2009) 398–417.
[36] K.M. Ragsdell, D.T. Phillips, Optimal design of a class of welded structures using geometric programming, ASME J. Eng. Ind. Ser. B 98 (3) (1976) 1021–1025.[37] R.G. Reynolds, B. Peng, Cultural algorithms: computational modeling of how cultures learn to solve problems: an engineering example, Cybern. Syst. 36
(2005) 753–771.[38] S.B. Roy, M. Dan, P. Mitra, Improving adaptive differential evolution with controlled mutation strategy, Swarm Evol. Memetic Comput. 7677 (2012) 636–643.
[39] E. Sandgren, Nonlinear integer and discrete programming in mechanical design optimization, J. Mech. Des. 112 (1990) 223–229.
[40] P.N. Suganthan, N. Hansen, J.J. Liang, K. Deb, Y.-P. Chen, A. Auger, S. Tiwari, Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization, Technical Report, Nanyang Technological University, Singapore, May 2005, and KanGAL Report #2005005, IIT Kanpur, India.
[41] G. Tomassetti, A cost-effective algorithm for the solution of engineering problems with particle swarm optimization, Eng. Optimiz. 42 (2010) 471–495.
[42] L.-Y. Tseng, C. Chen, Multiple trajectory search for large scale global optimization, in: Proceedings of the IEEE Congress of Evolutionary Computations, 2008, pp. 3052–3059.
[43] J.A. Vrugt, B.A. Robinson, J.M. Hyman, Self-adaptive multimethod search for global optimization in real-parameter spaces, IEEE Trans. Evol. Comput. 13 (2)
(April 2009) 243–259.[44] L. Wang, C. Cao, Z. Xu, X. Gu, An improved particle swarm algorithm based on cultural algorithm for constrained optimization, Adv. Intell. Soft Comput. 135
(2012) 453–460.[45] W. Wang, T. Li, Improved cultural algorithms for job shop scheduling problem, Int. J. Ind. Eng. Theory 18 (2011) 4.
[46] Y. Wang, H.-X. Li, T. Huang, L. Li, Differential evolution based on covariance matrix learning and bimodal distribution parameter setting, Appl. Soft Comput.18 (2014) 232–247.
[47] H. Wang, I. Moon, S. Yang, D. Wang, A memetic particle swarm optimization algorithm for multimodal optimization problems, Inf. Sci. 197 (2012) 38–52.[48] F. Wilcoxon, Individual comparisons by ranking methods, Biometrics 1 (1945) 80–83.
[49] W. Xu, R. Wang, L. Zhang, X. Gu, A multi-population cultural algorithm with adaptive diversity preservation and its application in ammonia synthesis
process, Neural Comput. Appl. 21 (2012) 1129–1140.[50] Z. Xue, Y. Guo, Improved Cultural Algorithm based on Genetic Algorithm, in: Proceedings of the IEEE International Conference on Integration Technology,
2007, pp. 117–122.