
Hindawi, Mathematical Problems in Engineering, Volume 2020, Article ID 9579538, 16 pages. https://doi.org/10.1155/2020/9579538

Research Article
Cultural Emperor Penguin Optimizer and Its Application for Face Recognition

Jie Yang and Hongyuan Gao

College of Information and Communication Engineering, Harbin Engineering University, No. 145 Nantong Street, Nangang District, Harbin 150001, China

Correspondence should be addressed to Hongyuan Gao: gaohongyuan@hrbeu.edu.cn

Received 11 June 2020; Revised 31 August 2020; Accepted 15 October 2020; Published 5 November 2020

Academic Editor: Pietro Bia

Copyright © 2020 Jie Yang and Hongyuan Gao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Face recognition is an important technology with practical application prospects. One of the most popular classifiers for face recognition is the support vector machine (SVM). However, the selection of the penalty parameter and the kernel parameter determines the performance of SVM, which is the major challenge for SVM in solving classification problems. In this paper, with a view to obtaining the optimal SVM model for face recognition, a new hybrid intelligent algorithm is proposed for the multiparameter optimization problem of SVM. It is a fusion of the cultural algorithm (CA) and the emperor penguin optimizer (EPO), namely, the cultural emperor penguin optimizer (CEPO). The key aim of CEPO is to enhance the exploitation capability of EPO with the help of the cultural algorithm basic framework. The performance of CEPO is evaluated on six well-known benchmark test functions against eight state-of-the-art algorithms. To verify the performance of CEPO-SVM, particle swarm optimization-based SVM (PSO-SVM), genetic algorithm-based SVM (GA-SVM), CA-SVM, EPO-SVM, moth-flame optimization-based SVM (MFO-SVM), grey wolf optimizer-based SVM (GWO-SVM), cultural firework algorithm-based SVM (CFA-SVM), and emperor penguin and social engineering optimizer-based SVM (EPSEO-SVM) are used for the comparison experiments. The experimental results confirm that the parameters optimized by CEPO are more instructive in making the classification performance of SVM better in terms of accuracy, convergence rate, stability, robustness, and run time.

1. Introduction

As an important branch of pattern recognition, face recognition has widespread application demands in fields such as intelligent monitoring, virtual reality, medical examination, and human-computer interaction. The key step in face recognition is classifier design, namely, how to use the extracted features to classify new face images. Recently, many classifiers have been presented, such as the decision tree [1], the neural network [2], k-nearest neighbor (KNN) [3], and the support vector machine (SVM) [4], among which SVM is the most popular one.

Although SVM has the advantages of good generalization ability, small training-set size requirements, and high noise stability, it sometimes suffers from overfitting problems. To address this problem, some researchers have applied the genetic algorithm (GA) [5], the particle swarm optimizer (PSO) [6], and simulated annealing (SA) [7] to the parameter optimization of SVM. However, the implementation of GA is complex, although its search process has the characteristic of exclusivity, which makes the convergence speed slow. PSO and SA have been successfully applied to some engineering optimization problems, but they have poor global search ability for parameter optimization. Therefore, it is worthwhile to design a new intelligent algorithm with superior performance for the parameter optimization of SVM.

In the face of complex optimization problems that are difficult to handle effectively with traditional methods, many new nature-inspired algorithms have been proposed, which solve problems by simulating natural rules and the social behaviours of biological populations. Recently, algorithms such as the grey wolf optimizer (GWO) [8], moth-flame optimization (MFO) [9], the tunicate swarm algorithm (TSA) [10], and the dice game optimizer (DGO) [11] have been widely used. In 2018, Dhiman and Kumar proposed a novel bio-inspired algorithm named the emperor penguin optimizer [12], which is inspired by the huddling behavior of emperor penguins. However, experimental results showed that the convergence rate of EPO decreases continuously in the later stage, which affects the efficiency of optimization. Afterwards, Baliarsingh et al. proposed a hybrid algorithm combining EPO with the social engineering optimizer (SEO), namely, the emperor penguin and social engineering optimizer (EPSEO) [13], to improve the performance of EPO.

The cultural algorithm (CA) is a bio-inspired intelligent algorithm proposed by Reynolds in 1994 [14], which simulates the cultural evolution of human society. CA is a double-layer evolutionary mechanism that continuously exploits the various kinds of knowledge accumulated in the belief space to guide the evolution of the population space and accelerate the optimization efficiency of the algorithm. Therefore, it provides a space that allows other evolutionary algorithms to be embedded so that the two can cooperate and promote each other. Recently, some researchers have proposed PSO, the bacterial foraging algorithm (BFA), and the firework algorithm (FA) as population spaces to integrate with CA [15–17] and verified that the performance of the resulting hybrid algorithms is significantly improved.

Based on the above observations, and due to the facts that CA with a single population does not make full use of the knowledge of the belief space and that the convergence speed of EPO decreases continuously in the later stage, we propose a hybrid metaheuristic algorithm named the cultural emperor penguin optimizer (CEPO), built on the embedded framework provided by CA. In CEPO, the hybrid of two different evolution processes for the positions is an effective mechanism to enhance the exploitation capability of EPO.

Specifically, CEPO is implemented on six well-known benchmark test functions and compared with eight state-of-the-art optimization algorithms. Furthermore, CEPO is applied to the multiparameter optimization problem of SVM for face recognition. In the feature extraction process, feature representation vectors of the face images are obtained by a Gabor filter and Principal Component Analysis (PCA) and serve as the input to CEPO-SVM. PCA is used to compensate for the disadvantages of the Gabor wavelet transform caused by the increased dimensionality. During feature classification, the penalty parameter and kernel parameter of SVM are optimized by CEPO to obtain a training model with high classification accuracy. To prove the performance of CEPO-SVM, eight state-of-the-art optimization algorithm-based SVMs are used for the comparison experiments in terms of accuracy, convergence rate, stability, robustness, and run time.

The main contributions of this paper are as follows:

(1) A new hybrid intelligent algorithm named CEPO, inspired by EPO and the framework of CA, is proposed.

(2) CEPO is applied to the automatic learning of the parameters of SVM to solve the classification problem for face recognition.

(3) We demonstrate that the performance of CEPO-SVM is superior on the basis of accuracy, convergence rate, stability, robustness, and run time compared with eight state-of-the-art algorithm-based SVMs.

(4) CEPO, as a multidimensional search algorithm, can not only obtain the optimal penalty parameter and kernel parameter of SVM but can also be employed in other classifiers and other real-number optimization problems.

The rest of the work is organized as follows. Section 2 describes the feature extraction of face images, including the Gabor face image representation and dimensionality reduction by Principal Component Analysis. Section 3 presents the formulation of the parameter optimization of SVM for classification as an optimization problem. The design of the cultural emperor penguin optimizer for the parameter optimization of SVM is depicted in Section 4. Section 5 describes the experimental setup. In Section 6, the experiments and result analysis are presented. Section 7 provides the conclusion and future works.

2. The Feature Extraction for Face Images

At the preprocessing stage, each image is first converted into grayscale and then adjusted to the same size. Due to the superior robustness of the Gabor filter to changes in the brightness and attitude of face images, the Gabor filter is used to capture more useful features from the face images. Furthermore, to make up for the disadvantages of the Gabor wavelet transform caused by the increased dimensionality, PCA is applied for dimensionality reduction. After the preprocessing stage, the face images are divided into training sets and testing sets to obtain the feature representation vectors, which serve as the input to the SVM. Figure 1 shows the block diagram of face recognition based on Gabor-PCA.

2.1. Gabor Face Image Representation. The two-dimensional Gabor filter can be defined as follows:

$$\varphi_{o,r}(z) = \frac{\|W_{o,r}\|^2}{\sigma^2}\,\exp\left(-\frac{\|W_{o,r}\|^2\,\|z\|^2}{2\sigma^2}\right)\left[\exp\left(jW_{o,r}\cdot z\right) - \exp\left(-\frac{\sigma^2}{2}\right)\right], \quad (1)$$

where z = (x, y) defines the pixel position in the spatial domain, o defines the orientation of the Gabor filter, r defines the scale of the Gabor filter, σ represents the radius of the Gaussian function, which is used to limit the size of the Gabor filter, and W_{o,r} is the wave vector of the filter at orientation o and scale r [18, 19]. In this paper, with five different scales, r ∈ {0, ..., 4}, and eight different orientations, o ∈ {0, ..., 7}, 40 different Gabor filters can be obtained. The real part and the magnitude of the 40 Gabor filters are shown in Figures 2 and 3, which exhibit desirable characteristics of spatial locality, spatial frequency, and orientation selectivity.
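The filter bank described above can be sketched in Python with NumPy. Equation (1) is sampled on a discrete grid; the wave vector convention W_{o,r} = (k_r cos φ_o, k_r sin φ_o) with k_r = k_max/f^r and φ_o = πo/8, and the parameter values σ = 2π, k_max = π/2, f = √2, are common choices from the Gabor-face literature and are assumptions here, not values stated in this paper.

```python
import numpy as np

def gabor_filter(size, o, r, sigma=2 * np.pi, k_max=np.pi / 2, f=np.sqrt(2)):
    """Sample the 2-D Gabor filter of equation (1) on a size x size grid.

    Assumed convention: W_{o,r} has magnitude k_r = k_max / f**r and
    orientation phi_o = pi * o / 8 (the paper does not spell this out).
    """
    k = k_max / f ** r                      # magnitude of the wave vector W
    phi = np.pi * o / 8.0                   # orientation angle
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    w_sq = k ** 2                           # ||W_{o,r}||^2
    z_sq = x ** 2 + y ** 2                  # ||z||^2
    envelope = (w_sq / sigma ** 2) * np.exp(-w_sq * z_sq / (2 * sigma ** 2))
    # complex carrier minus the DC-compensation term of equation (1)
    carrier = (np.exp(1j * (k * np.cos(phi) * x + k * np.sin(phi) * y))
               - np.exp(-sigma ** 2 / 2))
    return envelope * carrier

# Build the bank of 40 filters: five scales, eight orientations.
bank = [[gabor_filter(31, o, r) for o in range(8)] for r in range(5)]
```

Each entry of `bank` is a complex 31×31 kernel; the real part and magnitude of these kernels correspond to what Figures 2 and 3 display.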

Let Y(z) represent the grey level distribution of an image; the Gabor wavelet representation of the image can then be given by

$$G_{o,r}(z) = Y(z) \ast \varphi_{o,r}(z), \quad (2)$$

where G_{o,r}(z) defines the convolution result between the image Y(z) and the Gabor filter φ_{o,r}(z) at orientation o and scale r. To improve the computation speed, the convolution can be computed via the FFT [20]. To capture the different spatial localities, spatial frequencies, and orientation selectivities, all these representation results are concatenated to derive an augmented feature vector Q. In [19], before the concatenation, each G_{o,r}(z) is downsampled by a factor δ at every scale and orientation. Therefore, the augmented Gabor feature vector Q^{(δ)} can be given by

$$Q^{(\delta)} = \left(\left(G^{(\delta)}_{0,0}\right)^T, \left(G^{(\delta)}_{0,1}\right)^T, \ldots, \left(G^{(\delta)}_{4,7}\right)^T\right)^T, \quad (3)$$

where T denotes the transpose operator, δ is the downsampling factor, and G^{(δ)}_{o,r} is the concatenated column vector.
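A minimal sketch of equations (2) and (3): FFT-based convolution with each filter, downsampling by δ, and concatenation into the augmented vector. Taking the magnitude of the complex response, δ = 4, and the random test inputs are illustrative assumptions.

```python
import numpy as np

def gabor_features(image, filters, delta=4):
    """Equations (2)-(3): convolve the image with every Gabor filter via
    the FFT, downsample each magnitude response by a factor delta, and
    concatenate the results into the augmented feature vector Q^(delta).

    `filters` is any iterable of complex 2-D kernels (for instance the
    40-filter bank of equation (1)); delta=4 is an illustrative choice.
    """
    H, W = image.shape
    F_img = np.fft.fft2(image)
    parts = []
    for phi in filters:
        F_phi = np.fft.fft2(phi, s=(H, W))   # kernel spectrum at image size
        G = np.fft.ifft2(F_img * F_phi)      # equation (2), circular conv.
        mag = np.abs(G)[::delta, ::delta]    # downsample by delta
        parts.append(mag.ravel())
    return np.concatenate(parts)             # equation (3)

rng = np.random.default_rng(0)
img = rng.random((32, 32))
demo_bank = [rng.random((7, 7)) + 1j * rng.random((7, 7)) for _ in range(40)]
Q = gabor_features(img, demo_bank, delta=4)
# 40 filters x (8 x 8) downsampled responses -> 2560 components
```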

[Figure 1: Block diagram of face recognition based on Gabor-PCA. All face images are preprocessed; in the feature extraction stage, the Gabor wavelet representation of the training and testing images is computed, all the representation results are concatenated into the augmented feature vector, downsampled by a factor δ, and reduced in dimensionality by PCA; the resulting feature representation vectors are fed to the SVM to obtain the face recognition accuracy.]

Figure 2: The real part of the 40 Gabor filters.

Figure 3: The magnitude of the 40 Gabor filters.

2.2. Principal Component Analysis for Feature Optimization. The augmented Gabor feature vector introduced in equation (3) has very high dimensionality, Q^{(δ)} ∈ R^O, where O is the dimensionality of the vector space. Obviously, the high dimensionality seriously affects the computing speed and recognition rate in the process of face recognition. With PCA [21], we can find an orthogonal basis for the feature vector, sort the dimensions in order of importance, and discard the low-significance dimensions. Let Σ_{Q^{(δ)}} ∈ R^{O×O} be the covariance matrix of the augmented feature vector Q^{(δ)}:

$$\Sigma_{Q^{(\delta)}} = E\left\{\left[Q^{(\delta)} - E\left(Q^{(\delta)}\right)\right]\left[Q^{(\delta)} - E\left(Q^{(\delta)}\right)\right]^T\right\}, \quad (4)$$

where E(·) is the expectation operator. The covariance matrix Σ_{Q^{(δ)}} can be factorized into the following form:

$$\Sigma_{Q^{(\delta)}} = \Omega\Lambda\Omega^T, \quad (5)$$

$$\Omega = \left[U_1, U_2, \ldots, U_O\right], \quad (6)$$

$$\Lambda = \operatorname{diag}\left\{p_1, p_2, \ldots, p_O\right\}, \quad (7)$$

where Ω ∈ R^{O×O} is an orthogonal eigenvector matrix, Λ ∈ R^{O×O} is a diagonal eigenvalue matrix, and the diagonal elements are arranged in descending order (p_1 ≥ p_2 ≥ ··· ≥ p_O).

An important property of PCA is that it is the optimal signal reconstruction in the sense of minimum mean-square error when a subset of principal components is used to represent the original signal [19]. Following this property, PCA can be applied to dimensionality reduction:

$$X^{(\delta)} = S^T Q^{(\delta)}, \quad (8)$$

where S = [U_1, U_2, ..., U_J], J < O, and S ∈ R^{O×J}. The lower-dimensional vector X^{(δ)} ∈ R^J captures the most expressive features of the original data Q^{(δ)}.
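Equations (4)-(8) amount to an eigendecomposition of the sample covariance followed by projection onto the J leading eigenvectors. A sketch with NumPy, where the sample-mean estimate of E(·) is an implementation choice:

```python
import numpy as np

def pca_project(Q, J):
    """Equations (4)-(8): covariance, eigendecomposition, projection.

    Q is an (N, O) matrix whose rows are augmented Gabor feature vectors;
    returns the (N, J) matrix of projected vectors X^(delta).
    """
    mean = Q.mean(axis=0)
    centred = Q - mean                       # Q - E(Q)
    cov = centred.T @ centred / len(Q)       # equation (4), sample estimate
    eigvals, eigvecs = np.linalg.eigh(cov)   # equations (5)-(7), ascending
    order = np.argsort(eigvals)[::-1]        # descending p_1 >= ... >= p_O
    S = eigvecs[:, order[:J]]                # S = [U_1, ..., U_J]
    return centred @ S                       # X = S^T Q, equation (8)

rng = np.random.default_rng(1)
Q = rng.random((100, 20))
X = pca_project(Q, J=5)
```

`numpy.linalg.eigh` is used because the covariance matrix is symmetric; the columns are reordered so the retained eigenvectors correspond to the largest eigenvalues.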

3. Formulations of Parameter Optimization of SVM for Classification

3.1. Model of Support Vector Machine. The support vector machine (SVM) [22] is a kind of binary classifier with strong learning ability and generalization ability. The key point of SVM is to find the optimal hyperplane for accurate classification of the two classes and to ensure that the classification gap is large enough. Figure 4 shows the optimal hyperplane of SVM, where H indicates the hyperplane and the margin represents the gap between class H1 and class H2.

Suppose the training data V can be given by

$$V = \left\{\left(a_h, b_h\right) \mid a_h \in R^w,\; b_h \in \{-1, +1\},\; h = 1, 2, \ldots, n\right\}, \quad (9)$$

where a_h are the training samples (w-dimensional vectors), b_h are the corresponding labels, and n is the number of training data; each sample must satisfy the criteria a_h ∈ R^w and b_h ∈ {−1, +1}. For linearly separable data, the hyperplane g(a) = 0 that separates the given data can be determined by

$$g(a) = \omega^T a + c, \quad (10)$$

where ω is a w-dimensional weight vector and c is a scalar; ω and c determine the boundary. The optimal hyperplane is found by minimizing the margin objective ‖ω‖²/2 subject to

$$b_h\left(\omega^T a_h + c\right) \ge 1. \quad (11)$$

For data that cannot be separated linearly, the relaxing variables ξ_h are introduced to allow for optimal generalization and classification by controlling their size. Then, the optimal hyperplane can be obtained from the following optimization problem:

$$\Psi(\omega, \xi) = \frac{1}{2}\|\omega\|^2 + C\sum_{h=1}^{n}\xi_h, \quad (12)$$

$$\text{subject to } b_h\left(\omega^T a_h + c\right) \ge 1 - \xi_h,\quad \xi_h \ge 0, \quad (13)$$

where C denotes the penalty parameter. To satisfy the Karush–Kuhn–Tucker (KKT) conditions, the Lagrange multipliers α_h are introduced. The abovementioned optimization problem is converted into the dual quadratic optimization problem:

$$\min_{\alpha}\; \frac{1}{2}\sum_{h=1}^{n}\sum_{k=1}^{n} b_h b_k \alpha_h \alpha_k \left(a_h \cdot a_k\right) - \sum_{k=1}^{n}\alpha_k, \quad (14)$$

$$\text{subject to } \sum_{h=1}^{n} b_h \alpha_h = 0,\quad 0 \le \alpha_h \le C. \quad (15)$$

Thus, the ultimate classification function can be obtained by solving the dual optimization problem:

$$g(a) = \operatorname{sgn}\left\{\left(\sum_{h=1}^{n}\alpha_h b_h\left(a_h \cdot a\right)\right) + c\right\}, \quad (16)$$

[Figure 4: The schematic model of a linear SVM, showing the hyperplane H, the boundary hyperplanes H1 and H2, and the margin between them.]

where sgn represents the symbolic function. Kernel functions can be used to map the data in the low-dimensional input space to a high-dimensional space by a nonlinearity. Therefore, the linear indivisibility of the input can be transformed into a linearly separable problem by this mapping. The kernel function can be defined as follows:

$$K\left(a_h, a_k\right) = \left(\psi\left(a_h\right)\cdot\psi\left(a_k\right)\right). \quad (17)$$

Then, the optimal function can be given by

$$g(a) = \operatorname{sgn}\left\{\left(\sum_{h=1}^{n}\alpha_h b_h K\left(a_h, a\right)\right) + c\right\}. \quad (18)$$

Several types of kernel functions, such as the Linear Kernel, the Polynomial Kernel, and the Radial Basis Function (RBF), are widely used. However, the RBF has the advantages of realizing a nonlinear mapping, fewer parameters, and fewer numerical difficulties. In this paper, the RBF is selected for face recognition, which can be represented as follows:

$$K(a, b) = \exp\left(-\gamma\|a - b\|^2\right), \quad (19)$$

where γ denotes the kernel parameter.

3.2. Objective Function of SVM. With a view to obtaining a training model with high classification accuracy, the penalty parameter C, which is used to adjust the confidence scale, and the kernel parameter γ, which determines the width of the kernel and the range of the data points, can be optimized by an effective intelligent algorithm. Furthermore, the mean square error of the SVM outputs can be selected as the objective function of the intelligent algorithm:

$$\text{objective} = \frac{1}{n}\sum_{h=1}^{n}\left(b_h - \hat{b}_h\right)^2, \quad (20)$$

where b̂_h is the output value predicted by the model and b_h is the actual value.
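The RBF kernel of equation (19) and the objective of equation (20) can be sketched directly; treating equation (20) as the mean squared error between predicted and true labels is our reading of the text.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Equation (19): K(a, b) = exp(-gamma * ||a - b||^2) for all pairs
    of rows of A (m, w) and B (n, w); returns an (m, n) Gram matrix."""
    sq = (np.sum(A ** 2, axis=1)[:, None]
          + np.sum(B ** 2, axis=1)[None, :]
          - 2.0 * A @ B.T)                   # pairwise squared distances
    return np.exp(-gamma * sq)

def objective(predicted, actual):
    """Equation (20): mean squared error between predicted and true labels."""
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    return np.mean((actual - predicted) ** 2)

A = np.array([[0.0, 0.0], [1.0, 0.0]])
K = rbf_kernel(A, A, gamma=1.0)
# diagonal entries are exp(0) = 1; the off-diagonal entry is exp(-1)
err = objective([1, -1, 1, 1], [1, -1, -1, 1])   # one of four labels wrong
```

In the CEPO-SVM setting, this `objective` value for a candidate (C, γ) pair is the fitness that the optimizer minimizes.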

4. Design of Cultural Emperor Penguin Optimizer for Parameter Optimization of SVM

4.1. The Design of Cultural Emperor Penguin Optimizer. EPO is a novel optimization algorithm presented by Dhiman and Kumar in 2018 [12], which is inspired by the huddling behavior of emperor penguins. In this paper, a hybrid algorithm named CEPO is proposed to solve real-number optimization problems with the help of the cultural algorithm basic framework shown in Figure 5. The key idea of CEPO is to obtain problem-solving knowledge from the huddling behavior of the emperor penguins and, in return, to make use of that knowledge to guide the evolution of the emperor penguin population.

Suppose CEPO is designed for a general minimum optimization problem:

$$\min\, f\left(x_i\right), \quad (21)$$

where x_i = (x_{i1}, x_{i2}, ..., x_{iD}) denotes the position of the ith emperor penguin in the D-dimensional search space, x_j^{min} < x_{ij} < x_j^{max} (j = 1, 2, ..., D), f(·) is the objective function, and f(x_i) denotes the objective value of the position x_i. x_j^{min} and x_j^{max} represent the lower and upper boundaries of the position of the emperor penguins in the jth dimension.

The belief space of the emperor penguin population in the tth generation is defined by s^t and N_j^t, where s^t is the situational knowledge component and N_j^t is the normative knowledge, which represents the value-space information for each parameter in the jth dimension in the tth generation. N_j^t is denoted by ⟨I_j^t, L_j^t, U_j^t⟩, where I_j^t = [l_j^t, u_j^t] represents the interval of the normative knowledge in the jth dimension. The lower boundary l_j^t and the upper boundary u_j^t are initialized according to the value range of the variables given by the problem. L_j^t represents the objective value of the lower boundary l_j^t of the jth parameter, and U_j^t represents the objective value of the upper boundary u_j^t of the jth parameter.

The acceptance function is used to select the emperor penguins that directly influence the current belief space. In CEPO, the acceptance function selects the top 20% of cultural individuals from the current population space to update the belief space.

Situational knowledge s^t can be updated by the update function:

$$s^{t+1} = \begin{cases} x_{best}^{t+1}, & \text{if } f\left(x_{best}^{t+1}\right) < f\left(s^t\right), \\ s^t, & \text{otherwise}, \end{cases} \quad (22)$$

where x_{best}^{t+1} is the optimal position of the emperor penguin population space in the (t + 1)th generation.

[Figure 5: The cultural algorithm basic framework. The population space evolves through Generate(), Select(), and Objective(), while Accept() passes selected individuals to the belief space, Update() maintains the belief space, and Influence() guides the population space in return.]

Assume that, for the qth cultural individual, a random variable θ_q lying in the range [0, 1] is produced. The qth cultural individual affects the lower boundary of the normative knowledge in the jth dimension when θ_q < 0.5. The normative knowledge N_j^t can be updated by the update function:

$$l_j^{t+1} = \begin{cases} x_{qj}^{t+1}, & \text{if } x_{qj}^{t+1} \le l_j^t \text{ or } f\left(x_q^{t+1}\right) < L_j^t, \\ l_j^t, & \text{otherwise}, \end{cases}$$

$$L_j^{t+1} = \begin{cases} f\left(x_q^{t+1}\right), & \text{if } x_{qj}^{t+1} \le l_j^t \text{ or } f\left(x_q^{t+1}\right) < L_j^t, \\ L_j^t, & \text{otherwise}. \end{cases} \quad (23)$$

The qth cultural individual affects the upper boundary of the normative knowledge in the jth dimension when θ_q ≥ 0.5:

$$u_j^{t+1} = \begin{cases} x_{qj}^{t+1}, & \text{if } x_{qj}^{t+1} \ge u_j^t \text{ or } f\left(x_q^{t+1}\right) < U_j^t, \\ u_j^t, & \text{otherwise}, \end{cases}$$

$$U_j^{t+1} = \begin{cases} f\left(x_q^{t+1}\right), & \text{if } x_{qj}^{t+1} \ge u_j^t \text{ or } f\left(x_q^{t+1}\right) < U_j^t, \\ U_j^t, & \text{otherwise}. \end{cases} \quad (24)$$
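The belief-space maintenance of equations (22)-(24) can be sketched as a single AdjustCulture-style routine. The dict layout, the function names, and the sphere objective used in the demo are illustrative assumptions, not the authors' code.

```python
import numpy as np

def adjust_culture(belief, elite_pos, elite_fit, f, rng):
    """Sketch of the belief-space update of equations (22)-(24).

    `belief` holds the situational knowledge `s` (best position seen) and,
    per dimension j, the normative interval [l_j, u_j] with boundary
    objective values L_j, U_j.  `elite_pos`/`elite_fit` are the accepted
    top-20% individuals and their objective values; f is the objective.
    """
    # situational knowledge, equation (22)
    best = int(np.argmin(elite_fit))
    if elite_fit[best] < f(belief["s"]):
        belief["s"] = elite_pos[best].copy()

    D = len(belief["l"])
    for x, fx in zip(elite_pos, elite_fit):
        theta = rng.random()
        if theta < 0.5:                      # lower boundary, equation (23)
            for j in range(D):
                if x[j] <= belief["l"][j] or fx < belief["L"][j]:
                    belief["l"][j], belief["L"][j] = x[j], fx
        else:                                # upper boundary, equation (24)
            for j in range(D):
                if x[j] >= belief["u"][j] or fx < belief["U"][j]:
                    belief["u"][j], belief["U"][j] = x[j], fx
    return belief

f = lambda x: float(np.sum(x ** 2))          # sphere objective (demo only)
rng = np.random.default_rng(2)
belief = {"s": np.array([5.0, 5.0]),
          "l": np.array([-10.0, -10.0]), "L": np.array([200.0, 200.0]),
          "u": np.array([10.0, 10.0]),   "U": np.array([200.0, 200.0])}
elite = np.array([[1.0, 1.0], [0.5, -0.5]])
belief = adjust_culture(belief, elite, np.array([f(e) for e in elite]), f, rng)
```

After the call, the situational knowledge has moved to the best elite individual, and the normative intervals have been tightened toward the elite positions.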

Situational knowledge and normative knowledge can be used to guide the evolution of the emperor penguin population through the influence function. In CEPO, a selection operator β is produced to select one of two ways of influencing the evolution of the emperor penguin population:

$$\beta = \frac{\text{Max}_{\text{iteration}} - t}{\text{Max}_{\text{iteration}}}, \quad (25)$$

where Max_iteration denotes the maximum number of iterations.

Assume that, for the ith emperor penguin, a random variable λ_i lying in the range [0, 1] is produced. The first way is to update the position of the emperor penguin by changing the search size and direction of the variation with the belief space, which is applied when λ_i ≤ β. The position of the emperor penguin in the jth dimension can be updated by

$$x_{ij}^{t+1} = \begin{cases} x_{ij}^t + \left|\operatorname{size}\left(I_j^t\right)\cdot N(0,1)\right|, & \text{if } x_{ij}^t < s_j^t, \\ x_{ij}^t - \left|\operatorname{size}\left(I_j^t\right)\cdot N(0,1)\right|, & \text{if } x_{ij}^t > s_j^t, \\ x_{ij}^t + \eta\cdot\operatorname{size}\left(I_j^t\right)\cdot N(0,1), & \text{otherwise}, \end{cases} \quad (26)$$

where N(0, 1) is a random number subject to the standard normal distribution, size(I_j^t) is the length of the adjustable interval of the jth parameter in the belief space in the tth generation, and η is set in the range [0.01, 0.6] following [14].
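Equation (26) can be sketched as follows; the array layout and η = 0.3 (a value inside the quoted [0.01, 0.6] range) are illustrative choices, not the authors' settings.

```python
import numpy as np

def influence_update(x, s, interval_len, eta=0.3, rng=None):
    """Equation (26): move an emperor penguin toward the situational
    knowledge s using the normative interval lengths size(I_j).

    x, s, interval_len are length-D arrays.  A minimal sketch, not the
    authors' implementation.
    """
    if rng is None:
        rng = np.random.default_rng()
    new = np.empty_like(x)
    for j in range(len(x)):
        step = interval_len[j] * rng.standard_normal()
        if x[j] < s[j]:
            new[j] = x[j] + abs(step)     # move upward toward s_j
        elif x[j] > s[j]:
            new[j] = x[j] - abs(step)     # move downward toward s_j
        else:
            new[j] = x[j] + eta * step    # small random perturbation
    return new

rng = np.random.default_rng(3)
x = np.array([0.0, 4.0])
s = np.array([2.0, 1.0])
moved = influence_update(x, s, interval_len=np.array([1.0, 1.0]), rng=rng)
```

The sign of each step is dictated by the position relative to the situational knowledge, so every dimension drifts toward s regardless of the random draw.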

The other way follows a series of steps from EPO, namely, huddle boundary generation, computation of the temperature profile around the huddle, calculation of the distance between emperor penguins, and the position update of the emperor penguins; it is carried out when λ_i > β. The specific steps can be represented as follows:

$$T' = T - \frac{t - \text{Max}_{\text{iteration}}}{\text{Max}_{\text{iteration}}}, \qquad T = \begin{cases} 0, & \text{if } R \ge 0.5, \\ 1, & \text{if } R < 0.5, \end{cases} \quad (27)$$

where T′ represents the temperature profile around the huddle, T is the time for finding the best optimal solution, and R is a random variable lying in the range [0, 1].

$$D_{ep}^t = \left|S_{ep}\left(A^t\right)\cdot x_{best}^t - B^t\cdot x_i^t\right|, \quad (28)$$

where D_{ep}^t denotes the distance between the emperor penguin and the optimal solution, x_{best}^t represents the current optimal solution found in the emperor penguin population space in the tth generation, S_{ep} represents the social forces of the emperor penguins responsible for convergence towards the optimal solution, and A^t and B^t are used to avoid collisions between neighboring emperor penguins, with B^t a random variable lying in the range [0, 1]. A^t can be computed as follows:

$$A^t = \left(M \times \left(T' + P_{grid}^t(\text{Accuracy})\right) \times \text{rand}()\right) - T', \qquad P_{grid}^t(\text{Accuracy}) = \left|x_{best}^t - x_i^t\right|, \quad (29)$$

where M is the movement parameter, which holds a gap between emperor penguins for collision avoidance, and P_{grid}^t(Accuracy) defines the absolute difference obtained by comparing the positions of the emperor penguins. S_{ep}(A^t) in equation (28) is computed as follows:

$$S_{ep}\left(A^t\right) = \sqrt{\left|\varepsilon\cdot e^{-t/\rho} - e^{-t}\right|}, \quad (30)$$

where e represents the base of the natural logarithm, and ε and ρ are two control parameters for better exploration and exploitation, lying in the ranges [1.5, 2] and [2, 3], respectively. Ultimately, the position of the emperor penguin is updated as follows:

$$x_i^{t+1} = x_{best}^t - A^t\cdot D_{ep}^t. \quad (31)$$

The algorithm procedure of CEPO is sketched in Algorithm 1.
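The EPO-style branch, equations (27)-(31), can be sketched as one position update. The values of M, ε, and ρ are picked inside the ranges quoted in the text, and the sketch should not be read as the authors' exact implementation.

```python
import numpy as np

def epo_update(x, x_best, t, max_iter, M=2.0, eps=1.8, rho=2.5, rng=None):
    """One EPO-style huddling move for a penguin at position x.

    M, eps, rho are assumed values inside the ranges quoted in the text
    ([1.5, 2] for eps, [2, 3] for rho); the original EPO paper's exact
    constants may differ.
    """
    if rng is None:
        rng = np.random.default_rng()
    R = rng.random()
    T = 0.0 if R >= 0.5 else 1.0
    T_prime = T - (t - max_iter) / max_iter               # equation (27)
    P_grid = np.abs(x_best - x)                           # accuracy term of (29)
    A = M * (T_prime + P_grid) * rng.random() - T_prime   # equation (29)
    S = np.sqrt(np.abs(eps * np.exp(-t / rho) - np.exp(-t)))  # equation (30)
    D = np.abs(S * x_best - rng.random() * x)             # equation (28), B ~ U(0, 1)
    return x_best - A * D                                 # equation (31)

rng = np.random.default_rng(4)
x_new = epo_update(np.array([3.0, -2.0]), np.array([0.5, 0.5]),
                   t=10, max_iter=100, rng=rng)
```

In CEPO, each penguin takes this move when λ_i > β and the belief-space move of equation (26) otherwise, so the huddling behavior dominates late in the run as β shrinks.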

4.2. CEPO for Parameter Optimization of SVM. During the iterative search process, each emperor penguin adjusts its position by comparing objective function values; the position x_i encodes the parameter pair (C, γ), which is used for the repeated training of the SVM and the update of each emperor penguin. Figure 6 shows a schematic diagram of SVM based on CEPO.

5. The Experiments

5.1. Face Databases. In the following experiments, to test our proposed model, we select four face databases that are commonly and internationally used, i.e., YALE (http://cvc.yale.edu/projects/yalefaces/yalefaces.html), ORL (http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html), UMIST (http://images.ee.umist.ac.uk/danny/database.html), and FERET (http://www.frvt.org). Specifically, the YALE database contains 165 images of 15 individuals, including changes in illumination, expression, and posture. The ORL database consists of 400 photographs of 40 human subjects with different expressions and variations at different scales and orientations. The UMIST database includes 564 images of 20 human subjects with different angles and poses. The FERET database consists of a total of 1400 images of 200 human subjects corresponding to different poses, expressions, and illumination conditions. To fully verify the validity and rationality of the experiments, the configuration of training and testing samples on the four face databases is shown in Table 1.

Algorithm 1: The algorithm procedure of CEPO.

Input: the emperor penguin population x_i (i = 1, 2, ..., m)
Output: the optimal position of the emperor penguin
(1) Procedure CEPO
(2)   Initialize the population size m, Max_iteration, and the parameters of EPO and CA
(3)   Initialize the situational knowledge and normative knowledge of the belief space
(4)   Initialize the population space of EPs
(5)   Compute and sort the fitness of each EP
(6)   Initialize the belief space with acceptance proportion top 20% via AdjustCulture
(7)   While (t < Max_iteration) do
(8)     Calculate T and T′ using equation (27)
(9)     For i ← 1 to m do
(10)      λ_i ← Rand()
(11)      For j ← 1 to D do
(12)        If (λ_i ≤ β), with β computed by equation (25)
(13)          Update the position of the EP using equation (26)
(14)        Else
(15)          Compute A^t and B^t used in equation (29)
(16)          Compute S_ep(A^t) using equation (30)
(17)          Compute the new position of the EP using equations (28) and (31)
(18)          Compute the fitness of the current EP and keep the better of the new and previous positions
(19)        End if
(20)      End for
(21)    End for
(22)    Amend EPs which cross the boundary
(23)    Sort the fitness values of the EPs
(24)    Update the belief space with acceptance proportion top 20% via AdjustCulture
(25)    t ← t + 1
(26)  End while
(27)  Return the optimal position
(28) End procedure
(29) Procedure AdjustCulture
(30)   Update the situational knowledge using equation (22)
(31)   For q ← 1 to m/5 do
(32)     θ_q ← Rand()
(33)     If (θ_q < 0.5)
(34)       For j ← 1 to D do: update the lower boundary of the normative knowledge using equation (23)
(35)     Else
(36)       For j ← 1 to D do: update the upper boundary of the normative knowledge using equation (24)
(37)     End if
(38)   End for
(39) End procedure

5.2. State-of-the-Art Algorithms for Comparison. To validate the performance of CEPO, eight state-of-the-art metaheuristics are used for comparison. These are Moth-Flame Optimization (MFO) [9], Grey Wolf Optimizer (GWO) [8], Particle Swarm Optimization (PSO) [6], Genetic Algorithm (GA) [5], Cultural Algorithm (CA) [14], Emperor Penguin Optimizer (EPO) [12], Cultural Firework Algorithm (CFA) [17], and Emperor Penguin and Social Engineering Optimizer (EPSEO) [13].

5.3. Experimental Setup. The experimental environment is MATLAB 2018a with LIBSVM, on a computer with an Intel Core i5 processor and 16 GB of RAM. Table 2 shows the parameter settings of the proposed CEPO and the eight competitor intelligent algorithms, i.e., MFO, GWO, PSO, GA, CA, EPO, CFA, and EPSEO. The parameter values of these algorithms are set as recommended in their original papers.

6. Simulation Results and Discussion

6.1. Performance of CEPO Comparison Experiment. Six benchmark test functions are applied to verify the superiority and applicability of CEPO. Table 3 shows the details of the six benchmark test functions, including the single-peak benchmark functions Sphere (F1) and Schwefel 2.22 (F2) and the multipeak benchmark functions Rastrigin (F3), Griewank (F4), Ackley (F5), and Schaffer (F6). For each benchmark test function, each algorithm carries out thirty independent experiments. The mean and the standard deviation of the thirty independent results are shown in Table 4.

As we can see, the statistical results of CEPO on the six benchmark test functions are clearly superior to those of the other eight comparison algorithms. Under the same optimization conditions, both evaluation indexes of CEPO are better by several or even ten orders of magnitude, which proves that CEPO has strong optimization performance. The optimal mean index shows that CEPO maintains a high overall optimization level across the thirty independent experiments and implies better single-run optimization accuracy, and the optimal standard deviation index proves that CEPO has strong optimization stability and robustness. Furthermore, under the same test constraints, CEPO attains high optimization accuracy on both the single-peak functions (F1-F2) and the multipeak functions (F3-F6), which indicates strong local exploitation performance and good convergence accuracy; in particular, on the four multipeak functions (F3-F6) it still maintains a high-quality objective value, which verifies its better global exploration performance and stronger ability to escape local extrema. If target-value optimization precision is taken as the evaluation index, the preset precision threshold can be set as tight as 1.0E-12; for functions F1, F3, and F6, the threshold can even be tightened to 1.0E-25 while still ensuring the success rate of optimization. This precision comfortably meets the error tolerance of engineering problems, which further proves that CEPO has good application potential in engineering optimization problems.

Figure 6: Block diagram of SVM based on CEPO. The loop: initialize the emperor penguin population and set the parameters; train the SVM; calculate the objective value; update the best solution; if the termination criterion is not met (N), repeat; otherwise (Y), output the optimal parameters of SVM, yielding the CEPO-SVM model.
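The fitness evaluation inside this loop maps each penguin's position to a candidate (C, c) pair and scores it by recognition accuracy. Below is a hedged sketch assuming positions normalized to [0, 1] and illustrative search ranges (the paper does not state its ranges here); `decode_position`, `fitness`, and `accuracy_fn` are hypothetical names standing in for the LIBSVM train/validate step.

```python
def decode_position(position, c_range=(0.01, 100.0), g_range=(0.0001, 1.0)):
    """Map a position vector in [0, 1]^2 to (C, gamma).
    The search ranges are illustrative assumptions, not the paper's values."""
    (c_lo, c_hi), (g_lo, g_hi) = c_range, g_range
    C = c_lo + position[0] * (c_hi - c_lo)
    gamma = g_lo + position[1] * (g_hi - g_lo)
    return C, gamma

def fitness(position, accuracy_fn):
    """Fitness of one emperor penguin: recognition accuracy of an SVM trained
    with the decoded (C, gamma). `accuracy_fn` stands in for SVM training and
    validation on the face database."""
    C, gamma = decode_position(position)
    return accuracy_fn(C, gamma)
```

The optimizer then maximizes this fitness over the population, and the best decoded pair becomes the CEPO-SVM model's parameters.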

Table 1: Configuration of training and test samples on four face databases.

Databases   Total numbers   Numbers of training samples   Numbers of testing samples
YALE        165             70 percent                    the rest
ORL         400             70 percent                    the rest
UMIST       564             70 percent                    the rest
FERET       200             70 percent                    the rest
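The 70%/30% split of Table 1 can be sketched as follows. Only the 70% figure comes from the paper; the shuffling and the `split_70_30` helper are illustrative assumptions.

```python
import random

def split_70_30(samples, seed=0):
    """Shuffle a sample list and split it 70% training / 30% testing,
    as in Table 1. The shuffle is an assumed detail."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    cut = 7 * len(samples) // 10  # 70% for training
    train = [samples[i] for i in idx[:cut]]
    test = [samples[i] for i in idx[cut:]]
    return train, test
```

For YALE's 165 samples this yields 115 training and 50 testing images; for ORL's 400, 280 and 120.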


6.2. Performance of CEPO-SVM Comparison Experiment. To verify the superiority of the proposed CEPO-SVM classification model for face recognition, CEPO and the eight competitor intelligent algorithms are used to iteratively optimize the penalty parameter C and kernel parameter c of SVM. The average recognition rates of thirty independent runs are shown in Table 5. As can be seen clearly from Table 5, CEPO-SVM achieves the highest accuracy among all nine models. In the case of the YALE face database, an accuracy of 98.1% is obtained by the CEPO-SVM model; for the ORL, UMIST, and FERET face databases, CEPO-SVM achieves accuracies of 97.8%, 98.4%, and 98.5%, respectively. Furthermore, the convergence rates of CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM are examined and depicted in Figures 7(a)-7(d). From this

figure, it can be observed that the recognition rate increases over iterations 1 to 100 on all four face databases. For the YALE database, the accuracy stops improving after the 14th, 22nd, 31st, 19th, 18th, 21st, 20th, 23rd, and 24th iterations using CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM, respectively. Similarly, for the ORL database, no improvement in accuracy is noticed after the 11th, 20th, 22nd, 19th, 15th, 20th, 18th, 21st, and 24th iterations, respectively. In the case of the UMIST database, CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM converge after the 15th, 24th, 25th, 19th, 20th, 26th, 18th, 24th, and 21st iterations, respectively. For the FERET database, no rise in accuracy is

Table 2: Parameter settings of nine algorithms.

Cultural emperor penguin optimizer (CEPO): size of population 80; control parameter ε [1.5, 2]; control parameter ρ [2, 3]; movement parameter M = 2; the constant η = 0.06; maximum iteration 100.

Moth-flame optimization (MFO) [9]: size of population 80; convergence constant [-1, -2]; logarithmic spiral 0.75; maximum iteration 100.

Grey wolf optimizer (GWO) [8]: size of population 80; control parameter [0, 2]; maximum iteration 100.

Particle swarm optimization (PSO) [6]: size of population 80; inertia weight 0.75; cognitive and social coefficients 1.8, 2; maximum iteration 100.

Genetic algorithm (GA) [5]: size of population 80; probability of crossover 0.9; probability of mutation 0.05; maximum iteration 100.

Cultural algorithm (CA) [14]: size of population 80; the constant 0.06; maximum iteration 100.

Emperor penguin optimizer (EPO) [12]: size of population 80; control parameter [1.5, 2]; control parameter [2, 3]; movement parameter 2; maximum iteration 100.

Cultural firework algorithm (CFA) [17]: size of population 80; cost parameters 0.025, 0.2; the constant 0.3; maximum iteration 100.

Emperor penguin and social engineering optimizer (EPSEO) [13]: size of population 80; rate of training 0.2; rate of spotting an attack 0.05; number of attacks 50; control parameter [1.5, 2]; control parameter [2, 3]; movement parameter 2; maximum iteration 100.


seen after the 16th, 22nd, 24th, 19th, 18th, 20th, 18th, 21st, and 25th iterations using CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM, respectively. Obviously, CEPO-SVM converges first among the nine models, which verifies that embedding EPO in the framework provided by CA remarkably enhances the convergence rate. Therefore, the parameters optimized by CEPO are superior and make the classification performance of SVM better.
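The "no improvement after the kth iteration" statistic used above can be computed from an accuracy curve as follows; `convergence_iteration` and the tolerance are illustrative assumptions, not the paper's code.

```python
def convergence_iteration(accuracy_curve, tol=1e-9):
    """Return the 1-based iteration after which the accuracy never improves
    by more than `tol` (the criterion used informally in Section 6.2)."""
    best, best_iter = float("-inf"), 0
    for k, acc in enumerate(accuracy_curve, start=1):
        if acc > best + tol:
            best, best_iter = acc, k
    return best_iter
```

Applied to the per-iteration accuracy traces of Figure 7, this yields the iteration counts quoted in the text.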

6.3. Stability of CEPO-SVM Comparison Experiment. For all nine models, stability is measured by the mean and standard deviation of the penalty parameter C and kernel parameter c. In this experiment, each model runs thirty experiments. Tables 6-9 show the comparison of the mean and standard deviation of the parameters among the nine models on the four face databases. As we can see from Tables 6-9, the mean of the penalty parameter C in CEPO-SVM is smaller than that of the other models on all four face databases, which reduces the possibility of overfitting of the SVM. CEPO-SVM also provides the lowest standard deviation of the penalty parameter C

among all nine models on all four databases, which verifies that the stability of CEPO-SVM is better. For all four databases, the mean and standard deviation of the kernel parameter c are close among the nine models; however, CEPO-SVM still has the minimum standard deviation of the kernel parameter c. Obviously, CEPO is more stable in optimizing the parameters of SVM for face recognition.

6.4. Robustness of CEPO-SVM Comparison Experiment. To prove the robustness of CEPO-SVM, salt-and-pepper noise is added to each image of the original four databases. The generated noise databases are then used for face recognition

Table 3: The details of the six benchmark test functions.

No.  Function       Formulation                                                                                   D    Search range
F1   Sphere         f(x) = Σ_{i=1}^{D} x_i²                                                                       10   [-100, 100]
F2   Schwefel 2.22  f(x) = Σ_{i=1}^{D} |x_i| + Π_{i=1}^{D} |x_i|                                                  10   [-10, 10]
F3   Rastrigin      f(x) = Σ_{i=1}^{D} (x_i² - 10 cos(2πx_i) + 10)                                                10   [-5.12, 5.12]
F4   Griewank       f(x) = Σ_{i=1}^{D} x_i²/4000 - Π_{i=1}^{D} cos(x_i/√i) + 1                                    10   [-600, 600]
F5   Ackley         f(x) = -20 exp(-0.2 √((1/D) Σ_{i=1}^{D} x_i²)) - exp((1/D) Σ_{i=1}^{D} cos(2πx_i)) + 20 + e   10   [-32, 32]
F6   Schaffer       f(x) = 0.5 + ((sin √(x_1² + x_2²))² - 0.5) / (1 + 0.001(x_1² + x_2²))²                        2    [-100, 100]
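For reference, the six functions of Table 3 can be written out directly. This is a sketch: the dimensions and search ranges of the table are enforced by the caller, not by these functions.

```python
import math

# The six benchmark functions of Table 3; all have minimum 0 at the origin.
def sphere(x):                      # F1
    return sum(v * v for v in x)

def schwefel_222(x):                # F2
    return sum(abs(v) for v in x) + math.prod(abs(v) for v in x)

def rastrigin(x):                   # F3
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def griewank(x):                    # F4
    s = sum(v * v for v in x) / 4000.0
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1.0

def ackley(x):                      # F5
    n = len(x)
    a = -20.0 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / n))
    b = -math.exp(sum(math.cos(2.0 * math.pi * v) for v in x) / n)
    return a + b + 20.0 + math.e

def schaffer(x):                    # F6 (2-dimensional)
    x1, x2 = x
    num = math.sin(math.sqrt(x1 * x1 + x2 * x2)) ** 2 - 0.5
    den = (1.0 + 0.001 * (x1 * x1 + x2 * x2)) ** 2
    return 0.5 + num / den
```

`math.prod` requires Python 3.8 or later.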

Table 4: The results of the benchmark test functions by nine algorithms.

Algorithm        F1        F2        F3        F4        F5        F6
MFO    Mean  5.28E-11  1.69E-06  7.21E-10  8.38E-11  2.67E-12  5.09E-13
       Std   4.19E-12  3.77E-06  4.93E-11  3.41E-11  7.19E-12  4.81E-12
GWO    Mean  7.03E-13  4.43E-05  8.48E-12  9.76E-09  3.83E-13  2.92E-14
       Std   6.92E-13  7.82E-05  3.12E-12  4.28E-09  9.06E-12  6.37E-14
PSO    Mean  6.36E-06  6.14E-04  5.56E-04  2.07E-05  2.98E-07  8.14E-06
       Std   4.23E-05  8.25E-03  1.79E-04  6.23E-05  5.24E-06  9.93E-07
GA     Mean  2.55E-07  2.47E-04  5.32E-05  4.95E-05  1.56E-08  3.38E-07
       Std   3.14E-07  8.05E-04  3.87E-05  1.69E-05  6.06E-08  2.72E-07
EPO    Mean  8.57E-06  9.14E-05  3.52E-07  6.89E-07  5.83E-09  4.02E-08
       Std   4.87E-06  2.43E-05  4.99E-07  4.27E-07  7.18E-09  5.39E-08
CA     Mean  4.13E-12  1.49E-05  4.41E-09  6.65E-12  2.33E-07  7.53E-11
       Std   3.78E-13  1.36E-05  2.23E-10  4.89E-13  2.68E-08  6.21E-09
CEPO   Mean  1.28E-27  4.15E-14  6.54E-26  2.13E-21  2.39E-24  7.74E-29
       Std   4.63E-27  2.39E-14  7.16E-26  1.49E-21  4.76E-24  3.22E-29
CFA    Mean  4.23E-23  3.49E-10  2.37E-19  8.01E-17  5.41E-20  1.19E-24
       Std   1.89E-22  8.83E-10  9.44E-19  2.41E-17  2.93E-19  4.32E-23
EPSEO  Mean  8.67E-24  7.25E-11  3.91E-20  6.61E-18  4.25E-21  8.81E-22
       Std   5.42E-24  9.41E-11  8.23E-20  3.93E-18  7.89E-21  6.67E-21

Table 5: Average recognition rates (%) obtained by nine models on four databases.

Model       YALE   ORL    UMIST  FERET
MFO-SVM     96.6   96.8   98.0   97.7
GWO-SVM     97.1   97.0   96.5   98.2
PSO-SVM     96.9   96.5   97.1   97.2
GA-SVM      97.4   97.5   97.6   97.4
EPO-SVM     97.6   97.2   97.9   97.6
CA-SVM      97.8   97.4   97.7   97.4
CEPO-SVM    98.1   97.8   98.4   98.5
CFA-SVM     97.9   97.7   98.3   98.3
EPSEO-SVM   97.7   97.6   98.1   98.0


by CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM. For each noise database, we selected 70% of the noise samples as the training samples and the remaining 30% as the testing samples. Salt-and-pepper noise sets random pixels of an image to black or white. Four different levels of noise on the YALE database, namely 0%, 5%, 10%, and 15%, are shown in Figure 8. From Figure 8 we can

see that as the noise level increases, the face image becomes more blurred. Figure 9 shows the detailed recognition rates of the different models on the four databases. As we can see from Figure 9, CEPO-SVM obtains the highest recognition rate at all four noise levels on all four databases in comparison with the other eight models. Although the face image is seriously polluted by salt-and-pepper noise at a level of up to 15%, the recognition rate of CEPO-SVM can still attain

Figure 7: Number of iterations versus accuracy on four databases. (a) YALE database. (b) ORL database. (c) UMIST database. (d) FERET database. Each panel plots accuracy (%) against iterations 1-100 for CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM.


82.5% on the YALE database, 83.5% on the ORL database, 81.1% on the UMIST database, and 82.1% on the FERET database, which far exceeds that of the other eight models. Besides, as the noise level increases, the recognition rate of CEPO-SVM on the four

databases decreases more slowly than that of the other eight models. Obviously, the results show that CEPO-SVM can not only attain good noise-free face recognition accuracy but also achieve good accuracy under noise conditions, which

Table 6: Comparison of mean and standard deviation of parameters on the YALE face database among nine optimization algorithms.

Algorithm   C mean    C std    c mean   c std
MFO         18.3721   4.4921   0.0633   0.1109
GWO         16.8983   4.0748   0.0518   0.0978
PSO         25.2319   7.4295   0.1292   0.1753
GA          28.3684   8.7016   0.1449   0.2387
CA          19.1670   4.8603   0.0672   0.1283
EPO         22.6792   5.7065   0.0812   0.1456
CEPO        13.2441   2.4982   0.0100   0.0112
CFA         15.4039   3.0409   0.0349   0.0778
EPSEO       15.6382   2.7636   0.0426   0.0692

Table 7: Comparison of mean and standard deviation of parameters on the ORL face database among nine optimization algorithms.

Algorithm   C mean    C std    c mean   c std
MFO         18.3921   4.7817   0.0965   0.1143
GWO         19.0418   5.1526   0.0883   0.1277
PSO         26.1129   6.4990   0.1620   0.2563
GA          27.5823   8.0652   0.1953   0.2497
CA          19.9301   3.9721   0.0827   0.1683
EPO         22.2801   6.2076   0.1198   0.1762
CEPO        14.3873   2.0187   0.0121   0.0098
CFA         16.2265   4.1552   0.0573   0.0829
EPSEO       15.7192   3.5228   0.0525   0.0701

Table 8: Comparison of mean and standard deviation of parameters on the UMIST face database among nine optimization algorithms.

Algorithm   C mean    C std    c mean   c std
MFO         17.2782   4.2048   0.0804   0.1449
GWO         16.9556   3.7649   0.0773   0.1283
PSO         24.9031   7.1991   0.1439   0.1958
GA          25.3415   8.2315   0.1557   0.2261
CA          18.8737   4.0103   0.0762   0.1813
EPO         22.0217   5.9525   0.0973   0.1502
CEPO        12.9109   2.2081   0.0114   0.0101
CFA         15.2271   3.4594   0.0638   0.0822
EPSEO       15.9964   3.0182   0.0514   0.0769

Table 9: Comparison of mean and standard deviation of parameters on the FERET face database among nine optimization algorithms.

Algorithm   C mean    C std    c mean   c std
MFO         18.1928   4.1892   0.0619   0.1023
GWO         17.3209   3.9713   0.0557   0.0921
PSO         25.5827   6.9143   0.1401   0.1882
GA          26.6929   8.4939   0.1379   0.2273
CA          15.3462   5.0541   0.0924   0.1419
EPO         21.2679   5.6314   0.0846   0.1364
CEPO        11.1811   2.1192   0.0126   0.0087
CFA         15.0647   3.7672   0.0489   0.0748
EPSEO       14.4689   3.4295   0.0377   0.0661


Figure 8: Four different levels of noise image on the YALE database. (a) 0%. (b) 5%. (c) 10%. (d) 15%.
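The corruption applied in this experiment can be sketched as follows, assuming 8-bit grayscale images stored as nested lists; the equal salt/pepper split and the `salt_and_pepper` name are assumptions, not details from the paper.

```python
import random

def salt_and_pepper(image, level, seed=0):
    """Corrupt a grayscale image (list of rows of 0-255 ints) with
    salt-and-pepper noise: a fraction `level` of pixels is set to
    255 (salt) or 0 (pepper) with equal probability."""
    rng = random.Random(seed)
    noisy = []
    for row in image:
        out = []
        for px in row:
            if rng.random() < level:
                out.append(255 if rng.random() < 0.5 else 0)
            else:
                out.append(px)
        noisy.append(out)
    return noisy
```

With `level` set to 0.05, 0.10, or 0.15 this reproduces the 5%, 10%, and 15% settings of Figure 8.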



verifies that the robustness of CEPO-SVM is superior to that of the other eight models.

6.5. Run Time Analysis. The average training times of thirty independent runs by the proposed CEPO-SVM and the eight competitor intelligent-algorithm-based SVMs are shown in Table 10. As we can see from Table 10, for all four databases, the average training time of the proposed CEPO-SVM exceeds that of the six models based on single algorithms, i.e., MFO-SVM, GWO-SVM, PSO-SVM, GA-SVM, EPO-SVM, and CA-SVM. This is because hybridizing two algorithms increases the training time. Although CEPO-SVM takes more time than these six single-algorithm models, it can be considered better than them

because of its high classification performance. However, for all four databases, the average training time of the proposed CEPO-SVM is less than that of the two models based on hybrid algorithms, i.e., CFA-SVM and EPSEO-SVM. Obviously, CEPO-SVM has better performance than these two hybrid-algorithm models.

6.6. Limitation Analysis. Like other hybrid algorithms, although CEPO provides superior performance in face recognition, its large amount of computation means that it costs more time per iteration than single algorithms. Besides, when CEPO is applied to very high-dimensional optimization problems, its performance may degrade.


Figure 9: Noise percentages versus accuracy on four databases. (a) YALE database. (b) ORL database. (c) UMIST database. (d) FERET database.

Table 10: Average training time (s) obtained by nine models on four databases.

Model       YALE    ORL     UMIST    FERET
MFO-SVM     47.93   61.54   70.95    46.03
GWO-SVM     45.06   58.81   72.31    50.54
PSO-SVM     43.95   54.12   65.69    48.91
GA-SVM      69.71   82.73   98.03    79.42
EPO-SVM     40.68   47.49   68.55    42.87
CA-SVM      46.33   56.87   77.41    53.24
CEPO-SVM    59.76   79.23   95.76    67.34
CFA-SVM     78.25   81.96   112.87   84.59
EPSEO-SVM   82.43   83.15   107.43   86.77


7. Conclusions

In this paper, exploiting the respective advantages of EPO and CA, we propose a hybrid metaheuristic algorithm named CEPO, in which the embedded framework provided by CA lets the two components reinforce each other and overcome each other's defects. The proposed algorithm has been evaluated on six benchmark test functions against eight state-of-the-art algorithms. Furthermore, we apply this hybrid metaheuristic algorithm to automatic learning of the parameters of SVM to solve the classification problem of face recognition. The proposed classifier model has been tested on four commonly and internationally used face databases against models based on the other eight state-of-the-art algorithms. The experimental results show that the proposed model has better performance in terms of accuracy, convergence rate, stability, robustness, and run time.

We provide future works in different directions. First, the proposed model has only been experimented on four face databases; other new face databases should be tested to extend the current work. Second, the proposed hybrid algorithm should be applied to other classifiers for face recognition. Lastly, the proposed hybrid algorithm should be applied to other optimization problems, such as digital filter design, cognitive radio spectrum allocation, and image segmentation.

List of Definitions of Partial Parameters

x_i: the position of the ith emperor penguin in the D-dimensional search space
s_t: situational knowledge component
N_j^t: normative knowledge in the jth dimension and in the tth generation
I_j^t: the interval of normative knowledge in the jth dimension
l_j^t: the lower boundary in the jth dimension and in the tth generation
L_j^t: the objective value of the lower boundary l_j^t of the jth parameter
u_j^t: the upper boundary in the jth dimension and in the tth generation
U_j^t: the objective value of the upper boundary u_j^t of the jth parameter
D_ep^t: the distance between the emperor penguin and the optimal solution
M: the movement parameter
c: the scalar
C: the penalty parameter
ξ_h: the relaxing variables
a_h: the training samples
α_h: the Lagrange multipliers
r: the scale of the Gabor filter
o: the orientation of the Gabor filter
Q^(δ): the concatenated column vector
Σ_Q^(δ): the covariance matrix of the augmented feature vector Q^(δ)
Ω: the orthogonal eigenvector matrix
Λ: the diagonal eigenvalue matrix

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant no. 61571149), the Heilongjiang Province Key Laboratory Fund of High Accuracy Satellite Navigation and Marine Application Laboratory (Grant no. HKL-2020-Y01), and the Initiation Fund for Postdoctoral Research in Heilongjiang Province (Grant no. LBH-Q19098).

References

[1] S. Jiang, H. Mao, Z. Ding, and Y. Fu, "Deep decision tree transfer boosting," IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 2, pp. 383-395, 2020.

[2] Y. Xia and J. Wang, "A bi-projection neural network for solving constrained quadratic optimization problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 2, pp. 214-224, 2016.

[3] S. Zhang, X. Li, M. Zong, X. Zhu, and R. Wang, "Efficient kNN classification with different numbers of nearest neighbors," IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 5, pp. 1774-1785, 2018.

[4] A. Mathur and G. M. Foody, "Multiclass and binary SVM classification: implications for training and classification users," IEEE Geoscience and Remote Sensing Letters, vol. 5, no. 2, pp. 241-245, 2008.

[5] J. Ruan, H. Jiang, X. Li, Y. Shi, F. T. S. Chan, and W. Rao, "A granular GA-SVM predictor for big data in agricultural cyber-physical systems," IEEE Transactions on Industrial Informatics, vol. 15, no. 12, pp. 6510-6521, 2019.

[6] F. Melgani and Y. Bazi, "Classification of electrocardiogram signals with support vector machines and particle swarm optimization," IEEE Transactions on Information Technology in Biomedicine, vol. 12, no. 5, pp. 667-677, 2008.

[7] L. Wanling, W. Zhensheng, and S. Xiangjun, "Parameters optimization of support vector machine based on simulated annealing and improved QPSO," in Proceedings of the 2012 International Conference on Industrial Control and Electronics Engineering, pp. 1089-1091, Xi'an, China, August 2012.

[8] S. Mirjalili, S. M. Mirjalili, and A. Lewis, "Grey wolf optimizer," Advances in Engineering Software, vol. 69, pp. 46-61, 2014.

[9] S. Mirjalili, "Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm," Knowledge-Based Systems, vol. 89, pp. 228-249, 2015.

[10] S. Kaur, L. K. Awasthi, A. L. Sangal, et al., "Tunicate swarm algorithm: a new bio-inspired based metaheuristic paradigm for global optimization," Engineering Applications of Artificial Intelligence, vol. 90, Article ID 103541, 2020.

[11] M. Dehghani, Z. Montazeri, O. P. Malik, et al., "A new methodology called dice game optimizer for capacitor placement in distribution systems," Electrical Engineering & Electromechanics, vol. 1, pp. 61-64, 2020.


[12] G. Dhiman and V. Kumar, "Emperor penguin optimizer: a bio-inspired algorithm for engineering problems," Knowledge-Based Systems, vol. 159, pp. 20-50, 2018.

[13] S. K. Baliarsingh, W. Ding, S. Vipsita, et al., "A memetic algorithm using emperor penguin and social engineering optimization for medical data classification," Applied Soft Computing, vol. 85, 2019.

[14] R. G. Reynolds, "An introduction to cultural algorithms," in Proceedings of the Third Annual Conference on Evolutionary Programming, pp. 131-139, San Diego, CA, USA, January 1994.

[15] C. Lin, C. Chen, and C. Lin, "A hybrid of cooperative particle swarm optimization and cultural algorithm for neural fuzzy networks and its prediction applications," IEEE Transactions on Systems, Man, and Cybernetics, vol. 39, no. 1, pp. 55-68, 2009.

[16] H. Gao, C. Xu, and C. Li, "Quantum-inspired cultural bacterial foraging algorithm for direction finding of impulse noise," International Journal of Innovative Computing and Applications, vol. 6, no. 1, pp. 44-54, 2014.

[17] H. Gao and M. Diao, "Cultural firework algorithm and its application for digital filters design," International Journal of Modelling, Identification and Control, vol. 14, no. 4, pp. 324-331, 2011.

[18] C. Liu, "Gabor-based kernel PCA with fractional power polynomial models for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 572-581, 2004.

[19] J. Ilonen, J.-K. Kamarainen, and H. Kalviainen, "Efficient computation of Gabor features," Research Report 100, Lappeenranta University of Technology, Department of Information Technology, Lappeenranta, Finland, 2005.

[20] C. Liu and H. Wechsler, "Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition," IEEE Transactions on Image Processing, vol. 11, no. 4, pp. 467-476, 2002.

[21] M. D. Farrell and R. M. Mersereau, "On the impact of PCA dimension reduction for hyperspectral detection of difficult targets," IEEE Geoscience and Remote Sensing Letters, vol. 2, no. 2, pp. 192-195, 2005.

[22] B. Fei and J. Liu, "Binary tree of SVM: a new fast multiclass training and classification algorithm," IEEE Transactions on Neural Networks, vol. 17, no. 3, pp. 696-704, 2006.



optimization (MFO) [9], tunicate swarm algorithm (TSA) [10], and dice game optimizer (DGO) [11] are widely used. In 2018, Dhiman and Kumar proposed a novel bio-inspired algorithm named emperor penguin optimizer [12], which is inspired by the huddling behavior of emperor penguins. However, experimental results showed that the convergence rate of EPO decreases continuously in the later stage, which affects the efficiency of optimization. Afterwards, Baliarsingh et al. proposed a hybrid algorithm combining EPO with the social engineering optimizer (SEO), namely, emperor penguin and social engineering optimizer (EPSEO) [13], to improve the performance of EPO.

Cultural algorithm (CA) is a bio-inspired intelligent algorithm proposed by Reynolds in 1994 [14], which simulates the cultural evolution of human society. CA is a double-layer evolutionary mechanism in which the knowledge accumulated in the belief space is continuously updated so as to guide the evolution of the population space and accelerate the optimization efficiency of the algorithm. Therefore, it provides a framework into which other evolutionary algorithms can be embedded to cooperate with and promote each other. Recently, researchers have used PSO, the bacterial foraging algorithm (BFA), and the firework algorithm (FA) as the population space integrated with CA [15-17] and verified that the performance of the resulting hybrid algorithms is significantly improved.
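The belief-space side of this double-layer mechanism can be sketched as follows. This is a minimal illustration of situational and normative knowledge, not the exact update equations of [14] or of this paper; `update_belief_space` and the dictionary layout are assumed names.

```python
def update_belief_space(belief, accepted):
    """One CA-style belief-space update: situational knowledge keeps the best
    accepted individual; normative knowledge keeps, per dimension, the interval
    spanned by the accepted individuals. `accepted` is a list of
    (position, objective_value) pairs; a lower objective value is better."""
    best_pos, best_val = min(accepted, key=lambda pv: pv[1])
    if belief["situational"] is None or best_val < belief["situational"][1]:
        belief["situational"] = (best_pos, best_val)
    dim = len(best_pos)
    belief["normative"] = [
        (min(p[j] for p, _ in accepted), max(p[j] for p, _ in accepted))
        for j in range(dim)
    ]
    return belief
```

The population space then samples new individuals under the influence of these two knowledge sources, closing the double-layer loop.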

Based on the above observations, and because CA with a single population does not make full use of the knowledge in the belief space while the convergence speed of EPO decreases continuously in the later stage, we propose a hybrid metaheuristic algorithm named cultural emperor penguin optimizer (CEPO) built on the embedded framework provided by CA. In CEPO, hybridizing two different evolution processes for the positions is an effective mechanism to enhance the exploitation capability of EPO.

Specifically, CEPO is evaluated on six well-known benchmark test functions against eight state-of-the-art optimization algorithms. Furthermore, CEPO is applied to the multiparameter optimization problem of SVM for face recognition. In the feature extraction step, feature representation vectors of the face images are obtained by Gabor filtering and principal component analysis (PCA) and serve as the input to CEPO-SVM; PCA compensates for the dimensionality increase caused by the Gabor wavelet transform. For feature classification, the penalty parameter and kernel parameter of SVM are optimized by CEPO to obtain a training model with high classification accuracy. To prove the performance of CEPO-SVM, eight state-of-the-art optimization-algorithm-based SVMs are used for comparison experiments in terms of accuracy, convergence rate, stability, robustness, and run time.

The main contributions of this paper are as follows:

(1) A new hybrid intelligent algorithm named CEPO, inspired by EPO and the framework of CA, is proposed.

(2) CEPO is applied to automatic learning of the parameters of SVM to solve the classification problem of face recognition.

(3) We demonstrate that the performance of CEPO-SVM is superior in terms of accuracy, convergence rate, stability, robustness, and run time compared with eight state-of-the-art algorithm-based SVMs.

(4) CEPO, as a multidimensional search algorithm, can not only obtain the optimal penalty parameter and kernel parameter of SVM but also be employed in other classifiers and other real-valued optimization problems.

The rest of the work is organized as follows. Section 2 describes feature extraction of face images, including the Gabor face image representation and dimensionality reduction by principal component analysis. Section 3 formulates parameter optimization of SVM for classification as an optimization problem. The design of the cultural emperor penguin optimizer for parameter optimization of SVM is presented in Section 4. Section 5 describes the experimental setup. In Section 6, the experiments and result analysis are presented. Section 7 provides the conclusion and future works.

2. The Feature Extraction for Face Images

At the preprocessing stage, each image is first converted into grayscale and then adjusted to the same size. Owing to the superior robustness of the Gabor filter to changes in brightness and pose of face images, the Gabor filter is used to capture more useful features from the face images. Furthermore, to compensate for the dimensionality increase caused by the Gabor wavelet transform, PCA is applied for dimensionality reduction. After the preprocessing stage, face images are divided into training sets and testing sets to obtain feature representation vectors, which serve as the input to SVM. Figure 1 shows the block diagram of face recognition based on Gabor-PCA.

2.1. Gabor Face Image Representation. The two-dimensional Gabor filter can be defined as follows:

φ_{o,r}(z) = (‖W_{o,r}‖² / σ²) exp(-‖W_{o,r}‖² ‖z‖² / (2σ²)) [exp(j W_{o,r} · z) - exp(-σ² / 2)],  (1)

where z = (x, y) defines the pixel position in the spatial domain, o defines the orientation of the Gabor filter, r defines the scale of the Gabor filter, σ represents the radius of the Gaussian function, which limits the size of the Gabor filter, and W_{o,r} is the wave vector of the filter at orientation o and scale r [18, 19]. In this paper, with five different scales, r ∈ {0, ..., 4}, and eight different orientations, o ∈ {0, ..., 7}, 40 different Gabor filters are obtained. The real part and the magnitude of the 40 Gabor filters are shown in Figures 2 and 3, which exhibit desirable


characteristics of spatial locality, spatial frequency, and orientation selectivity.
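Equation (1) can be evaluated directly. The sketch below assumes the common convention W_{o,r} = (k_max / f^r) e^{jπo/8} with k_max = π/2, f = √2, and σ = 2π, in line with [18]; the paper's exact parameter values may differ.

```python
import cmath
import math

def gabor(z, o, r, sigma=2.0 * math.pi, kmax=math.pi / 2.0, f=math.sqrt(2.0)):
    """Evaluate the 2-D Gabor kernel of equation (1) at pixel offset z = (x, y)
    for orientation o in {0..7} and scale r in {0..4}. The wave vector
    W = (kmax / f**r) * e^{j*pi*o/8} is an assumed convention."""
    x, y = z
    k = (kmax / f ** r) * cmath.exp(1j * math.pi * o / 8.0)
    k2, z2 = abs(k) ** 2, x * x + y * y
    # Gaussian envelope scaled by the squared wave-vector norm
    envelope = (k2 / sigma ** 2) * math.exp(-k2 * z2 / (2.0 * sigma ** 2))
    # complex carrier minus the DC-compensation term exp(-sigma^2 / 2)
    carrier = cmath.exp(1j * (k.real * x + k.imag * y)) - math.exp(-sigma ** 2 / 2.0)
    return envelope * carrier
```

Sampling this kernel over a small window for all 40 (o, r) pairs yields the filter bank shown in Figures 2 and 3.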

Let Y(z) represent the grey-level distribution of an image; the Gabor wavelet representation of the image can then be given by

G_{o,r}(z) = Y(z) ∗ φ_{o,r}(z),  (2)

where G_{o,r}(z) is the convolution of the image Y(z) with the Gabor filter φ_{o,r}(z) at a given orientation and scale. To improve the computation speed, the convolution can be computed via the FFT [20]. To capture the different spatial localities, spatial frequencies, and orientation selectivities, all these representation results are concatenated to derive an augmented feature vector Q. As in [19], before the concatenation, each G_{o,r}(z) is downsampled by a factor δ at the different scales and orientations. Therefore, the augmented Gabor feature vector Q^(δ) can be given by

Q^(δ) = ((G^(δ)_{0,0})^T, (G^(δ)_{0,1})^T, ..., (G^(δ)_{4,7})^T)^T,  (3)

where T denotes the transpose operator, δ is the downsampling factor, and G^(δ)_{o,r} is the concatenated column vector.

Figure 1: Block diagram of face recognition based on Gabor-PCA. After preprocessing all the face images, the training and testing images pass through the same feature extraction stage: Gabor wavelet representation, concatenation of all representation results into the augmented feature vector, downsampling by a factor δ, and dimensionality reduction by PCA. The resulting feature representation vectors are fed to the SVM, which outputs the face recognition accuracy.

Figure 2: The real part of the 40 Gabor filters.

Figure 3: The magnitude of the 40 Gabor filters.


2.2. Principal Component Analysis for Feature Optimization. The augmented Gabor feature vector introduced in equation (3) has very high dimensionality, Q^(δ) ∈ R^O, where O is the dimensionality of the vector space. Obviously, this high dimensionality seriously affects the computing speed and recognition rate in the process of face recognition. With PCA [21], we can find an orthogonal basis for the feature vector, sort the dimensions in order of importance, and discard the dimensions of low significance. Let Σ_{Q^(δ)} ∈ R^{O×O} be the covariance matrix of the augmented feature vector Q^(δ):

Σ_{Q^(δ)} = E{[Q^(δ) - E(Q^(δ))][Q^(δ) - E(Q^(δ))]^T},  (4)

where E(·) is the expectation operator. The covariance matrix Σ_{Q^(δ)} can be factorized into the following form:

Σ_{Q^(δ)} = Ω Λ Ω^T,  (5)

Ω = [U_1, U_2, ..., U_O],  (6)

Λ = diag{p_1, p_2, ..., p_O},  (7)

where Ω ∈ R^{O×O} is an orthogonal eigenvector matrix, Λ ∈ R^{O×O} is a diagonal eigenvalue matrix, and the diagonal elements are arranged in descending order (p_1 ≥ p_2 ≥ ... ≥ p_O).

An important property of PCA is that it provides the optimal signal reconstruction, in the sense of minimum mean-square error, when a subset of principal components is used to represent the original signal [19]. Following this property, PCA can be applied to dimensionality reduction:

X(δ) = S^T Q(δ), (8)

where S = [U_1, U_2, . . ., U_J], J < O, and S ∈ R^{O×J}. The lower-dimensional vector X(δ) ∈ R^J captures the most expressive features of the original data Q(δ).
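The reduction in equations (4)-(8) can be illustrated with a short Python sketch (illustrative only; the paper's experiments were run in MATLAB, and the function name `pca_reduce` and the use of a sample-covariance estimate are our own choices):

```python
import numpy as np

def pca_reduce(Q, J):
    """Project feature vectors Q (O x N, one column per image) onto the
    J leading principal components, following equations (4)-(8)."""
    mean = Q.mean(axis=1, keepdims=True)
    centered = Q - mean                        # Q(d) - E[Q(d)]
    cov = centered @ centered.T / Q.shape[1]   # sample covariance, eq. (4)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]          # descending: p1 >= p2 >= ... >= pO
    S = eigvecs[:, order[:J]]                  # S = [U1 ... UJ], J < O
    X = S.T @ centered                         # X(d) = S^T Q(d), eq. (8)
    return X, S
```

The columns of S are orthonormal, so projecting with S^T keeps the J directions of largest variance.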

3. Formulations of Parameter Optimization of SVM for Classification

3.1. Model of Support Vector Machine. Support vector machine (SVM) [22] is a kind of binary classifier with strong learning ability and generalization ability. The key point of SVM is to find the optimal hyperplane that accurately separates the two classes while ensuring that the classification gap is as large as possible. Figure 4 shows the optimal hyperplane of SVM, where H indicates the hyperplane and the margin represents the gap between class H1 and class H2.

Suppose the training data V can be given by

V = {(a_h, b_h) | a_h ∈ R^w, b_h ∈ {−1, +1}, h = 1, 2, . . ., n}, (9)

where a_h are the training samples (w-dimensional vectors), b_h are the corresponding labels, and n is the number of training data; each pair must satisfy a_h ∈ R^w and b_h ∈ {−1, +1}. For linearly separable data, the hyperplane g(a) = 0 that separates the given data can be determined by

g(a) = ω^T a + c, (10)

where ω is a w-dimensional vector and c is a scalar; ω and c determine all the boundaries. The optimal hyperplane maximizes the margin, which is equivalent to minimizing ‖ω‖²/2 subject to

b_h(ω^T a_h + c) ≥ 1. (11)

For data that cannot be separated linearly, the relaxing variables ξ_h are introduced to allow for optimal generalization and classification by controlling their size. Then the optimal hyperplane can be obtained from the following optimization problem:

Ψ(ω, ξ) = (1/2)‖ω‖² + C Σ_{h=1}^{n} ξ_h, (12)

subject to b_h(ω^T a_h + c) ≥ 1 − ξ_h, ξ_h ≥ 0, (13)

where C denotes the penalty parameter. To satisfy the Karush–Kuhn–Tucker (KKT) conditions, the Lagrange multipliers α_h are introduced. The abovementioned optimization problem is converted into the dual quadratic optimization problem:

min_α (1/2) Σ_{h=1}^{n} Σ_{k=1}^{n} b_h b_k α_h α_k (a_h · a_k) − Σ_{k=1}^{n} α_k, (14)

subject to Σ_{h=1}^{n} b_h α_h = 0, 0 ≤ α_h ≤ C. (15)

Thus, the ultimate classification function can be obtained by solving the dual optimization problem:


Figure 4: The schematic model of a linear SVM.


g(a) = sgn( Σ_{h=1}^{n} α_h b_h (a_h · a) + c ), (16)

where sgn represents the symbolic (sign) function. Kernel functions can be used to map the data in the low-dimensional input space to a high-dimensional space nonlinearly. Therefore, the linear indivisibility of the input can be transformed into a linearly separable problem by this mapping. The kernel function can be defined as follows:

K(a_h, a_k) = ψ(a_h) · ψ(a_k). (17)

Then the optimal classification function can be given by

g(a) = sgn( Σ_{h=1}^{n} α_h b_h K(a_h, a) + c ). (18)

Several types of kernel functions, such as the linear kernel, the polynomial kernel, and the radial basis function (RBF), are widely used. However, the RBF has the advantages of realizing nonlinear mapping, requiring fewer parameters, and causing fewer numerical difficulties. In this paper, the RBF is selected for face recognition, which can be represented as follows:

K(a, b) = exp(−c‖a − b‖²), (19)

where c denotes the kernel parameter.
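Equation (19) is simple enough to state directly in code. The following Python fragment is an illustrative sketch (the name `rbf_kernel` is our own; in the experiments the RBF kernel is provided by LIBSVM):

```python
import math

def rbf_kernel(a, b, c):
    """K(a, b) = exp(-c * ||a - b||^2), equation (19); c is the kernel parameter."""
    sq_dist = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return math.exp(-c * sq_dist)
```

The kernel equals 1 when a = b and decays toward 0 as the points move apart; a larger c narrows the kernel width.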

3.2. Objective Function of SVM. With a view to obtaining a training model with high classification accuracy, the penalty parameter C, which adjusts the confidence scale, and the kernel parameter c, which determines the width of the kernel and the range of the data points, can be optimized by an effective intelligent algorithm. Furthermore, the mean square error of the SVM outputs can be selected as the objective function of the intelligent algorithm:

objective = (1/n) Σ_{h=1}^{n} (b_h − b̂_h)², (20)

where b̂_h is the output value produced by the SVM for the corresponding sample and b_h is the actual value.
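The fitness in equation (20) is just the mean square error between the labels predicted by a candidate (C, c) model and the true labels. A minimal sketch (the function name is our own):

```python
def svm_objective(actual, predicted):
    """Equation (20): mean square error between true labels b_h and the
    SVM outputs; this is the fitness value handed to the optimizer."""
    n = len(actual)
    return sum((b - p) ** 2 for b, p in zip(actual, predicted)) / n
```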

4. Design of Cultural Emperor Penguin Optimizer for Parameter Optimization of SVM

4.1. The Design of Cultural Emperor Penguin Optimizer. EPO is a novel optimization algorithm presented by Dhiman and Kumar in 2018 [12], which is inspired by the huddling behavior of emperor penguins. In this paper, a hybrid algorithm named CEPO is proposed to solve real-number optimization problems with the help of the cultural algorithm basic framework shown in Figure 5. The key idea of CEPO is to obtain problem-solving knowledge from the huddling behavior of the emperor penguins and, in return, make use of that knowledge to guide the evolution of the emperor penguin population.

Suppose CEPO is designed for the general minimum optimization problem

min f(x_i), (21)

where x_i = (x_{i1}, x_{i2}, . . ., x_{iD}) denotes the position of the ith emperor penguin in the D-dimensional search space, x_j^min < x_{ij} < x_j^max (j = 1, 2, . . ., D), f(·) is the objective function, and f(x_i) denotes the objective value of the position x_i. x_j^min and x_j^max represent the lower and upper boundaries of the position of the emperor penguins in the jth dimension.

The belief space of the emperor penguin population in the tth generation is defined by s^t and N_j^t, where s^t is the situational knowledge component and N_j^t is the normative knowledge, which represents the value space information for each parameter in the jth dimension and in the tth generation. N_j^t denotes ⟨I_j^t, L_j^t, U_j^t⟩, I_j^t = [l_j^t, u_j^t], where I_j^t represents the interval of normative knowledge in the jth dimension. The lower boundary l_j^t and the upper boundary u_j^t are initialized according to the value range of the variables given by the problem. L_j^t represents the objective value of the lower boundary l_j^t of the jth parameter, and U_j^t represents the objective value of the upper boundary u_j^t of the jth parameter.

The acceptance function is used to select the emperor penguins who can directly influence the current belief space. In CEPO, the acceptance function selects the cultural individuals from the top 20% of the current population space to update the belief space.

Situational knowledge s^t can be updated by the update function

s^{t+1} = x_best^{t+1}, if f(x_best^{t+1}) < f(s^t); s^t, else, (22)

where x_best^{t+1} is the optimal position of the emperor penguin population space in the (t + 1)th generation.


Figure 5: The cultural algorithm basic framework.


Assume that for the qth cultural individual, a random variable θ_q lying in the range [0, 1] is produced. The qth cultural individual affects the lower boundary of normative knowledge in the jth dimension when θ_q < 0.5 is satisfied. Normative knowledge N_j^t can be updated by the update function

l_j^{t+1} = x_{qj}^{t+1}, if x_{qj}^{t+1} ≤ l_j^t or f(x_q^{t+1}) < L_j^t; l_j^t, else,

L_j^{t+1} = f(x_q^{t+1}), if x_{qj}^{t+1} ≤ l_j^t or f(x_q^{t+1}) < L_j^t; L_j^t, else. (23)

The qth cultural individual affects the upper boundary of normative knowledge in the jth dimension when θ_q ≥ 0.5 is satisfied:

u_j^{t+1} = x_{qj}^{t+1}, if x_{qj}^{t+1} ≥ u_j^t or f(x_q^{t+1}) < U_j^t; u_j^t, else,

U_j^{t+1} = f(x_q^{t+1}), if x_{qj}^{t+1} ≥ u_j^t or f(x_q^{t+1}) < U_j^t; U_j^t, else. (24)
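The belief-space updates in equations (22)-(24) reduce to conditional assignments. The following Python sketch (an illustration with our own function names; the paper's code is in MATLAB) mirrors them for a single dimension:

```python
def update_situational(s, f_s, x_best, f_x_best):
    # Equation (22): replace situational knowledge when the new best is better
    if f_x_best < f_s:
        return x_best, f_x_best
    return s, f_s

def update_lower(l_j, L_j, x_qj, f_q):
    # Equation (23): a top-20% individual widens the bound if it falls below it,
    # or replaces it when it achieves a better objective value
    if x_qj <= l_j or f_q < L_j:
        return x_qj, f_q
    return l_j, L_j

def update_upper(u_j, U_j, x_qj, f_q):
    # Equation (24): symmetric update for the upper bound
    if x_qj >= u_j or f_q < U_j:
        return x_qj, f_q
    return u_j, U_j
```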

Situational knowledge and normative knowledge can be used to guide the evolution of the emperor penguin population through the influence function. In CEPO, a selection operator β is produced to select one of two ways of influencing the evolution of the emperor penguin population:

β = (Maxiteration − t)/Maxiteration, (25)

where Maxiteration denotes the maximum number of iterations.

Assume that for the ith emperor penguin, a random variable λ_i lying in the range [0, 1] is produced. The first way is to update the position of the emperor penguin by changing the search size and direction of the variation with the belief space, which is carried out when λ_i ≤ β is satisfied. The position of the emperor penguin in the jth dimension can be updated by

x_{ij}^{t+1} = x_{ij}^t + |size(I_j^t) · N(0, 1)|, if x_{ij}^t < s_j^t;
x_{ij}^{t+1} = x_{ij}^t − |size(I_j^t) · N(0, 1)|, if x_{ij}^t > s_j^t;
x_{ij}^{t+1} = x_{ij}^t + η · size(I_j^t) · N(0, 1), else, (26)

where N(0, 1) is a random number subject to the standard normal distribution, size(I_j^t) is the length of the adjustable interval of the jth parameter in the belief space in the tth generation, and η is set to be in the range [0.01, 0.6] following [14].

The other way follows a series of steps in EPO, namely, huddle boundary generation, the temperature profile around the huddle, the distance calculation between emperor penguins, and the position update of the emperor penguins, which is carried out when λ_i > β is satisfied. The specific steps can be represented as follows:

T′ = T − (t − Maxiteration)/Maxiteration,
T = 0, if R ≥ 0.5; 1, if R < 0.5, (27)

where T′ represents the temperature profile around the huddle, T is the time for finding the best optimal solution, and R is a random variable which lies in the range [0, 1].

D_ep^t = |S_ep(A^t) · x_best^t − B^t · x_i^t|, (28)

where D_ep^t denotes the distance between the emperor penguin and the optimal solution, x_best^t represents the current optimal solution found in the emperor penguin population space in the tth generation, S_ep represents the social forces of the emperor penguin that are responsible for convergence towards the optimal solution, A^t and B^t are used to avoid collisions between neighboring emperor penguins, and B^t is a random variable which lies in the range [0, 1]. A^t can be computed as follows:

A^t = (M × (T′ + P_grid^t(Accuracy)) × rand()) − T′,
P_grid^t(Accuracy) = |x_best^t − x_i^t|, (29)

where M is the movement parameter, which holds a gap between emperor penguins for collision avoidance, and P_grid^t(Accuracy) defines the absolute difference obtained by comparing the positions of the emperor penguins. S_ep(A^t) in equation (28) is computed as follows:

S_ep(A^t) = √(ε · e^{−t/ρ} − e^{−t}), (30)

where e represents the base of the natural logarithm, and ε and ρ are two control parameters for better exploration and exploitation, which lie in the ranges [1.5, 2] and [2, 3], respectively. Ultimately, the position of the emperor penguin is updated as follows:

x_i^{t+1} = x_best^t − A^t · D_ep^t. (31)

The algorithm procedure of CEPO is sketched in Algorithm 1.
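For one scalar dimension, equations (27)-(31) chain together as below. This is an illustrative Python sketch under our reading of the reconstructed formulas (in particular the temperature profile in equation (27)), not the authors' MATLAB code; the default ε and ρ are midpoints of the stated ranges:

```python
import math
import random

def epo_move(x_i, x_best, t, max_iter, M=2.0, eps=1.75, rho=2.5, rng=random):
    """One EPO position update for a scalar dimension, equations (27)-(31)."""
    T = 0.0 if rng.random() >= 0.5 else 1.0              # eq. (27)
    T_prime = T - (t - max_iter) / max_iter              # temperature profile
    P_grid = abs(x_best - x_i)                           # eq. (29)
    A = M * (T_prime + P_grid) * rng.random() - T_prime  # eq. (29)
    B = rng.random()
    # eq. (30); abs() guards rounding, the argument is positive for t >= 0
    S_ep = math.sqrt(abs(eps * math.exp(-t / rho) - math.exp(-t)))
    D_ep = abs(S_ep * x_best - B * x_i)                  # eq. (28)
    return x_best - A * D_ep                             # eq. (31)
```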

4.2. CEPO for Parameter Optimization of SVM. During the iterative searching process, each emperor penguin adjusts its position by comparing the values of the objective function, and each position x_i is treated as a parameter pair (C, c) for repeated training of the SVM and update of the corresponding emperor penguin. Figure 6 shows a schematic diagram of SVM based on CEPO.
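The outer loop of Figure 6 can be outlined as follows. This sketch deliberately replaces the full CEPO update rules with a simple move-toward-best step (only the decaying selection operator β of equation (25) is kept), so it shows how a (C, c) pair is scored and clamped to its bounds rather than reproducing the paper's algorithm:

```python
import random

def cepo_svm_search(objective, bounds, pop_size=20, max_iter=50, seed=0):
    """Outline of Figure 6: each position is a (C, c) pair; `objective`
    retrains the SVM with that pair and returns its error (eq. (20))."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = list(min(pop, key=objective))
    best_val = objective(best)
    for t in range(max_iter):
        beta = (max_iter - t) / max_iter            # eq. (25): exploration decays
        for x in pop:
            for j, (lo, hi) in enumerate(bounds):
                # pull toward the best pair plus a shrinking Gaussian perturbation
                x[j] += (best[j] - x[j]) * rng.random() \
                        + beta * rng.gauss(0, 1) * (hi - lo) * 0.05
                x[j] = min(max(x[j], lo), hi)       # amend boundary crossings
            val = objective(x)
            if val < best_val:
                best, best_val = list(x), val
    return best, best_val
```

With a real SVM, `objective` would train on the training set and return the classification error of the candidate (C, c) pair.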

5. The Experiments

5.1. Face Databases. In the following experiments, to test our proposed model, we select four face databases that are commonly and internationally used, i.e., YALE (http://cvc.yale.edu/projects/yalefaces/yalefaces.html),


ORL (http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html), UMIST (http://images.ee.umist.ac.uk/danny/database.html), and FERET (http://www.frvt.org). Specifically, the YALE database contains 165 images of 15 individuals, including changes in illumination, expression, and posture. The ORL database consists of 400 photographs of 40 human subjects with different expressions and variations at different scales and orientations. The UMIST database includes 564 images of 20 human subjects with different angles and poses. The FERET database consists of a total of 1400 images of 200 human subjects corresponding to different poses, expressions, and illumination conditions. To fully verify the validity and rationality of the experiments, the configuration of training and testing samples on the four face databases is shown in Table 1.

5.2. State-of-the-Art Algorithms for Comparison. To validate the performance of CEPO, eight state-of-the-art metaheuristics are used for comparison. These are Moth-Flame

Algorithm CEPO
Input: the emperor penguin population x_i (i = 1, 2, . . ., m)
Output: the optimal position of emperor penguin
(1) Procedure CEPO
(2)   Initialize the size of the population, Maxiteration, and the parameters of EPO and CA
(3)   Initialize the situational knowledge and normative knowledge of the belief space
(4)   Initialize the population space of EPs
(5)   Compute the fitness of each EP
(6)   Arrange the fitness value of each EP
(7)   Initialize the belief space by acceptance proportion top 20% and AdjustCulture
(8)   Initialize the belief space
(9)   While (t < Maxiteration) do
(10)    Calculate T and T′ using equations (9) and (10)
(11)    For i ⟵ 1 to m do
(12)      λ_i ⟵ Rand()
(13)      For j ⟵ 1 to D do
(14)        If (λ_i ≤ β) using equation (7)
(15)          Update the position of EPs using equation (8)
(16)        Else
(17)          Compute A and B using equations (12) and (13)
(18)          Compute S_ep(A) using equation (14)
(19)          Compute the new position of EPs using equation (15)
(20)          Compute the fitness of the current EPs
(21)          Update the better position of the EP compared with the previous position
(22)        End if
(23)      End for
(24)    End for
(25)    Amend the EPs which cross the boundary
(26)    Arrange the fitness value of each EP
(27)    Update the belief space by acceptance proportion top 20% and AdjustCulture
(28)    t ⟵ t + 1
(29)  End while
(30)  Return the optimal position
(31) End procedure
(32) Procedure AdjustCulture
(33)   Update situational knowledge using equation (2)
(34)   For q ⟵ 1 to m/5 do
(35)     θ_q ⟵ Rand()
(36)     If (θ_q < 0.5)
(37)       For j ⟵ 1 to D do
(38)         Update the lower boundary of normative knowledge using equations (3) and (4)
(39)     Else
(40)       For j ⟵ 1 to D do
(41)         Update the upper boundary of normative knowledge using equations (5) and (6)
(42)     End if
(43)   End for
(44)   End for
(45) End procedure

ALGORITHM 1: The algorithm procedure of CEPO.


Optimization (MFO) [9], Grey Wolf Optimizer (GWO) [8], Particle Swarm Optimization (PSO) [6], Genetic Algorithm (GA) [5], Cultural Algorithm (CA) [14], Emperor Penguin Optimizer (EPO) [12], Cultural Firework Algorithm (CFA) [17], and Emperor Penguin and Social Engineering Optimizer (EPSEO) [13].

5.3. Experimental Setup. The experimental environment is MATLAB 2018a with LIBSVM on a computer with an Intel Core i5 processor and 16 GB of RAM. Table 2 shows the parameter settings of the proposed CEPO and the eight competitor intelligent algorithms, i.e., MFO, GWO, PSO, GA, CA, EPO, CFA, and EPSEO. The parameter values of these algorithms are set as recommended in their original papers.

6. Simulation Results and Discussion

6.1. Performance of CEPO Comparison Experiment. Six benchmark test functions are applied to verify the superiority and applicability of CEPO. Table 3 shows the details of the six benchmark test functions, including the single-peak benchmark functions Sphere (F1) and Schwefel 2.22 (F2) and the multipeak benchmark functions Rastrigin (F3), Griewank (F4), Ackley (F5), and Schaffer (F6). For each benchmark test function, each algorithm carries out thirty independent experiments. The mean and the standard deviation of the thirty independent results are shown in Table 4.

As we can see, the statistical results of CEPO on the six benchmark test functions are obviously superior to those of the other eight comparison algorithms. Under the same optimization conditions, both evaluation indexes of CEPO are better than those of the other algorithms by several or even ten orders of magnitude, which proves that CEPO has strong optimization performance. The optimal mean index shows that CEPO maintains a high overall optimization level across the thirty independent experiments and implies that CEPO has better single-run optimization accuracy, and the optimal standard deviation index proves that CEPO has strong optimization stability and robustness. Furthermore, under the same test constraints, CEPO attains high optimization accuracy on both the single-peak functions (F1-F2) and the multipeak functions (F3-F6), which indicates that CEPO has strong local mining performance and good convergence accuracy; in particular, it still attains very small objective values on the four multipeak functions (F3-F6), which verifies its better global exploration performance and stronger ability to avoid local extrema. If CEPO takes the target-value optimization precision as the evaluation index, the preset precision threshold can be as high as 1.0E−12; for functions F1, F3, and F6, the threshold can even be relaxed to 1.0E−25 while still ensuring the success rate of optimization. This precision can effectively meet the error tolerance range of engineering problems, which further proves that CEPO has good application potential in engineering optimization problems.


Figure 6: Block diagram of SVM based on CEPO.

Table 1: Configuration of training and testing samples on four face databases.

Database   Total numbers   Numbers of training samples   Numbers of testing samples
YALE       165             70 percent                    the rest
ORL        400             70 percent                    the rest
UMIST      564             70 percent                    the rest
FERET      200             70 percent                    the rest


6.2. Performance of CEPO-SVM Comparison Experiment. To verify the superiority of the proposed CEPO-SVM classification model for face recognition, CEPO and the eight competitor intelligent algorithms are used to iteratively optimize the penalty parameter C and kernel parameter c of SVM. The average recognition rates of thirty independent runs are shown in Table 5. As can be seen clearly from Table 5, CEPO-SVM achieves the highest accuracy among all nine models. In the case of the YALE face database, an accuracy of 98.1% is obtained by the CEPO-SVM model. For the ORL, UMIST, and FERET face databases, CEPO-SVM achieves accuracies of 97.8%, 98.4%, and 98.5%, respectively. Furthermore, the convergence rates of CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM are examined and depicted in Figures 7(a)-7(d). From this figure, it can be observed that the recognition rate successively increases from iteration 1 to iteration 100 on all four face databases. For the YALE database, the accuracy rate no longer improves after the 14th, 22nd, 31st, 19th, 18th, 21st, 20th, 23rd, and 24th iterations for CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM, respectively. Similarly, for the ORL database, no improvement in accuracy is noticed after the 11th, 20th, 22nd, 19th, 15th, 20th, 18th, 21st, and 24th iterations, respectively. In the case of the UMIST database, the nine models converge after the 15th, 24th, 25th, 19th, 20th, 26th, 18th, 24th, and 21st iterations, respectively. For the FERET database, no rise in accuracy is

Table 2: Parameter settings of nine algorithms.

Cultural emperor penguin optimizer (CEPO):
  Size of population: 80; control parameter ε: [1.5, 2]; control parameter ρ: [2, 3]; movement parameter M: 2; the constant η: 0.06; maximum iteration: 100.

Moth-flame optimization (MFO) [9]:
  Size of population: 80; convergence constant: [−1, −2]; logarithmic spiral: 0.75; maximum iteration: 100.

Grey wolf optimizer (GWO) [8]:
  Size of population: 80; control parameter: [0, 2]; maximum iteration: 100.

Particle swarm optimization (PSO) [6]:
  Size of population: 80; inertia weight: 0.75; cognitive and social coefficients: 1.8, 2; maximum iteration: 100.

Genetic algorithm (GA) [5]:
  Size of population: 80; probability of crossover: 0.9; probability of mutation: 0.05; maximum iteration: 100.

Cultural algorithm (CA) [14]:
  Size of population: 80; the constant: 0.06; maximum iteration: 100.

Emperor penguin optimizer (EPO) [12]:
  Size of population: 80; control parameters: [1.5, 2] and [2, 3]; movement parameter: 2; maximum iteration: 100.

Cultural firework algorithm (CFA) [17]:
  Size of population: 80; cost parameters: 0.025, 0.2; the constant: 0.3; maximum iteration: 100.

Emperor penguin and social engineering optimizer (EPSEO) [13]:
  Size of population: 80; rate of training: 0.2; rate of spotting an attack: 0.05; number of attacks: 50; control parameters: [1.5, 2] and [2, 3]; movement parameter: 2; maximum iteration: 100.


seen after the 16th, 22nd, 24th, 19th, 18th, 20th, 18th, 21st, and 25th iterations for CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM, respectively. Obviously, CEPO-SVM converges first among the nine proposed models, which verifies that EPO, with the help of the embedded framework provided by CA, can remarkably enhance the convergence rate. Therefore, the parameters optimized by CEPO are superior in making the classification performance of SVM better.

6.3. Stability of CEPO-SVM Comparison Experiment. For all nine models, stability is measured by the mean and standard deviation of the penalty parameter C and kernel parameter c. In this experiment, each model runs thirty experiments. Tables 6-9 show the comparison of the mean and standard deviation of the parameters among the nine models on the four face databases. As we can see from Tables 6-9, the mean of the penalty parameter C in CEPO-SVM is less than that of the other models on all four face databases, which could reduce the possibility of overlearning of the SVM. CEPO-SVM also provides the lowest standard deviation of the penalty parameter C among all nine models on all four databases, which verifies that the stability of CEPO-SVM is better. For all four databases, the mean and standard deviation of the kernel parameter c are close among the nine models; however, CEPO-SVM still has the minimum standard deviation of the kernel parameter c. Obviously, CEPO is more stable for optimizing the parameters of SVM for face recognition.

6.4. Robustness of CEPO-SVM Comparison Experiment. To prove the robustness of CEPO-SVM, salt-and-pepper noise is added to each image of the original four databases. The generated noise databases are used for face recognition

Table 3: The details of six benchmark test functions.

No.  Function       Formulation                                                                   D    Search range
F1   Sphere         f(x) = Σ_{i=1}^{D} x_i²                                                       10   [−100, 100]
F2   Schwefel 2.22  f(x) = Σ_{i=1}^{D} |x_i| + Π_{i=1}^{D} |x_i|                                  10   [−10, 10]
F3   Rastrigin      f(x) = Σ_{i=1}^{D} (x_i² − 10 cos(2πx_i) + 10)                                10   [−5.12, 5.12]
F4   Griewank       f(x) = Σ_{i=1}^{D} x_i²/4000 − Π_{i=1}^{D} cos(x_i/√i) + 1                    10   [−600, 600]
F5   Ackley         f(x) = −20 exp(−0.2 √((1/D) Σ_{i=1}^{D} x_i²)) − exp((1/D) Σ_{i=1}^{D} cos(2πx_i)) + 20 + e   10   [−32, 32]
F6   Schaffer       f(x) = 0.5 + ((sin √(x_1² + x_2²))² − 0.5)/(1 + 0.001(x_1² + x_2²))²          2    [−100, 100]
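The six benchmarks in Table 3 are standard test functions; all have a global minimum of 0 at the origin. The following Python definitions are our own transcription of the standard forms:

```python
import math

def sphere(x):                                                   # F1
    return sum(v * v for v in x)

def schwefel_222(x):                                             # F2
    return sum(abs(v) for v in x) + math.prod(abs(v) for v in x)

def rastrigin(x):                                                # F3
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def griewank(x):                                                 # F4
    s = sum(v * v for v in x) / 4000
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1

def ackley(x):                                                   # F5
    n = len(x)
    return (-20 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / n))
            - math.exp(sum(math.cos(2 * math.pi * v) for v in x) / n)
            + 20 + math.e)

def schaffer(x1, x2):                                            # F6
    r2 = x1 * x1 + x2 * x2
    return 0.5 + (math.sin(math.sqrt(r2)) ** 2 - 0.5) / (1 + 0.001 * r2) ** 2
```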

Table 4: The results of benchmark test functions by nine algorithms.

Algorithm       F1         F2         F3         F4         F5         F6
MFO    Mean   5.28E−11   1.69E−06   7.21E−10   8.38E−11   2.67E−12   5.09E−13
       Std    4.19E−12   3.77E−06   4.93E−11   3.41E−11   7.19E−12   4.81E−12
GWO    Mean   7.03E−13   4.43E−05   8.48E−12   9.76E−09   3.83E−13   2.92E−14
       Std    6.92E−13   7.82E−05   3.12E−12   4.28E−09   9.06E−12   6.37E−14
PSO    Mean   6.36E−06   6.14E−04   5.56E−04   2.07E−05   2.98E−07   8.14E−06
       Std    4.23E−05   8.25E−03   1.79E−04   6.23E−05   5.24E−06   9.93E−07
GA     Mean   2.55E−07   2.47E−04   5.32E−05   4.95E−05   1.56E−08   3.38E−07
       Std    3.14E−07   8.05E−04   3.87E−05   1.69E−05   6.06E−08   2.72E−07
EPO    Mean   8.57E−06   9.14E−05   3.52E−07   6.89E−07   5.83E−09   4.02E−08
       Std    4.87E−06   2.43E−05   4.99E−07   4.27E−07   7.18E−09   5.39E−08
CA     Mean   4.13E−12   1.49E−05   4.41E−09   6.65E−12   2.33E−07   7.53E−11
       Std    3.78E−13   1.36E−05   2.23E−10   4.89E−13   2.68E−08   6.21E−09
CEPO   Mean   1.28E−27   4.15E−14   6.54E−26   2.13E−21   2.39E−24   7.74E−29
       Std    4.63E−27   2.39E−14   7.16E−26   1.49E−21   4.76E−24   3.22E−29
CFA    Mean   4.23E−23   3.49E−10   2.37E−19   8.01E−17   5.41E−20   1.19E−24
       Std    1.89E−22   8.83E−10   9.44E−19   2.41E−17   2.93E−19   4.32E−23
EPSEO  Mean   8.67E−24   7.25E−11   3.91E−20   6.61E−18   4.25E−21   8.81E−22
       Std    5.42E−24   9.41E−11   8.23E−20   3.93E−18   7.89E−21   6.67E−21

Table 5: Average recognition rates (%) obtained by nine models on four databases.

Model        YALE   ORL    UMIST   FERET
MFO-SVM      96.6   96.8   98.0    97.7
GWO-SVM      97.1   97.0   96.5    98.2
PSO-SVM      96.9   96.5   97.1    97.2
GA-SVM       97.4   97.5   97.6    97.4
EPO-SVM      97.6   97.2   97.9    97.6
CA-SVM       97.8   97.4   97.7    97.4
CEPO-SVM     98.1   97.8   98.4    98.5
CFA-SVM      97.9   97.7   98.3    98.3
EPSEO-SVM    97.7   97.6   98.1    98.0


by CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM. For each noise database, we selected 70% of the noise samples as training samples and the remaining 30% as testing samples. Salt-and-pepper noise appears as random black or white pixels in an image. Four different levels of noise on the YALE database, namely, 0%, 5%, 10%, and 15%, are shown in Figure 8. From Figure 8, we can see that as the noise level increases, the face image becomes more blurred. Figure 9 shows the detailed recognition rates among the different models on the four databases. As we can see from Figure 9, CEPO-SVM obtains the highest recognition rate at all four noise levels on all four databases in comparison with the other eight models. Although the face image is seriously polluted by salt-and-pepper noise at a level of up to 15%, the recognition rate of CEPO-SVM can still attain


Figure 7: Number of iterations versus accuracy on four databases: (a) YALE database; (b) ORL database; (c) UMIST database; (d) FERET database.


82.5% on the YALE database, 83.5% on the ORL database, 81.1% on the UMIST database, and 82.1% on the FERET database, which far exceeds that of the other eight models. Besides, with the increase of the noise level, the recognition rate of CEPO-SVM on the four databases decreases more slowly than that of the other eight models. Obviously, the results show that CEPO-SVM can not only attain good noise-free face recognition accuracy but also achieve good accuracy under noise conditions, which

Table 6: Comparison of mean and standard deviation of parameters on the YALE face database among nine optimization algorithms.

             Penalty parameter C      Kernel parameter c
Algorithm    Mean       Std           Mean      Std
MFO          18.3721    4.4921        0.0633    0.1109
GWO          16.8983    4.0748        0.0518    0.0978
PSO          25.2319    7.4295        0.1292    0.1753
GA           28.3684    8.7016        0.1449    0.2387
CA           19.1670    4.8603        0.0672    0.1283
EPO          22.6792    5.7065        0.0812    0.1456
CEPO         13.2441    2.4982        0.0100    0.0112
CFA          15.4039    3.0409        0.0349    0.0778
EPSEO        15.6382    2.7636        0.0426    0.0692

Table 7: Comparison of mean and standard deviation of parameters on the ORL face database among nine optimization algorithms.

             Penalty parameter C      Kernel parameter c
Algorithm    Mean       Std           Mean      Std
MFO          18.3921    4.7817        0.0965    0.1143
GWO          19.0418    5.1526        0.0883    0.1277
PSO          26.1129    6.4990        0.1620    0.2563
GA           27.5823    8.0652        0.1953    0.2497
CA           19.9301    3.9721        0.0827    0.1683
EPO          22.2801    6.2076        0.1198    0.1762
CEPO         14.3873    2.0187        0.0121    0.0098
CFA          16.2265    4.1552        0.0573    0.0829
EPSEO        15.7192    3.5228        0.0525    0.0701

Table 8: Comparison of mean and standard deviation of parameters on the UMIST face database among nine optimization algorithms.

             Penalty parameter C      Kernel parameter c
Algorithm    Mean       Std           Mean      Std
MFO          17.2782    4.2048        0.0804    0.1449
GWO          16.9556    3.7649        0.0773    0.1283
PSO          24.9031    7.1991        0.1439    0.1958
GA           25.3415    8.2315        0.1557    0.2261
CA           18.8737    4.0103        0.0762    0.1813
EPO          22.0217    5.9525        0.0973    0.1502
CEPO         12.9109    2.2081        0.0114    0.0101
CFA          15.2271    3.4594        0.0638    0.0822
EPSEO        15.9964    3.0182        0.0514    0.0769

Table 9: Comparison of mean and standard deviation of parameters on the FERET face database among nine optimization algorithms.

             Penalty parameter C      Kernel parameter c
Algorithm    Mean       Std           Mean      Std
MFO          18.1928    4.1892        0.0619    0.1023
GWO          17.3209    3.9713        0.0557    0.0921
PSO          25.5827    6.9143        0.1401    0.1882
GA           26.6929    8.4939        0.1379    0.2273
CA           15.3462    5.0541        0.0924    0.1419
EPO          21.2679    5.6314        0.0846    0.1364
CEPO         11.1811    2.1192        0.0126    0.0087
CFA          15.0647    3.7672        0.0489    0.0748
EPSEO        14.4689    3.4295        0.0377    0.0661



Figure 8: Four different levels of noise image on the YALE database: (a) 0%; (b) 5%; (c) 10%; (d) 15%.



verifies that the robustness of CEPO-SVM is superior to that of the other eight models.
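Salt-and-pepper corruption as used in this experiment can be sketched as follows (an illustrative Python fragment; the sampling-with-replacement scheme and the seeding are our own choices, not details taken from the paper):

```python
import random

def salt_and_pepper(image, level, seed=0):
    """Corrupt a grayscale image (list of rows, values 0-255) by forcing a
    fraction `level` of pixels to black (0) or white (255) at random."""
    rng = random.Random(seed)
    noisy = [row[:] for row in image]
    h, w = len(image), len(image[0])
    for _ in range(int(level * h * w)):
        i, j = rng.randrange(h), rng.randrange(w)
        noisy[i][j] = rng.choice((0, 255))  # salt or pepper
    return noisy
```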

6.5. Run Time Analysis. The average training time over thirty independent runs of the proposed CEPO-SVM and the eight competitor intelligent-algorithm-based SVMs is shown in Table 10. As we can see from Table 10, for all four databases, the average training time of the proposed CEPO-SVM is more than that of the six models with single algorithms, i.e., MFO-SVM, GWO-SVM, PSO-SVM, GA-SVM, EPO-SVM, and CA-SVM. This may be because hybridizing two algorithms increases the training time. Although CEPO-SVM takes more time than these six models with single algorithms, it can be considered better than them because of its high classification performance. Moreover, for all four databases, the average training time of the proposed CEPO-SVM is less than that of the two models with hybrid algorithms, i.e., CFA-SVM and EPSEO-SVM. Obviously, CEPO-SVM has better performance than these two models with hybrid algorithms.

6.6. Limitation Analysis. Like other hybrid algorithms, although CEPO provides superior performance in face recognition, due to its large amount of computation, the proposed CEPO costs more time per iteration compared with single algorithms. Besides, when CEPO is applied to super-high-dimensional optimization problems, its performance may be reduced.


Figure 9: Noise percentages versus accuracy on four databases: (a) YALE database; (b) ORL database; (c) UMIST database; (d) FERET database.

Table 10: Average training time (s) obtained by nine models on four databases.

Model        YALE    ORL     UMIST    FERET
MFO-SVM      47.93   61.54   70.95    46.03
GWO-SVM      45.06   58.81   72.31    50.54
PSO-SVM      43.95   54.12   65.69    48.91
GA-SVM       69.71   82.73   98.03    79.42
EPO-SVM      40.68   47.49   68.55    42.87
CA-SVM       46.33   56.87   77.41    53.24
CEPO-SVM     59.76   79.23   95.76    67.34
CFA-SVM      78.25   81.96   112.87   84.59
EPSEO-SVM    82.43   83.15   107.43   86.77


7 Conclusions

In this paper with respective advantages of EPO and CA wepropose a hybrid metaheuristic algorithm named CEPO withthe help of the embedded framework provided by CA topromote each other and overcome each otherrsquos self-defects)e proposed algorithm has been experimented on sixbenchmark test functions compared with other eight state-of-the-art algorithms Furthermore we apply this hybrid met-aheuristic algorithm to automatic learning of the networkparameters of SVM to solve the classification problem for facerecognition )e proposed classifier model has been tested onfour face databases that are used commonly and interna-tionally by comparing with models with other eight state-of-the-art algorithms )e experimental results show that theproposed model has better performance in terms of accuracyconvergence rate stability robustness and run time

We suggest future work in several directions. First, the proposed model has only been tested on four face databases; other new face databases should be tested to extend the current work. Second, the proposed hybrid algorithm should be applied to other classifiers for face recognition. Lastly, the proposed hybrid algorithm should be applied to other optimization problems, such as digital filter design, cognitive radio spectrum allocation, and image segmentation.

List of Definitions of Partial Parameters

x_i: The position of the ith emperor penguin in the D-dimensional search space.
s^t: The situational knowledge component.
N^t_j: The normative knowledge in the jth dimension and in the tth generation.
I^t_j: The interval of normative knowledge in the jth dimension.
l^t_j: The lower boundary in the jth dimension and in the tth generation.
L^t_j: The objective value of the lower boundary l^t_j of the jth parameter.
u^t_j: The upper boundary in the jth dimension and in the tth generation.
U^t_j: The objective value of the upper boundary u^t_j of the jth parameter.
D^t_ep: The distance between the emperor penguin and the optimal solution.
M: The movement parameter.
c: The scalar.
C: The penalty parameter.
ξ_h: The relaxing variables.
a_h: The training samples.
α_h: The Lagrange multipliers.
r: The scale of the Gabor filter.
o: The orientation of the Gabor filter.
Q(δ): The concatenated column vector (augmented feature vector).
Σ_Q(δ): The covariance matrix of the augmented feature vector Q(δ).
Ω: The orthogonal eigenvector matrix.
Λ: The diagonal eigenvalue matrix.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant no. 61571149), the Heilongjiang Province Key Laboratory Fund of High Accuracy Satellite Navigation and Marine Application Laboratory (Grant no. HKL-2020-Y01), and the Initiation Fund for Postdoctoral Research in Heilongjiang Province (Grant no. LBH-Q19098).

References

[1] S. Jiang, H. Mao, Z. Ding, and Y. Fu, "Deep decision tree transfer boosting," IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 2, pp. 383-395, 2020.

[2] Y. Xia and J. Wang, "A bi-projection neural network for solving constrained quadratic optimization problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 2, pp. 214-224, 2016.

[3] S. Zhang, X. Li, M. Zong, X. Zhu, and R. Wang, "Efficient kNN classification with different numbers of nearest neighbors," IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 5, pp. 1774-1785, 2018.

[4] A. Mathur and G. M. Foody, "Multiclass and binary SVM classification: implications for training and classification users," IEEE Geoscience and Remote Sensing Letters, vol. 5, no. 2, pp. 241-245, 2008.

[5] J. Ruan, H. Jiang, X. Li, Y. Shi, F. T. S. Chan, and W. Rao, "A granular GA-SVM predictor for big data in agricultural cyber-physical systems," IEEE Transactions on Industrial Informatics, vol. 15, no. 12, pp. 6510-6521, 2019.

[6] F. Melgani and Y. Bazi, "Classification of electrocardiogram signals with support vector machines and particle swarm optimization," IEEE Transactions on Information Technology in Biomedicine, vol. 12, no. 5, pp. 667-677, 2008.

[7] L. Wanling, W. Zhensheng, and S. Xiangjun, "Parameters optimization of support vector machine based on simulated annealing and improved QPSO," in Proceedings of the 2012 International Conference on Industrial Control and Electronics Engineering, pp. 1089-1091, Xi'an, China, August 2012.

[8] S. Mirjalili, S. M. Mirjalili, and A. Lewis, "Grey wolf optimizer," Advances in Engineering Software, vol. 69, pp. 46-61, 2014.

[9] S. Mirjalili, "Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm," Knowledge-Based Systems, vol. 89, pp. 228-249, 2015.

[10] S. Kaur, L. K. Awasthi, A. L. Sangal et al., "Tunicate swarm algorithm: a new bio-inspired based metaheuristic paradigm for global optimization," Engineering Applications of Artificial Intelligence, vol. 90, Article ID 103541, 2020.

[11] M. Dehghani, Z. Montazeri, O. P. Malik et al., "A new methodology called dice game optimizer for capacitor placement in distribution systems," Electrical Engineering & Electromechanics, vol. 1, pp. 61-64, 2020.

[12] G. Dhiman and V. Kumar, "Emperor penguin optimizer: a bio-inspired algorithm for engineering problems," Knowledge-Based Systems, vol. 159, pp. 20-50, 2018.

[13] S. K. Baliarsingh, W. Ding, S. Vipsita et al., "A memetic algorithm using emperor penguin and social engineering optimization for medical data classification," Applied Soft Computing, vol. 85, 2019.

[14] R. G. Reynolds, "An introduction to cultural algorithms," in Proceedings of the Third Annual Conference on Evolutionary Programming, pp. 131-139, San Diego, CA, USA, January 1994.

[15] C. Lin, C. Chen, and C. Lin, "A hybrid of cooperative particle swarm optimization and cultural algorithm for neural fuzzy networks and its prediction applications," IEEE Transactions on Systems, Man, and Cybernetics, vol. 39, no. 1, pp. 55-68, 2009.

[16] H. Gao, C. Xu, and C. Li, "Quantum-inspired cultural bacterial foraging algorithm for direction finding of impulse noise," International Journal of Innovative Computing and Applications, vol. 6, no. 1, pp. 44-54, 2014.

[17] H. Gao and M. Diao, "Cultural firework algorithm and its application for digital filters design," International Journal of Modelling, Identification and Control, vol. 14, no. 4, pp. 324-331, 2011.

[18] C. Liu, "Gabor-based kernel PCA with fractional power polynomial models for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 572-581, 2004.

[19] J. Ilonen, J.-K. Kamarainen, and H. Kalviainen, "Efficient computation of Gabor features," Research Report 100, Lappeenranta University of Technology, Department of Information Technology, Lappeenranta, Finland, 2005.

[20] C. Liu and H. Wechsler, "Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition," IEEE Transactions on Image Processing, vol. 11, no. 4, pp. 467-476, 2002.

[21] M. D. Farrell and R. M. Mersereau, "On the impact of PCA dimension reduction for hyperspectral detection of difficult targets," IEEE Geoscience and Remote Sensing Letters, vol. 2, no. 2, pp. 192-195, 2005.

[22] B. Fei and J. Liu, "Binary tree of SVM: a new fast multiclass training and classification algorithm," IEEE Transactions on Neural Networks, vol. 17, no. 3, pp. 696-704, 2006.



characteristics of spatial locality, spatial frequency, and orientation selectivity.

Let Y(z) represent the grey-level distribution of an image; then the Gabor wavelet representation of the image can be given by

$$G_{o,r}(z) = Y(z) \ast \varphi_{o,r}(z), \quad (2)$$

where $G_{o,r}(z)$ defines the convolution result between the image Y(z) and the Gabor filter $\varphi_{o,r}(z)$ at orientation o and scale r. In order to improve the computation speed, the convolution can be computed via the FFT [20]. To contain the different spatial localities, spatial frequencies, and orientation selectivities, all these representation results are concatenated to derive an augmented feature vector Q. As in [19], before the concatenation, each $G_{o,r}(z)$ is downsampled by a factor δ at the different scales and orientations. Therefore, the augmented Gabor feature vector $Q^{(\delta)}$ can be given by

$$Q^{(\delta)} = \left( \left(G^{(\delta)}_{0,0}\right)^{T}, \left(G^{(\delta)}_{0,1}\right)^{T}, \ldots, \left(G^{(\delta)}_{4,7}\right)^{T} \right)^{T}, \quad (3)$$

where T denotes the transpose operator, δ is the downsampling factor, and $G^{(\delta)}_{o,r}$ is the concatenated column vector.


Figure 1: Block diagram of face recognition based on Gabor-PCA.

Figure 2: The real part of the 40 Gabor filters.

Figure 3: The magnitude of the 40 Gabor filters.


2.2. Principal Component Analysis for Feature Optimization. The augmented Gabor feature vector introduced in equation (3) has very high dimensionality, $Q^{(\delta)} \in R^{O}$, where O is the dimensionality of the vector space. Obviously, the high dimensionality seriously affects the computing speed and recognition rate in the process of face recognition. With PCA [21], we can find an orthogonal basis for the feature vector, sort the dimensions in order of importance, and discard the low-significance dimensions. Let $\Sigma_{Q^{(\delta)}} \in R^{O \times O}$ be the covariance matrix of the augmented feature vector $Q^{(\delta)}$:

$$\Sigma_{Q^{(\delta)}} = E\left\{ \left[ Q^{(\delta)} - E\left(Q^{(\delta)}\right) \right] \left[ Q^{(\delta)} - E\left(Q^{(\delta)}\right) \right]^{T} \right\}, \quad (4)$$

where E(·) is the expectation operator. The covariance matrix $\Sigma_{Q^{(\delta)}}$ can be factorized into the following form:

$$\Sigma_{Q^{(\delta)}} = \Omega \Lambda \Omega^{T}, \quad (5)$$

$$\Omega = [U_1, U_2, \ldots, U_O], \quad (6)$$

$$\Lambda = \mathrm{diag}\{p_1, p_2, \ldots, p_O\}, \quad (7)$$

where $\Omega \in R^{O \times O}$ is an orthogonal eigenvector matrix and $\Lambda \in R^{O \times O}$ is a diagonal eigenvalue matrix whose diagonal elements are arranged in descending order ($p_1 \ge p_2 \ge \cdots \ge p_O$).

An important property of PCA is that it is the optimal signal reconstruction in the sense of minimum mean-square error when a subset of principal components is used to represent the original signal [19]. Following this property, PCA can be applied to dimensionality reduction:

$$X^{(\delta)} = S^{T} Q^{(\delta)}, \quad (8)$$

where $S = [U_1, U_2, \ldots, U_J]$, $J < O$, and $S \in R^{O \times J}$. The lower-dimensional vector $X^{(\delta)} \in R^{J}$ captures the most expressive features of the original data $Q^{(\delta)}$.
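As a concrete illustration of equations (4)-(8), the reduction can be sketched with plain numpy. The function name pca_reduce, the row-vector convention, and the toy data are our own assumptions for illustration, not part of the paper:

```python
import numpy as np

def pca_reduce(Q, J):
    # Project feature vectors (rows of Q) onto the J leading principal
    # components, following equations (4)-(8): covariance matrix,
    # eigendecomposition cov = Omega Lambda Omega^T, then X = S^T Q
    # with S = [U_1, ..., U_J].
    centered = Q - Q.mean(axis=0)            # Q - E(Q)
    cov = centered.T @ centered / len(Q)     # covariance matrix, eq. (4)
    eigval, eigvec = np.linalg.eigh(cov)     # eigh returns ascending eigenvalues
    order = np.argsort(eigval)[::-1]         # enforce p1 >= p2 >= ... >= pO
    S = eigvec[:, order[:J]]                 # S in R^{O x J}, J < O
    return centered @ S                      # eq. (8), one row per sample

# toy usage: 20 feature vectors of dimension O = 8 reduced to J = 3
rng = np.random.default_rng(0)
X = pca_reduce(rng.normal(size=(20, 8)), J=3)
print(X.shape)  # (20, 3)
```

In practice Q would hold one downsampled, concatenated Gabor vector per face image; the sketch only shows the linear algebra of the projection step.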

3. Formulation of Parameter Optimization of SVM for Classification

3.1. Model of Support Vector Machine. The support vector machine (SVM) [22] is a binary classifier with strong learning and generalization ability. The key point of SVM is to find the optimal hyperplane that accurately separates the two classes while ensuring that the classification gap is as large as possible. Figure 4 shows the optimal hyperplane of SVM, where H indicates the hyperplane and the margin represents the gap between class H1 and class H2.

Suppose the training data V is given by

$$V = \left\{ (a_h, b_h) \mid a_h \in R^{w},\ b_h \in \{-1, +1\},\ h = 1, 2, \ldots, n \right\}, \quad (9)$$

where $a_h$ are the w-dimensional training samples, $b_h$ are the labels, and n is the number of training data. For linearly separable data, the hyperplane g(a) = 0 that separates the given data can be determined by

$$g(a) = \omega^{T} a + c, \quad (10)$$

where ω is a w-dimensional vector and c is a scalar; ω and c determine the boundary. The optimal hyperplane is obtained by minimizing $\|\omega\|^{2}/2$ subject to

$$b_h\left(\omega^{T} a_h + c\right) \ge 1. \quad (11)$$

For data that cannot be separated linearly, the relaxing variables $\xi_h$ are introduced to allow for optimal generalization and classification by controlling their size. Then the optimal hyperplane can be obtained from the following optimization problem:

$$\Psi(\omega, \xi) = \frac{1}{2}\|\omega\|^{2} + C \sum_{h=1}^{n} \xi_h, \quad (12)$$

$$\text{subject to } b_h\left(\omega^{T} a_h + c\right) \ge 1 - \xi_h,\ \xi_h \ge 0, \quad (13)$$

where C denotes the penalty parameter. To satisfy the Karush-Kuhn-Tucker (KKT) conditions, the Lagrange multipliers $\alpha_h$ are introduced. The abovementioned optimization problem is converted into the dual quadratic optimization problem

$$\min_{\alpha} \frac{1}{2} \sum_{h=1}^{n} \sum_{k=1}^{n} b_h b_k \alpha_h \alpha_k \left(a_h \cdot a_k\right) - \sum_{k=1}^{n} \alpha_k, \quad (14)$$

$$\text{subject to } \sum_{h=1}^{n} b_h \alpha_h = 0,\ 0 \le \alpha_h \le C. \quad (15)$$

Figure 4: The schematic model of a linear SVM (hyperplane H, the margin, and the boundary classes H1 and H2).

Thus, the ultimate classification function can be obtained by solving the dual optimization problem:

$$g(a) = \mathrm{sgn}\left\{ \sum_{h=1}^{n} \alpha_h b_h \left(a_h \cdot a\right) + c \right\}, \quad (16)$$

where sgn represents the symbolic function.

Kernel functions can be used to nonlinearly map the data from the low-dimensional input space to a high-dimensional space. Therefore, the linear indivisibility of the input can be transformed into a linearly separable problem by this mapping. The kernel function can be defined as follows:

$$K(a_h, a_k) = \psi(a_h) \cdot \psi(a_k). \quad (17)$$

Then the optimal classification function can be given by

$$g(a) = \mathrm{sgn}\left\{ \sum_{h=1}^{n} \alpha_h b_h K\left(a_h, a\right) + c \right\}. \quad (18)$$

Several types of kernel functions, such as the Linear Kernel, the Polynomial Kernel, and the Radial Basis Function (RBF), are widely used. However, the RBF has the advantages of realizing a nonlinear mapping with few parameters and few numerical difficulties. In this paper, the RBF is selected for face recognition, which can be represented as follows:

$$K(a, b) = \exp\left(-c\|a - b\|^{2}\right), \quad (19)$$

where c denotes the kernel parameter.
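As a small numeric check of equation (19), the RBF kernel can be evaluated directly; the function name and the sample points below are ours, for illustration only:

```python
import numpy as np

def rbf_kernel(a, b, c):
    # RBF kernel of equation (19): K(a, b) = exp(-c * ||a - b||^2),
    # where c is the kernel parameter tuned jointly with the penalty C.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.exp(-c * np.sum((a - b) ** 2))

k_same = rbf_kernel([1.0, 2.0], [1.0, 2.0], c=0.5)  # identical points -> K = 1
k_far = rbf_kernel([1.0, 2.0], [3.0, 4.0], c=0.5)   # ||a - b||^2 = 8 -> exp(-4)
print(k_same, k_far)
```

A larger c narrows the kernel, so K decays faster with distance; this is exactly the width effect the optimizer tunes.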

3.2. Objective Function of SVM. With a view to obtaining a training model with high classification accuracy, the penalty parameter C, which adjusts the confidence scale, and the kernel parameter c, which determines the width of the kernel and the range of the data points, can be optimized by an effective intelligent algorithm. The mean square error between the SVM outputs and the actual labels is selected as the objective function of the intelligent algorithm:

$$\text{objective} = \frac{1}{n} \sum_{h=1}^{n} \left(b_h - \hat{b}_h\right)^{2}, \quad (20)$$

where $\hat{b}_h$ is the output value of the model for the hth sample and $b_h$ is the corresponding actual value.
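A minimal sketch of equation (20); the predicted labels below are hypothetical stand-ins for the SVM outputs that would be obtained under one candidate (C, c) pair:

```python
import numpy as np

def svm_objective(b_pred, b_true):
    # Mean square error of equation (20); the metaheuristic searches for
    # the parameter pair (C, c) that minimizes this value on training data.
    b_pred = np.asarray(b_pred, dtype=float)
    b_true = np.asarray(b_true, dtype=float)
    return float(np.mean((b_true - b_pred) ** 2))

# one flipped label out of four contributes (-1 - 1)^2 = 4, so the mean is 1.0
obj = svm_objective([1, -1, 1, 1], [1, -1, -1, 1])
print(obj)  # 1.0
```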

4. Design of Cultural Emperor Penguin Optimizer for Parameter Optimization of SVM

4.1. The Design of Cultural Emperor Penguin Optimizer. EPO is a novel optimization algorithm presented by Dhiman and Kumar in 2018 [12], inspired by the huddling behavior of emperor penguins. In this paper, a hybrid algorithm named CEPO is proposed to solve real-number optimization problems with the help of the cultural algorithm basic framework shown in Figure 5. The key idea of CEPO is to obtain problem-solving knowledge from the huddling behavior of the emperor penguins and, in return, make use of that knowledge to guide the evolution of the emperor penguin population.

Suppose CEPO is designed for a general minimization problem:

$$\min f(x_i), \quad (21)$$

where $x_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ denotes the position of the ith emperor penguin in the D-dimensional search space, $x^{\min}_{j} < x_{ij} < x^{\max}_{j}$ ($j = 1, 2, \ldots, D$), f(·) is the objective function, and $f(x_i)$ denotes the objective value of the position $x_i$. $x^{\min}_{j}$ and $x^{\max}_{j}$ represent the lower and upper boundaries of the position of the emperor penguins in the jth dimension.

The belief space of the emperor penguin population in the tth generation is defined by $s^t$ and $N^t_j$, where $s^t$ is the situational knowledge component and $N^t_j$ is the normative knowledge, which represents the value-space information for each parameter in the jth dimension and in the tth generation. $N^t_j$ denotes ⟨I, L, U⟩ with $I^t_j = [l^t_j, u^t_j]$, where $I^t_j$ represents the interval of normative knowledge in the jth dimension. The lower boundary $l^t_j$ and the upper boundary $u^t_j$ are initialized according to the value range of the variables given by the problem. $L^t_j$ represents the objective value of the lower boundary $l^t_j$ of the jth parameter, and $U^t_j$ represents the objective value of the upper boundary $u^t_j$ of the jth parameter.

The acceptance function is used to select the emperor penguins who can directly influence the current belief space. In CEPO, the acceptance function selects the top 20% of the current population space as cultural individuals to update the belief space.

Situational knowledge $s^t$ can be updated by the update function

$$s^{t+1} = \begin{cases} x^{t+1}_{\text{best}}, & \text{if } f\left(x^{t+1}_{\text{best}}\right) < f\left(s^{t}\right) \\ s^{t}, & \text{else} \end{cases} \quad (22)$$

where $x^{t+1}_{\text{best}}$ is the optimal position of the emperor penguin population space in the (t+1)th generation.

Figure 5: The cultural algorithm basic framework (the population space and the belief space interact through the Accept(), Update(), Influence(), Select(), Generate(), and Objective() functions).


Assume that, for the qth cultural individual, a random variable $\theta_q$ in the range [0, 1] is produced. The qth cultural individual affects the lower boundary of the normative knowledge in the jth dimension when $\theta_q < 0.5$ is satisfied. Normative knowledge $N^t_j$ can be updated by the update function

$$l^{t+1}_{j} = \begin{cases} x^{t+1}_{q,j}, & \text{if } x^{t+1}_{q,j} \le l^{t}_{j} \text{ or } f\left(x^{t+1}_{q}\right) < L^{t}_{j} \\ l^{t}_{j}, & \text{else} \end{cases} \qquad L^{t+1}_{j} = \begin{cases} f\left(x^{t+1}_{q}\right), & \text{if } x^{t+1}_{q,j} \le l^{t}_{j} \text{ or } f\left(x^{t+1}_{q}\right) < L^{t}_{j} \\ L^{t}_{j}, & \text{else} \end{cases} \quad (23)$$

The qth cultural individual affects the upper boundary of the normative knowledge in the jth dimension when $\theta_q \ge 0.5$ is satisfied:

$$u^{t+1}_{j} = \begin{cases} x^{t+1}_{q,j}, & \text{if } x^{t+1}_{q,j} \ge u^{t}_{j} \text{ or } f\left(x^{t+1}_{q}\right) < U^{t}_{j} \\ u^{t}_{j}, & \text{else} \end{cases} \qquad U^{t+1}_{j} = \begin{cases} f\left(x^{t+1}_{q}\right), & \text{if } x^{t+1}_{q,j} \ge u^{t}_{j} \text{ or } f\left(x^{t+1}_{q}\right) < U^{t}_{j} \\ U^{t}_{j}, & \text{else} \end{cases} \quad (24)$$
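Per dimension, the updates of equations (23) and (24) reduce to two comparisons; a minimal sketch in Python, where the function names and the toy boundary values are ours:

```python
def update_lower(l, L, x_qj, f_q):
    # Equation (23): accept the individual's coordinate as the new lower
    # boundary if it lies below l or improves on the boundary objective L.
    if x_qj <= l or f_q < L:
        return x_qj, f_q
    return l, L

def update_upper(u, U, x_qj, f_q):
    # Equation (24): the symmetric rule for the upper boundary.
    if x_qj >= u or f_q < U:
        return x_qj, f_q
    return u, U

# a cultural individual at -6.0 extends the lower boundary from -5.0
l_new, L_new = update_lower(l=-5.0, L=30.0, x_qj=-6.0, f_q=36.0)
print(l_new, L_new)  # -6.0 36.0
```

Note that the interval can either widen (the individual falls outside it) or tighten (the individual's objective beats the stored boundary objective), which is how the belief space tracks the promising region.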

Situational knowledge and normative knowledge can be used to guide the evolution of the emperor penguin population through the influence function. In CEPO, a selection operator β is produced to select one of two ways to influence the evolution of the emperor penguin population:

$$\beta = \frac{\text{Maxiteration} - t}{\text{Maxiteration}}, \quad (25)$$

where Maxiteration denotes the maximum number of iterations.

Assume that, for the ith emperor penguin, a random variable $\lambda_i$ in the range [0, 1] is produced. The first way, applied when $\lambda_i \le \beta$, is to update the position of the emperor penguin by changing the search size and direction of the variation with the belief space. The position of the emperor penguin in the jth dimension can be updated by

$$x^{t+1}_{ij} = \begin{cases} x^{t}_{ij} + \left| \mathrm{size}\left(I^{t}_{j}\right) \cdot N(0,1) \right|, & \text{if } x^{t}_{ij} < s^{t}_{j} \\ x^{t}_{ij} - \left| \mathrm{size}\left(I^{t}_{j}\right) \cdot N(0,1) \right|, & \text{if } x^{t}_{ij} > s^{t}_{j} \\ x^{t}_{ij} + \eta \cdot \mathrm{size}\left(I^{t}_{j}\right) \cdot N(0,1), & \text{else} \end{cases} \quad (26)$$

where N(0, 1) is a random number subject to the standard normal distribution, $\mathrm{size}(I^t_j)$ is the length of the adjustable interval of the jth parameter in the belief space in the tth generation, and η is set in the range [0.01, 0.6] following [14].

The other way, applied when $\lambda_i > \beta$, follows a series of steps from EPO: the huddle boundary generation, the temperature profile around the huddle, the distance calculation between emperor penguins, and the position update of the emperor penguins. The specific steps can be represented as follows:

$$T' = T - \frac{t - \text{Maxiteration}}{\text{Maxiteration}}, \qquad T = \begin{cases} 0, & \text{if } R \ge 0.5 \\ 1, & \text{if } R < 0.5 \end{cases} \quad (27)$$

where T′ represents the temperature profile around the huddle, T is the time for finding the best optimal solution, and R is a random variable in the range [0, 1].

$$D^{t}_{ep} = \left| S_{ep}\left(A^{t}\right) \cdot x^{t}_{\text{best}} - B^{t} \cdot x^{t}_{i} \right|, \quad (28)$$

where $D^{t}_{ep}$ denotes the distance between the emperor penguin and the optimal solution, $x^{t}_{\text{best}}$ represents the current optimal solution found in the emperor penguin population space in the tth generation, and $S_{ep}$ represents the social forces of the emperor penguin that drive convergence towards the optimal solution. $A^{t}$ and $B^{t}$ are used to avoid collisions between neighboring emperor penguins, and $B^{t}$ is a random variable in the range [0, 1]. $A^{t}$ can be computed as follows:

$$A^{t} = \left( M \times \left( T' + P^{t}_{\text{grid}}(\text{Accuracy}) \right) \times \mathrm{rand}() \right) - T', \qquad P^{t}_{\text{grid}}(\text{Accuracy}) = \left| x^{t}_{\text{best}} - x^{t}_{i} \right|, \quad (29)$$

where M is the movement parameter, which holds a gap between emperor penguins for collision avoidance, and $P^{t}_{\text{grid}}(\text{Accuracy})$ defines the absolute difference between emperor penguins. $S_{ep}(A^{t})$ in equation (28) is computed as follows:

$$S_{ep}\left(A^{t}\right) = \sqrt{\varepsilon \cdot e^{-t/\rho} - e^{-t}}, \quad (30)$$

where e represents the base of the natural logarithm, and ε and ρ are two control parameters for better exploration and exploitation, in the ranges [1.5, 2] and [2, 3], respectively. Ultimately, the position of the emperor penguin is updated as follows:

$$x^{t+1}_{i} = x^{t}_{\text{best}} - A^{t} \cdot D^{t}_{ep}. \quad (31)$$

The algorithm procedure of CEPO is sketched in Algorithm 1.

4.2. CEPO for Parameter Optimization of SVM. During the iterative searching process, each emperor penguin adjusts its position by comparing the values of the objective function, and the position $x_i$ is taken as the parameter pair (C, c) for the repeated training of SVM and the update of each emperor penguin. Figure 6 shows a schematic diagram of SVM based on CEPO.

5. The Experiments

5.1. Face Databases. In the following experiments, to test our proposed model, we select four face databases that are used commonly and internationally, i.e., YALE (http://cvc.yale.edu/projects/yalefaces/yalefaces.html), ORL (http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html), UMIST (http://images.ee.umist.ac.uk/danny/database.html), and FERET (http://www.frvt.org). Specifically, the YALE database contains 165 images of 15 individuals, including changes in illumination, expression, and posture. The ORL database consists of 400 photographs of 40 human subjects with different expressions and variations at different scales and orientations. The UMIST database includes 564 images of 20 human subjects with different angles and poses. The FERET database consists of a total of 1400 images of 200 human subjects, corresponding to different poses, expressions, and illumination conditions. To fully verify the validity and rationality of the experiments, the configuration of training and testing samples on the four face databases is shown in Table 1.

5.2. State-of-the-Art Algorithms for Comparison. To validate the performance of CEPO, the following eight state-of-the-art metaheuristics are used for comparison: Moth-Flame

Algorithm CEPO
Input: the emperor penguin population x_i (i = 1, 2, ..., m)
Output: the optimal position of the emperor penguin
(1) Procedure CEPO
(2)   Initialize the size of the population, Maxiteration, and the parameters of EPO and CA
(3)   Initialize the situational knowledge and normative knowledge of the belief space
(4)   Initialize the population space of EPs
(5)   Compute the fitness of each EP
(6)   Arrange the fitness values of the EPs
(7)   Initialize the belief space by acceptance proportion top 20% and AdjustCulture
(8)   While (t < Maxiteration) do
(9)     Calculate T and T' using equation (27)
(10)    For i <- 1 to m do
(11)      lambda_i <- Rand()
(12)      For j <- 1 to D do
(13)        If (lambda_i <= beta), with beta from equation (25)
(14)          Update the position of the EP using equation (26)
(15)        Else
(16)          Compute A and B using equation (29)
(17)          Compute S_ep(A) using equation (30)
(18)          Compute the new position of the EP using equation (31)
(19)          Compute the fitness of the current EP
(20)          Update the better position of the EP compared with the previous position
(21)        End if
(22)      End for
(23)    End for
(24)    Amend the EPs which cross the boundary
(25)    Arrange the fitness values of the EPs
(26)    Update the belief space by acceptance proportion top 20% and AdjustCulture
(27)    t <- t + 1
(28)  End while
(29)  Return the optimal position
(30) End procedure
(31) Procedure AdjustCulture
(32)   Update situational knowledge using equation (22)
(33)   For q <- 1 to m/5 do
(34)     theta_q <- Rand()
(35)     If (theta_q < 0.5)
(36)       For j <- 1 to D do
(37)         Update the lower boundary of normative knowledge using equation (23)
(38)     Else
(39)       For j <- 1 to D do
(40)         Update the upper boundary of normative knowledge using equation (24)
(41)     End if
(42)   End for
(43) End procedure

ALGORITHM 1: The algorithm procedure of CEPO.


Optimization (MFO) [9], Grey Wolf Optimizer (GWO) [8], Particle Swarm Optimization (PSO) [6], Genetic Algorithm (GA) [5], Cultural Algorithm (CA) [14], Emperor Penguin Optimizer (EPO) [12], Cultural Firework Algorithm (CFA) [17], and Emperor Penguin and Social Engineering Optimizer (EPSEO) [13].

5.3. Experimental Setup. The experimental environment is MATLAB 2018a with LIBSVM, on a computer with an Intel Core i5 processor and 16 GB of RAM. Table 2 shows the parameter settings of the proposed CEPO and the eight competitor intelligent algorithms, i.e., MFO, GWO, PSO, GA, CA, EPO, CFA, and EPSEO. The parameter values of these algorithms are set as recommended in their original papers.

6. Simulation Results and Discussion

6.1. Performance of CEPO Comparison Experiment. Six benchmark test functions are applied to verify the superiority and applicability of CEPO. Table 3 shows the details of the six benchmark test functions, including the single-peak benchmark functions Sphere (F1) and Schwefel 2.22 (F2) and the multipeak benchmark functions Rastrigin (F3), Griewank (F4), Ackley (F5), and Schaffer (F6). For each benchmark test function, each algorithm carries out thirty independent experiments. The mean and the standard deviation of the thirty independent results are shown in Table 4.

As can be seen, the statistical results of CEPO on the six benchmark test functions are clearly superior to those of the other eight comparison algorithms. Under the same optimization conditions, both evaluation indexes of CEPO are better by several or even ten orders of magnitude, which proves that CEPO has strong optimization performance. The superior mean index shows that CEPO maintains a high overall optimization level across the thirty independent experiments and implies better single-run optimization accuracy, while the superior standard deviation index proves that CEPO has strong optimization stability and robustness. Furthermore, under the same test constraints, CEPO attains high optimization accuracy on both the single-peak functions (F1-F2) and the multipeak functions (F3-F6), which indicates strong local exploitation performance and good convergence accuracy; in particular, it still reaches high-quality objective values on the four multipeak functions (F3-F6), which verifies its better global exploration performance and stronger ability to escape local extrema. If the target-value optimization precision is taken as the evaluation index, the preset precision threshold can be set as tight as 1.0E−12; for functions F1, F3, and F6, the threshold can even be tightened to 1.0E−25 while still ensuring the success rate of optimization. This precision effectively meets the error tolerance of engineering problems, which further proves that CEPO has good application potential in engineering optimization.

Figure 6: Block diagram of SVM based on CEPO.

Table 1: Configuration of training and testing samples on the four face databases.

Database   Total number   Training samples   Testing samples
YALE       165            70 percent         the rest
ORL        400            70 percent         the rest
UMIST      564            70 percent         the rest
FERET      200            70 percent         the rest
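The 70-percent split of Table 1 can be reproduced as follows (numpy only; the function name and seed are our own, and since the paper does not state how a fractional sample count is rounded, this sketch simply truncates, giving 115 training samples for YALE's 165 images):

```python
import numpy as np

def split_indices(n_total, train_ratio=0.7, seed=0):
    # Randomly assign train_ratio of the samples to training and the rest
    # to testing, mirroring the per-database configuration of Table 1.
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_total)
    n_train = int(n_total * train_ratio)   # truncates, e.g. 165 -> 115
    return order[:n_train], order[n_train:]

train_idx, test_idx = split_indices(165)   # e.g. the YALE database size
print(len(train_idx), len(test_idx))  # 115 50
```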


6.2. Performance of CEPO-SVM Comparison Experiment. To verify the superiority of the proposed CEPO-SVM classification model for face recognition, CEPO and the eight competitor intelligent algorithms are used to iteratively optimize the penalty parameter C and kernel parameter c of SVM, so as to assess the performance of CEPO. The average recognition rates of thirty independent runs are shown in Table 5. As can be seen clearly from Table 5, CEPO-SVM achieves the highest accuracy among all nine models. In the case of the YALE face database, an accuracy of 98.1% is obtained by the CEPO-SVM model. For the ORL, UMIST, and FERET face databases, CEPO-SVM achieves accuracies of 97.8%, 98.4%, and 98.5%, respectively. Furthermore, the convergence rates of CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM are examined and depicted in Figures 7(a)-7(d). From this figure, it can be observed that the recognition rate successively increases from iteration 1 to iteration 100 on all four face databases. For the YALE database, the accuracy rate no longer improves after the 14th, 22nd, 31st, 19th, 18th, 21st, 20th, 23rd, and 24th iterations using CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM, respectively. Similarly, for the ORL database, no improvement in accuracy is noticed after the 11th, 20th, 22nd, 19th, 15th, 20th, 18th, 21st, and 24th iterations for the same models, respectively. In the case of the UMIST database, CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM converge after the 15th, 24th, 25th, 19th, 20th, 26th, 18th, 24th, and 21st iterations, respectively. For the FERET database, no rise in accuracy is

Table 2: Parameter settings of the nine algorithms.

Cultural emperor penguin optimizer (CEPO):
  Size of population: 80; Control parameter ε: [1.5, 2]; Control parameter ρ: [2, 3]; Movement parameter M: 2; The constant η: 0.06; Maximum iteration: 100

Moth-flame optimization (MFO) [9]:
  Size of population: 80; Convergence constant: [−1, −2]; Logarithmic spiral: 0.75; Maximum iteration: 100

Grey wolf optimizer (GWO) [8]:
  Size of population: 80; Control parameter: [0, 2]; Maximum iteration: 100

Particle swarm optimization (PSO) [6]:
  Size of population: 80; Inertia weight: 0.75; Cognitive and social coefficients: 1.8, 2; Maximum iteration: 100

Genetic algorithm (GA) [5]:
  Size of population: 80; Probability of crossover: 0.9; Probability of mutation: 0.05; Maximum iteration: 100

Cultural algorithm (CA) [14]:
  Size of population: 80; The constant: 0.06; Maximum iteration: 100

Emperor penguin optimizer (EPO) [12]:
  Size of population: 80; Control parameter: [1.5, 2]; Control parameter: [2, 3]; Movement parameter: 2; Maximum iteration: 100

Cultural firework algorithm (CFA) [17]:
  Size of population: 80; Cost parameters: 0.025, 0.2; The constant: 0.3; Maximum iteration: 100

Emperor penguin and social engineering optimizer (EPSEO) [13]:
  Size of population: 80; Rate of training: 0.2; Rate of spotting an attack: 0.05; Number of attacks: 50; Control parameter: [1.5, 2]; Control parameter: [2, 3]; Movement parameter: 2; Maximum iteration: 100


seen after the 16th, 22nd, 24th, 19th, 18th, 20th, 18th, 21st, and 25th iterations using CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM, respectively. Obviously, CEPO-SVM converges first among the nine proposed models, which verifies that EPO, with the help of the embedded framework provided by CA, can remarkably enhance the convergence rate. Therefore, the parameters optimized by CEPO are better suited to improving the classification performance of SVM.

6.3. Stability of CEPO-SVM Comparison Experiment. For all nine models, the stability of the models is measured by the mean and standard deviation of the penalty parameter C and kernel parameter c. In this experiment, each model runs thirty experiments. Tables 6-9 show the comparison of the mean and standard deviation of the parameters among the nine models on the four face databases. As can be seen from Tables 6-9, the mean of the penalty parameter C in CEPO-SVM is smaller than that of the other models on all four face databases, which reduces the possibility of overlearning of SVM. Moreover, CEPO-SVM provides the lowest standard deviation of the penalty parameter C among all nine models on all four databases, which verifies that the stability of CEPO-SVM is better. For all four databases, the mean and standard deviation of the kernel parameter c are close among the nine models; however, it can still be seen that CEPO-SVM has the minimum standard deviation of the kernel parameter c. Obviously, CEPO is more stable in optimizing the parameters of SVM for face recognition.

6.4. Robustness of CEPO-SVM Comparison Experiment. To prove the robustness of CEPO-SVM, salt-and-pepper noise is added to each image of the four original databases. The noisy databases generated in this way are then used for face recognition.

Table 3: The details of the six benchmark test functions.

No.  Function        Formulation                                                               D    Search range
F1   Sphere          f(x) = Σ_{i=1}^D x_i²                                                     10   [-100, 100]
F2   Schwefel 2.22   f(x) = Σ_{i=1}^D |x_i| + Π_{i=1}^D |x_i|                                  10   [-10, 10]
F3   Rastrigin       f(x) = Σ_{i=1}^D (x_i² - 10 cos(2πx_i) + 10)                              10   [-5.12, 5.12]
F4   Griewank        f(x) = Σ_{i=1}^D x_i²/4000 - Π_{i=1}^D cos(x_i/√i) + 1                    10   [-600, 600]
F5   Ackley          f(x) = -20 exp(-0.2 √((1/n) Σ_{i=1}^D x_i²)) - exp((1/n) Σ_{i=1}^D cos(2πx_i)) + 20 + e   10   [-32, 32]
F6   Schaffer        f(x) = 0.5 + (sin²√(x₁² + x₂²) - 0.5)/(1 + 0.001(x₁² + x₂²))²             2    [-100, 100]
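The six benchmarks in Table 3 are standard and easy to reproduce. The following Python sketch implements them for reference (the paper's experiments were run in MATLAB; the Schaffer entry is written in its common F6 form, which is an assumption given the garbled source formula):

```python
import numpy as np

def sphere(x):          # F1
    return np.sum(x ** 2)

def schwefel_222(x):    # F2
    a = np.abs(x)
    return np.sum(a) + np.prod(a)

def rastrigin(x):       # F3
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def griewank(x):        # F4
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1

def ackley(x):          # F5
    n = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

def schaffer(x):        # F6 (2-D only)
    s = x[0] ** 2 + x[1] ** 2
    return 0.5 + (np.sin(np.sqrt(s)) ** 2 - 0.5) / (1 + 0.001 * s) ** 2
```

All six functions have their global minimum value 0 at the origin, which is why mean values near 1E-27 (Table 4) indicate near-exact convergence.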

Table 4: The results of the benchmark test functions obtained by the nine algorithms.

Algorithm          F1         F2         F3         F4         F5         F6
MFO     Mean   5.28E-11   1.69E-06   7.21E-10   8.38E-11   2.67E-12   5.09E-13
        Std    4.19E-12   3.77E-06   4.93E-11   3.41E-11   7.19E-12   4.81E-12
GWO     Mean   7.03E-13   4.43E-05   8.48E-12   9.76E-09   3.83E-13   2.92E-14
        Std    6.92E-13   7.82E-05   3.12E-12   4.28E-09   9.06E-12   6.37E-14
PSO     Mean   6.36E-06   6.14E-04   5.56E-04   2.07E-05   2.98E-07   8.14E-06
        Std    4.23E-05   8.25E-03   1.79E-04   6.23E-05   5.24E-06   9.93E-07
GA      Mean   2.55E-07   2.47E-04   5.32E-05   4.95E-05   1.56E-08   3.38E-07
        Std    3.14E-07   8.05E-04   3.87E-05   1.69E-05   6.06E-08   2.72E-07
EPO     Mean   8.57E-06   9.14E-05   3.52E-07   6.89E-07   5.83E-09   4.02E-08
        Std    4.87E-06   2.43E-05   4.99E-07   4.27E-07   7.18E-09   5.39E-08
CA      Mean   4.13E-12   1.49E-05   4.41E-09   6.65E-12   2.33E-07   7.53E-11
        Std    3.78E-13   1.36E-05   2.23E-10   4.89E-13   2.68E-08   6.21E-09
CEPO    Mean   1.28E-27   4.15E-14   6.54E-26   2.13E-21   2.39E-24   7.74E-29
        Std    4.63E-27   2.39E-14   7.16E-26   1.49E-21   4.76E-24   3.22E-29
CFA     Mean   4.23E-23   3.49E-10   2.37E-19   8.01E-17   5.41E-20   1.19E-24
        Std    1.89E-22   8.83E-10   9.44E-19   2.41E-17   2.93E-19   4.32E-23
EPSEO   Mean   8.67E-24   7.25E-11   3.91E-20   6.61E-18   4.25E-21   8.81E-22
        Std    5.42E-24   9.41E-11   8.23E-20   3.93E-18   7.89E-21   6.67E-21

Table 5: Average recognition rates (%) obtained by the nine models on the four databases.

Model        YALE   ORL    UMIST   FERET
MFO-SVM      96.6   96.8   98.0    97.7
GWO-SVM      97.1   97.0   96.5    98.2
PSO-SVM      96.9   96.5   97.1    97.2
GA-SVM       97.4   97.5   97.6    97.4
EPO-SVM      97.6   97.2   97.9    97.6
CA-SVM       97.8   97.4   97.7    97.4
CEPO-SVM     98.1   97.8   98.4    98.5
CFA-SVM      97.9   97.7   98.3    98.3
EPSEO-SVM    97.7   97.6   98.1    98.0


by CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM. For each noise database, we selected 70% of the noise samples as the training samples and the remaining 30% as the testing samples. Salt and pepper noise appears as random black or white pixels in an image. Four different noise levels on the YALE database, namely 0%, 5%, 10%, and 15%, are shown in Figure 8. From Figure 8 we can see that, as the noise level increases, the face image becomes more blurred. Figure 9 shows the detailed recognition rates of the different models on the four databases. As we can see from Figure 9, CEPO-SVM obtains the highest recognition rate at all four noise levels on all four databases in comparison with the other eight models. Even when the face image is seriously polluted by salt and pepper noise of up to 15%, the recognition rate of CEPO-SVM can still attain

[Figure 7: Number of iterations versus accuracy on the four databases; each panel plots accuracy (%) over 100 iterations for CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM. (a) YALE database. (b) ORL database. (c) UMIST database. (d) FERET database.]


82.5% on the YALE database, 83.5% on the ORL database, 81.1% on the UMIST database, and 82.1% on the FERET database, which far exceeds that of the other eight models. Besides, as the noise level increases, the recognition rate of CEPO-SVM on the four databases decreases more slowly than that of the other eight models. Obviously, the results show that CEPO-SVM not only attains good noise-free face recognition accuracy but also achieves good accuracy under noise conditions, which

Table 7: Comparison of the mean and standard deviation of the parameters on the ORL face database among the nine optimization algorithms.

Algorithm   C mean    C std    γ mean   γ std
MFO         18.3921   4.7817   0.0965   0.1143
GWO         19.0418   5.1526   0.0883   0.1277
PSO         26.1129   6.4990   0.1620   0.2563
GA          27.5823   8.0652   0.1953   0.2497
CA          19.9301   3.9721   0.0827   0.1683
EPO         22.2801   6.2076   0.1198   0.1762
CEPO        14.3873   2.0187   0.0121   0.0098
CFA         16.2265   4.1552   0.0573   0.0829
EPSEO       15.7192   3.5228   0.0525   0.0701

Table 6: Comparison of the mean and standard deviation of the parameters on the YALE face database among the nine optimization algorithms.

Algorithm   C mean    C std    γ mean   γ std
MFO         18.3721   4.4921   0.0633   0.1109
GWO         16.8983   4.0748   0.0518   0.0978
PSO         25.2319   7.4295   0.1292   0.1753
GA          28.3684   8.7016   0.1449   0.2387
CA          19.1670   4.8603   0.0672   0.1283
EPO         22.6792   5.7065   0.0812   0.1456
CEPO        13.2441   2.4982   0.0100   0.0112
CFA         15.4039   3.0409   0.0349   0.0778
EPSEO       15.6382   2.7636   0.0426   0.0692

Table 8: Comparison of the mean and standard deviation of the parameters on the UMIST face database among the nine optimization algorithms.

Algorithm   C mean    C std    γ mean   γ std
MFO         17.2782   4.2048   0.0804   0.1449
GWO         16.9556   3.7649   0.0773   0.1283
PSO         24.9031   7.1991   0.1439   0.1958
GA          25.3415   8.2315   0.1557   0.2261
CA          18.8737   4.0103   0.0762   0.1813
EPO         22.0217   5.9525   0.0973   0.1502
CEPO        12.9109   2.2081   0.0114   0.0101
CFA         15.2271   3.4594   0.0638   0.0822
EPSEO       15.9964   3.0182   0.0514   0.0769

Table 9: Comparison of the mean and standard deviation of the parameters on the FERET face database among the nine optimization algorithms.

Algorithm   C mean    C std    γ mean   γ std
MFO         18.1928   4.1892   0.0619   0.1023
GWO         17.3209   3.9713   0.0557   0.0921
PSO         25.5827   6.9143   0.1401   0.1882
GA          26.6929   8.4939   0.1379   0.2273
CA          15.3462   5.0541   0.0924   0.1419
EPO         21.2679   5.6314   0.0846   0.1364
CEPO        11.1811   2.1192   0.0126   0.0087
CFA         15.0647   3.7672   0.0489   0.0748
EPSEO       14.4689   3.4295   0.0377   0.0661


[Figure 8: Four different levels of noise image on the YALE database. (a) 0%. (b) 5%. (c) 10%. (d) 15%.]



verifies that the robustness of CEPO-SVM is superior to that of the other eight models.
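The salt-and-pepper corruption used in this robustness experiment can be sketched as follows, a minimal Python illustration under an assumed convention that half of the corrupted pixels become black and half become white:

```python
import numpy as np

def salt_and_pepper(img, level, rng=None):
    """Corrupt a fraction `level` of the pixels of a 2-D uint8 image.

    `level` is e.g. 0.05 for the paper's 5% setting; roughly half of the
    corrupted pixels are set to black (pepper) and half to white (salt).
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < level / 2] = 0          # pepper: black pixels
    noisy[mask > 1 - level / 2] = 255    # salt: white pixels
    return noisy
```

Applying this at levels 0.05, 0.10, and 0.15 reproduces the corruption regime of Figures 8 and 9.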

6.5. Run Time Analysis. The average training time of thirty independent runs of the proposed CEPO-SVM and the eight competitor intelligent-algorithm-based SVMs is shown in Table 10. As we can see from Table 10, for all four databases, the average training time of the proposed CEPO-SVM is longer than that of the six models based on single algorithms, i.e., MFO-SVM, GWO-SVM, PSO-SVM, GA-SVM, EPO-SVM, and CA-SVM. This may be because hybridizing two algorithms increases the training time. Although CEPO-SVM takes more time than these six single-algorithm models, it can still be considered better because of its higher classification performance. Moreover, for all four databases, the average training time of the proposed CEPO-SVM is less than that of the two models based on hybrid algorithms, i.e., CFA-SVM and EPSEO-SVM. Obviously, CEPO-SVM performs better than these two hybrid-algorithm models.

6.6. Limitation Analysis. Like other hybrid algorithms, although CEPO provides superior performance in face recognition, its large amount of computation means that it costs more time per iteration than single algorithms. Besides, when CEPO is applied to very high-dimensional optimization problems, its performance may be reduced.


Figure 9: Noise percentages versus accuracy on the four databases. (a) YALE database. (b) ORL database. (c) UMIST database. (d) FERET database.

Table 10: Average training time (s) obtained by the nine models on the four databases.

Model        YALE    ORL     UMIST    FERET
MFO-SVM      47.93   61.54   70.95    46.03
GWO-SVM      45.06   58.81   72.31    50.54
PSO-SVM      43.95   54.12   65.69    48.91
GA-SVM       69.71   82.73   98.03    79.42
EPO-SVM      40.68   47.49   68.55    42.87
CA-SVM       46.33   56.87   77.41    53.24
CEPO-SVM     59.76   79.23   95.76    67.34
CFA-SVM      78.25   81.96   112.87   84.59
EPSEO-SVM    82.43   83.15   107.43   86.77


7. Conclusions

In this paper, drawing on the respective advantages of EPO and CA, we propose a hybrid metaheuristic algorithm named CEPO, in which the embedded framework provided by CA lets the two algorithms promote each other and overcome each other's defects. The proposed algorithm has been evaluated on six benchmark test functions against eight state-of-the-art algorithms. Furthermore, we apply this hybrid metaheuristic algorithm to the automatic learning of the parameters of SVM to solve the classification problem of face recognition. The proposed classifier model has been tested on four commonly and internationally used face databases against models based on the other eight state-of-the-art algorithms. The experimental results show that the proposed model performs better in terms of accuracy, convergence rate, stability, robustness, and run time.

We suggest future work in several directions. First, the proposed model has only been evaluated on four face databases; other new face databases should be tested to extend the current work. Second, the proposed hybrid algorithm should be applied to other classifiers for face recognition. Lastly, the proposed hybrid algorithm should be applied to other optimization problems, such as digital filter design, cognitive radio spectrum allocation, and image segmentation.

List of Definitions of Partial Parameters

x_i: the position of the ith emperor penguin in the D-dimensional search space
s^t: the situational knowledge component
N_j^t: the normative knowledge in the jth dimension and in the tth generation
I_j^t: the interval of normative knowledge in the jth dimension
l_j^t: the lower boundary in the jth dimension and in the tth generation
L_j^t: the objective value of the lower boundary l_j^t of the jth parameter
u_j^t: the upper boundary in the jth dimension and in the tth generation
U_j^t: the objective value of the upper boundary u_j^t of the jth parameter
D_ep^t: the distance between the emperor penguin and the optimal solution
M: the movement parameter
c: the scalar of the hyperplane
γ: the kernel parameter
C: the penalty parameter
ξ_h: the relaxing variables
a_h: the training samples
α_h: the Lagrange multipliers
r: the scale of the Gabor filter
o: the orientation of the Gabor filter
Q(δ): the concatenated column vector
Σ_{Q(δ)}: the covariance matrix of the augmented feature vector Q(δ)
Ω: the orthogonal eigenvector matrix
Λ: the diagonal eigenvalue matrix

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant no. 61571149), the Heilongjiang Province Key Laboratory Fund of High Accuracy Satellite Navigation and Marine Application Laboratory (Grant no. HKL-2020-Y01), and the Initiation Fund for Postdoctoral Research in Heilongjiang Province (Grant no. LBH-Q19098).

References

[1] S. Jiang, H. Mao, Z. Ding, and Y. Fu, "Deep decision tree transfer boosting," IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 2, pp. 383-395, 2020.

[2] Y. Xia and J. Wang, "A bi-projection neural network for solving constrained quadratic optimization problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 2, pp. 214-224, 2016.

[3] S. Zhang, X. Li, M. Zong, X. Zhu, and R. Wang, "Efficient kNN classification with different numbers of nearest neighbors," IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 5, pp. 1774-1785, 2018.

[4] A. Mathur and G. M. Foody, "Multiclass and binary SVM classification: implications for training and classification users," IEEE Geoscience and Remote Sensing Letters, vol. 5, no. 2, pp. 241-245, 2008.

[5] J. Ruan, H. Jiang, X. Li, Y. Shi, F. T. S. Chan, and W. Rao, "A granular GA-SVM predictor for big data in agricultural cyber-physical systems," IEEE Transactions on Industrial Informatics, vol. 15, no. 12, pp. 6510-6521, 2019.

[6] F. Melgani and Y. Bazi, "Classification of electrocardiogram signals with support vector machines and particle swarm optimization," IEEE Transactions on Information Technology in Biomedicine, vol. 12, no. 5, pp. 667-677, 2008.

[7] L. Wanling, W. Zhensheng, and S. Xiangjun, "Parameters optimization of support vector machine based on simulated annealing and improved QPSO," in Proceedings of the 2012 International Conference on Industrial Control and Electronics Engineering, pp. 1089-1091, Xi'an, China, August 2012.

[8] S. Mirjalili, S. M. Mirjalili, and A. Lewis, "Grey wolf optimizer," Advances in Engineering Software, vol. 69, pp. 46-61, 2014.

[9] S. Mirjalili, "Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm," Knowledge-Based Systems, vol. 89, pp. 228-249, 2015.

[10] S. Kaur, L. K. Awasthi, A. L. Sangal et al., "Tunicate swarm algorithm: a new bio-inspired based metaheuristic paradigm for global optimization," Engineering Applications of Artificial Intelligence, vol. 90, Article ID 103541, 2020.

[11] M. Dehghani, Z. Montazeri, O. P. Malik et al., "A new methodology called dice game optimizer for capacitor placement in distribution systems," Electrical Engineering & Electromechanics, vol. 1, pp. 61-64, 2020.

[12] G. Dhiman and V. Kumar, "Emperor penguin optimizer: a bio-inspired algorithm for engineering problems," Knowledge-Based Systems, vol. 159, pp. 20-50, 2018.

[13] S. K. Baliarsingh, W. Ding, S. Vipsita et al., "A memetic algorithm using emperor penguin and social engineering optimization for medical data classification," Applied Soft Computing, vol. 85, 2019.

[14] R. G. Reynolds, "An introduction to cultural algorithms," in Proceedings of the Third Annual Conference on Evolutionary Programming, pp. 131-139, San Diego, CA, USA, January 1994.

[15] C. Lin, C. Chen, and C. Lin, "A hybrid of cooperative particle swarm optimization and cultural algorithm for neural fuzzy networks and its prediction applications," IEEE Transactions on Systems, Man, and Cybernetics, vol. 39, no. 1, pp. 55-68, 2009.

[16] H. Gao, C. Xu, and C. Li, "Quantum-inspired cultural bacterial foraging algorithm for direction finding of impulse noise," International Journal of Innovative Computing and Applications, vol. 6, no. 1, pp. 44-54, 2014.

[17] H. Gao and M. Diao, "Cultural firework algorithm and its application for digital filters design," International Journal of Modelling, Identification and Control, vol. 14, no. 4, pp. 324-331, 2011.

[18] C. Liu, "Gabor-based kernel PCA with fractional power polynomial models for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 572-581, 2004.

[19] J. Ilonen, J.-K. Kamarainen, and H. Kalviainen, "Efficient computation of Gabor features," Research Report 100, Lappeenranta University of Technology, Department of Information Technology, Lappeenranta, Finland, 2005.

[20] C. Liu and H. Wechsler, "Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition," IEEE Transactions on Image Processing, vol. 11, no. 4, pp. 467-476, 2002.

[21] M. D. Farrell and R. M. Mersereau, "On the impact of PCA dimension reduction for hyperspectral detection of difficult targets," IEEE Geoscience and Remote Sensing Letters, vol. 2, no. 2, pp. 192-195, 2005.

[22] B. Fei and J. Liu, "Binary tree of SVM: a new fast multiclass training and classification algorithm," IEEE Transactions on Neural Networks, vol. 17, no. 3, pp. 696-704, 2006.


...characteristics of spatial locality, spatial frequency, and orientation selectivity. Let Y(z) represent ...

2.2. Principal Component Analysis for Feature Optimization. The augmented Gabor feature vector introduced in equation (3) has very high dimensionality, Q(δ) ∈ R^O, where O is the dimensionality of the vector space. Obviously, this high dimensionality seriously affects the computing speed and the recognition rate in the process of face recognition. With PCA [21], we can find an orthogonal basis for the feature vector, sort the dimensions in order of importance, and discard the dimensions of low significance. Let Σ_{Q(δ)} ∈ R^{O×O} be the covariance matrix of the augmented feature vector Q(δ):

    Σ_{Q(δ)} = E{[Q(δ) − E(Q(δ))][Q(δ) − E(Q(δ))]^T},    (4)

where E(·) is the expectation operator. The covariance matrix Σ_{Q(δ)} can be factorized into the following form:

    Σ_{Q(δ)} = Ω Λ Ω^T,    (5)

    Ω = [U_1, U_2, ..., U_O],    (6)

    Λ = diag{p_1, p_2, ..., p_O},    (7)

where Ω ∈ R^{O×O} is an orthogonal eigenvector matrix and Λ ∈ R^{O×O} is a diagonal eigenvalue matrix whose diagonal elements are arranged in descending order (p_1 ≥ p_2 ≥ ... ≥ p_O).

An important property of PCA is that it is the optimal signal reconstruction in the sense of minimum mean-square error when a subset of principal components is used to represent the original signal [19]. Following this property, PCA can be applied to dimensionality reduction:

    X(δ) = S^T Q(δ),    (8)

where S = [U_1, U_2, ..., U_J], J < O, and S ∈ R^{O×J}. The lower-dimensional vector X(δ) ∈ R^J captures the most expressive features of the original data Q(δ).
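Equations (4)-(8) translate directly into a short eigendecomposition-based sketch. This is an illustrative NumPy version that works on row-wise samples rather than the paper's column-vector convention:

```python
import numpy as np

def pca_project(Q, J):
    """Project feature vectors (rows of Q) onto the J leading principal axes.

    Mirrors equations (4)-(8): eigendecompose the covariance of Q, keep the
    J eigenvectors with the largest eigenvalues (p1 >= p2 >= ...), and return
    X = S^T Q applied per sample, together with the basis S.
    """
    mean = Q.mean(axis=0)
    centered = Q - mean
    cov = centered.T @ centered / Q.shape[0]     # covariance matrix, eq. (4)
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigendecomposition, eq. (5)
    order = np.argsort(eigvals)[::-1]            # descending eigenvalue order
    S = eigvecs[:, order[:J]]                    # O x J reduced basis, eq. (8)
    return centered @ S, S
```

In the paper's pipeline, Q would hold the augmented Gabor feature vectors and J is chosen far smaller than O to keep only the most expressive dimensions.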

3. Formulations of Parameter Optimization of SVM for Classification

3.1. Model of Support Vector Machine. Support vector machine (SVM) [22] is a binary classifier with strong learning ability and generalization ability. The key point of SVM is to find the optimal hyperplane that accurately separates the two classes while keeping the classification margin as large as possible. Figure 4 shows the optimal hyperplane of SVM, where H indicates the hyperplane and the margin represents the gap between the class boundaries H1 and H2.

Suppose the training data V are given by

    V = {(a_h, b_h) | a_h ∈ R^w, b_h ∈ {−1, +1}, h = 1, 2, ..., n},    (9)

where a_h are the training samples, b_h are the labels of the w-dimensional vectors, n is the number of training data, and each variable must satisfy a_h ∈ R^w and b_h ∈ {−1, +1}. For linearly separable data, the hyperplane g(a) = 0 that separates the given data can be determined by

    g(a) = ω^T a + c,    (10)

where ω is a w-dimensional vector and c is a scalar; ω and c determine all the boundaries. The optimal hyperplane is obtained by minimizing ‖ω‖²/2 subject to

    b_h (ω^T a_h + c) ≥ 1.    (11)

For data that cannot be separated linearly, the relaxing variables ξ_h are introduced to allow optimal generalization and classification by controlling their size. Then the optimal hyperplane can be obtained from the following optimization problem:

    Ψ(ω, ξ) = (1/2)‖ω‖² + C Σ_{h=1}^n ξ_h,    (12)

    subject to b_h (ω^T a_h + c) ≥ 1 − ξ_h, ξ_h ≥ 0,    (13)

where C denotes the penalty parameter. To satisfy the Karush-Kuhn-Tucker (KKT) conditions, the Lagrange multipliers α_h are introduced. The abovementioned optimization problem is converted into the dual quadratic optimization problem

    min_α (1/2) Σ_{h=1}^n Σ_{k=1}^n b_h b_k α_h α_k (a_h · a_k) − Σ_{k=1}^n α_k,    (14)

    subject to Σ_{h=1}^n b_h α_h = 0, 0 ≤ α_h ≤ C.    (15)

Thus, the ultimate classification function can be obtained by solving the dual optimization problem:

    g(a) = sgn{Σ_{h=1}^n α_h b_h (a_h · a) + c},    (16)

where sgn represents the sign function.

[Figure 4: The schematic model of a linear SVM, showing the hyperplane H, the class boundaries H1 and H2, and the margin between them.]

Kernel functions can be used to map the data from the low-dimensional input space to a high-dimensional space by a nonlinear mapping. Therefore, the linear indivisibility of the input can be transformed into a linearly separable problem. A kernel function can be defined as

    K(a_h, a_k) = ψ(a_h) · ψ(a_k).    (17)

Then the optimal classification function is given by

    g(a) = sgn{Σ_{h=1}^n α_h b_h K(a_h, a) + c}.    (18)

Several types of kernel functions, such as the Linear kernel, the Polynomial kernel, and the Radial Basis Function (RBF), are widely used. However, the RBF has the advantages of realizing a nonlinear mapping with few parameters and few numerical difficulties. In this paper, the RBF is selected for face recognition:

    K(a, b) = exp(−γ‖a − b‖²),    (19)

where γ denotes the kernel parameter.

3.2. Objective Function of SVM. With a view to obtaining a training model with high classification accuracy, the penalty parameter C, which adjusts the confidence scale, and the kernel parameter γ, which determines the width of the kernel and the range of the data points, can be optimized by an effective intelligent algorithm. Furthermore, the mean square error can be selected as the objective function of the intelligent algorithm:

    objective = (1/n) Σ_{h=1}^n (b̂_h − b_h)²,    (20)

where b̂_h is the output value of the corresponding sample and b_h is the actual value of the corresponding sample.
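The two quantities that the optimizer manipulates, the RBF kernel of equation (19) and the mean-square-error objective of equation (20), can be sketched as follows. In practice the predictions b̂_h would come from an SVM trained with the candidate pair (C, γ), e.g. via LIBSVM as used in the paper; here only the two formulas themselves are shown:

```python
import numpy as np

def rbf_kernel(a, b, gamma):
    """Equation (19): K(a, b) = exp(-gamma * ||a - b||^2)."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def mse_objective(predicted, actual):
    """Equation (20): mean square error between predicted and true labels."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return np.mean((predicted - actual) ** 2)
```

A candidate (C, γ) with a smaller `mse_objective` on the training split is considered fitter by the intelligent algorithm.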

4. Design of Cultural Emperor Penguin Optimizer for Parameter Optimization of SVM

4.1. The Design of Cultural Emperor Penguin Optimizer. EPO is a novel optimization algorithm presented by Dhiman and Kumar in 2018 [12], inspired by the huddling behavior of emperor penguins. In this paper, a hybrid algorithm named CEPO is proposed to solve real-number optimization problems within the basic framework of the cultural algorithm shown in Figure 5. The key idea of CEPO is to obtain problem-solving knowledge from the huddling behavior of the emperor penguins and, in return, use that knowledge to guide the evolution of the emperor penguin population.

Suppose CEPO is designed for a general minimization problem:

    min f(x_i),    (21)

where x_i = (x_{i,1}, x_{i,2}, ..., x_{i,D}) denotes the position of the ith emperor penguin in the D-dimensional search space, x_j^min < x_{i,j} < x_j^max (j = 1, 2, ..., D), f(·) is the objective function, and f(x_i) denotes the objective value of the position x_i. x_j^min and x_j^max represent the lower and upper boundaries of the position of the emperor penguins in the jth dimension.

The belief space of the emperor penguin population in the tth generation is defined by s^t and N_j^t, where s^t is the situational knowledge component and N_j^t is the normative knowledge, which represents the value-space information for each parameter in the jth dimension and in the tth generation. N_j^t denotes ⟨I, L, U⟩ with I_j^t = [l_j^t, u_j^t], where I_j^t represents the interval of normative knowledge in the jth dimension. The lower boundary l_j^t and the upper boundary u_j^t are initialized according to the value range of the variables given by the problem. L_j^t represents the objective value of the lower boundary l_j^t of the jth parameter, and U_j^t represents the objective value of the upper boundary u_j^t of the jth parameter.

The acceptance function is used to select the emperor penguins that directly influence the current belief space. In CEPO, the acceptance function selects the cultural individuals in the proportion of the top 20% of the current population space to update the belief space.

Situational knowledge s^t can be updated by the update function

    s^{t+1} = x_best^{t+1} if f(x_best^{t+1}) < f(s^t); otherwise s^{t+1} = s^t,    (22)

where x_best^{t+1} is the optimal position of the emperor penguin population space in the (t+1)th generation.

[Figure 5: The cultural algorithm basic framework: the population space (Generate(), Objective(), Select()) exchanges knowledge with the belief space through Accept(), Update(), and Influence().]


Assume that, for the qth cultural individual, a random variable θ_q in the range [0, 1] is produced. The qth cultural individual affects the lower boundary of the normative knowledge in the jth dimension when θ_q < 0.5 is satisfied. Normative knowledge N_j^t can be updated by the update function

    l_j^{t+1} = x_{q,j}^{t+1} if x_{q,j}^{t+1} ≤ l_j^t or f(x_q^{t+1}) < L_j^t; otherwise l_j^{t+1} = l_j^t,
    L_j^{t+1} = f(x_q^{t+1}) if x_{q,j}^{t+1} ≤ l_j^t or f(x_q^{t+1}) < L_j^t; otherwise L_j^{t+1} = L_j^t.    (23)

The qth cultural individual affects the upper boundary of the normative knowledge in the jth dimension when θ_q ≥ 0.5 is satisfied:

    u_j^{t+1} = x_{q,j}^{t+1} if x_{q,j}^{t+1} ≥ u_j^t or f(x_q^{t+1}) < U_j^t; otherwise u_j^{t+1} = u_j^t,
    U_j^{t+1} = f(x_q^{t+1}) if x_{q,j}^{t+1} ≥ u_j^t or f(x_q^{t+1}) < U_j^t; otherwise U_j^{t+1} = U_j^t.    (24)
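The acceptance step and the update functions of equations (22)-(24) can be sketched together in a minimal Python illustration; the 20% acceptance proportion and the per-individual draw of θ_q follow the text, while the array layout is an assumption:

```python
import numpy as np

def update_belief_space(pop, fitness, s, s_fit, l, L, u, U, rng=None):
    """One belief-space update: accept the top 20% of the population, refresh
    situational knowledge (eq. (22)) and the normative bounds (eqs. (23)-(24)).

    pop: (m, D) positions; fitness: (m,) objective values (minimization).
    s, s_fit: situational knowledge and its objective value.
    l, L, u, U: (D,) normative lower/upper bounds and their objective values.
    """
    rng = np.random.default_rng() if rng is None else rng
    order = np.argsort(fitness)
    accepted = order[: max(1, len(pop) // 5)]    # acceptance proportion: top 20%
    best = accepted[0]
    if fitness[best] < s_fit:                    # eq. (22): situational knowledge
        s, s_fit = pop[best].copy(), fitness[best]
    for q in accepted:
        theta = rng.random()                     # theta_q in [0, 1]
        for j in range(pop.shape[1]):
            if theta < 0.5:                      # eq. (23): lower boundary
                if pop[q, j] <= l[j] or fitness[q] < L[j]:
                    l[j], L[j] = pop[q, j], fitness[q]
            else:                                # eq. (24): upper boundary
                if pop[q, j] >= u[j] or fitness[q] < U[j]:
                    u[j], U[j] = pop[q, j], fitness[q]
    return s, s_fit, l, L, u, U
```

The bounds l and u are mutated in place and returned for convenience; a fresh call is made once per generation after the population has moved.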

Situational knowledge and normative knowledge can be used to guide the evolution of the emperor penguin population through the influence function. In CEPO, a selection operator β is produced to select one of two ways of influencing the evolution of the emperor penguin population:

    β = (Maxiteration − t) / Maxiteration,    (25)

where Maxiteration denotes the maximum number of iterations.

Assume that, for the ith emperor penguin, a random variable λ_i in the range [0, 1] is produced. The first way is to update the position of the emperor penguin by changing the search size and direction of the variation with the belief space, which is applied when λ_i ≤ β is satisfied. The position of the emperor penguin in the jth dimension can be updated by

    x_{i,j}^{t+1} = x_{i,j}^t + |size(I_j^t) · N(0, 1)| if x_{i,j}^t < s_j^t,
    x_{i,j}^{t+1} = x_{i,j}^t − |size(I_j^t) · N(0, 1)| if x_{i,j}^t > s_j^t,
    x_{i,j}^{t+1} = x_{i,j}^t + η · size(I_j^t) · N(0, 1) otherwise,    (26)

where N(0, 1) is a random number drawn from the standard normal distribution, size(I_j^t) is the length of the adjustable interval of the jth parameter in the belief space in the tth generation, and η is set in the range [0.01, 0.6] following [14].

The other way follows a series of steps from EPO, namely the huddle boundary generation, the temperature profile around the huddle, the distance calculation between emperor penguins, and the position update of the emperor penguins, and is carried out when λ_i > β is satisfied. The specific steps can be represented as follows:

    T′ = T − (t − Maxiteration)/Maxiteration,
    T = 0 if R ≥ 0.5; T = 1 if R < 0.5,    (27)

where T′ represents the temperature profile around the huddle, T is the time for finding the best optimal solution, and R is a random variable in the range [0, 1].

    D_ep^t = |S_ep(A^t) · x_best^t − B^t · x_i^t|,    (28)

where D_ep^t denotes the distance between the emperor penguin and the optimal solution, x_best^t represents the current optimal solution found in the emperor penguin population space in the tth generation, S_ep represents the social forces of the emperor penguins that drive convergence toward the optimal solution, and A^t and B^t are used to avoid collisions between neighboring emperor penguins; B^t is a random variable in the range [0, 1]. A^t can be computed as follows:

    A^t = (M × (T′ + P_grid^t(Accuracy)) × rand()) − T′,
    P_grid^t(Accuracy) = |x_best^t − x_i^t|,    (29)

where M is the movement parameter, which holds a gap between emperor penguins for collision avoidance, and P_grid^t(Accuracy) defines the absolute difference between emperor penguins. S_ep(A^t) in equation (28) is computed as follows:

    S_ep(A^t) = √(ε · e^{−t/ρ} − e^{−t}),    (30)

where e represents the base of the natural logarithm and ε and ρ are two control parameters for better exploration and exploitation, set in the ranges [1.5, 2] and [2, 3], respectively. Ultimately, the position of the emperor penguin is updated as follows:

    x_i^{t+1} = x_best^t − A^t · D_ep^t.    (31)

The algorithm procedure of CEPO is sketched in Algorithm 1.

4.2. CEPO for Parameter Optimization of SVM. During the iterative search process, each emperor penguin adjusts its position by comparing objective function values, and the position x_i is taken to be the parameter pair (C, γ) for repeated training of the SVM and updating of each emperor penguin. Figure 6 shows a schematic diagram of SVM based on CEPO.

5. The Experiments

5.1. Face Databases. In the following experiments, to test our proposed model, we select four face databases that are commonly and internationally used, i.e., YALE (http://cvc.yale.edu/projects/yalefaces/yalefaces.html), ORL (http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html), UMIST (http://images.ee.umist.ac.uk/danny/database.html), and FERET (http://www.frvt.org). Specifically, the YALE database contains 165 images of 15 individuals, including changes in illumination, expression, and posture. The ORL database consists of 400 photographs of 40 subjects with different expressions and variations at different scales and orientations. The UMIST database includes 564 images of 20 subjects with different angles and poses. The FERET database consists of a total of 1400 images of 200 subjects, corresponding to different poses, expressions, and illumination conditions. To fully verify the validity and rationality of the experiments, the configuration of training and testing samples on the four face databases is shown in Table 1.

5.2. State-of-the-Art Algorithms for Comparison. To validate the performance of CEPO, the following eight state-of-the-art metaheuristics are used for comparison: Moth-Flame

Algorithm CEPO
Input: the emperor penguin population x_i (i = 1, 2, ..., m)
Output: the optimal position of the emperor penguins
(1)  procedure CEPO
(2)    Initialize the size of the population, Maxiteration, and the parameters of EPO and CA
(3)    Initialize the situational knowledge and normative knowledge of the belief space
(4)    Initialize the population space of EPs
(5)    Compute the fitness of each EP
(6)    Sort the fitness values of the EPs
(7)    Initialize the belief space with acceptance proportion top 20% and AdjustCulture
(8)    while (t < Maxiteration) do
(9)      Calculate T and T′ using equation (27)
(10)     for i ← 1 to m do
(11)       λ_i ← Rand()
(12)       for j ← 1 to D do
(13)         if (λ_i ≤ β), with β computed using equation (25), then
(14)           Update the position of the EP using equation (26)
(15)         else
(16)           Compute A and B using equation (29)
(17)           Compute S_ep(A) using equation (30)
(18)           Compute the new position of the EP using equation (31)
(19)         end if
(20)         Compute the fitness of the current EP
(21)         Keep the better position of the EP compared with the previous position
(22)       end for
(23)     end for
(24)     Amend the EPs that cross the boundary
(25)     Sort the fitness values of the EPs
(26)     Update the belief space with acceptance proportion top 20% and AdjustCulture
(27)     t ← t + 1
(28)   end while
(29)   Return the optimal position
(30)  end procedure
(31)  procedure AdjustCulture
(32)    Update the situational knowledge using equation (22)
(33)    for q ← 1 to m/5 do
(34)      θ_q ← Rand()
(35)      if (θ_q < 0.5) then
(36)        for j ← 1 to D do
(37)          Update the lower boundary of the normative knowledge using equation (23)
(38)      else
(39)        for j ← 1 to D do
(40)          Update the upper boundary of the normative knowledge using equation (24)
(41)      end if
(42)    end for
(43)  end procedure

Algorithm 1: The algorithm procedure of CEPO.
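Putting the pieces together, Algorithm 1 can be compressed into a self-contained skeleton. This is a simplified sketch: the belief space is reduced to situational knowledge only, greedy replacement stands in for the paper's update rule, and the cultural step is a simplified pull toward s, so it illustrates the CA/EPO alternation rather than the full method:

```python
import numpy as np

def cepo_minimize(f, dim, bounds, m=20, max_iter=60, seed=0):
    """Compressed sketch of Algorithm 1 for a generic minimization objective f.

    Returns the best position, its objective value, and the per-iteration
    history of the best value found so far.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(m, dim))
    fit = np.array([f(x) for x in pop])
    s = pop[np.argmin(fit)].copy()        # situational knowledge
    s_fit = float(fit.min())
    history = []
    for t in range(1, max_iter + 1):
        beta = (max_iter - t) / max_iter  # eq. (25): selector between modes
        best = pop[np.argmin(fit)].copy()
        for i in range(m):
            if rng.random() <= beta:      # cultural influence: pull toward s
                cand = pop[i] + rng.uniform(0.0, 1.5, dim) * (s - pop[i])
            else:                         # EPO-style move around the best
                T = 1.0 if rng.random() < 0.5 else 0.0
                T_prime = T - (t - max_iter) / max_iter
                A = 2.0 * (T_prime + np.abs(best - pop[i])) * rng.random() - T_prime
                S = np.sqrt(max(1.75 * np.exp(-t / 2.5) - np.exp(-t), 0.0))
                cand = best - A * np.abs(S * best - rng.random() * pop[i])
            cand = np.clip(cand, lo, hi)  # amend EPs that cross the boundary
            cf = float(f(cand))
            if cf < fit[i]:               # greedy replacement
                pop[i], fit[i] = cand, cf
            if cf < s_fit:                # situational-knowledge update
                s, s_fit = cand.copy(), cf
        history.append(s_fit)
    return s, s_fit, history
```

For the SVM application, `dim` would be 2 and the position would encode the pair (C, γ), with f being the training MSE of the resulting classifier.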


Optimization (MFO) [9], Grey Wolf Optimizer (GWO) [8], Particle Swarm Optimization (PSO) [6], Genetic Algorithm (GA) [5], Cultural Algorithm (CA) [14], Emperor Penguin Optimizer (EPO) [12], Cultural Firework Algorithm (CFA) [17], and Emperor Penguin and Social Engineering Optimizer (EPSEO) [13].

5.3. Experimental Setup. The experimental environment is MATLAB 2018a with LIBSVM, on a computer with an Intel Core i5 processor and 16 GB of RAM. Table 2 shows the parameter settings of the proposed CEPO and the eight competitor intelligent algorithms, i.e., MFO, GWO, PSO, GA, CA, EPO, CFA, and EPSEO. The parameter values of these algorithms are set as recommended in their original papers.

6. Simulation Results and Discussion

6.1. Performance of CEPO Comparison Experiment. Six benchmark test functions are applied to verify the superiority and applicability of CEPO. Table 3 shows the details of the six benchmark test functions, including the single-peak benchmark functions Sphere (F1) and Schwefel 2.22 (F2) and the multipeak benchmark functions Rastrigin (F3), Griewank (F4), Ackley (F5), and Schaffer (F6). For each benchmark test function, each algorithm carries out thirty independent experiments. The mean and standard deviation of the thirty independent results are shown in Table 4.

As we can see, the statistical results of CEPO on the six benchmark test functions are clearly superior to those of the other eight comparison algorithms. Under the same optimization conditions, both evaluation indexes of CEPO are better by several or even ten orders of magnitude than those of the other algorithms, which proves that CEPO has strong optimization performance. The optimal mean index shows that CEPO maintains a high overall optimization level across the thirty independent experiments and implies that CEPO has better single-run optimization accuracy, while the optimal standard deviation index proves that CEPO has strong optimization stability and robustness. Furthermore, under the same test constraints, CEPO attains high optimization accuracy on both the single-peak functions (F1-F2) and the multipeak functions (F3-F6), which indicates that CEPO has strong local exploitation performance and good convergence accuracy; in particular, on the four multipeak functions (F3-F6) it still maintains a high optimization objective value, which verifies its better global exploration performance and stronger ability to avoid local extrema. If CEPO takes the target-value optimization precision as the evaluation index, the preset precision threshold can be as high as 1.0E-12; for functions F1, F3, and F6 the threshold can even be relaxed to 1.0E-25 while still ensuring the success rate of optimization. This precision can effectively meet the error tolerance of engineering problems, which further proves that CEPO has good application potential in engineering optimization problems.

Figure 6: Block diagram of SVM based on CEPO. The flow is: initialize the emperor penguin population and set the parameters; train the SVM and calculate the objective value; update the best solution; if the termination criterion is met, output the optimal parameters of SVM (the CEPO-SVM model); otherwise, continue iterating.

Table 1: Configuration of training and testing samples on four face databases.

Databases   Total numbers   Numbers of training samples   Numbers of testing samples
YALE        165             70 percent                    the rest
ORL         400             70 percent                    the rest
UMIST       564             70 percent                    the rest
FERET       200             70 percent                    the rest
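As an aside, the per-database 70/30 split in Table 1 can be reproduced with a simple shuffled partition. The sketch below is a minimal illustration; the paper does not specify the exact sampling protocol or seed, so those details are assumptions here.

```python
import random

def split_samples(total, train_fraction=0.7, seed=0):
    """Shuffle sample indices and split them into training and testing sets."""
    indices = list(range(total))
    random.Random(seed).shuffle(indices)
    cut = int(total * train_fraction)
    return indices[:cut], indices[cut:]

# Totals from Table 1 (FERET uses the 200-image configuration listed there).
for name, total in [("YALE", 165), ("ORL", 400), ("UMIST", 564), ("FERET", 200)]:
    train, test = split_samples(total)
    print(f"{name}: {len(train)} training / {len(test)} testing")
```

Each database is split independently, so the class balance within a split depends only on the shuffle.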


6.2. Performance of CEPO-SVM Comparison Experiment. To verify the superiority of the proposed CEPO-SVM classification model for face recognition, CEPO and the eight competitor intelligent algorithms are used to iteratively optimize the penalty parameter C and the kernel parameter c of SVM, so as to assess the performance of CEPO. The average recognition rates over thirty independent runs are shown in Table 5. As can be seen clearly from Table 5, CEPO-SVM achieves the highest accuracy among all nine models. In the case of the YALE face database, an accuracy of 98.1% is obtained by the CEPO-SVM model; for the ORL, UMIST, and FERET face databases, CEPO-SVM obtains accuracies of 97.8%, 98.4%, and 98.5%, respectively. Furthermore, the convergence rates of CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM are examined and depicted in Figures 7(a)-7(d). From this

figure, it can be observed that the recognition rate successively increases from iteration 1 to iteration 100 on all four face databases. For the YALE database, the accuracy rate no longer improves after the 14th, 22nd, 31st, 19th, 18th, 21st, 20th, 23rd, and 24th iterations using CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM, respectively. Similarly, for the ORL database, no improvement in accuracy is noticed after the 11th, 20th, 22nd, 19th, 15th, 20th, 18th, 21st, and 24th iterations using CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM, respectively. In the case of the UMIST database, CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM converge after the 15th, 24th, 25th, 19th, 20th, 26th, 18th, 24th, and 21st iterations, respectively. For the FERET database, no rise in accuracy is

Table 2: Parameter settings of nine algorithms.

Cultural emperor penguin optimizer (CEPO): size of population 80; control parameter ε ∈ [1.5, 2]; control parameter ρ ∈ [2, 3]; movement parameter M = 2; the constant η = 0.06; maximum iteration 100.
Moth-flame optimization (MFO) [9]: size of population 80; convergence constant ∈ [−1, −2]; logarithmic spiral 0.75; maximum iteration 100.
Grey wolf optimizer (GWO) [8]: size of population 80; control parameter ∈ [0, 2]; maximum iteration 100.
Particle swarm optimization (PSO) [6]: size of population 80; inertia weight 0.75; cognitive and social coefficients 1.8 and 2; maximum iteration 100.
Genetic algorithm (GA) [5]: size of population 80; probability of crossover 0.9; probability of mutation 0.05; maximum iteration 100.
Cultural algorithm (CA) [14]: size of population 80; the constant 0.06; maximum iteration 100.
Emperor penguin optimizer (EPO) [12]: size of population 80; control parameters ∈ [1.5, 2] and [2, 3]; movement parameter 2; maximum iteration 100.
Cultural firework algorithm (CFA) [17]: size of population 80; cost parameters 0.025 and 0.2; the constant 0.3; maximum iteration 100.
Emperor penguin and social engineering optimizer (EPSEO) [13]: size of population 80; rate of training 0.2; rate of spotting an attack 0.05; number of attacks 50; control parameters ∈ [1.5, 2] and [2, 3]; movement parameter 2; maximum iteration 100.


seen after the 16th, 22nd, 24th, 19th, 18th, 20th, 18th, 21st, and 25th iterations using CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM, respectively. Obviously, CEPO-SVM converges first among the nine models, which verifies that EPO, with the help of the embedded framework provided by CA, can remarkably enhance the convergence rate. Therefore, the parameters optimized by CEPO are superior and make the classification performance of SVM better.

6.3. Stability of CEPO-SVM Comparison Experiment. For all nine models, stability is measured by the mean and standard deviation of the penalty parameter C and the kernel parameter c. In this experiment, each model runs thirty experiments. Tables 6-9 show the comparison of the mean and standard deviation of the parameters among the nine models on the four face databases. As we can see from Tables 6-9, the mean of the penalty parameter C in CEPO-SVM is smaller than that of the other models on all four face databases, which reduces the possibility of overfitting of the SVM. Moreover, CEPO-SVM provides the lowest standard deviation of the penalty parameter C among all nine models on all four databases, which verifies that the stability of CEPO-SVM is better. For all four databases, the mean and standard deviation of the kernel parameter c are close among the nine models; however, it can still be seen that CEPO-SVM has the minimum standard deviation of the kernel parameter c. Obviously, CEPO is more stable in optimizing the parameters of SVM for face recognition.

6.4. Robustness of CEPO-SVM Comparison Experiment. To prove the robustness of CEPO-SVM, salt and pepper noise is added to each image of the four original databases. The generated noise databases are used for face recognition

Table 3: The details of six benchmark test functions.

F1, Sphere: $f(x) = \sum_{i=1}^{D} x_i^2$; D = 10; search range [−100, 100].
F2, Schwefel 2.22: $f(x) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i|$; D = 10; search range [−10, 10].
F3, Rastrigin: $f(x) = \sum_{i=1}^{D} \left(x_i^2 - 10\cos(2\pi x_i) + 10\right)$; D = 10; search range [−5.12, 5.12].
F4, Griewank: $f(x) = \sum_{i=1}^{D} x_i^2/4000 - \prod_{i=1}^{D} \cos\left(x_i/\sqrt{i}\right) + 1$; D = 10; search range [−600, 600].
F5, Ackley: $f(x) = -20\exp\left(-0.2\sqrt{(1/n)\sum_{i=1}^{D} x_i^2}\right) - \exp\left((1/n)\sum_{i=1}^{D} \cos(2\pi x_i)\right) + 20 + e$; D = 10; search range [−32, 32].
F6, Schaffer: $f(x) = 0.5 + \dfrac{\left(\sin\sqrt{x_1^2 + x_2^2}\right)^2 - 0.5}{\left(1 + 0.001\left(x_1^2 + x_2^2\right)\right)^2}$; D = 2; search range [−100, 100].
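For concreteness, the six benchmark functions of Table 3 can be written out directly. The sketch below is in Python; the Schaffer entry is garbled in the scanned table, so the common Schaffer F6 form (with $x_1^2 + x_2^2$) is assumed here.

```python
import math

# Sketch of the six benchmark functions of Table 3.
def sphere(x):                        # F1, global minimum 0 at the origin
    return sum(v * v for v in x)

def schwefel_2_22(x):                 # F2
    return sum(abs(v) for v in x) + math.prod(abs(v) for v in x)

def rastrigin(x):                     # F3
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def griewank(x):                      # F4
    s = sum(v * v for v in x) / 4000
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1

def ackley(x):                        # F5
    n = len(x)
    a = -20 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / n))
    return a - math.exp(sum(math.cos(2 * math.pi * v) for v in x) / n) + 20 + math.e

def schaffer(x1, x2):                 # F6 (two-dimensional), assumed F6 form
    r2 = x1 * x1 + x2 * x2
    return 0.5 + (math.sin(math.sqrt(r2)) ** 2 - 0.5) / (1 + 0.001 * r2) ** 2

origin = [0.0] * 10
print(sphere(origin), rastrigin(origin), schaffer(0.0, 0.0))  # 0.0 0.0 0.0
```

All six functions attain their global minimum of 0 at the origin, which is what makes the E−20-scale means in Table 4 directly comparable across algorithms.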

Table 4: The results of the benchmark test functions by nine algorithms.

Algorithm        F1          F2          F3          F4          F5          F6
MFO    Mean   5.28E-11    1.69E-06    7.21E-10    8.38E-11    2.67E-12    5.09E-13
       Std    4.19E-12    3.77E-06    4.93E-11    3.41E-11    7.19E-12    4.81E-12
GWO    Mean   7.03E-13    4.43E-05    8.48E-12    9.76E-09    3.83E-13    2.92E-14
       Std    6.92E-13    7.82E-05    3.12E-12    4.28E-09    9.06E-12    6.37E-14
PSO    Mean   6.36E-06    6.14E-04    5.56E-04    2.07E-05    2.98E-07    8.14E-06
       Std    4.23E-05    8.25E-03    1.79E-04    6.23E-05    5.24E-06    9.93E-07
GA     Mean   2.55E-07    2.47E-04    5.32E-05    4.95E-05    1.56E-08    3.38E-07
       Std    3.14E-07    8.05E-04    3.87E-05    1.69E-05    6.06E-08    2.72E-07
EPO    Mean   8.57E-06    9.14E-05    3.52E-07    6.89E-07    5.83E-09    4.02E-08
       Std    4.87E-06    2.43E-05    4.99E-07    4.27E-07    7.18E-09    5.39E-08
CA     Mean   4.13E-12    1.49E-05    4.41E-09    6.65E-12    2.33E-07    7.53E-11
       Std    3.78E-13    1.36E-05    2.23E-10    4.89E-13    2.68E-08    6.21E-09
CEPO   Mean   1.28E-27    4.15E-14    6.54E-26    2.13E-21    2.39E-24    7.74E-29
       Std    4.63E-27    2.39E-14    7.16E-26    1.49E-21    4.76E-24    3.22E-29
CFA    Mean   4.23E-23    3.49E-10    2.37E-19    8.01E-17    5.41E-20    1.19E-24
       Std    1.89E-22    8.83E-10    9.44E-19    2.41E-17    2.93E-19    4.32E-23
EPSEO  Mean   8.67E-24    7.25E-11    3.91E-20    6.61E-18    4.25E-21    8.81E-22
       Std    5.42E-24    9.41E-11    8.23E-20    3.93E-18    7.89E-21    6.67E-21

Table 5: Average recognition rates (%) obtained by nine models on four databases.

Model        YALE   ORL    UMIST   FERET
MFO-SVM      96.6   96.8   98.0    97.7
GWO-SVM      97.1   97.0   96.5    98.2
PSO-SVM      96.9   96.5   97.1    97.2
GA-SVM       97.4   97.5   97.6    97.4
EPO-SVM      97.6   97.2   97.9    97.6
CA-SVM       97.8   97.4   97.7    97.4
CEPO-SVM     98.1   97.8   98.4    98.5
CFA-SVM      97.9   97.7   98.3    98.3
EPSEO-SVM    97.7   97.6   98.1    98.0


by CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM. For each noise database, we select 70% of the noise samples as training samples and the remaining 30% as testing samples. Salt and pepper noise appears as random black or white pixels in an image. Four different levels of noise on the YALE database, namely, 0%, 5%, 10%, and 15%, are shown in Figure 8. From Figure 8, we can

see that as the noise level increases, the face image becomes more blurred. Figure 9 shows the detailed recognition rates of the different models on the four databases. As we can see from Figure 9, CEPO-SVM obtains the highest recognition rate at all four noise levels on all four databases in comparison with the other eight models. Although a face image polluted by salt and pepper noise of up to 15% is seriously degraded, the recognition rate of CEPO-SVM can still attain

Figure 7: Number of iterations versus accuracy on four databases: (a) YALE database; (b) ORL database; (c) UMIST database; (d) FERET database. Each panel plots accuracy (%) over 100 iterations for CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM.


82.5% on the YALE database, 83.5% on the ORL database, 81.1% on the UMIST database, and 82.1% on the FERET database, which far exceed those of the other eight models. Besides, as the noise level increases, the recognition rate of CEPO-SVM on the four databases decreases more slowly than that of the other eight models. Obviously, the results show that CEPO-SVM can not only attain good noise-free face recognition accuracy but also achieve good accuracy under noise conditions, which

Table 6: Comparison of the mean and standard deviation of the parameters on the YALE face database among nine optimization algorithms.

Algorithm   Penalty parameter C (Mean / Std)   Kernel parameter c (Mean / Std)
MFO         18.3721 / 4.4921                   0.0633 / 0.1109
GWO         16.8983 / 4.0748                   0.0518 / 0.0978
PSO         25.2319 / 7.4295                   0.1292 / 0.1753
GA          28.3684 / 8.7016                   0.1449 / 0.2387
CA          19.1670 / 4.8603                   0.0672 / 0.1283
EPO         22.6792 / 5.7065                   0.0812 / 0.1456
CEPO        13.2441 / 2.4982                   0.0100 / 0.0112
CFA         15.4039 / 3.0409                   0.0349 / 0.0778
EPSEO       15.6382 / 2.7636                   0.0426 / 0.0692

Table 7: Comparison of the mean and standard deviation of the parameters on the ORL face database among nine optimization algorithms.

Algorithm   Penalty parameter C (Mean / Std)   Kernel parameter c (Mean / Std)
MFO         18.3921 / 4.7817                   0.0965 / 0.1143
GWO         19.0418 / 5.1526                   0.0883 / 0.1277
PSO         26.1129 / 6.4990                   0.1620 / 0.2563
GA          27.5823 / 8.0652                   0.1953 / 0.2497
CA          19.9301 / 3.9721                   0.0827 / 0.1683
EPO         22.2801 / 6.2076                   0.1198 / 0.1762
CEPO        14.3873 / 2.0187                   0.0121 / 0.0098
CFA         16.2265 / 4.1552                   0.0573 / 0.0829
EPSEO       15.7192 / 3.5228                   0.0525 / 0.0701

Table 8: Comparison of the mean and standard deviation of the parameters on the UMIST face database among nine optimization algorithms.

Algorithm   Penalty parameter C (Mean / Std)   Kernel parameter c (Mean / Std)
MFO         17.2782 / 4.2048                   0.0804 / 0.1449
GWO         16.9556 / 3.7649                   0.0773 / 0.1283
PSO         24.9031 / 7.1991                   0.1439 / 0.1958
GA          25.3415 / 8.2315                   0.1557 / 0.2261
CA          18.8737 / 4.0103                   0.0762 / 0.1813
EPO         22.0217 / 5.9525                   0.0973 / 0.1502
CEPO        12.9109 / 2.2081                   0.0114 / 0.0101
CFA         15.2271 / 3.4594                   0.0638 / 0.0822
EPSEO       15.9964 / 3.0182                   0.0514 / 0.0769

Table 9: Comparison of the mean and standard deviation of the parameters on the FERET face database among nine optimization algorithms.

Algorithm   Penalty parameter C (Mean / Std)   Kernel parameter c (Mean / Std)
MFO         18.1928 / 4.1892                   0.0619 / 0.1023
GWO         17.3209 / 3.9713                   0.0557 / 0.0921
PSO         25.5827 / 6.9143                   0.1401 / 0.1882
GA          26.6929 / 8.4939                   0.1379 / 0.2273
CA          15.3462 / 5.0541                   0.0924 / 0.1419
EPO         21.2679 / 5.6314                   0.0846 / 0.1364
CEPO        11.1811 / 2.1192                   0.0126 / 0.0087
CFA         15.0647 / 3.7672                   0.0489 / 0.0748
EPSEO       14.4689 / 3.4295                   0.0377 / 0.0661


Figure 8: Four different levels of noise image on the YALE database: (a) 0%; (b) 5%; (c) 10%; (d) 15%.


verifies that the robustness of CEPO-SVM is superior to that of the other eight models.

6.5. Run Time Analysis. The average training times over thirty independent runs for the proposed CEPO-SVM and the eight competitor intelligent-algorithm-based SVMs are shown in Table 10. As we can see from Table 10, for all four databases, the average training time of the proposed CEPO-SVM is longer than that of the six models based on single algorithms, i.e., MFO-SVM, GWO-SVM, PSO-SVM, GA-SVM, EPO-SVM, and CA-SVM. This may be because hybridizing two algorithms increases the training time. Although CEPO-SVM takes more time than these six single-algorithm models, it can be considered better than them because of its high classification performance. However, for all four databases, the average training time of the proposed CEPO-SVM is less than that of the two models based on hybrid algorithms, i.e., CFA-SVM and EPSEO-SVM. Obviously, CEPO-SVM performs better than these two hybrid-algorithm models.

6.6. Limitation Analysis. Like other hybrid algorithms, although CEPO provides superior performance in face recognition, due to its large amount of computation, the proposed CEPO costs more time per iteration than single algorithms. Besides, when CEPO is applied to solve super-high-dimensional optimization problems, its performance may be reduced.

Figure 9: Noise percentages versus accuracy on four databases: (a) YALE database; (b) ORL database; (c) UMIST database; (d) FERET database.

Table 10: Average training time (s) obtained by nine models on four databases.

Model        YALE    ORL     UMIST    FERET
MFO-SVM      47.93   61.54   70.95    46.03
GWO-SVM      45.06   58.81   72.31    50.54
PSO-SVM      43.95   54.12   65.69    48.91
GA-SVM       69.71   82.73   98.03    79.42
EPO-SVM      40.68   47.49   68.55    42.87
CA-SVM       46.33   56.87   77.41    53.24
CEPO-SVM     59.76   79.23   95.76    67.34
CFA-SVM      78.25   81.96   112.87   84.59
EPSEO-SVM    82.43   83.15   107.43   86.77


7. Conclusions

In this paper, combining the respective advantages of EPO and CA, we propose a hybrid metaheuristic algorithm named CEPO, in which the embedded framework provided by CA allows the two algorithms to promote each other and overcome each other's defects. The proposed algorithm has been evaluated on six benchmark test functions in comparison with eight state-of-the-art algorithms. Furthermore, we apply this hybrid metaheuristic algorithm to the automatic learning of the parameters of SVM to solve the classification problem of face recognition. The proposed classifier model has been tested on four commonly and internationally used face databases in comparison with models based on the other eight state-of-the-art algorithms. The experimental results show that the proposed model has better performance in terms of accuracy, convergence rate, stability, robustness, and run time.

We suggest future work in several directions. First, the proposed model has only been experimented on four face databases; other new face databases should be tested to extend the current work. Second, the proposed hybrid algorithm should be applied to other classifiers for face recognition. Lastly, the proposed hybrid algorithm should be applied to solve other optimization problems, such as digital filter design, cognitive radio spectrum allocation, and image segmentation.

List of Definitions of Partial Parameters

$x_i$: The position of the ith emperor penguin in the D-dimensional search space
$s^t$: Situational knowledge component
$N_j^t$: Normative knowledge in the jth dimension and in the tth generation
$I_j^t$: The interval of normative knowledge in the jth dimension
$l_j^t$: The lower boundary in the jth dimension and in the tth generation
$L_j^t$: The objective value of the lower boundary $l_j^t$ of the jth parameter
$u_j^t$: The upper boundary in the jth dimension and in the tth generation
$U_j^t$: The objective value of the upper boundary $u_j^t$ of the jth parameter
$D_{ep}^t$: The distance between the emperor penguin and the optimal solution
$M$: The movement parameter
$c$: The scalar
$C$: The penalty parameter
$\xi_h$: The relaxing variables
$a_h$: The training samples
$\alpha_h$: The Lagrange multipliers
$r$: The scale of the Gabor filter
$o$: The orientation of the Gabor filter
$Q(\delta)$: The concatenated column vector
$\Sigma_{Q(\delta)}$: The covariance matrix of the augmented feature vector $Q(\delta)$
$\Omega$: The orthogonal eigenvector matrix
$\Lambda$: The diagonal eigenvalue matrix

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant no. 61571149), the Heilongjiang Province Key Laboratory Fund of High Accuracy Satellite Navigation and Marine Application Laboratory (Grant no. HKL-2020-Y01), and the Initiation Fund for Postdoctoral Research in Heilongjiang Province (Grant no. LBH-Q19098).

References

[1] S. Jiang, H. Mao, Z. Ding, and Y. Fu, "Deep decision tree transfer boosting," IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 2, pp. 383–395, 2020.

[2] Y. Xia and J. Wang, "A bi-projection neural network for solving constrained quadratic optimization problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 2, pp. 214–224, 2016.

[3] S. Zhang, X. Li, M. Zong, X. Zhu, and R. Wang, "Efficient kNN classification with different numbers of nearest neighbors," IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 5, pp. 1774–1785, 2018.

[4] A. Mathur and G. M. Foody, "Multiclass and binary SVM classification: implications for training and classification users," IEEE Geoscience and Remote Sensing Letters, vol. 5, no. 2, pp. 241–245, 2008.

[5] J. Ruan, H. Jiang, X. Li, Y. Shi, F. T. S. Chan, and W. Rao, "A granular GA-SVM predictor for big data in agricultural cyber-physical systems," IEEE Transactions on Industrial Informatics, vol. 15, no. 12, pp. 6510–6521, 2019.

[6] F. Melgani and Y. Bazi, "Classification of electrocardiogram signals with support vector machines and particle swarm optimization," IEEE Transactions on Information Technology in Biomedicine, vol. 12, no. 5, pp. 667–677, 2008.

[7] L. Wanling, W. Zhensheng, and S. Xiangjun, "Parameters optimization of support vector machine based on simulated annealing and improved QPSO," in Proceedings of the 2012 International Conference on Industrial Control and Electronics Engineering, pp. 1089–1091, Xi'an, China, August 2012.

[8] S. Mirjalili, S. M. Mirjalili, and A. Lewis, "Grey wolf optimizer," Advances in Engineering Software, vol. 69, pp. 46–61, 2014.

[9] S. Mirjalili, "Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm," Knowledge-Based Systems, vol. 89, pp. 228–249, 2015.

[10] S. Kaur, L. K. Awasthi, A. L. Sangal, et al., "Tunicate swarm algorithm: a new bio-inspired based metaheuristic paradigm for global optimization," Engineering Applications of Artificial Intelligence, vol. 90, Article ID 103541, 2020.

[11] M. Dehghani, Z. Montazeri, O. P. Malik, et al., "A new methodology called dice game optimizer for capacitor placement in distribution systems," Electrical Engineering & Electromechanics, vol. 1, pp. 61–64, 2020.

[12] G. Dhiman and V. Kumar, "Emperor penguin optimizer: a bio-inspired algorithm for engineering problems," Knowledge-Based Systems, vol. 159, pp. 20–50, 2018.

[13] S. K. Baliarsingh, W. Ding, S. Vipsita, et al., "A memetic algorithm using emperor penguin and social engineering optimization for medical data classification," Applied Soft Computing, vol. 85, 2019.

[14] R. G. Reynolds, "An introduction to cultural algorithms," in Proceedings of the Third Annual Conference on Evolutionary Programming, pp. 131–139, San Diego, CA, USA, January 1994.

[15] C. Lin, C. Chen, and C. Lin, "A hybrid of cooperative particle swarm optimization and cultural algorithm for neural fuzzy networks and its prediction applications," IEEE Transactions on Systems, Man, and Cybernetics, vol. 39, no. 1, pp. 55–68, 2009.

[16] H. Gao, C. Xu, and C. Li, "Quantum-inspired cultural bacterial foraging algorithm for direction finding of impulse noise," International Journal of Innovative Computing and Applications, vol. 6, no. 1, pp. 44–54, 2014.

[17] H. Gao and M. Diao, "Cultural firework algorithm and its application for digital filters design," International Journal of Modelling, Identification and Control, vol. 14, no. 4, pp. 324–331, 2011.

[18] C. Liu, "Gabor-based kernel PCA with fractional power polynomial models for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 572–581, 2004.

[19] J. Ilonen, J.-K. Kamarainen, and H. Kälviäinen, "Efficient computation of Gabor features," Research Report 100, Lappeenranta University of Technology, Department of Information Technology, Lappeenranta, Finland, 2005.

[20] C. Liu and H. Wechsler, "Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition," IEEE Transactions on Image Processing, vol. 11, no. 4, pp. 467–476, 2002.

[21] M. D. Farrell and R. M. Mersereau, "On the impact of PCA dimension reduction for hyperspectral detection of difficult targets," IEEE Geoscience and Remote Sensing Letters, vol. 2, no. 2, pp. 192–195, 2005.

[22] B. Fei and J. Liu, "Binary tree of SVM: a new fast multiclass training and classification algorithm," IEEE Transactions on Neural Networks, vol. 17, no. 3, pp. 696–704, 2006.



$$g(a) = \operatorname{sgn}\left\{\sum_{h=1}^{n} \alpha_h b_h \left(a_h \cdot a_k\right) + c\right\} \qquad (16)$$

where sgn represents the symbolic function.

Kernel functions can be used to nonlinearly map the data in the low-dimensional input space to a high-dimensional space. Therefore, the linear inseparability of the input can be transformed into a linearly separable problem by this mapping. The kernel function can be defined as follows:

$$K(a_h, b_k) = \left(\psi(a_h) \cdot \psi(b_k)\right) \qquad (17)$$

Then the optimal function can be given by

$$g(a) = \operatorname{sgn}\left\{\sum_{h=1}^{n} \alpha_h b_h K\left(a_h \cdot a_k\right) + c\right\} \qquad (18)$$

Several types of kernel functions, such as the Linear Kernel, Polynomial Kernel, and Radial Basis Function (RBF), are widely used. However, RBF has the advantages of realizing nonlinear mapping with fewer parameters and fewer numerical difficulties. In this paper, RBF is selected for face recognition, and it can be represented as follows:

$$K(a, b) = \exp\left(-c \|a - b\|^2\right) \qquad (19)$$

where c denotes the kernel parameter.
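The RBF kernel of equation (19) is straightforward to express directly. Below is a standalone Python sketch, independent of the LIBSVM implementation used in the experiments; the kernel parameter is written as `c` to match the paper's notation, and the default value is only an illustrative placeholder.

```python
import math

def rbf_kernel(a, b, c=0.01):
    """RBF kernel K(a, b) = exp(-c * ||a - b||^2) from equation (19)."""
    sq_dist = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return math.exp(-c * sq_dist)

print(rbf_kernel([1.0, 2.0], [1.0, 2.0]))   # identical points give 1.0
```

A smaller c widens the kernel (distant points still get similarity close to 1), which is why the optimizer must balance c against the penalty parameter C.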

3.2. Objective Function of SVM. With a view to obtaining a training model with high classification accuracy, the penalty parameter C, which is used to adjust the confidence scale, and the kernel parameter c, which determines the width of the kernel and the range of the data points, can be optimized by an effective intelligent algorithm. Furthermore, the mean square error of the parameters of SVM can be selected as the objective function of the intelligent algorithm:

$$\text{objective} = \frac{1}{n} \sum_{h=1}^{n} \left(b_h - \widehat{b}_h\right)^2 \qquad (20)$$

where $\widehat{b}_h$ is the output value of the corresponding parameter and $b_h$ is the actual value of the corresponding parameter.
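The mean square error objective of equation (20), used as the fitness that the optimizer minimizes, can be sketched as:

```python
def mse_objective(predicted, actual):
    """Mean square error between SVM outputs and true values (equation (20))."""
    n = len(actual)
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n

print(mse_objective([1, -1, 1, 1], [1, -1, -1, 1]))  # one of four labels wrong
```

With ±1 class labels, each misclassified sample contributes 4/n, so the objective decreases monotonically with the number of correct classifications.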

4. Design of Cultural Emperor Penguin Optimizer for Parameter Optimization of SVM

4.1. The Design of Cultural Emperor Penguin Optimizer. EPO is a novel optimization algorithm presented by Dhiman and Kumar in 2018 [12], which is inspired by the huddling behavior of emperor penguins. In this paper, a hybrid algorithm named CEPO is proposed to solve real-number optimization problems with the help of the cultural algorithm basic framework shown in Figure 5. The key idea of CEPO is to obtain problem-solving knowledge from the huddling behavior of emperor penguins and, in return, make use of that knowledge to guide the evolution of the emperor penguin population.

Suppose CEPO is designed for a general minimum optimization problem:

$$\min f(x_i) \qquad (21)$$

where $x_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ denotes the position of the ith emperor penguin in the D-dimensional search space, $x_j^{\min} < x_{ij} < x_j^{\max}$ $(j = 1, 2, \ldots, D)$, $f(\cdot)$ is the objective function, and $f(x_i)$ denotes the objective value of the position $x_i$. $x_j^{\min}$ and $x_j^{\max}$ represent the lower and upper boundaries of the position of the emperor penguins in the jth dimension.

The belief space of the emperor penguin population in the tth generation is defined by $s^t$ and $N_j^t$, where $s^t$ is the situational knowledge component and $N_j^t$ is the normative knowledge, which represents the value-space information for each parameter in the jth dimension and in the tth generation. $N_j^t$ denotes $\langle I, L, U \rangle$ with $I_j^t = [l_j^t, u_j^t]$, where $I_j^t$ represents the interval of normative knowledge in the jth dimension. The lower boundary $l_j^t$ and the upper boundary $u_j^t$ are initialized according to the value range of the variables given by the problem. $L_j^t$ represents the objective value of the lower boundary $l_j^t$ of the jth parameter, and $U_j^t$ represents the objective value of the upper boundary $u_j^t$ of the jth parameter.

The acceptance function is used to select the emperor penguins who can directly influence the current belief space. In CEPO, the acceptance function selects the cultural individuals in the proportion of the top 20% from the current population space to update the belief space.

The situational knowledge $s^t$ can be updated by the update function

$$s^{t+1} = \begin{cases} x_{\text{best}}^{t+1}, & \text{if } f\left(x_{\text{best}}^{t+1}\right) < f\left(s^t\right) \\ s^t, & \text{else} \end{cases} \qquad (22)$$

where $x_{\text{best}}^{t+1}$ is the optimal position of the emperor penguin population space in the (t + 1)th generation.
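The situational-knowledge update of equation (22) simply keeps the best solution seen so far; a one-line sketch, where f is the objective being minimized:

```python
def update_situational(s, x_best, f):
    """Equation (22): replace the situational knowledge s with the
    generation's best position x_best if it has a lower objective value."""
    return x_best if f(x_best) < f(s) else s

f = lambda x: sum(v * v for v in x)           # e.g. the Sphere objective
s = update_situational([2.0, 2.0], [1.0, 0.5], f)
print(s)   # [1.0, 0.5]
```

Because the comparison is strict, the situational knowledge can never get worse between generations, which is what makes it a safe attractor for the influence step.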

Figure 5: The cultural algorithm basic framework. The belief space is updated from the population space through the Accept() and Update() functions and guides the population space through the Influence() function, while the population space evolves through the Generate(), Objective(), and Select() functions.


Assume that for the qth cultural individual, a random variable $\theta_q$ lying in the range [0, 1] is produced. The qth cultural individual affects the lower boundary of the normative knowledge in the jth dimension when $\theta_q < 0.5$ is satisfied. The normative knowledge $N_j^t$ can be updated by the update function

$$l_j^{t+1} = \begin{cases} x_{qj}^{t+1}, & \text{if } x_{qj}^{t+1} \le l_j^t \text{ or } f\left(x_q^{t+1}\right) < L_j^t \\ l_j^t, & \text{else} \end{cases}$$

$$L_j^{t+1} = \begin{cases} f\left(x_q^{t+1}\right), & \text{if } x_{qj}^{t+1} \le l_j^t \text{ or } f\left(x_q^{t+1}\right) < L_j^t \\ L_j^t, & \text{else} \end{cases} \qquad (23)$$

The qth cultural individual affects the upper boundary of the normative knowledge in the jth dimension when $\theta_q \ge 0.5$ is satisfied:

$$u_j^{t+1} = \begin{cases} x_{qj}^{t+1}, & \text{if } x_{qj}^{t+1} \ge u_j^t \text{ or } f\left(x_q^{t+1}\right) < U_j^t \\ u_j^t, & \text{else} \end{cases}$$

$$U_j^{t+1} = \begin{cases} f\left(x_q^{t+1}\right), & \text{if } x_{qj}^{t+1} \ge u_j^t \text{ or } f\left(x_q^{t+1}\right) < U_j^t \\ U_j^t, & \text{else} \end{cases} \qquad (24)$$
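Equations (23) and (24) adjust the normative interval $[l_j, u_j]$ whenever an accepted individual falls outside it or improves on the boundary's stored objective value. A sketch for one dimension follows; in CEPO a random $\theta_q$ decides whether an individual adjusts the lower or the upper boundary, but for brevity the sketch applies both rules.

```python
def update_normative(l, L, u, U, x_qj, f_xq):
    """Equations (23)-(24): update the lower boundary (l, L) and upper
    boundary (u, U) of the normative knowledge in one dimension from an
    accepted individual with position component x_qj and objective f_xq."""
    if x_qj <= l or f_xq < L:       # equation (23): lower boundary
        l, L = x_qj, f_xq
    if x_qj >= u or f_xq < U:       # equation (24): upper boundary
        u, U = x_qj, f_xq
    return l, L, u, U

# An individual below the current lower boundary widens the interval.
print(update_normative(-5.0, 10.0, 5.0, 12.0, -6.0, 13.0))
# -> (-6.0, 13.0, 5.0, 12.0)
```

Note that the "or f(x) < L" clauses mean a boundary can also move when a strictly better objective value is found at it, so the interval tracks the promising region rather than just the extreme positions.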

Situational knowledge and normative knowledge can be used to guide the evolution of the emperor penguin population through the influence function. In CEPO, a selection operator $\beta$ is produced to select one of two ways to influence the evolution of the emperor penguin population:

$$\beta = \frac{\text{Maxiteration} - t}{\text{Maxiteration}} \qquad (25)$$

where Maxiteration denotes the maximum number of iterations.

Assume that for the ith emperor penguin, a random variable $\lambda_i$ lying in the range [0, 1] is produced. The first way is to update the position of the emperor penguin by changing the search size and direction of the variation with the belief space, which is implemented when $\lambda_i \le \beta$ is satisfied. The position of the emperor penguin in the jth dimension can be updated by

$$x_{ij}^{t+1} = \begin{cases} x_{ij}^t + \left|\operatorname{size}\left(I_j^t\right) \cdot N(0, 1)\right|, & \text{if } x_{ij}^t < s_j^t \\ x_{ij}^t - \left|\operatorname{size}\left(I_j^t\right) \cdot N(0, 1)\right|, & \text{if } x_{ij}^t > s_j^t \\ x_{ij}^t + \eta \cdot \operatorname{size}\left(I_j^t\right) \cdot N(0, 1), & \text{else} \end{cases} \qquad (26)$$

where $N(0, 1)$ is a random number subject to the standard normal distribution, $\operatorname{size}(I_j^t)$ is the length of the adjustable interval of the jth parameter in the belief space in the tth generation, and $\eta$ is set to be in the range [0.01, 0.6] following [14].

The other way follows a series of steps in EPO, namely, huddle boundary generation, the temperature profile around the huddle, the distance calculation between emperor penguins, and the position update of the emperor penguins, which is carried out when $\lambda_i > \beta$ is satisfied. The specific steps can be represented as follows:

$$T' = T - \frac{t - \text{Maxiteration}}{\text{Maxiteration}}, \qquad T = \begin{cases} 0, & \text{if } R \ge 0.5 \\ 1, & \text{if } R < 0.5 \end{cases} \qquad (27)$$

where $T'$ represents the temperature profile around the huddle, $T$ is the time for finding the best optimal solution, and $R$ is a random variable lying in the range [0, 1].

$$D_{ep}^t = \left|\operatorname{Sep}\left(A^t\right) \cdot x_{\text{best}}^t - B^t \cdot x_i^t\right| \qquad (28)$$

where $D_{ep}^t$ denotes the distance between the emperor penguin and the optimal solution, $x_{\text{best}}^t$ represents the current optimal solution found in the emperor penguin population space in the tth generation, Sep represents the social forces of the emperor penguin that are responsible for convergence towards the optimal solution, and $A^t$ and $B^t$ are used to avoid collisions between neighboring emperor penguins, where $B^t$ is a random variable lying in the range [0, 1]. $A^t$ can be computed as follows:

$$
A^{t} = \left( M \times \left( T' + P_{grid}^{t}(\text{Accuracy}) \right) \times \mathrm{rand}() \right) - T',
\qquad
P_{grid}^{t}(\text{Accuracy}) = \left| x_{best}^{t} - x_i^{t} \right|,
\tag{29}
$$

where M is the movement parameter, which maintains a gap between emperor penguins for collision avoidance, and P_grid^t(Accuracy) defines the absolute difference between emperor penguins. S_ep(A^t) in equation (28) is computed as follows:

$$
S_{ep}\!\left(A^{t}\right) = \sqrt{\varepsilon \cdot e^{-t/\rho} - e^{-t}},
\tag{30}
$$

where e represents the base of the natural logarithm, and ε and ρ are two control parameters for better exploration and exploitation, lying in the ranges [1.5, 2] and [2, 3], respectively. Ultimately, the position of the emperor penguin is updated as follows:

$$
x_i^{t+1} = x_{best}^{t} - A^{t} \cdot D_{ep}^{t}.
\tag{31}
$$
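The EPO-branch steps in equations (27)–(31) can be sketched as follows (a rough Python sketch under the parameter ranges stated above; applying A^t and B^t per dimension is an assumption made for illustration):

```python
import math
import random

def epo_update(x_i, x_best, t, max_iter, M=2.0, eps=1.75, rho=2.5):
    """One EPO-style position update following equations (27)-(31).

    x_i, x_best -- current and best positions (lists of equal length)
    M           -- movement parameter; eps and rho are control parameters
                   drawn from [1.5, 2] and [2, 3] in the paper
    """
    R = random.random()
    T = 0.0 if R >= 0.5 else 1.0                   # equation (27), case split on R
    T_prime = T - (t - max_iter) / max_iter        # temperature profile T'
    # S_ep(A^t), equation (30); the max() guards against tiny numerical negatives
    S = math.sqrt(max(eps * math.exp(-t / rho) - math.exp(-t), 0.0))
    new_x = []
    for xb, xi in zip(x_best, x_i):
        P_grid = abs(xb - xi)                      # equation (29), accuracy term
        A = M * (T_prime + P_grid) * random.random() - T_prime
        B = random.random()
        D_ep = S * xb - B * xi                     # equation (28)
        new_x.append(xb - A * D_ep)                # equation (31)
    return new_x
```

As t grows, S decays towards zero, so late-stage moves stay close to x_best, which is the exploitation behavior the temperature profile is meant to produce.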

The algorithm procedure of CEPO is sketched in Algorithm 1.

4.2. CEPO for Parameter Optimization of SVM. During the iterative search, each emperor penguin adjusts its position towards the optimum by comparing values of the objective function; the position x_i is taken to be the parameter pair (C, γ), so each update of an emperor penguin triggers retraining of the SVM. Figure 6 shows a schematic diagram of SVM based on CEPO.
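In outline, the coupling scores each candidate (C, γ) pair with SVM validation accuracy and keeps the best. The sketch below illustrates the fitness-evaluation side only; `svm_accuracy` is a hypothetical stand-in for the actual LIBSVM train-and-validate call, with a made-up peak location:

```python
import random

def svm_accuracy(C, gamma):
    # Hypothetical stand-in for training an RBF-kernel SVM with (C, gamma)
    # and returning validation accuracy; a toy surrogate peaking near
    # C = 14, gamma = 0.01 (illustrative values only).
    return 1.0 / (1.0 + (C - 14.0) ** 2 / 100.0 + (gamma - 0.01) ** 2 * 1e4)

def evaluate_population(positions):
    """Score each penguin position (C, gamma) and return the best pair."""
    scored = [(svm_accuracy(C, g), (C, g)) for C, g in positions]
    best_fit, best_pos = max(scored)
    return best_fit, best_pos

# A random initial population of (C, gamma) pairs over plausible ranges.
random.seed(3)
pop = [(random.uniform(1, 30), random.uniform(0.001, 0.3)) for _ in range(20)]
fit, (C_best, g_best) = evaluate_population(pop)
```

In the actual model, the surrogate would be replaced by cross-validated LIBSVM training on the face-feature vectors, which is what makes each fitness evaluation expensive.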

5 The Experiments

5.1. Face Databases. In the following experiments, to test our proposed model, we select four face databases that are commonly and internationally used, i.e., YALE (http://cvc.yale.edu/projects/yalefaces/yalefaces.html),

6 Mathematical Problems in Engineering

ORL (http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html), UMIST (http://images.ee.umist.ac.uk/danny/database.html), and FERET (http://www.frvt.org). Specifically, the YALE database contains 165 images of 15 individuals, including changes in illumination, expression, and posture. The ORL database consists of 400 photographs of 40 human subjects with different expressions and variations at different scales and orientations. The UMIST database includes 564 images of 20 human subjects with different angles and poses. The FERET database consists of a total of 1400 images of 200 human subjects, corresponding to different poses, expressions, and illumination conditions. To fully verify the validity and rationality of the experiments, the configuration of training and testing samples on the four face databases is shown in Table 1.

5.2. State-of-the-Art Algorithms for Comparison. To validate the performance of CEPO, eight state-of-the-art metaheuristics are used for comparison. These are Moth-Flame

Input: the emperor penguin population x_i (i = 1, 2, ..., m)
Output: the optimal position of the emperor penguin
(1) Procedure CEPO
(2)   Initialize the size of the population, Maxiteration, and the parameters of EPO and CA
(3)   Initialize the situational knowledge and normative knowledge of the belief space
(4)   Initialize the population space of EPs
(5)   Compute the fitness of each EP
(6)   Arrange the fitness values of the EPs
(7)   Initialize the belief space by acceptance proportion 20% and AdjustCulture
(8)   Initialize the belief space
(9)   While (t < Maxiteration) do
(10)    Calculate T and T′ using equations (9) and (10)
(11)    For i ← 1 to m do
(12)      λi ← Rand()
(13)      For j ← 1 to D do
(14)        If (λi ≤ β) using equation (7)
(15)          Update the position of EPs using equation (8)
(16)        Else
(17)          Compute A and B using equations (12) and (13)
(18)          Compute S_ep(A) using equation (14)
(19)          Compute the new position of EPs using equation (15)
(20)          Compute the fitness of the current EPs
(21)          Update the position of each EP if it improves on the previous position
(22)        End if
(23)      End for
(24)    End for
(25)    Amend EPs which cross the boundary
(26)    Arrange the fitness values of the EPs
(27)    Update the belief space by acceptance proportion top 20% and AdjustCulture
(28)    t ← t + 1
(29)  End while
(30)  Return the optimal position
(31) End procedure
(32) Procedure AdjustCulture
(33)   Update situational knowledge using equation (2)
(34)   For q ← 1 to m/5 do
(35)     θq ← Rand()
(36)     If (θq < 0.5)
(37)       For j ← 1 to D do
(38)         Update the lower boundary of normative knowledge using equations (3) and (4)
(39)     Else
(40)       For j ← 1 to D do
(41)         Update the upper boundary of normative knowledge using equations (5) and (6)
(42)     End if
(43)     End for
(44)   End for
(45) End procedure

Algorithm 1: The algorithm procedure of CEPO.
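The branch-selection core of Algorithm 1 can be sketched compactly (a minimal sketch assuming minimization; `belief_update` and `epo_update` are hypothetical callables standing for the two position-update rules, and β follows equation (25)):

```python
import random

def cepo_step(pop, fitness, belief_update, epo_update, t, max_iter):
    """One generation of CEPO: each penguin is updated either by the
    belief-space rule (lambda_i <= beta) or by the EPO rule, and the
    top 20% of the new population are accepted into the belief space."""
    beta = (max_iter - t) / max_iter              # selection operator, equation (25)
    best = min(pop, key=fitness)                  # current best (minimization)
    new_pop = []
    for x in pop:
        lam = random.random()
        if lam <= beta:
            new_pop.append(belief_update(x))      # cultural influence branch
        else:
            new_pop.append(epo_update(x, best))   # EPO huddling branch
    new_pop.sort(key=fitness)
    accepted = new_pop[: max(1, len(new_pop) // 5)]   # top 20% update belief space
    return new_pop, accepted
```

Because β shrinks linearly from 1 to 0, early generations lean on the belief space and later generations lean on the EPO move, matching the exploration-to-exploitation handover the paper describes.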


Optimization (MFO) [9], Grey Wolf Optimizer (GWO) [8], Particle Swarm Optimization (PSO) [6], Genetic Algorithm (GA) [5], Cultural Algorithm (CA) [14], Emperor Penguin Optimizer (EPO) [12], Cultural Firework Algorithm (CFA) [17], and Emperor Penguin and Social Engineering Optimizer (EPSEO) [13].

5.3. Experimental Setup. The experimental environment is MATLAB 2018a with LIBSVM, on a computer with an Intel Core i5 processor and 16 GB of RAM. Table 2 shows the parameter settings of the proposed CEPO and the eight competitor intelligent algorithms, i.e., MFO, GWO, PSO, GA, CA, EPO, CFA, and EPSEO. The parameter values of these algorithms are set as recommended in their original papers.

6 Simulation Results and Discussion

6.1. Performance of CEPO Comparison Experiment. Six benchmark test functions are applied to verify the superiority and applicability of CEPO. Table 3 shows the details of the six benchmark test functions, including the single-peak benchmark functions Sphere (F1) and Schwefel 2.22 (F2) and the multipeak benchmark functions Rastrigin (F3), Griewank (F4), Ackley (F5), and Schaffer (F6). For each benchmark test function, each algorithm carries out thirty independent experiments. The mean and the standard deviation of the thirty independent results are shown in Table 4.

As we can see, the statistical results of CEPO on the six benchmark test functions are clearly superior to those of the other eight comparison algorithms. Under the same optimization conditions, both evaluation indexes of CEPO are better than those of the other algorithms by several, or even ten, orders of magnitude, which proves that CEPO has strong optimization performance. The optimal mean index shows that CEPO maintains a high overall optimization level across the thirty independent experiments and implies better single-run optimization accuracy, while the optimal standard deviation index proves that CEPO has strong optimization stability and robustness. Furthermore, under the same test constraints, CEPO attains high optimization accuracy on both the single-peak functions (F1-F2) and the multipeak functions (F3-F6); in particular, it still reaches very small objective values on the four multipeak functions (F3-F6), which verifies its better global exploration performance and stronger ability to escape local extrema. If target-value optimization precision is taken as the evaluation index, the preset precision threshold can be set as strict as 1.0E-12, and for functions F1, F3, and F6 it can even be tightened to 1.0E-25 while still guaranteeing the optimization success rate; this precision comfortably meets the error tolerance of engineering problems, which further proves that CEPO has good application potential for engineering optimization.

Figure 6: Block diagram of SVM based on CEPO. The flowchart proceeds: initialize the emperor penguin population; set the parameters; SVM training; calculate the objective value; update the best solution; then check whether the termination criterion is met, outputting the optimal parameters of SVM (the CEPO-SVM model) if so, and iterating otherwise.

Table 1: Configuration of training and testing samples on the four face databases.

Database   Total number   Training samples   Testing samples
YALE       165            70 percent         the rest
ORL        400            70 percent         the rest
UMIST      564            70 percent         the rest
FERET      200            70 percent         the rest


6.2. Performance of CEPO-SVM Comparison Experiment. To verify the superiority of the proposed CEPO-SVM classification model for face recognition, CEPO and the eight competitor intelligent algorithms are used to iteratively optimize the penalty parameter C and kernel parameter γ of SVM, so as to assess the performance of CEPO. The average recognition rates over thirty independent runs are shown in Table 5. As can be seen clearly from Table 5, CEPO-SVM achieves the highest accuracy among all nine models. In the case of the YALE face database, an accuracy of 98.1% is obtained by the CEPO-SVM model. For the ORL, UMIST, and FERET face databases, CEPO-SVM achieves accuracies of 97.8%, 98.4%, and 98.5%, respectively. Furthermore, the convergence rates of CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM are examined and depicted in Figures 7(a)-7(d). From this

figure, it can be observed that the recognition rate successively increases from iteration 1 to iteration 100 on all four face databases. For the YALE database, the accuracy rate stops improving after the 14th, 22nd, 31st, 19th, 18th, 21st, 20th, 23rd, and 24th iterations using CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM, respectively. Similarly, for the ORL database, no improvement in accuracy is noticed after the 11th, 20th, 22nd, 19th, 15th, 20th, 18th, 21st, and 24th iterations for the same models, respectively. In the case of the UMIST database, CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM converge after the 15th, 24th, 25th, 19th, 20th, 26th, 18th, 24th, and 21st iterations, respectively. For the FERET database, no rise in accuracy is

Table 2: Parameter settings of the nine algorithms.

Cultural emperor penguin optimizer (CEPO): size of population 80; control parameter ε in [1.5, 2]; control parameter ρ in [2, 3]; movement parameter M = 2; the constant η = 0.06; maximum iteration 100.

Moth-flame optimization (MFO) [9]: size of population 80; convergence constant in [-1, -2]; logarithmic spiral 0.75; maximum iteration 100.

Grey wolf optimizer (GWO) [8]: size of population 80; control parameter in [0, 2]; maximum iteration 100.

Particle swarm optimization (PSO) [6]: size of population 80; inertia weight 0.75; cognitive and social coefficients 1.8 and 2; maximum iteration 100.

Genetic algorithm (GA) [5]: size of population 80; probability of crossover 0.9; probability of mutation 0.05; maximum iteration 100.

Cultural algorithm (CA) [14]: size of population 80; the constant 0.06; maximum iteration 100.

Emperor penguin optimizer (EPO) [12]: size of population 80; control parameters in [1.5, 2] and [2, 3]; movement parameter 2; maximum iteration 100.

Cultural firework algorithm (CFA) [17]: size of population 80; cost parameters 0.025 and 0.2; the constant 0.3; maximum iteration 100.

Emperor penguin and social engineering optimizer (EPSEO) [13]: size of population 80; rate of training 0.2; rate of spotting an attack 0.05; number of attacks 50; control parameters in [1.5, 2] and [2, 3]; movement parameter 2; maximum iteration 100.


seen after the 16th, 22nd, 24th, 19th, 18th, 20th, 18th, 21st, and 25th iterations using CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM, respectively. Obviously, CEPO-SVM converges first among the nine models, which verifies that EPO, with the help of the embedded framework provided by CA, remarkably enhances the convergence rate. Therefore, the parameters found by CEPO are better suited to maximizing the classification performance of SVM.

6.3. Stability of CEPO-SVM Comparison Experiment. For all nine models, stability is measured by the mean and standard deviation of the penalty parameter C and the kernel parameter γ. In this experiment, each model is run thirty times. Tables 6-9 compare the mean and standard deviation of the parameters among the nine models on the four face databases. As we can see from Tables 6-9, the mean of the penalty parameter C for CEPO-SVM is smaller than that of the other models on all four face databases, which reduces the risk of overfitting in the SVM. Moreover, CEPO-SVM yields the lowest standard deviation of the penalty parameter C among all nine models on all four databases, which verifies that the stability of CEPO-SVM is better. For all four databases, the mean and standard deviation of the kernel parameter γ are close among the nine models; however, CEPO-SVM still has the minimum standard deviation of γ. Obviously, CEPO is more stable in optimizing the parameters of SVM for face recognition.

6.4. Robustness of CEPO-SVM Comparison Experiment. To prove the robustness of CEPO-SVM, salt and pepper noise is added to each image of the original four databases. The generated noise databases are used for face recognition

Table 3: Details of the six benchmark test functions.

F1, Sphere: \( f(x) = \sum_{i=1}^{D} x_i^2 \); D = 10; search range [-100, 100].
F2, Schwefel 2.22: \( f(x) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i| \); D = 10; search range [-10, 10].
F3, Rastrigin: \( f(x) = \sum_{i=1}^{D} \left( x_i^2 - 10\cos(2\pi x_i) + 10 \right) \); D = 10; search range [-5.12, 5.12].
F4, Griewank: \( f(x) = \sum_{i=1}^{D} x_i^2 / 4000 - \prod_{i=1}^{D} \cos\!\left( x_i / \sqrt{i} \right) + 1 \); D = 10; search range [-600, 600].
F5, Ackley: \( f(x) = -20 \exp\!\left( -0.2 \sqrt{\tfrac{1}{n} \sum_{i=1}^{D} x_i^2} \right) - \exp\!\left( \tfrac{1}{n} \sum_{i=1}^{D} \cos(2\pi x_i) \right) + 20 + e \); D = 10; search range [-32, 32].
F6, Schaffer: \( f(x) = 0.5 + \dfrac{\left( \sin\sqrt{x_1^2 + x_2^2} \right)^2 - 0.5}{\left( 1 + 0.001 (x_1^2 + x_2^2) \right)^2} \); D = 2; search range [-100, 100].
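For reference, the six benchmark functions of Table 3 can be written directly (a Python sketch; each function has its global minimum 0 at the origin):

```python
import math

def sphere(x):                    # F1
    return sum(v * v for v in x)

def schwefel_222(x):              # F2: sum of |x_i| plus product of |x_i|
    return sum(abs(v) for v in x) + math.prod(abs(v) for v in x)

def rastrigin(x):                 # F3
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def griewank(x):                  # F4
    s = sum(v * v for v in x) / 4000.0
    p = math.prod(math.cos(v / math.sqrt(i)) for i, v in enumerate(x, 1))
    return s - p + 1

def ackley(x):                    # F5
    n = len(x)
    a = -20 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / n))
    b = -math.exp(sum(math.cos(2 * math.pi * v) for v in x) / n)
    return a + b + 20 + math.e

def schaffer(x):                  # F6 (two-dimensional)
    r2 = x[0] ** 2 + x[1] ** 2
    return 0.5 + (math.sin(math.sqrt(r2)) ** 2 - 0.5) / (1 + 0.001 * r2) ** 2
```

F1 and F2 are unimodal, so they probe convergence accuracy; F3-F6 are multimodal, so they probe the ability to escape local minima, which is how the paper separates exploitation from exploration.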

Table 4: Results of the nine algorithms on the benchmark test functions.

Algorithm         F1        F2        F3        F4        F5        F6
MFO    Mean   5.28E-11  1.69E-06  7.21E-10  8.38E-11  2.67E-12  5.09E-13
       Std    4.19E-12  3.77E-06  4.93E-11  3.41E-11  7.19E-12  4.81E-12
GWO    Mean   7.03E-13  4.43E-05  8.48E-12  9.76E-09  3.83E-13  2.92E-14
       Std    6.92E-13  7.82E-05  3.12E-12  4.28E-09  9.06E-12  6.37E-14
PSO    Mean   6.36E-06  6.14E-04  5.56E-04  2.07E-05  2.98E-07  8.14E-06
       Std    4.23E-05  8.25E-03  1.79E-04  6.23E-05  5.24E-06  9.93E-07
GA     Mean   2.55E-07  2.47E-04  5.32E-05  4.95E-05  1.56E-08  3.38E-07
       Std    3.14E-07  8.05E-04  3.87E-05  1.69E-05  6.06E-08  2.72E-07
EPO    Mean   8.57E-06  9.14E-05  3.52E-07  6.89E-07  5.83E-09  4.02E-08
       Std    4.87E-06  2.43E-05  4.99E-07  4.27E-07  7.18E-09  5.39E-08
CA     Mean   4.13E-12  1.49E-05  4.41E-09  6.65E-12  2.33E-07  7.53E-11
       Std    3.78E-13  1.36E-05  2.23E-10  4.89E-13  2.68E-08  6.21E-09
CEPO   Mean   1.28E-27  4.15E-14  6.54E-26  2.13E-21  2.39E-24  7.74E-29
       Std    4.63E-27  2.39E-14  7.16E-26  1.49E-21  4.76E-24  3.22E-29
CFA    Mean   4.23E-23  3.49E-10  2.37E-19  8.01E-17  5.41E-20  1.19E-24
       Std    1.89E-22  8.83E-10  9.44E-19  2.41E-17  2.93E-19  4.32E-23
EPSEO  Mean   8.67E-24  7.25E-11  3.91E-20  6.61E-18  4.25E-21  8.81E-22
       Std    5.42E-24  9.41E-11  8.23E-20  3.93E-18  7.89E-21  6.67E-21

Table 5: Average recognition rates (%) obtained by the nine models on the four databases.

Model        YALE   ORL    UMIST  FERET
MFO-SVM      96.6   96.8   98.0   97.7
GWO-SVM      97.1   97.0   96.5   98.2
PSO-SVM      96.9   96.5   97.1   97.2
GA-SVM       97.4   97.5   97.6   97.4
EPO-SVM      97.6   97.2   97.9   97.6
CA-SVM       97.8   97.4   97.7   97.4
CEPO-SVM     98.1   97.8   98.4   98.5
CFA-SVM      97.9   97.7   98.3   98.3
EPSEO-SVM    97.7   97.6   98.1   98.0


by CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM. For each noise database, we select 70% of the noise samples as training samples and the remaining 30% as testing samples. Salt and pepper noise appears as random black or white pixels in an image. Four different noise levels on the YALE database, namely 0%, 5%, 10%, and 15%, are shown in Figure 8. From Figure 8, we can

see that as the noise level increases, the face image becomes more blurred. Figure 9 shows the detailed recognition rates of the different models on the four databases. As we can see from Figure 9, CEPO-SVM obtains the highest recognition rate at all four noise levels on all four databases in comparison with the other eight models. Even when the face image is seriously polluted by salt and pepper noise of up to 15%, the recognition rate of CEPO-SVM can still attain

Figure 7: Number of iterations versus accuracy on the four databases: (a) YALE database; (b) ORL database; (c) UMIST database; (d) FERET database.


82.5% on the YALE database, 83.5% on the ORL database, 81.1% on the UMIST database, and 82.1% on the FERET database, far exceeding the other eight models. Besides, as the noise level increases, the recognition rate of CEPO-SVM on the four

databases decreases more slowly than that of the other eight models. Obviously, the results show that CEPO-SVM can not only attain good noise-free face recognition accuracy but also achieve good accuracy under noise conditions, which

Table 7: Comparison of the mean and standard deviation of the parameters on the ORL face database among the nine optimization algorithms.

            Penalty parameter C      Kernel parameter γ
Algorithm   Mean      Std            Mean     Std
MFO         18.3921   4.7817         0.0965   0.1143
GWO         19.0418   5.1526         0.0883   0.1277
PSO         26.1129   6.4990         0.1620   0.2563
GA          27.5823   8.0652         0.1953   0.2497
CA          19.9301   3.9721         0.0827   0.1683
EPO         22.2801   6.2076         0.1198   0.1762
CEPO        14.3873   2.0187         0.0121   0.0098
CFA         16.2265   4.1552         0.0573   0.0829
EPSEO       15.7192   3.5228         0.0525   0.0701

Table 6: Comparison of the mean and standard deviation of the parameters on the YALE face database among the nine optimization algorithms.

            Penalty parameter C      Kernel parameter γ
Algorithm   Mean      Std            Mean     Std
MFO         18.3721   4.4921         0.0633   0.1109
GWO         16.8983   4.0748         0.0518   0.0978
PSO         25.2319   7.4295         0.1292   0.1753
GA          28.3684   8.7016         0.1449   0.2387
CA          19.1670   4.8603         0.0672   0.1283
EPO         22.6792   5.7065         0.0812   0.1456
CEPO        13.2441   2.4982         0.0100   0.0112
CFA         15.4039   3.0409         0.0349   0.0778
EPSEO       15.6382   2.7636         0.0426   0.0692

Table 8: Comparison of the mean and standard deviation of the parameters on the UMIST face database among the nine optimization algorithms.

            Penalty parameter C      Kernel parameter γ
Algorithm   Mean      Std            Mean     Std
MFO         17.2782   4.2048         0.0804   0.1449
GWO         16.9556   3.7649         0.0773   0.1283
PSO         24.9031   7.1991         0.1439   0.1958
GA          25.3415   8.2315         0.1557   0.2261
CA          18.8737   4.0103         0.0762   0.1813
EPO         22.0217   5.9525         0.0973   0.1502
CEPO        12.9109   2.2081         0.0114   0.0101
CFA         15.2271   3.4594         0.0638   0.0822
EPSEO       15.9964   3.0182         0.0514   0.0769

Table 9: Comparison of the mean and standard deviation of the parameters on the FERET face database among the nine optimization algorithms.

            Penalty parameter C      Kernel parameter γ
Algorithm   Mean      Std            Mean     Std
MFO         18.1928   4.1892         0.0619   0.1023
GWO         17.3209   3.9713         0.0557   0.0921
PSO         25.5827   6.9143         0.1401   0.1882
GA          26.6929   8.4939         0.1379   0.2273
CA          15.3462   5.0541         0.0924   0.1419
EPO         21.2679   5.6314         0.0846   0.1364
CEPO        11.1811   2.1192         0.0126   0.0087
CFA         15.0647   3.7672         0.0489   0.0748
EPSEO       14.4689   3.4295         0.0377   0.0661


Figure 8: Four different levels of noise image on the YALE database: (a) 0%; (b) 5%; (c) 10%; (d) 15%.



verifies that the robustness of CEPO-SVM is superior to that of the other eight models.
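The noise injection described above can be sketched as follows (a minimal Python sketch; the list-of-rows image representation and the 50/50 black-white split are illustrative assumptions):

```python
import random

def salt_and_pepper(image, level):
    """Corrupt a grayscale image (list of rows of 0-255 ints) with
    salt-and-pepper noise: `level` is the fraction of pixels forced to
    pure black (0) or pure white (255), matching the experiment's
    0%, 5%, 10%, and 15% settings."""
    noisy = [row[:] for row in image]       # copy so the original is untouched
    for r, row in enumerate(noisy):
        for c in range(len(row)):
            if random.random() < level:
                noisy[r][c] = 255 if random.random() < 0.5 else 0
    return noisy
```

Applying this to every training and testing image at a fixed level produces one noise database per level, which is then split 70/30 as in the experiment.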

6.5. Run Time Analysis. The average training times over thirty independent runs of the proposed CEPO-SVM and the eight competitor intelligent algorithm-based SVMs are shown in Table 10. As we can see from Table 10, on all four databases the average training time of the proposed CEPO-SVM is longer than that of the six models based on single algorithms, i.e., MFO-SVM, GWO-SVM, PSO-SVM, GA-SVM, EPO-SVM, and CA-SVM. This is likely because hybridizing two algorithms increases the training time. Although CEPO-SVM takes more time than these six single-algorithm models, it can be considered better than them

because of its high classification performance. Moreover, for all four databases, the average training time of the proposed CEPO-SVM is less than that of the two models based on hybrid algorithms, i.e., CFA-SVM and EPSEO-SVM. Obviously, CEPO-SVM outperforms these two hybrid-algorithm models.

6.6. Limitation Analysis. Like other hybrid algorithms, although CEPO provides superior performance in face recognition, its larger amount of computation means that it costs more time per iteration than single algorithms. Besides, when CEPO is applied to very high-dimensional optimization problems, its performance may degrade.

Figure 9: Noise percentages versus accuracy on the four databases: (a) YALE database; (b) ORL database; (c) UMIST database; (d) FERET database.

Table 10: Average training time (s) obtained by the nine models on the four databases.

Model        YALE    ORL     UMIST    FERET
MFO-SVM      47.93   61.54   70.95    46.03
GWO-SVM      45.06   58.81   72.31    50.54
PSO-SVM      43.95   54.12   65.69    48.91
GA-SVM       69.71   82.73   98.03    79.42
EPO-SVM      40.68   47.49   68.55    42.87
CA-SVM       46.33   56.87   77.41    53.24
CEPO-SVM     59.76   79.23   95.76    67.34
CFA-SVM      78.25   81.96   112.87   84.59
EPSEO-SVM    82.43   83.15   107.43   86.77


7 Conclusions

In this paper, drawing on the respective advantages of EPO and CA, we propose a hybrid metaheuristic algorithm named CEPO, in which the embedded framework provided by CA allows the two algorithms to reinforce each other and compensate for each other's weaknesses. The proposed algorithm has been evaluated on six benchmark test functions against eight other state-of-the-art algorithms. Furthermore, we apply this hybrid metaheuristic algorithm to the automatic learning of the parameters of SVM to solve the classification problem of face recognition. The proposed classifier model has been tested on four commonly and internationally used face databases, in comparison with models based on the eight other state-of-the-art algorithms. The experimental results show that the proposed model has better performance in terms of accuracy, convergence rate, stability, robustness, and run time.

We suggest future work in several directions. First, the proposed model has only been evaluated on four face databases; other, newer face databases should be tested to extend the current work. Second, the proposed hybrid algorithm should be applied to other classifiers for face recognition. Lastly, the proposed hybrid algorithm should be applied to other optimization problems, such as digital filter design, cognitive radio spectrum allocation, and image segmentation.

List of Definitions of Partial Parameters

x_i: The position of the ith emperor penguin in the D-dimensional search space
s^t: Situational knowledge component
N_j^t: Normative knowledge in the jth dimension and in the tth generation
I_j^t: The interval of normative knowledge in the jth dimension
l_j^t: The lower boundary in the jth dimension and in the tth generation
L_j^t: The objective value of the lower boundary l_j^t of the jth parameter
u_j^t: The upper boundary in the jth dimension and in the tth generation
U_j^t: The objective value of the upper boundary u_j^t of the jth parameter
D_ep^t: The distance between the emperor penguin and the optimal solution
M: The movement parameter
γ: The scalar (kernel parameter)
C: The penalty parameter
ξ_h: The slack (relaxing) variables
a_h: The training samples
α_h: The Lagrange multipliers
r: The scale of the Gabor filter
o: The orientation of the Gabor filter
Q(δ): The concatenated column vector
Σ_Q(δ): The covariance matrix of the augmented feature vector Q(δ)
Ω: The orthogonal eigenvector matrix
Λ: The diagonal eigenvalue matrix

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant no. 61571149), the Heilongjiang Province Key Laboratory Fund of High Accuracy Satellite Navigation and Marine Application Laboratory (Grant no. HKL-2020-Y01), and the Initiation Fund for Postdoctoral Research in Heilongjiang Province (Grant no. LBH-Q19098).

References

[1] S. Jiang, H. Mao, Z. Ding, and Y. Fu, "Deep decision tree transfer boosting," IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 2, pp. 383–395, 2020.

[2] Y. Xia and J. Wang, "A bi-projection neural network for solving constrained quadratic optimization problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 2, pp. 214–224, 2016.

[3] S. Zhang, X. Li, M. Zong, X. Zhu, and R. Wang, "Efficient kNN classification with different numbers of nearest neighbors," IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 5, pp. 1774–1785, 2018.

[4] A. Mathur and G. M. Foody, "Multiclass and binary SVM classification: implications for training and classification users," IEEE Geoscience and Remote Sensing Letters, vol. 5, no. 2, pp. 241–245, 2008.

[5] J. Ruan, H. Jiang, X. Li, Y. Shi, F. T. S. Chan, and W. Rao, "A granular GA-SVM predictor for big data in agricultural cyber-physical systems," IEEE Transactions on Industrial Informatics, vol. 15, no. 12, pp. 6510–6521, 2019.

[6] F. Melgani and Y. Bazi, "Classification of electrocardiogram signals with support vector machines and particle swarm optimization," IEEE Transactions on Information Technology in Biomedicine, vol. 12, no. 5, pp. 667–677, 2008.

[7] L. Wanling, W. Zhensheng, and S. Xiangjun, "Parameters optimization of support vector machine based on simulated annealing and improved QPSO," in Proceedings of the 2012 International Conference on Industrial Control and Electronics Engineering, pp. 1089–1091, Xi'an, China, August 2012.

[8] S. Mirjalili, S. M. Mirjalili, and A. Lewis, "Grey wolf optimizer," Advances in Engineering Software, vol. 69, pp. 46–61, 2014.

[9] S. Mirjalili, "Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm," Knowledge-Based Systems, vol. 89, pp. 228–249, 2015.

[10] S. Kaur, L. K. Awasthi, A. L. Sangal, et al., "Tunicate swarm algorithm: a new bio-inspired based metaheuristic paradigm for global optimization," Engineering Applications of Artificial Intelligence, vol. 90, Article ID 103541, 2020.

[11] M. Dehghani, Z. Montazeri, O. P. Malik, et al., "A new methodology called dice game optimizer for capacitor placement in distribution systems," Electrical Engineering & Electromechanics, vol. 1, pp. 61–64, 2020.

[12] G. Dhiman and V. Kumar, "Emperor penguin optimizer: a bio-inspired algorithm for engineering problems," Knowledge-Based Systems, vol. 159, pp. 20–50, 2018.

[13] S. K. Baliarsingh, W. Ding, S. Vipsita, et al., "A memetic algorithm using emperor penguin and social engineering optimization for medical data classification," Applied Soft Computing, vol. 85, 2019.

[14] R. G. Reynolds, "An introduction to cultural algorithms," in Proceedings of the Third Annual Conference on Evolutionary Programming, pp. 131–139, San Diego, CA, USA, January 1994.

[15] C. Lin, C. Chen, and C. Lin, "A hybrid of cooperative particle swarm optimization and cultural algorithm for neural fuzzy networks and its prediction applications," IEEE Transactions on Systems, Man, and Cybernetics, vol. 39, no. 1, pp. 55–68, 2009.

[16] H. Gao, C. Xu, and C. Li, "Quantum-inspired cultural bacterial foraging algorithm for direction finding of impulse noise," International Journal of Innovative Computing and Applications, vol. 6, no. 1, pp. 44–54, 2014.

[17] H. Gao and M. Diao, "Cultural firework algorithm and its application for digital filters design," International Journal of Modelling, Identification and Control, vol. 14, no. 4, pp. 324–331, 2011.

[18] C. Liu, "Gabor-based kernel PCA with fractional power polynomial models for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 572–581, 2004.

[19] J. Ilonen, J.-K. Kamarainen, and H. Kalviainen, "Efficient computation of Gabor features," Research Report 100, Lappeenranta University of Technology, Department of Information Technology, Lappeenranta, Finland, 2005.

[20] C. Liu and H. Wechsler, "Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition," IEEE Transactions on Image Processing, vol. 11, no. 4, pp. 467–476, 2002.

[21] M. D. Farrell and R. M. Mersereau, "On the impact of PCA dimension reduction for hyperspectral detection of difficult targets," IEEE Geoscience and Remote Sensing Letters, vol. 2, no. 2, pp. 192–195, 2005.

[22] B. Fei and J. Liu, "Binary tree of SVM: a new fast multiclass training and classification algorithm," IEEE Transactions on Neural Networks, vol. 17, no. 3, pp. 696–704, 2006.


Page 6: Cultural Emperor Penguin Optimizer and Its Application for ...2020/06/11  · characteristics of spatial locality, spatial frequency, and orientationselectivity. Let Y(z) represent

Assume that for the qth cultural individual a randomvariable θq lies in the range of [0 1] is produced )e qthcultural individual affects the lower boundary of normativeknowledge in the jth dimension when θq lt 05 is satisfiedNormative knowledge Nt

j can be updated by updatefunction

lt+1j

xt+1qj if x

t+1qj le l

tj orf xt+1

q1113872 1113873lt Ltj

ltj else

⎧⎪⎨

⎪⎩

Lt+1j

f xt+1q1113872 1113873 if x

t+1qj le l

tj orf xt+1

q1113872 1113873lt Ltj

Ltj else

⎧⎪⎨

⎪⎩

(23)

)e qth cultural individual affects the upper boundary ofnormative knowledge in the jth dimension when θq ge 05 issatisfied

ut+1j

xt+1qj if x

t+1qj ge u

tj orf xt+1

q1113872 1113873ltUtj

utj else

⎧⎪⎨

⎪⎩

Ut+1j

f xt+1q1113872 1113873 if x

t+1qj ge u

tj orf xt+1

q1113872 1113873ltUtj

Utj else

⎧⎪⎨

⎪⎩

(24)

Situational knowledge and normative knowledge can beused to guide emperor penguin population evolution by theinfluence function In CEPO a selection operator β isproduced to select one of two ways to influence the ofevolution emperor penguin population

β Maxiteration minus t

Maxiteration (25)

where Maxiteration denotes the maximum number ofiterations

Assume that for the ith emperor penguin a randomvariable λi which lies in the range of [0 1] is produced )efirst way is to update the position of emperor penguin bychanging the search size and direction of the variation withbelief space which will be implemented when satisfied λi le β)e position of emperor penguin in the jth dimension canbe updated by

xt+1ij

xtij + size It

j1113872 1113873 middot N(0 1)11138681113868111386811138681113868

11138681113868111386811138681113868 if xtij lt s

tj

xtij minus size It

j1113872 1113873 middot N(0 1)11138681113868111386811138681113868

11138681113868111386811138681113868 if xtij gt s

tj

xtij + η middot size It

j1113872 1113873 middot N(0 1) else

⎧⎪⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎪⎩

(26)

where N(0 1) is a random number subjecting to thestandard normal distribution size(It

j) is the length of ad-justable interval of the jth parameter in belief space in the tthgeneration η is set to be in the range of [001 06] by [14]

)e other way is by a series of steps in EPO which are thehuddle boundary generation temperature profile around thehuddle computing the distance calculation between em-peror penguins and the position update of emperor pen-guins which will be carried when satisfied λi gt β)e specificsteps can be represented as follows

Tprime T minust minus MaxiterationMaxiteration

T

0 if Rge 05

1 if Rlt 05

⎧⎪⎨

⎪⎩

(27)

where Tprime represents the temperature profile around thehuddle T is the time for finding best optimal solution and R

is a random variable which lies in the range of [0 1]

D_ep^t = S_ep(A^t) · x_best^t − B^t · x_i^t,   (28)

where D_ep^t denotes the distance between the emperor penguin and the optimal solution, x_best^t represents the current optimal solution found in the emperor penguin population space in the tth generation, and S_ep represents the social forces of the emperor penguin that are responsible for convergence towards the optimal solution. A^t and B^t are used to avoid collisions between neighboring emperor penguins, and B^t is a random variable which lies in the range [0, 1]. A^t can be computed as follows:

A^t = M × (T′ + P_grid^t(Accuracy)) × rand() − T′,

P_grid^t(Accuracy) = |x_best^t − x_i^t|,               (29)

where M is the movement parameter, which holds a gap between emperor penguins for collision avoidance, and P_grid^t(Accuracy) defines the absolute difference obtained by comparing the positions of emperor penguins. S_ep(A^t) in equation (28) is computed as follows:

S_ep(A^t) = √(ε · e^(−t/ρ) − e^(−t)),   (30)

where e represents the base of the natural logarithm, and ε and ρ are two control parameters for better exploration and exploitation, which lie in the ranges [1.5, 2] and [2, 3], respectively. Ultimately, the position of the emperor penguin is updated as follows:

x_i^{t+1} = x_best^t − A^t · D_ep^t.   (31)

The algorithm procedure of CEPO is sketched in Algorithm 1.
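The EPO branch described above (equations (27)–(31)) can be sketched as a single-penguin update. This is a simplified reading of the reconstructed formulas with parameter defaults taken from Table 2, not the authors' exact implementation:

```python
import numpy as np

def epo_update(x_i, x_best, t, max_iter, M=2.0, eps=1.75, rho=2.5, rng=None):
    """One EPO-style position update (equations (27)-(31)).

    x_i, x_best : current and best positions (length-D arrays)
    M           : movement parameter (2 in Table 2)
    eps, rho    : control parameters drawn from [1.5, 2] and [2, 3]
    """
    rng = np.random.default_rng() if rng is None else rng
    R = rng.random()
    T = 0.0 if R >= 0.5 else 1.0                      # equation (27)
    T_prime = T - (t - max_iter) / max_iter           # temperature profile
    P_grid = np.abs(x_best - x_i)                     # equation (29)
    A = M * (T_prime + P_grid) * rng.random() - T_prime
    B = rng.random()
    S = np.sqrt(eps * np.exp(-t / rho) - np.exp(-t))  # equation (30)
    D_ep = S * x_best - B * x_i                       # equation (28)
    return x_best - A * D_ep                          # equation (31)
```

In CEPO, each penguin applies this update when λ_i > β, and the belief-space update of equation (26) otherwise.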

4.2. CEPO for Parameter Optimization of SVM. During the iterative searching process, each emperor penguin adjusts the optimal position by comparing the values of the objective function, and the optimal position x_i is considered to be the parameter pair (C and c) for repeated training of SVM and the update of each emperor penguin. Figure 6 shows a schematic diagram of SVM based on CEPO.
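A sketch of the fitness evaluation this loop performs, using scikit-learn's `SVC` as a stand-in for the paper's LIBSVM setup (an assumption): each candidate position decodes to a (C, c) pair and is scored by cross-validated accuracy.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def svm_fitness(position, X, y, cv=5):
    """Objective value of one emperor penguin position: decode the
    position as (C, gamma) -- gamma playing the role of the kernel
    parameter c -- and return mean cross-validated accuracy."""
    C, gamma = float(position[0]), float(position[1])
    clf = SVC(C=C, gamma=gamma, kernel="rbf")
    return cross_val_score(clf, X, y, cv=cv).mean()
```

In the full model, CEPO proposes candidate (C, c) pairs, scores each with this objective, and keeps the best-scoring pair for the final classifier.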

5. The Experiments

5.1. Face Databases. In the following experiments, to test our proposed model, we select four face databases that are commonly and internationally used, i.e., YALE (http://cvc.yale.edu/projects/yalefaces/yalefaces.html),

6 Mathematical Problems in Engineering

ORL (http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html), UMIST (http://images.ee.umist.ac.uk/danny/database.html), and FERET (http://www.frvt.org). Specifically, the YALE database contains 165 images of 15 individuals, including changes in illumination, expression, and posture. The ORL database consists of 400 photographs of 40 human subjects with different expressions and variations at different scales and orientations. The UMIST database includes 564 images of 20 human subjects with different angles and poses. The FERET database consists of a total of 1400 images for 200 human subjects, corresponding to different poses, expressions, and illumination conditions. To fully verify the validity and rationality of the experiments, the configuration of training and testing samples on the four face databases is shown in Table 1.
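The 70%/30% split of Table 1 can be reproduced with a per-subject (stratified) split so that every individual appears in both sets; a sketch, assuming the subject labels are already loaded as an integer array:

```python
import numpy as np

def stratified_split(y, train_frac=0.7, rng=None):
    """Return train/test index arrays with train_frac of each subject's
    images in the training set, as in Table 1."""
    rng = np.random.default_rng() if rng is None else rng
    train_idx, test_idx = [], []
    for label in np.unique(y):
        idx = np.flatnonzero(y == label)   # all images of this subject
        rng.shuffle(idx)
        k = int(round(train_frac * idx.size))
        train_idx.extend(idx[:k])
        test_idx.extend(idx[k:])
    return np.array(train_idx), np.array(test_idx)
```

For YALE (15 subjects × 11 images), this assigns 8 images per subject to training and the remaining 3 to testing.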

5.2. State-of-the-Art Algorithms for Comparison. To validate the performance of CEPO, eight state-of-the-art metaheuristics are used for comparison. These are Moth-Flame

Algorithm: CEPO
Input: the emperor penguin population x_i (i = 1, 2, ..., m)
Output: the optimal position of the emperor penguin
(1) Procedure CEPO
(2)   Initialize the size of the population, Maxiteration, and the parameters of EPO and CA
(3)   Initialize the situational knowledge and normative knowledge of the belief space
(4)   Initialize the population space of EPs
(5)   Compute the fitness of each EP
(6)   Arrange the fitness value of each EP
(7)   Initialize the belief space by acceptance proportion 20% and AdjustCulture
(8)   Initialize the belief space
(9)   While (t < Maxiteration) do
(10)    Calculate T and T′ using equations (9) and (10)
(11)    For i ← 1 to m do
(12)      λ_i ← Rand()
(13)      For j ← 1 to D do
(14)        If (λ_i ≤ β) using equation (7)
(15)          Update the position of EPs using equation (8)
(16)        Else
(17)          Compute A and B using equations (12) and (13)
(18)          Compute S_ep(A) using equation (14)
(19)          Compute the new position of EPs using equation (15)
(20)          Compute the fitness of the current EPs
(21)          Update the better position of each EP compared with its previous position
(22)        End if
(23)      End for
(24)    End for
(25)    Amend EPs which cross the boundary
(26)    Arrange the fitness value of each EP
(27)    Update the belief space by acceptance proportion top 20% and AdjustCulture
(28)    t ← t + 1
(29)  End while
(30)  Return the optimal position
(31) End procedure
(32) Procedure AdjustCulture
(33)   Update situational knowledge using equation (2)
(34)   For q ← 1 to m/5 do
(35)     θ_q ← Rand()
(36)     If (θ_q < 0.5)
(37)       For j ← 1 to D do
(38)         Update the lower boundary of normative knowledge using equations (3) and (4)
(39)     Else
(40)       For j ← 1 to D do
(41)         Update the upper boundary of normative knowledge using equations (5) and (6)
(42)     End if
(43)   End for
(44) End for
(45) End procedure

ALGORITHM 1: The algorithm procedure of CEPO.


Optimization (MFO) [9], Grey Wolf Optimizer (GWO) [8], Particle Swarm Optimization (PSO) [6], Genetic Algorithm (GA) [5], Cultural Algorithm (CA) [14], Emperor Penguin Optimizer (EPO) [12], Cultural Firework Algorithm (CFA) [17], and Emperor Penguin and Social Engineering Optimizer (EPSEO) [13].

5.3. Experimental Setup. The experimental environment is MATLAB 2018a with LIBSVM on a computer with an Intel Core i5 processor and 16 GB of RAM. Table 2 shows the parameter settings of the proposed CEPO and the eight competitor intelligent algorithms, i.e., MFO, GWO, PSO, GA, CA, EPO, CFA, and EPSEO. The parameter values of these algorithms are set as recommended in their original papers.

6. Simulation Results and Discussion

6.1. Performance of CEPO Comparison Experiment. Six benchmark test functions are applied to verify the superiority and applicability of CEPO. Table 3 shows the details of the six benchmark test functions, including single-peak benchmark functions, which are Sphere (F1) and Schwefel 2.22 (F2), and multipeak benchmark functions, which are Rastrigin (F3), Griewank (F4), Ackley (F5), and Schaffer (F6). For each benchmark test function, each algorithm carries out thirty independent experiments. The mean and the standard deviation of the thirty independent results are shown in Table 4.

As we can see, the statistical results of CEPO on the six benchmark test functions are clearly superior to those of the other eight comparison algorithms. Under the same optimization conditions, both evaluation indexes of CEPO are better than those of the other algorithms by several or even ten orders of magnitude, which shows that CEPO has strong optimization performance. The optimal mean index shows that CEPO maintains a high overall optimization level across the thirty independent experiments and implies that CEPO has better single-run optimization accuracy, while the optimal standard deviation index shows that CEPO has strong optimization stability and robustness. Furthermore, under the same test constraints, CEPO attains high optimization accuracy on both the single-peak functions (F1-F2) and the multipeak functions (F3-F6), which indicates that CEPO has strong local mining performance and good convergence accuracy; in particular, for the four multipeak functions (F3-F6), it still maintains a high optimization objective value, which verifies its better global exploration performance and stronger ability to avoid local extreme values. If CEPO takes the target-value optimization precision as the evaluation index, the preset precision threshold can be as strict as 1.0E-12; for functions F1, F3, and F6, the threshold can even be tightened to 1.0E-25 while still ensuring the success rate of optimization. This precision can effectively meet the error tolerance range of engineering problems, which further shows that CEPO has good application potential in engineering optimization problems.

Figure 6: Block diagram of SVM based on CEPO. The flowchart proceeds: initialize the emperor penguin population → set the parameters → SVM training → calculate the objective value → update the best solution → check whether the termination criterion is met (if not, iterate; if so, output the optimal parameters of SVM, yielding the CEPO-SVM model).

Table 1: Configuration of training and test samples on four face databases.

Databases   Total numbers   Numbers of training samples   Numbers of testing samples
YALE        165             70 percent                    the rest
ORL         400             70 percent                    the rest
UMIST       564             70 percent                    the rest
FERET       200             70 percent                    the rest


6.2. Performance of CEPO-SVM Comparison Experiment. To verify the superiority of the proposed CEPO-SVM classification model for face recognition, CEPO and the eight competitor intelligent algorithms are used to iteratively optimize the penalty parameter C and kernel parameter c of SVM to assess the performance of CEPO. The average recognition rates over thirty independent runs are shown in Table 5. As can be seen clearly from Table 5, CEPO-SVM achieves the highest accuracy among all nine models. In the case of the YALE face database, an accuracy of 98.1% is obtained by the CEPO-SVM model. For the ORL, UMIST, and FERET face databases, CEPO-SVM attains accuracies of 97.8%, 98.4%, and 98.5%, respectively. Furthermore, the convergence rates of CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM are examined and depicted in Figures 7(a)-7(d). From this figure, it can be observed that the recognition rate successively increases from iteration 1 to iteration 100 on all four face databases. For the YALE database, the accuracy rate no longer improves after the 14th, 22nd, 31st, 19th, 18th, 21st, 20th, 23rd, and 24th iterations using CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM, respectively. Similarly, for the ORL database, no improvement in accuracy is noticed after the 11th, 20th, 22nd, 19th, 15th, 20th, 18th, 21st, and 24th iterations for the same nine models. In the case of the UMIST database, CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM converge after the 15th, 24th, 25th, 19th, 20th, 26th, 18th, 24th, and 21st iterations, respectively. For the FERET database, no rise in accuracy is

Table 2: Parameter settings of the nine algorithms.

Cultural emperor penguin optimizer (CEPO): size of population 80; control parameter ε in [1.5, 2]; control parameter ρ in [2, 3]; movement parameter M = 2; the constant η = 0.06; maximum iteration 100.
Moth-flame optimization (MFO) [9]: size of population 80; convergence constant in [−1, −2]; logarithmic spiral 0.75; maximum iteration 100.
Grey wolf optimizer (GWO) [8]: size of population 80; control parameter in [0, 2]; maximum iteration 100.
Particle swarm optimization (PSO) [6]: size of population 80; inertia weight 0.75; cognitive and social coefficients 1.8, 2; maximum iteration 100.
Genetic algorithm (GA) [5]: size of population 80; probability of crossover 0.9; probability of mutation 0.05; maximum iteration 100.
Cultural algorithm (CA) [14]: size of population 80; the constant 0.06; maximum iteration 100.
Emperor penguin optimizer (EPO) [12]: size of population 80; control parameters in [1.5, 2] and [2, 3]; movement parameter 2; maximum iteration 100.
Cultural firework algorithm (CFA) [17]: size of population 80; cost parameters 0.025, 0.2; the constant 0.3; maximum iteration 100.
Emperor penguin and social engineering optimizer (EPSEO) [13]: size of population 80; rate of training 0.2; rate of spotting an attack 0.05; number of attacks 50; control parameters in [1.5, 2] and [2, 3]; movement parameter 2; maximum iteration 100.


seen after the 16th, 22nd, 24th, 19th, 18th, 20th, 18th, 21st, and 25th iterations using CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM, respectively. Obviously, CEPO-SVM converges first among the nine proposed models, which verifies that EPO, with the help of the embedded framework provided by CA, can remarkably enhance the convergence rate. Therefore, the parameters optimized by CEPO are superior and make the classification performance of SVM better.

6.3. Stability of CEPO-SVM Comparison Experiment. For all nine models, stability is measured by the mean and standard deviation of the penalty parameter C and kernel parameter c. In this experiment, each model is run thirty times. Tables 6-9 show the comparison of the mean and standard deviation of the parameters among the nine models on the four face databases. As we can see from Tables 6-9, the mean of the penalty parameter C in CEPO-SVM is smaller than that of the other models on all four face databases, which could reduce the possibility of overlearning of SVM. Moreover, CEPO-SVM provides the lowest standard deviation of the penalty parameter C among all nine models on all four databases, which verifies that the stability of CEPO-SVM is better. For all four databases, the mean and standard deviation of the kernel parameter c are close among the nine models; however, it can still be seen that CEPO-SVM has the minimum standard deviation of the kernel parameter c. Obviously, CEPO is more stable in optimizing the parameters of SVM for face recognition.

6.4. Robustness of CEPO-SVM Comparison Experiment. To prove the robustness of CEPO-SVM, salt-and-pepper noise is added to each image of the original four databases. The generated noise databases are then used for face recognition

Table 3: The details of the six benchmark test functions.

No.  Function        Formulation                                                                                         D    Search range
F1   Sphere          f(x) = Σ_{i=1}^{D} x_i^2                                                                            10   [−100, 100]
F2   Schwefel 2.22   f(x) = Σ_{i=1}^{D} |x_i| + Π_{i=1}^{D} |x_i|                                                        10   [−10, 10]
F3   Rastrigin       f(x) = Σ_{i=1}^{D} (x_i^2 − 10 cos(2πx_i) + 10)                                                     10   [−5.12, 5.12]
F4   Griewank        f(x) = Σ_{i=1}^{D} x_i^2 / 4000 − Π_{i=1}^{D} cos(x_i/√i) + 1                                       10   [−600, 600]
F5   Ackley          f(x) = −20 exp(−0.2 √((1/n) Σ_{i=1}^{D} x_i^2)) − exp((1/n) Σ_{i=1}^{D} cos(2πx_i)) + 20 + e        10   [−32, 32]
F6   Schaffer        f(x) = 0.5 + ((sin √(x_1^2 + x_2^2))^2 − 0.5) / (1 + 0.001(x_1^2 + x_2^2))^2                        2    [−100, 100]
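For reference, the six benchmarks of Table 3 translate directly into NumPy; each function below has a global minimum of 0 at the origin (F6 is written in its common two-variable form):

```python
import numpy as np

def sphere(x):        # F1
    return np.sum(x ** 2)

def schwefel_222(x):  # F2
    a = np.abs(x)
    return np.sum(a) + np.prod(a)

def rastrigin(x):     # F3
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def griewank(x):      # F4
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1

def ackley(x):        # F5
    n = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

def schaffer(x):      # F6 (two-dimensional)
    s = x[0] ** 2 + x[1] ** 2
    return 0.5 + (np.sin(np.sqrt(s)) ** 2 - 0.5) / (1 + 0.001 * s) ** 2
```

Each algorithm in the comparison minimizes these functions over the search ranges of Table 3, and the mean and standard deviation over thirty runs give the entries of Table 4.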

Table 4: The results on the benchmark test functions by the nine algorithms.

Algorithm         F1          F2          F3          F4          F5          F6
MFO     Mean   5.28E−11    1.69E−06    7.21E−10    8.38E−11    2.67E−12    5.09E−13
        Std    4.19E−12    3.77E−06    4.93E−11    3.41E−11    7.19E−12    4.81E−12
GWO     Mean   7.03E−13    4.43E−05    8.48E−12    9.76E−09    3.83E−13    2.92E−14
        Std    6.92E−13    7.82E−05    3.12E−12    4.28E−09    9.06E−12    6.37E−14
PSO     Mean   6.36E−06    6.14E−04    5.56E−04    2.07E−05    2.98E−07    8.14E−06
        Std    4.23E−05    8.25E−03    1.79E−04    6.23E−05    5.24E−06    9.93E−07
GA      Mean   2.55E−07    2.47E−04    5.32E−05    4.95E−05    1.56E−08    3.38E−07
        Std    3.14E−07    8.05E−04    3.87E−05    1.69E−05    6.06E−08    2.72E−07
EPO     Mean   8.57E−06    9.14E−05    3.52E−07    6.89E−07    5.83E−09    4.02E−08
        Std    4.87E−06    2.43E−05    4.99E−07    4.27E−07    7.18E−09    5.39E−08
CA      Mean   4.13E−12    1.49E−05    4.41E−09    6.65E−12    2.33E−07    7.53E−11
        Std    3.78E−13    1.36E−05    2.23E−10    4.89E−13    2.68E−08    6.21E−09
CEPO    Mean   1.28E−27    4.15E−14    6.54E−26    2.13E−21    2.39E−24    7.74E−29
        Std    4.63E−27    2.39E−14    7.16E−26    1.49E−21    4.76E−24    3.22E−29
CFA     Mean   4.23E−23    3.49E−10    2.37E−19    8.01E−17    5.41E−20    1.19E−24
        Std    1.89E−22    8.83E−10    9.44E−19    2.41E−17    2.93E−19    4.32E−23
EPSEO   Mean   8.67E−24    7.25E−11    3.91E−20    6.61E−18    4.25E−21    8.81E−22
        Std    5.42E−24    9.41E−11    8.23E−20    3.93E−18    7.89E−21    6.67E−21

Table 5: Average recognition rates (%) obtained by the nine models on four databases.

Model        YALE    ORL     UMIST   FERET
MFO-SVM      96.6    96.8    98.0    97.7
GWO-SVM      97.1    97.0    96.5    98.2
PSO-SVM      96.9    96.5    97.1    97.2
GA-SVM       97.4    97.5    97.6    97.4
EPO-SVM      97.6    97.2    97.9    97.6
CA-SVM       97.8    97.4    97.7    97.4
CEPO-SVM     98.1    97.8    98.4    98.5
CFA-SVM      97.9    97.7    98.3    98.3
EPSEO-SVM    97.7    97.6    98.1    98.0


by CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM. For each noise database, we select 70% of the noise samples as training samples and the remaining 30% as testing samples. Salt-and-pepper noise consists of random black or white pixels in an image. Four different levels of noise on the YALE database, namely, 0%, 5%, 10%, and 15%, are shown in Figure 8. From Figure 8, we can see that as the noise level increases, the face image becomes more blurred. Figure 9 shows the detailed recognition rates among the different models on the four databases. As we can see from Figure 9, CEPO-SVM obtains the highest recognition rate at all four noise levels on all four databases in comparison with the other eight models. Although the face image is seriously polluted by salt-and-pepper noise of up to 15%, the recognition rate of CEPO-SVM can still attain

Figure 7: Number of iterations versus accuracy on four databases: (a) YALE database; (b) ORL database; (c) UMIST database; (d) FERET database.


82.5% on the YALE database, 83.5% on the ORL database, 81.1% on the UMIST database, and 82.1% on the FERET database, which far exceed those of the other eight models. Besides, with the increase of the noise level, the recognition rate of CEPO-SVM on the four databases decreases more slowly than that of the other eight models. Obviously, the results show that CEPO-SVM can not only attain good noise-free face recognition accuracy but also achieve good accuracy under noise conditions, which

Table 7: Comparison of the mean and standard deviation of the parameters on the ORL face database among the nine optimization algorithms.

Algorithm   C Mean    C Std    c Mean   c Std
MFO         18.3921   4.7817   0.0965   0.1143
GWO         19.0418   5.1526   0.0883   0.1277
PSO         26.1129   6.4990   0.1620   0.2563
GA          27.5823   8.0652   0.1953   0.2497
CA          19.9301   3.9721   0.0827   0.1683
EPO         22.2801   6.2076   0.1198   0.1762
CEPO        14.3873   2.0187   0.0121   0.0098
CFA         16.2265   4.1552   0.0573   0.0829
EPSEO       15.7192   3.5228   0.0525   0.0701

Table 6: Comparison of the mean and standard deviation of the parameters on the YALE face database among the nine optimization algorithms.

Algorithm   C Mean    C Std    c Mean   c Std
MFO         18.3721   4.4921   0.0633   0.1109
GWO         16.8983   4.0748   0.0518   0.0978
PSO         25.2319   7.4295   0.1292   0.1753
GA          28.3684   8.7016   0.1449   0.2387
CA          19.1670   4.8603   0.0672   0.1283
EPO         22.6792   5.7065   0.0812   0.1456
CEPO        13.2441   2.4982   0.0100   0.0112
CFA         15.4039   3.0409   0.0349   0.0778
EPSEO       15.6382   2.7636   0.0426   0.0692

Table 8: Comparison of the mean and standard deviation of the parameters on the UMIST face database among the nine optimization algorithms.

Algorithm   C Mean    C Std    c Mean   c Std
MFO         17.2782   4.2048   0.0804   0.1449
GWO         16.9556   3.7649   0.0773   0.1283
PSO         24.9031   7.1991   0.1439   0.1958
GA          25.3415   8.2315   0.1557   0.2261
CA          18.8737   4.0103   0.0762   0.1813
EPO         22.0217   5.9525   0.0973   0.1502
CEPO        12.9109   2.2081   0.0114   0.0101
CFA         15.2271   3.4594   0.0638   0.0822
EPSEO       15.9964   3.0182   0.0514   0.0769

Table 9: Comparison of the mean and standard deviation of the parameters on the FERET face database among the nine optimization algorithms.

Algorithm   C Mean    C Std    c Mean   c Std
MFO         18.1928   4.1892   0.0619   0.1023
GWO         17.3209   3.9713   0.0557   0.0921
PSO         25.5827   6.9143   0.1401   0.1882
GA          26.6929   8.4939   0.1379   0.2273
CA          15.3462   5.0541   0.0924   0.1419
EPO         21.2679   5.6314   0.0846   0.1364
CEPO        11.1811   2.1192   0.0126   0.0087
CFA         15.0647   3.7672   0.0489   0.0748
EPSEO       14.4689   3.4295   0.0377   0.0661


Figure 8: Four different levels of noise on a YALE database image: (a) 0%; (b) 5%; (c) 10%; (d) 15%.
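A sketch of the salt-and-pepper corruption illustrated in Figure 8: each pixel is independently forced to black or white with probability equal to the noise level. The paper does not specify its exact noise generator, so this particular scheme is an assumption:

```python
import numpy as np

def salt_and_pepper(img, level, rng=None):
    """Corrupt a grayscale image (2-D uint8 array) with salt-and-pepper
    noise at the given level (e.g. 0.05, 0.10, 0.15)."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < level / 2] = 0          # pepper: black pixels
    noisy[mask > 1 - level / 2] = 255    # salt: white pixels
    return noisy
```

Applying this at levels 0.05, 0.10, and 0.15 to every image reproduces the kind of noise databases used in this robustness experiment.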



verifies that the robustness of CEPO-SVM is superior to that of the other eight models.

6.5. Run Time Analysis. The average training times over thirty independent runs of the proposed CEPO-SVM and the eight competitor intelligent algorithm-based SVMs are shown in Table 10. As we can see from Table 10, for all four databases, the average training time of the proposed CEPO-SVM is greater than that of the six models based on single algorithms, i.e., MFO-SVM, GWO-SVM, PSO-SVM, GA-SVM, EPO-SVM, and CA-SVM. This may be because hybridizing two algorithms increases the training time. Although CEPO-SVM takes more time than these six single-algorithm models, it can be considered better than them because of its high classification performance. However, for all four databases, the average training time of the proposed CEPO-SVM is less than that of the two models based on hybrid algorithms, i.e., CFA-SVM and EPSEO-SVM. Obviously, CEPO-SVM has better performance than these two hybrid-algorithm models.

6.6. Limitation Analysis. Like other hybrid algorithms, although CEPO provides superior performance in face recognition, the proposed CEPO costs more time per iteration than single algorithms due to its larger amount of computation. Besides, when CEPO is applied to solve very high-dimensional optimization problems, its performance may be reduced.

Figure 9: Noise percentages versus accuracy on four databases: (a) YALE database; (b) ORL database; (c) UMIST database; (d) FERET database.

Table 10: Average training time (s) obtained by the nine models on four databases.

Model        YALE     ORL      UMIST     FERET
MFO-SVM      47.93    61.54    70.95     46.03
GWO-SVM      45.06    58.81    72.31     50.54
PSO-SVM      43.95    54.12    65.69     48.91
GA-SVM       69.71    82.73    98.03     79.42
EPO-SVM      40.68    47.49    68.55     42.87
CA-SVM       46.33    56.87    77.41     53.24
CEPO-SVM     59.76    79.23    95.76     67.34
CFA-SVM      78.25    81.96    112.87    84.59
EPSEO-SVM    82.43    83.15    107.43    86.77


7 Conclusions

In this paper, drawing on the respective advantages of EPO and CA, we propose a hybrid metaheuristic algorithm named CEPO, in which the embedded framework provided by CA allows the two algorithms to complement each other and overcome each other's defects. The proposed algorithm has been tested on six benchmark functions against eight other state-of-the-art algorithms. Furthermore, we apply this hybrid metaheuristic algorithm to the automatic learning of the parameters of SVM to solve the classification problem of face recognition. The proposed classifier model has been tested on four commonly and internationally used face databases and compared with models based on the eight other state-of-the-art algorithms. The experimental results show that the proposed model has better performance in terms of accuracy, convergence rate, stability, robustness, and run time.

We suggest future work in several directions. First, the proposed model has only been tested on four face databases; other new face databases should be tested to extend the current work. Second, the proposed hybrid algorithm should be applied to other classifiers for face recognition. Lastly, the proposed hybrid algorithm should be applied to other optimization problems, such as digital filter design, cognitive radio spectrum allocation, and image segmentation.

List of Definitions of Partial Parameters

x_i:      The position of the ith emperor penguin in the D-dimensional search space
s^t:      Situational knowledge component
N_j^t:    Normative knowledge in the jth dimension and in the tth generation
I_j^t:    The interval of normative knowledge in the jth dimension
l_j^t:    The lower boundary in the jth dimension and in the tth generation
L_j^t:    The objective value of the lower boundary l_j^t of the jth parameter
u_j^t:    The higher boundary in the jth dimension and in the tth generation
U_j^t:    The objective value of the upper boundary u_j^t of the jth parameter
D_ep^t:   The distance between the emperor penguin and the optimal solution
M:        The movement parameter
c:        The scalar
C:        The penalty parameter
ξ_h:      The relaxing variables
a_h:      The training samples
α_h:      The Lagrange multipliers
r:        The scale of the Gabor filter
o:        The orientation of the Gabor filter
Q(δ):     The concatenated column vector
Σ_Q(δ):   The covariance matrix of the augmented feature vector Q(δ)
Ω:        The orthogonal eigenvector matrix
Λ:        The diagonal eigenvalue matrix

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant no. 61571149), the Heilongjiang Province Key Laboratory Fund of High Accuracy Satellite Navigation and Marine Application Laboratory (Grant no. HKL-2020-Y01), and the Initiation Fund for Postdoctoral Research in Heilongjiang Province (Grant no. LBH-Q19098).

References

[1] S. Jiang, H. Mao, Z. Ding, and Y. Fu, "Deep decision tree transfer boosting," IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 2, pp. 383-395, 2020.

[2] Y. Xia and J. Wang, "A bi-projection neural network for solving constrained quadratic optimization problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 2, pp. 214-224, 2016.

[3] S. Zhang, X. Li, M. Zong, X. Zhu, and R. Wang, "Efficient kNN classification with different numbers of nearest neighbors," IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 5, pp. 1774-1785, 2018.

[4] A. Mathur and G. M. Foody, "Multiclass and binary SVM classification: implications for training and classification users," IEEE Geoscience and Remote Sensing Letters, vol. 5, no. 2, pp. 241-245, 2008.

[5] J. Ruan, H. Jiang, X. Li, Y. Shi, F. T. S. Chan, and W. Rao, "A granular GA-SVM predictor for big data in agricultural cyber-physical systems," IEEE Transactions on Industrial Informatics, vol. 15, no. 12, pp. 6510-6521, 2019.

[6] F. Melgani and Y. Bazi, "Classification of electrocardiogram signals with support vector machines and particle swarm optimization," IEEE Transactions on Information Technology in Biomedicine, vol. 12, no. 5, pp. 667-677, 2008.

[7] L. Wanling, W. Zhensheng, and S. Xiangjun, "Parameters optimization of support vector machine based on simulated annealing and improved QPSO," in Proceedings of the 2012 International Conference on Industrial Control and Electronics Engineering, pp. 1089-1091, Xi'an, China, August 2012.

[8] S. Mirjalili, S. M. Mirjalili, and A. Lewis, "Grey wolf optimizer," Advances in Engineering Software, vol. 69, pp. 46-61, 2014.

[9] S. Mirjalili, "Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm," Knowledge-Based Systems, vol. 89, pp. 228-249, 2015.

[10] S. Kaur, L. K. Awasthi, A. L. Sangal, et al., "Tunicate swarm algorithm: a new bio-inspired based metaheuristic paradigm for global optimization," Engineering Applications of Artificial Intelligence, vol. 90, Article ID 103541, 2020.

[11] M. Dehghani, Z. Montazeri, O. P. Malik, et al., "A new methodology called dice game optimizer for capacitor placement in distribution systems," Electrical Engineering & Electromechanics, vol. 1, pp. 61-64, 2020.

[12] G. Dhiman and V. Kumar, "Emperor penguin optimizer: a bio-inspired algorithm for engineering problems," Knowledge-Based Systems, vol. 159, pp. 20-50, 2018.

[13] S. K. Baliarsingh, W. Ding, S. Vipsita, et al., "A memetic algorithm using emperor penguin and social engineering optimization for medical data classification," Applied Soft Computing, vol. 85, 2019.

[14] R. G. Reynolds, "An introduction to cultural algorithms," in Proceedings of the Third Annual Conference on Evolutionary Programming, pp. 131-139, San Diego, CA, USA, January 1994.

[15] C. Lin, C. Chen, and C. Lin, "A hybrid of cooperative particle swarm optimization and cultural algorithm for neural fuzzy networks and its prediction applications," IEEE Transactions on Systems, Man, and Cybernetics, vol. 39, no. 1, pp. 55-68, 2009.

[16] H. Gao, C. Xu, and C. Li, "Quantum-inspired cultural bacterial foraging algorithm for direction finding of impulse noise," International Journal of Innovative Computing and Applications, vol. 6, no. 1, pp. 44-54, 2014.

[17] H. Gao and M. Diao, "Cultural firework algorithm and its application for digital filters design," International Journal of Modelling, Identification and Control, vol. 14, no. 4, pp. 324-331, 2011.

[18] C. Liu, "Gabor-based kernel PCA with fractional power polynomial models for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 572-581, 2004.

[19] J. Ilonen, J.-K. Kamarainen, and H. Kälviäinen, "Efficient computation of Gabor features," Research Report 100, Lappeenranta University of Technology, Department of Information Technology, Lappeenranta, Finland, 2005.

[20] C. Liu and H. Wechsler, "Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition," IEEE Transactions on Image Processing, vol. 11, no. 4, pp. 467-476, 2002.

[21] M. D. Farrell and R. M. Mersereau, "On the impact of PCA dimension reduction for hyperspectral detection of difficult targets," IEEE Geoscience and Remote Sensing Letters, vol. 2, no. 2, pp. 192-195, 2005.

[22] B. Fei and J. Liu, "Binary tree of SVM: a new fast multiclass training and classification algorithm," IEEE Transactions on Neural Networks, vol. 17, no. 3, pp. 696-704, 2006.


Page 7: Cultural Emperor Penguin Optimizer and Its Application for ...2020/06/11  · characteristics of spatial locality, spatial frequency, and orientationselectivity. Let Y(z) represent

ORL (httpwwwclcamacukresearchdtgattarchivefacedatabasehtml) UMIST (httpimageseeumistacukdannydatabasehtml) and FERET (httpwwwfrvtorg) Specifically the YALE database contains 165 im-ages of 15 individuals including changes in illuminationexpression and posture )e ORL database consists of400 photographs of 40 human subjects with differentexpressions and variations at different scales and ori-entations )e UMISTdatabase includes 564 images of 20human subjects with different angles and poses )e

FERETdatabase consists of a total of 1400 images for 200human subjects corresponding to different poses ex-pressions and illumination conditions To fully verifythe validity and rationality of the experiments config-uration of training and testing samples on four facedatabases is shown in Table 1

52 State-of-the-Art Algorithms for Comparison To validatethe performance of CEPO the eight state-of-the-art meta-heuristics are used for comparison )ese are Moth-Flame

Algorithm: CEPO
Input: the emperor penguin population x_i (i = 1, 2, ..., m)
Output: the optimal position of emperor penguin
(1)  Procedure CEPO
(2)    Initialize the size of population, Max_iteration, and the parameters of EPO and CA
(3)    Initialize the situational knowledge and normative knowledge of the belief space
(4)    Initialize the population space of EPs
(5)    Compute the fitness of each EP
(6)    Arrange the fitness values of the EPs
(7)    Initialize the belief space by acceptance proportion (top 20%) and AdjustCulture
(8)    Initialize the belief space
(9)    While (t < Max_iteration) do
(10)     Calculate T and T' using equations (9) and (10)
(11)     For i ← 1 to m do
(12)       λ_i ← Rand()
(13)       For j ← 1 to D do
(14)         If (λ_i ≤ β), using equation (7)
(15)           Update the position of EPs using equation (8)
(16)         Else
(17)           Compute A and B using equations (12) and (13)
(18)           Compute S_ep(A) using equation (14)
(19)           Compute the new position of EPs using equation (15)
(20)           Compute the fitness of the current EPs
(21)           Update the better position of each EP compared with its previous position
(22)         End if
(23)       End for
(24)     End for
(25)     Amend EPs which cross the boundary
(26)     Arrange the fitness values of the EPs
(27)     Update the belief space by acceptance proportion (top 20%) and AdjustCulture
(28)     t ← t + 1
(29)   End while
(30)   Return the optimal position
(31)  End procedure
(32)  Procedure AdjustCulture
(33)    Update situational knowledge using equation (2)
(34)    For q ← 1 to m/5 do
(35)      θ_q ← Rand()
(36)      If (θ_q < 0.5)
(37)        For j ← 1 to D do
(38)          Update the lower boundary of normative knowledge using equations (3) and (4)
(39)      Else
(40)        For j ← 1 to D do
(41)          Update the upper boundary of normative knowledge using equations (5) and (6)
(42)      End if
(43)    End for
(44)  End for
(45)  End procedure

Algorithm 1: The algorithm procedure of CEPO.
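The belief-space bookkeeping of Algorithm 1 (acceptance of the top 20% of the population, plus situational and normative knowledge updates) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the EPO position-update equations (7)-(15) are not reproduced in this chunk and are replaced here by a simple contraction-plus-noise move, with the Sphere function standing in for the objective.

```python
import random

def sphere(x):
    """Benchmark function F1 (Table 3): global minimum 0 at the origin."""
    return sum(v * v for v in x)

def cepo_sketch(dim=5, pop=30, iters=100, lo=-10.0, hi=10.0, seed=1):
    rng = random.Random(seed)
    eps = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(eps, key=sphere)[:]             # situational knowledge (best-so-far)
    lower, upper = [lo] * dim, [hi] * dim      # normative knowledge (per-dimension interval)
    for t in range(iters):
        shrink = 1.0 - t / iters               # stand-in for EPO's schedule of T and T'
        for i, x in enumerate(eps):
            cand = []
            for j in range(dim):
                step = rng.random() * (best[j] - x[j])                # move toward the leader
                noise = shrink * 0.1 * (upper[j] - lower[j]) * rng.uniform(-1.0, 1.0)
                cand.append(min(max(x[j] + step + noise, lo), hi))    # amend boundary crossing
            if sphere(cand) < sphere(x):       # keep the better position (line 21)
                eps[i] = cand
        accepted = sorted(eps, key=sphere)[: max(1, pop // 5)]        # acceptance: top 20%
        if sphere(accepted[0]) < sphere(best):
            best = accepted[0][:]              # update situational knowledge
        for j in range(dim):                   # AdjustCulture: tighten normative intervals
            vals = [a[j] for a in accepted]
            lower[j], upper[j] = min(vals), max(vals)
    return best
```

Here the normative intervals shrink as the accepted individuals cluster, which in turn scales down the exploration noise; this mirrors how the belief space steers the population in the cultural framework.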

Mathematical Problems in Engineering 7

Optimization (MFO) [9], Grey Wolf Optimizer (GWO) [8], Particle Swarm Optimization (PSO) [6], Genetic Algorithm (GA) [5], Cultural Algorithm (CA) [14], Emperor Penguin Optimizer (EPO) [12], Cultural Firework Algorithm (CFA) [17], and Emperor Penguin and Social Engineering Optimizer (EPSEO) [13].

5.3. Experimental Setup. The experimental environment is MATLAB 2018a with LIBSVM on a computer with an Intel Core i5 processor and 16 GB of RAM. Table 2 shows the parameter settings of the proposed CEPO and the eight competitor intelligent algorithms, i.e., MFO, GWO, PSO, GA, CA, EPO, CFA, and EPSEO. The parameter values of these algorithms are set as recommended in their original papers.

6. Simulation Results and Discussion

6.1. Performance of CEPO Comparison Experiment. Six benchmark test functions are applied to verify the superiority and applicability of CEPO. Table 3 shows the details of the six benchmark test functions, including the single-peak benchmark functions Sphere (F1) and Schwefel 2.22 (F2) and the multipeak benchmark functions Rastrigin (F3), Griewank (F4), Ackley (F5), and Schaffer (F6). For each benchmark test function, each algorithm carries out thirty independent experiments. The mean and the standard deviation of the thirty independent results are shown in Table 4.

As can be seen, the statistical results of CEPO on the six benchmark test functions are clearly superior to those of the other eight comparison algorithms. Under the same optimization conditions, both evaluation indexes of CEPO are better than those of the other algorithms by several or even ten orders of magnitude, which demonstrates that CEPO has strong optimization performance. The optimal mean index shows that CEPO maintains a high overall optimization level across the thirty independent experiments and implies that CEPO has better single-run optimization accuracy, while the optimal standard deviation index shows that CEPO has strong optimization stability and robustness. Furthermore, under the same test constraints, CEPO attains high optimization accuracy on both the single-peak functions (F1-F2) and the multipeak functions (F3-F6), which indicates that CEPO has strong local exploitation performance and good convergence accuracy; in particular, it still maintains a high objective value on the four multipeak functions (F3-F6), which verifies its better global exploration performance and stronger ability to avoid local extrema. If the target-value optimization precision is taken as the evaluation index, the preset precision threshold can be as tight as 1.0E-12; for functions F1, F3, and F6 the threshold can even be tightened to 1.0E-25 while still ensuring the success rate of optimization. This precision can effectively meet the error tolerance of engineering problems, which further shows that CEPO has good application potential in engineering optimization problems.

[Figure 6 flowchart: Initialize the emperor penguin population → Set the parameters → SVM training → Calculate the objective value → Update the best solution → Whether the termination criterion is met (N: return to SVM training; Y: continue) → Output the optimal parameters of SVM → CEPO-SVM model.]

Figure 6: Block diagram of SVM based on CEPO.
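The loop in Figure 6 amounts to wrapping SVM training inside a scalar objective of (C, c) and handing that objective to the optimizer. A hedged sketch of this wiring is shown below; in the real model, `svm_accuracy` would train an RBF-kernel SVM (e.g., via LIBSVM) with the decoded parameters and return its validation recognition rate. Both the synthetic surrogate objective and the random-search driver here are illustrative stand-ins, not the paper's CEPO.

```python
import random

def svm_accuracy(C, gamma):
    """Stand-in for 'SVM training + calculate the objective value' in Figure 6.
    A synthetic surrogate with a known optimum near C = 14, gamma = 0.012
    (values loosely inspired by Table 7) replaces the actual SVM training."""
    return 0.98 - 0.0005 * (C - 14.0) ** 2 - 0.5 * (gamma - 0.012) ** 2

def optimize_svm_params(trials=2000, seed=0):
    """Random-search placeholder for CEPO over the (C, gamma) search space."""
    rng = random.Random(seed)
    best, best_fit = None, -10.0
    for _ in range(trials):
        C = rng.uniform(0.1, 100.0)        # penalty parameter C
        gamma = rng.uniform(1e-4, 1.0)     # kernel parameter
        fit = svm_accuracy(C, gamma)       # objective value = recognition rate
        if fit > best_fit:                 # update the best solution
            best, best_fit = (C, gamma), fit
    return best, best_fit
```

Swapping the random-search driver for CEPO changes only how candidate (C, gamma) pairs are proposed; the fitness wrapper stays the same.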

Table 1: Configuration of training and testing samples on four face databases.

Databases | Total numbers | Numbers of training samples | Numbers of testing samples
YALE | 165 | 70 percent | the rest
ORL | 400 | 70 percent | the rest
UMIST | 564 | 70 percent | the rest
FERET | 200 | 70 percent | the rest


6.2. Performance of CEPO-SVM Comparison Experiment. To verify the superiority of the proposed CEPO-SVM classification model for face recognition, CEPO and the eight competitor intelligent algorithms are used to iteratively optimize the penalty parameter C and kernel parameter c of SVM, so as to assess the performance of CEPO. The average recognition rates of thirty independent runs are shown in Table 5. As can be seen clearly from Table 5, CEPO-SVM achieves the highest accuracy among all nine models. In the case of the YALE face database, an accuracy of 98.1% is obtained by the CEPO-SVM model; for the ORL, UMIST, and FERET face databases, CEPO-SVM attains accuracies of 97.8%, 98.4%, and 98.5%, respectively. Furthermore, the convergence rates of CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM are examined and depicted in Figures 7(a)-7(d). From this figure, it can be observed that the recognition rate successively increases from iteration 1 to iteration 100 on all four face databases. For the YALE database, the accuracy rate does not improve after the 14th, 22nd, 31st, 19th, 18th, 21st, 20th, 23rd, and 24th iterations using CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM, respectively. Similarly, for the ORL database, no improvement in accuracy is noticed after the 11th, 20th, 22nd, 19th, 15th, 20th, 18th, 21st, and 24th iterations for the same nine models, respectively. In the case of the UMIST database, the nine models converge after the 15th, 24th, 25th, 19th, 20th, 26th, 18th, 24th, and 21st iterations, respectively. For the FERET database, no rise in accuracy is

Table 2: Parameter settings of the nine algorithms.

Cultural emperor penguin optimizer (CEPO): size of population 80; control parameter ε ∈ [1.5, 2]; control parameter ρ ∈ [2, 3]; movement parameter M = 2; the constant η = 0.06; maximum iteration 100.

Moth-flame optimization (MFO) [9]: size of population 80; convergence constant ∈ [−1, −2]; logarithmic spiral 0.75; maximum iteration 100.

Grey wolf optimizer (GWO) [8]: size of population 80; control parameter ∈ [0, 2]; maximum iteration 100.

Particle swarm optimization (PSO) [6]: size of population 80; inertia weight 0.75; cognitive and social coefficients 1.8, 2; maximum iteration 100.

Genetic algorithm (GA) [5]: size of population 80; probability of crossover 0.9; probability of mutation 0.05; maximum iteration 100.

Cultural algorithm (CA) [14]: size of population 80; the constant 0.06; maximum iteration 100.

Emperor penguin optimizer (EPO) [12]: size of population 80; control parameter ∈ [1.5, 2]; control parameter ∈ [2, 3]; movement parameter 2; maximum iteration 100.

Cultural firework algorithm (CFA) [17]: size of population 80; cost parameters 0.025, 0.2; the constant 0.3; maximum iteration 100.

Emperor penguin and social engineering optimizer (EPSEO) [13]: size of population 80; rate of training 0.2; rate of spotting an attack 0.05; number of attacks 50; control parameter ∈ [1.5, 2]; control parameter ∈ [2, 3]; movement parameter 2; maximum iteration 100.


seen after the 16th, 22nd, 24th, 19th, 18th, 20th, 18th, 21st, and 25th iterations using CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM, respectively. Clearly, CEPO-SVM converges first among the nine models, which verifies that embedding EPO in the framework provided by CA can remarkably enhance the convergence rate. Therefore, the parameters optimized by CEPO are better suited to improving the classification performance of SVM.

6.3. Stability of CEPO-SVM Comparison Experiment. For all nine models, stability is measured by the mean and standard deviation of the penalty parameter C and the kernel parameter c. In this experiment, each model runs thirty experiments. Tables 6-9 show the comparison of the mean and standard deviation of the parameters among the nine models on the four face databases. As can be seen from Tables 6-9, the mean of the penalty parameter C in CEPO-SVM is smaller than that of the other models on all four face databases, which could reduce the possibility of overfitting of the SVM. Moreover, CEPO-SVM provides the lowest standard deviation of the penalty parameter C among all nine models on all four databases, which verifies that the stability of CEPO-SVM is better. For all four databases, the mean and standard deviation of the kernel parameter c are close among the nine models; however, CEPO-SVM still has the minimum standard deviation of the kernel parameter c. Clearly, CEPO is more stable in optimizing the parameters of SVM for face recognition.
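The stability metric above is simply the sample mean and standard deviation over the thirty per-run optimized parameter values, for example (with made-up, illustrative run results; Tables 6-9 report the paper's actual thirty-run statistics):

```python
from statistics import mean, stdev

# Hypothetical optimized penalty parameters C from several independent runs
# (illustrative values only, not the paper's data).
runs_C = [13.1, 13.5, 12.8, 13.3, 13.6, 12.9, 13.2, 13.4]

mu, sigma = mean(runs_C), stdev(runs_C)   # sample mean and standard deviation
print(f"mean C = {mu:.4f}, std C = {sigma:.4f}")
```

A smaller `sigma` across runs is what the paper interprets as higher stability of the optimizer.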

6.4. Robustness of CEPO-SVM Comparison Experiment. To prove the robustness of CEPO-SVM, salt-and-pepper noise is added to each image of the four original databases. The generated noise databases are used for face recognition

Table 3: The details of the six benchmark test functions.

No. | Function | Formulation | D | Search range
F1 | Sphere | $f(x) = \sum_{i=1}^{D} x_i^2$ | 10 | [−100, 100]
F2 | Schwefel 2.22 | $f(x) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i|$ | 10 | [−10, 10]
F3 | Rastrigin | $f(x) = \sum_{i=1}^{D} \left( x_i^2 - 10\cos(2\pi x_i) + 10 \right)$ | 10 | [−5.12, 5.12]
F4 | Griewank | $f(x) = \frac{1}{4000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\left( x_i/\sqrt{i} \right) + 1$ | 10 | [−600, 600]
F5 | Ackley | $f(x) = -20\exp\left( -0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D} x_i^2} \right) - \exp\left( \frac{1}{D}\sum_{i=1}^{D}\cos(2\pi x_i) \right) + 20 + e$ | 10 | [−32, 32]
F6 | Schaffer | $f(x) = 0.5 + \frac{\sin^2\sqrt{x_1^2 - x_2^2} - 0.5}{\left( 1 + 0.001(x_1^2 + x_2^2) \right)^2}$ | 2 | [−100, 100]
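For reference, three of these benchmark functions transcribe directly to code (a sketch; the experiments use D = 10, and each of these functions has a global minimum of 0 at the origin):

```python
import math

def sphere(x):
    """F1 from Table 3."""
    return sum(v * v for v in x)

def rastrigin(x):
    """F3 from Table 3."""
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def ackley(x):
    """F5 from Table 3."""
    n = len(x)
    term1 = -20.0 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / n))
    term2 = -math.exp(sum(math.cos(2.0 * math.pi * v) for v in x) / n)
    return term1 + term2 + 20.0 + math.e
```

Sphere is single-peak (unimodal), while Rastrigin and Ackley have many local minima, which is why they probe an optimizer's global exploration rather than only its local exploitation.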

Table 4: The results of the benchmark test functions by nine algorithms.

Algorithms | Index | F1 | F2 | F3 | F4 | F5 | F6
MFO | Mean | 5.28E−11 | 1.69E−06 | 7.21E−10 | 8.38E−11 | 2.67E−12 | 5.09E−13
MFO | Std | 4.19E−12 | 3.77E−06 | 4.93E−11 | 3.41E−11 | 7.19E−12 | 4.81E−12
GWO | Mean | 7.03E−13 | 4.43E−05 | 8.48E−12 | 9.76E−09 | 3.83E−13 | 2.92E−14
GWO | Std | 6.92E−13 | 7.82E−05 | 3.12E−12 | 4.28E−09 | 9.06E−12 | 6.37E−14
PSO | Mean | 6.36E−06 | 6.14E−04 | 5.56E−04 | 2.07E−05 | 2.98E−07 | 8.14E−06
PSO | Std | 4.23E−05 | 8.25E−03 | 1.79E−04 | 6.23E−05 | 5.24E−06 | 9.93E−07
GA | Mean | 2.55E−07 | 2.47E−04 | 5.32E−05 | 4.95E−05 | 1.56E−08 | 3.38E−07
GA | Std | 3.14E−07 | 8.05E−04 | 3.87E−05 | 1.69E−05 | 6.06E−08 | 2.72E−07
EPO | Mean | 8.57E−06 | 9.14E−05 | 3.52E−07 | 6.89E−07 | 5.83E−09 | 4.02E−08
EPO | Std | 4.87E−06 | 2.43E−05 | 4.99E−07 | 4.27E−07 | 7.18E−09 | 5.39E−08
CA | Mean | 4.13E−12 | 1.49E−05 | 4.41E−09 | 6.65E−12 | 2.33E−07 | 7.53E−11
CA | Std | 3.78E−13 | 1.36E−05 | 2.23E−10 | 4.89E−13 | 2.68E−08 | 6.21E−09
CEPO | Mean | 1.28E−27 | 4.15E−14 | 6.54E−26 | 2.13E−21 | 2.39E−24 | 7.74E−29
CEPO | Std | 4.63E−27 | 2.39E−14 | 7.16E−26 | 1.49E−21 | 4.76E−24 | 3.22E−29
CFA | Mean | 4.23E−23 | 3.49E−10 | 2.37E−19 | 8.01E−17 | 5.41E−20 | 1.19E−24
CFA | Std | 1.89E−22 | 8.83E−10 | 9.44E−19 | 2.41E−17 | 2.93E−19 | 4.32E−23
EPSEO | Mean | 8.67E−24 | 7.25E−11 | 3.91E−20 | 6.61E−18 | 4.25E−21 | 8.81E−22
EPSEO | Std | 5.42E−24 | 9.41E−11 | 8.23E−20 | 3.93E−18 | 7.89E−21 | 6.67E−21

Table 5: Average recognition rates (%) obtained by nine models on four databases.

Model | YALE | ORL | UMIST | FERET
MFO-SVM | 96.6 | 96.8 | 98.0 | 97.7
GWO-SVM | 97.1 | 97.0 | 96.5 | 98.2
PSO-SVM | 96.9 | 96.5 | 97.1 | 97.2
GA-SVM | 97.4 | 97.5 | 97.6 | 97.4
EPO-SVM | 97.6 | 97.2 | 97.9 | 97.6
CA-SVM | 97.8 | 97.4 | 97.7 | 97.4
CEPO-SVM | 98.1 | 97.8 | 98.4 | 98.5
CFA-SVM | 97.9 | 97.7 | 98.3 | 98.3
EPSEO-SVM | 97.7 | 97.6 | 98.1 | 98.0


by CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM. For each noise database, we selected 70% of the noise samples as the training samples and the remaining 30% as the testing samples. Salt-and-pepper noise appears as random black or white pixels in an image. Four different levels of noise image on the YALE database, namely, 0%, 5%, 10%, and 15%, are shown in Figure 8. From Figure 8, we can see that as the noise level increases, the face image becomes more blurred. Figure 9 shows the detailed recognition rates of the different models on the four databases. As can be seen from Figure 9, CEPO-SVM obtains the highest recognition rate at all four noise levels on all four databases in comparison with the other eight models. Even when the face image is seriously polluted by salt-and-pepper noise of up to 15%, the recognition rate of CEPO-SVM can still attain
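The corruption process described above can be sketched as follows, assuming 8-bit grayscale images stored as nested lists; `level` is the noise percentage expressed as a fraction (0.05 for 5%, and so on):

```python
import random

def salt_and_pepper(image, level, seed=0):
    """Flip roughly `level` of the pixels to salt (255) or pepper (0).
    `image` is a list of rows of grayscale values; the input is not modified."""
    rng = random.Random(seed)
    noisy = [row[:] for row in image]
    for row in noisy:
        for c in range(len(row)):
            if rng.random() < level:                    # corrupt this pixel?
                row[c] = 255 if rng.random() < 0.5 else 0
    return noisy
```

Each corrupted pixel is equally likely to become salt or pepper, matching the "random black or white pixels" description; the exact noise model used to build the paper's databases may differ in detail.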

[Figure 7, panels (a)-(d): accuracy (%) versus number of iterations (0-100) for CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM.]

Figure 7: Number of iterations versus accuracy on four databases. (a) YALE database. (b) ORL database. (c) UMIST database. (d) FERET database.


82.5% on the YALE database, 83.5% on the ORL database, 81.1% on the UMIST database, and 82.1% on the FERET database, which far exceed those of the other eight models. Besides, as the noise level increases, the recognition rate of CEPO-SVM on the four databases decreases more slowly than that of the other eight models. Clearly, the results show that CEPO-SVM can not only attain good noise-free face recognition accuracy but also achieve good accuracy under noise conditions, which

Table 7: Comparison of mean and standard deviation of parameters on the ORL face database among nine optimization algorithms.

Algorithms | C: Mean | C: Std | c: Mean | c: Std
MFO | 18.3921 | 4.7817 | 0.0965 | 0.1143
GWO | 19.0418 | 5.1526 | 0.0883 | 0.1277
PSO | 26.1129 | 6.4990 | 0.1620 | 0.2563
GA | 27.5823 | 8.0652 | 0.1953 | 0.2497
CA | 19.9301 | 3.9721 | 0.0827 | 0.1683
EPO | 22.2801 | 6.2076 | 0.1198 | 0.1762
CEPO | 14.3873 | 2.0187 | 0.0121 | 0.0098
CFA | 16.2265 | 4.1552 | 0.0573 | 0.0829
EPSEO | 15.7192 | 3.5228 | 0.0525 | 0.0701

Table 6: Comparison of mean and standard deviation of parameters on the YALE face database among nine optimization algorithms.

Algorithms | C: Mean | C: Std | c: Mean | c: Std
MFO | 18.3721 | 4.4921 | 0.0633 | 0.1109
GWO | 16.8983 | 4.0748 | 0.0518 | 0.0978
PSO | 25.2319 | 7.4295 | 0.1292 | 0.1753
GA | 28.3684 | 8.7016 | 0.1449 | 0.2387
CA | 19.1670 | 4.8603 | 0.0672 | 0.1283
EPO | 22.6792 | 5.7065 | 0.0812 | 0.1456
CEPO | 13.2441 | 2.4982 | 0.0100 | 0.0112
CFA | 15.4039 | 3.0409 | 0.0349 | 0.0778
EPSEO | 15.6382 | 2.7636 | 0.0426 | 0.0692

Table 8: Comparison of mean and standard deviation of parameters on the UMIST face database among nine optimization algorithms.

Algorithms | C: Mean | C: Std | c: Mean | c: Std
MFO | 17.2782 | 4.2048 | 0.0804 | 0.1449
GWO | 16.9556 | 3.7649 | 0.0773 | 0.1283
PSO | 24.9031 | 7.1991 | 0.1439 | 0.1958
GA | 25.3415 | 8.2315 | 0.1557 | 0.2261
CA | 18.8737 | 4.0103 | 0.0762 | 0.1813
EPO | 22.0217 | 5.9525 | 0.0973 | 0.1502
CEPO | 12.9109 | 2.2081 | 0.0114 | 0.0101
CFA | 15.2271 | 3.4594 | 0.0638 | 0.0822
EPSEO | 15.9964 | 3.0182 | 0.0514 | 0.0769

Table 9: Comparison of mean and standard deviation of parameters on the FERET face database among nine optimization algorithms.

Algorithms | C: Mean | C: Std | c: Mean | c: Std
MFO | 18.1928 | 4.1892 | 0.0619 | 0.1023
GWO | 17.3209 | 3.9713 | 0.0557 | 0.0921
PSO | 25.5827 | 6.9143 | 0.1401 | 0.1882
GA | 26.6929 | 8.4939 | 0.1379 | 0.2273
CA | 15.3462 | 5.0541 | 0.0924 | 0.1419
EPO | 21.2679 | 5.6314 | 0.0846 | 0.1364
CEPO | 11.1811 | 2.1192 | 0.0126 | 0.0087
CFA | 15.0647 | 3.7672 | 0.0489 | 0.0748
EPSEO | 14.4689 | 3.4295 | 0.0377 | 0.0661


Figure 8: Four different levels of noise image on the YALE database. (a) 0%. (b) 5%. (c) 10%. (d) 15%.

[Figure 9, panels (a) and (b): accuracy (%) versus noise percentage (0%-15%) for the nine models.]


verifies that the robustness of CEPO-SVM is superior to that of the other eight models.

6.5. Run Time Analysis. The average training times of thirty independent runs of the proposed CEPO-SVM and the eight competitor intelligent algorithm-based SVMs are shown in Table 10. As can be seen from Table 10, on all four databases the average training time of the proposed CEPO-SVM is longer than that of the six models based on single algorithms, i.e., MFO-SVM, GWO-SVM, PSO-SVM, GA-SVM, EPO-SVM, and CA-SVM. This may be because hybridizing two algorithms increases the training time. Although CEPO-SVM takes more time than these six single-algorithm models, it can be considered better because of its higher classification performance. Moreover, for all four databases, the average training time of the proposed CEPO-SVM is less than that of the two models based on hybrid algorithms, i.e., CFA-SVM and EPSEO-SVM. Clearly, CEPO-SVM performs better than these two hybrid-algorithm models.

6.6. Limitation Analysis. Like other hybrid algorithms, although CEPO provides superior performance in face recognition, its larger amount of computation means that the proposed CEPO costs more time per iteration than single algorithms. Besides, when CEPO is applied to very high-dimensional optimization problems, its performance may degrade.

[Figure 9, panels (c) and (d): accuracy (%) versus noise percentage (0%-15%) for the nine models.]

Figure 9: Noise percentages versus accuracy on four databases. (a) YALE database. (b) ORL database. (c) UMIST database. (d) FERET database.

Table 10: Average training time (s) obtained by nine models on four databases.

Model | YALE | ORL | UMIST | FERET
MFO-SVM | 47.93 | 61.54 | 70.95 | 46.03
GWO-SVM | 45.06 | 58.81 | 72.31 | 50.54
PSO-SVM | 43.95 | 54.12 | 65.69 | 48.91
GA-SVM | 69.71 | 82.73 | 98.03 | 79.42
EPO-SVM | 40.68 | 47.49 | 68.55 | 42.87
CA-SVM | 46.33 | 56.87 | 77.41 | 53.24
CEPO-SVM | 59.76 | 79.23 | 95.76 | 67.34
CFA-SVM | 78.25 | 81.96 | 112.87 | 84.59
EPSEO-SVM | 82.43 | 83.15 | 107.43 | 86.77


7. Conclusions

In this paper, exploiting the respective advantages of EPO and CA, we propose a hybrid metaheuristic algorithm named CEPO, in which the embedded framework provided by CA allows the two components to reinforce each other and overcome each other's defects. The proposed algorithm has been tested on six benchmark test functions in comparison with eight other state-of-the-art algorithms. Furthermore, we apply this hybrid metaheuristic algorithm to the automatic learning of the parameters of SVM to solve the classification problem of face recognition. The proposed classifier model has been tested on four commonly and internationally used face databases, in comparison with models based on the eight other state-of-the-art algorithms. The experimental results show that the proposed model performs better in terms of accuracy, convergence rate, stability, robustness, and run time.

We suggest future work in several directions. First, the proposed model has only been tested on four face databases; other new face databases should be tested to extend the current work. Second, the proposed hybrid algorithm should be applied to other classifiers for face recognition. Lastly, the proposed hybrid algorithm should be applied to other optimization problems, such as digital filter design, cognitive radio spectrum allocation, and image segmentation.

List of Definitions of Partial Parameters

x_i: The position of the ith emperor penguin in the D-dimensional search space.
s^t: Situational knowledge component.
N_j^t: Normative knowledge in the jth dimension and in the tth generation.
I_j^t: The interval of normative knowledge in the jth dimension.
l_j^t: The lower boundary in the jth dimension and in the tth generation.
L_j^t: The objective value of the lower boundary l_j^t of the jth parameter.
u_j^t: The upper boundary in the jth dimension and in the tth generation.
U_j^t: The objective value of the upper boundary u_j^t of the jth parameter.
D_ep^t: The distance between the emperor penguin and the optimal solution.
M: The movement parameter.
c: The scalar.
C: The penalty parameter.
ξ_h: The relaxing variables.
a_h: The training samples.
α_h: The Lagrange multipliers.
r: The scale of the Gabor filter.
o: The orientation of the Gabor filter.
Q(δ): The concatenated column vector.
Σ_Q(δ): The covariance matrix of the augmented feature vector Q(δ).
Ω: The orthogonal eigenvector matrix.
Λ: The diagonal eigenvalue matrix.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant no. 61571149), the Heilongjiang Province Key Laboratory Fund of High Accuracy Satellite Navigation and Marine Application Laboratory (Grant no. HKL-2020-Y01), and the Initiation Fund for Postdoctoral Research in Heilongjiang Province (Grant no. LBH-Q19098).

References

[1] S. Jiang, H. Mao, Z. Ding, and Y. Fu, "Deep decision tree transfer boosting," IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 2, pp. 383-395, 2020.

[2] Y. Xia and J. Wang, "A bi-projection neural network for solving constrained quadratic optimization problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 2, pp. 214-224, 2016.

[3] S. Zhang, X. Li, M. Zong, X. Zhu, and R. Wang, "Efficient kNN classification with different numbers of nearest neighbors," IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 5, pp. 1774-1785, 2018.

[4] A. Mathur and G. M. Foody, "Multiclass and binary SVM classification: implications for training and classification users," IEEE Geoscience and Remote Sensing Letters, vol. 5, no. 2, pp. 241-245, 2008.

[5] J. Ruan, H. Jiang, X. Li, Y. Shi, F. T. S. Chan, and W. Rao, "A granular GA-SVM predictor for big data in agricultural cyber-physical systems," IEEE Transactions on Industrial Informatics, vol. 15, no. 12, pp. 6510-6521, 2019.

[6] F. Melgani and Y. Bazi, "Classification of electrocardiogram signals with support vector machines and particle swarm optimization," IEEE Transactions on Information Technology in Biomedicine, vol. 12, no. 5, pp. 667-677, 2008.

[7] L. Wanling, W. Zhensheng, and S. Xiangjun, "Parameters optimization of support vector machine based on simulated annealing and improved QPSO," in Proceedings of the 2012 International Conference on Industrial Control and Electronics Engineering, pp. 1089-1091, Xi'an, China, August 2012.

[8] S. Mirjalili, S. M. Mirjalili, and A. Lewis, "Grey wolf optimizer," Advances in Engineering Software, vol. 69, pp. 46-61, 2014.

[9] S. Mirjalili, "Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm," Knowledge-Based Systems, vol. 89, pp. 228-249, 2015.

[10] S. Kaur, L. K. Awasthi, A. L. Sangal, et al., "Tunicate swarm algorithm: a new bio-inspired based metaheuristic paradigm for global optimization," Engineering Applications of Artificial Intelligence, vol. 90, Article ID 103541, 2020.

[11] M. Dehghani, Z. Montazeri, O. P. Malik, et al., "A new methodology called dice game optimizer for capacitor placement in distribution systems," Electrical Engineering & Electromechanics, vol. 1, pp. 61-64, 2020.

[12] G. Dhiman and V. Kumar, "Emperor penguin optimizer: a bio-inspired algorithm for engineering problems," Knowledge-Based Systems, vol. 159, pp. 20-50, 2018.

[13] S. K. Baliarsingh, W. Ding, S. Vipsita, et al., "A memetic algorithm using emperor penguin and social engineering optimization for medical data classification," Applied Soft Computing, vol. 85, 2019.

[14] R. G. Reynolds, "An introduction to cultural algorithms," in Proceedings of the Third Annual Conference on Evolutionary Programming, pp. 131-139, San Diego, CA, USA, January 1994.

[15] C. Lin, C. Chen, and C. Lin, "A hybrid of cooperative particle swarm optimization and cultural algorithm for neural fuzzy networks and its prediction applications," IEEE Transactions on Systems, Man, and Cybernetics, vol. 39, no. 1, pp. 55-68, 2009.

[16] H. Gao, C. Xu, and C. Li, "Quantum-inspired cultural bacterial foraging algorithm for direction finding of impulse noise," International Journal of Innovative Computing and Applications, vol. 6, no. 1, pp. 44-54, 2014.

[17] H. Gao and M. Diao, "Cultural firework algorithm and its application for digital filters design," International Journal of Modelling, Identification and Control, vol. 14, no. 4, pp. 324-331, 2011.

[18] C. Liu, "Gabor-based kernel PCA with fractional power polynomial models for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 572-581, 2004.

[19] J. Ilonen, J.-K. Kamarainen, and H. Kalviainen, "Efficient computation of Gabor features," Research Report 100, Lappeenranta University of Technology, Department of Information Technology, Lappeenranta, Finland, 2005.

[20] C. Liu and H. Wechsler, "Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition," IEEE Transactions on Image Processing, vol. 11, no. 4, pp. 467-476, 2002.

[21] M. D. Farrell and R. M. Mersereau, "On the impact of PCA dimension reduction for hyperspectral detection of difficult targets," IEEE Geoscience and Remote Sensing Letters, vol. 2, no. 2, pp. 192-195, 2005.

[22] B. Fei and J. Liu, "Binary tree of SVM: a new fast multiclass training and classification algorithm," IEEE Transactions on Neural Networks, vol. 17, no. 3, pp. 696-704, 2006.


Page 8: Cultural Emperor Penguin Optimizer and Its Application for ...2020/06/11  · characteristics of spatial locality, spatial frequency, and orientationselectivity. Let Y(z) represent

Optimization (MFO) [9] Grey Wolf Optimizer (GWO) [8]Particle Swarm Optimization (PSO) [6] Genetic Algorithm(GA) [5] Cultural Algorithm (CA) [14] Emperor PenguinOptimizer (EPO) [12] Cultural Firework Algorithm (CFA)[17]and Emperor Penguin and Social Engineering Opti-mizer (EPSEO) [13]

53 Experimental Setup )e experimental environment isMATLAB 2018a with LIBSVM on the computer with IntelCore i5 processor and 16GB of RAM Table 2 Shows theparameter settings of the proposed CEPO and eight com-petitor intelligent algorithms ie MFO GWO PSO GACA EPO CFA and EPSEO And the parameter values ofthese algorithms are set to be recommended in their originalpaper

6 Simulation Results and Discussion

61 Performance of CEPO Comparison Experiment Sixbenchmark test functions are applied to verify the superiorityand applicability of CEPO Table 3 shows the details of sixbenchmark test functions including single-peak benchmarkfunctions which are Sphere (F1) and Schwefel 222 (F2) andmultipeak benchmark functions which are Rastrigin (F3)Griewank (F4) Ackley (F5) and Schaffer (F6) For eachbenchmark test function each algorithm carries out thirtyindependent experiments )e mean and the standard de-viation of the thirty independent results are shown in Table 4

As we can see the statistical results of CEPO for sixbenchmark test functions are obviously superior to the othereight comparison algorithms Under the same optimizationconditions the two evaluation indexes of CEPO are bothsuperior to several or even ten orders of magnitude of otheralgorithms which proves that the CEPO has strong opti-mization performance )e optimal mean index shows thatthe CEPOmaintains a high overall optimization level in thirtyindependent experiments and implies that CEPO has bettersingle optimization accuracy And the optimal standard de-viation index proves that CEPO has strong optimizationstability and robustness Furthermore under the same testconstraints CEPO attains the high optimization accuracy inthe single-peak functions (F1-F2) and multipeak functions(F3-F6) optimization process which indicates that the CEPOhas strong local mining performance and good convergenceaccuracy especially for the four multipeak functions (F3-F6)still maintain a high optimization objective value whichverifies its better global exploration performance and strongerlocal extreme value avoidance ability If CEPO takes the targetvalue optimization precision as the evaluation index thepreset precision threshold can be as high as 10E-12 Espe-cially for function F1F3F6 the threshold value can evenbe relaxed to 10E-25 under the premise of ensuring thesuccess rate of optimization and the precision can effectivelymeet the error tolerance range of engineering problemswhich further proves that CEPO has better application po-tential in engineering optimization problems

Initialize the emperor penguin population

Set the parameters

SVM training

Calculate the objective value

Update the best solution

Output the optimal parameters of SVM

CEPO-SVM model

Whether the termination criterion is met

YN

Figure 6 Block diagram of SVM based on CEPO

Table 1 Configuration of training and test samples on four face databases

Databases Total numbers Numbers of training samples Numbers of testing samplesYALE 165 70 percent the restORL 400 70 percent the restUMIST 564 70 percent the restFERET 200 70 percent the rest

8 Mathematical Problems in Engineering

62 Performance of CEPO-SVM Comparison ExperimentTo verify the superiority of the proposed CEPO-SVMclassification model for face recognition CEPO and eightcompetitor intelligent algorithms are used to iterativelyoptimize the penalty parameter C and kernel parameter c ofSVM to assess the performance of CEPO )e averagerecognition rates of thirty independent runs are shown inTable 5 As can be seen clearly from Table 5 CEPO-SVMachieves the highest accuracy among all nine models In caseof YALE face database an accuracy of 981 is obtained byCEPO-SVM model For ORL UMIST and FERET facedatabase CEPO-SVM gets an accuracy of 978 984 and985 respectively Furthermore the convergence rate ofCEPO-SVM PSO-SVM GA-SVM EPO-SVM CA-SVMMFO-SVM GWO-SVM CFA-SVM and EPSEO-SVM areexamined and depicted in Figures 7(a)ndash7(d) From this

figure it can be observed that the recognition rate succes-sively increases from iteration 1 to iteration 100 on all fourface databases For YALE database the accuracy rate has notimproved after 14th 22nd 31st 19th 18th 21st 20th 23rdand 24th iterations using CEPO-SVM PSO-SVM GA-SVM EPO-SVM CA-SVM MFO-SVM GWO-SVM CFA-SVM and EPSEO-SVM respectively Similarly for ORLdatabase no improvement in accuracy is noticed after 11th20th 22nd 19th 15th 20th 18th 21st and 24th iterationsusing CEPO-SVM PSO-SVM GA-SVM EPO-SVM CA-SVM MFO-SVM GWO-SVM CFA-SVM and EPSEO-SVM respectively In case of UMISTdatabase CEPO-SVMPSO-SVM GA-SVM EPO-SVM CA-SVM MFO-SVMGWO-SVM CFA-SVM and EPSEO-SVM converge after15th 24th 25th 19th 20th 26th 18th 24th and 21st iter-ations respectively For FERETdatabase no rise in accuracy is

Table 2 Parameter setting of nine algorithms

Algorithms Parameters Values

Cultural emperor penguin optimizer (CEPO)

Size of population 80Control parameter ε [15 2]Control parameter ρ [2 3]

Movement parameter M 2)e constant η 006

Maximum iteration 100

Moth-flame optimization (MFO) [9]

Size of population 80Convergence constant [minus1 minus2]Logarithmic spiral 075Maximum iteration 100

Grey wolf optimizer (GWO) [8]Size of population 80Control parameter [0 2]Maximum iteration 100

Particle swarm optimization (PSO) [6]

Size of population 80Inertia weight 075

Cognitive and social coeff 18 2Maximum iteration 100

Genetic algorithm (GA) [5]

Size of population 80Probability of crossover 09Probability of mutation 005Maximum iteration 100

Cultural algorithm (CA) [14]Size of population 80

)e constant 006Maximum iteration 100

Emperor penguin optimizer (EPO) [12]

Size of population 80Control parameter [15 2]Control parameter [2 3]

Movement parameter 2Maximum iteration 100

Cultural firework algorithm (CFA) [17]

Size of population 80Cost parameter 0025 02)e constant 03

Maximum iteration 100

Emperor penguin and social engineering optimizer (EPSEO) [13]

Size of population 80Rate of training 02

Rate of spotting an attack 005Number of attacks 50Control parameter [15 2]Control parameter [2 3]

Movement parameter 2Maximum iteration 100

Mathematical Problems in Engineering 9

seen after 16th 22nd 24th 19th 18th 20th 18th 21st and25th iterations using CEPO-SVM PSO-SVM GA-SVMEPO-SVM CA-SVM MFO-SVM GWO-SVM CFA-SVMand EPSEO-SVM respectively Obviously CEPO-SVMconverges firstly among nine proposed models which verifiesthat EPO with the help of the embedded framework providedby CA could remarkably enhance convergence rate )ere-fore the parameters of optimization by CEPO are superior tomake the classification performance of SVM better

6.3. Stability of CEPO-SVM: Comparison Experiment. For all nine models, stability is measured by the mean and standard deviation of the penalty parameter C and the kernel parameter c over thirty independent runs per model. Tables 6–9 compare these statistics for the nine models on the four face databases. As Tables 6–9 show, the mean penalty parameter C of CEPO-SVM is smaller than that of the other models on all four databases, which reduces the risk of overfitting in the SVM. CEPO-SVM also provides the lowest standard deviation of C among the nine models on all four databases, which verifies its superior stability. Across the four databases, the mean and standard deviation of the kernel parameter c are close among the nine models; even so, CEPO-SVM attains the minimum standard deviation of c. CEPO is therefore the most stable optimizer of the SVM parameters for face recognition.
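The figures in Tables 6–9 are per-parameter statistics over repeated runs. A standard-library sketch of that bookkeeping (the five sample values are hypothetical, not from the experiments; the paper uses thirty runs):

```python
import statistics

def parameter_stats(samples):
    """Mean and sample standard deviation of one SVM parameter
    across repeated optimization runs."""
    return statistics.mean(samples), statistics.stdev(samples)

# Hypothetical penalty-parameter values from five runs of one optimizer.
c_values = [13.1, 13.4, 12.9, 13.2, 13.0]
mean_c, std_c = parameter_stats(c_values)
print(round(mean_c, 2), round(std_c, 3))
```

A small standard deviation, as reported for CEPO, means the optimizer lands near the same (C, c) pair on every run.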

6.4. Robustness of CEPO-SVM: Comparison Experiment. To assess the robustness of CEPO-SVM, salt and pepper noise is added to every image of the four original databases. The resulting noisy databases are then used for face recognition

Table 3: Details of the six benchmark test functions.

F1, Sphere: $f(x) = \sum_{i=1}^{D} x_i^2$; D = 10; search range [-100, 100].
F2, Schwefel 2.22: $f(x) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i|$; D = 10; search range [-10, 10].
F3, Rastrigin: $f(x) = \sum_{i=1}^{D} (x_i^2 - 10\cos(2\pi x_i) + 10)$; D = 10; search range [-5.12, 5.12].
F4, Griewank: $f(x) = \sum_{i=1}^{D} x_i^2/4000 - \prod_{i=1}^{D} \cos(x_i/\sqrt{i}) + 1$; D = 10; search range [-600, 600].
F5, Ackley: $f(x) = -20\exp(-0.2\sqrt{(1/D)\sum_{i=1}^{D} x_i^2}) - \exp((1/D)\sum_{i=1}^{D} \cos(2\pi x_i)) + 20 + e$; D = 10; search range [-32, 32].
F6, Schaffer: $f(x) = 0.5 + ((\sin\sqrt{x_1^2 - x_2^2})^2 - 0.5)/(1 + 0.001(x_1^2 + x_2^2))^2$; D = 2; search range [-100, 100].
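As a concrete check, the six Table 3 functions can be coded directly. The sketch below uses only the standard library; each function evaluates to its known global minimum of 0 at the origin. The abs() inside the Schaffer square root is our guard against a negative argument, an assumption about the intended variant:

```python
import math

def sphere(x):
    return sum(v * v for v in x)

def schwefel_222(x):
    return sum(abs(v) for v in x) + math.prod(abs(v) for v in x)

def rastrigin(x):
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def griewank(x):
    s = sum(v * v for v in x) / 4000
    p = math.prod(math.cos(v / math.sqrt(i)) for i, v in enumerate(x, 1))
    return s - p + 1

def ackley(x):
    n = len(x)
    a = -20 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / n))
    b = -math.exp(sum(math.cos(2 * math.pi * v) for v in x) / n)
    return a + b + 20 + math.e

def schaffer(x):
    x1, x2 = x
    num = math.sin(math.sqrt(abs(x1 * x1 - x2 * x2))) ** 2 - 0.5
    den = (1 + 0.001 * (x1 * x1 + x2 * x2)) ** 2
    return 0.5 + num / den

origin = [0.0] * 10
print(sphere(origin), rastrigin(origin))  # both 0 at the global optimum
```

These are the objective functions on which the Table 4 means and standard deviations are computed.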

Table 4: Results of the benchmark test functions by the nine algorithms.

Algorithm       F1        F2        F3        F4        F5        F6
MFO    Mean  5.28E-11  1.69E-06  7.21E-10  8.38E-11  2.67E-12  5.09E-13
       Std   4.19E-12  3.77E-06  4.93E-11  3.41E-11  7.19E-12  4.81E-12
GWO    Mean  7.03E-13  4.43E-05  8.48E-12  9.76E-09  3.83E-13  2.92E-14
       Std   6.92E-13  7.82E-05  3.12E-12  4.28E-09  9.06E-12  6.37E-14
PSO    Mean  6.36E-06  6.14E-04  5.56E-04  2.07E-05  2.98E-07  8.14E-06
       Std   4.23E-05  8.25E-03  1.79E-04  6.23E-05  5.24E-06  9.93E-07
GA     Mean  2.55E-07  2.47E-04  5.32E-05  4.95E-05  1.56E-08  3.38E-07
       Std   3.14E-07  8.05E-04  3.87E-05  1.69E-05  6.06E-08  2.72E-07
EPO    Mean  8.57E-06  9.14E-05  3.52E-07  6.89E-07  5.83E-09  4.02E-08
       Std   4.87E-06  2.43E-05  4.99E-07  4.27E-07  7.18E-09  5.39E-08
CA     Mean  4.13E-12  1.49E-05  4.41E-09  6.65E-12  2.33E-07  7.53E-11
       Std   3.78E-13  1.36E-05  2.23E-10  4.89E-13  2.68E-08  6.21E-09
CEPO   Mean  1.28E-27  4.15E-14  6.54E-26  2.13E-21  2.39E-24  7.74E-29
       Std   4.63E-27  2.39E-14  7.16E-26  1.49E-21  4.76E-24  3.22E-29
CFA    Mean  4.23E-23  3.49E-10  2.37E-19  8.01E-17  5.41E-20  1.19E-24
       Std   1.89E-22  8.83E-10  9.44E-19  2.41E-17  2.93E-19  4.32E-23
EPSEO  Mean  8.67E-24  7.25E-11  3.91E-20  6.61E-18  4.25E-21  8.81E-22
       Std   5.42E-24  9.41E-11  8.23E-20  3.93E-18  7.89E-21  6.67E-21

Table 5: Average recognition rates (%) obtained by the nine models on four databases.

Model       YALE   ORL    UMIST  FERET
MFO-SVM     96.6   96.8   98.0   97.7
GWO-SVM     97.1   97.0   96.5   98.2
PSO-SVM     96.9   96.5   97.1   97.2
GA-SVM      97.4   97.5   97.6   97.4
EPO-SVM     97.6   97.2   97.9   97.6
CA-SVM      97.8   97.4   97.7   97.4
CEPO-SVM    98.1   97.8   98.4   98.5
CFA-SVM     97.9   97.7   98.3   98.3
EPSEO-SVM   97.7   97.6   98.1   98.0


by CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM. For each noisy database, we select 70% of the noisy samples as training samples and the remaining 30% as testing samples. Salt and pepper noise flips randomly chosen pixels of an image to black or white. Images from the YALE database at four noise levels, 0%, 5%, 10%, and 15%, are shown in Figure 8; as the noise level rises, the face image becomes more corrupted. Figure 9 shows the detailed recognition rates of the different models on the four databases. As Figure 9 shows, CEPO-SVM obtains the highest recognition rate at all four noise levels on all four databases in comparison with the other eight models. Even when the face images are heavily polluted by salt and pepper noise, at a level of up to 15%, the recognition rate of CEPO-SVM can still attain
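Salt and pepper noise of a given level can be injected with a few lines of standard-library code. This is a generic sketch (the function name, seed, and list-of-rows image representation are ours), not the authors' corruption routine:

```python
import random

def add_salt_pepper(image, level, seed=0):
    """Return a copy of a grayscale image (rows of 0-255 ints) with
    roughly `level` of the pixels forced to black (0) or white (255)."""
    rng = random.Random(seed)
    noisy = [row[:] for row in image]
    for row in noisy:
        for j in range(len(row)):
            if rng.random() < level:          # corrupt this pixel?
                row[j] = 0 if rng.random() < 0.5 else 255
    return noisy

# A flat grey 100x100 image at the 10% noise level.
img = [[128] * 100 for _ in range(100)]
noisy = add_salt_pepper(img, 0.10)
changed = sum(p != 128 for row in noisy for p in row)
print(changed / 10000)  # close to 0.10
```

Running this at 0%, 5%, 10%, and 15% reproduces the kind of corruption shown in Figure 8.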

Figure 7: Number of iterations versus accuracy on four databases. (a) YALE database. (b) ORL database. (c) UMIST database. (d) FERET database.


82.5% on the YALE database, 83.5% on the ORL database, 81.1% on the UMIST database, and 82.1% on the FERET database, which far exceeds that of the other eight models. Moreover, as the noise level increases, the recognition rate of CEPO-SVM on the four databases decreases more slowly than that of the other eight models. The results show that CEPO-SVM not only attains good noise-free face recognition accuracy but also achieves good accuracy under noisy conditions, which

Table 6: Comparison of the mean and standard deviation of the SVM parameters on the YALE face database among the nine optimization algorithms.

Algorithm  C mean   C std   c mean  c std
MFO        18.3721  4.4921  0.0633  0.1109
GWO        16.8983  4.0748  0.0518  0.0978
PSO        25.2319  7.4295  0.1292  0.1753
GA         28.3684  8.7016  0.1449  0.2387
CA         19.1670  4.8603  0.0672  0.1283
EPO        22.6792  5.7065  0.0812  0.1456
CEPO       13.2441  2.4982  0.0100  0.0112
CFA        15.4039  3.0409  0.0349  0.0778
EPSEO      15.6382  2.7636  0.0426  0.0692

Table 7: Comparison of the mean and standard deviation of the SVM parameters on the ORL face database among the nine optimization algorithms.

Algorithm  C mean   C std   c mean  c std
MFO        18.3921  4.7817  0.0965  0.1143
GWO        19.0418  5.1526  0.0883  0.1277
PSO        26.1129  6.4990  0.1620  0.2563
GA         27.5823  8.0652  0.1953  0.2497
CA         19.9301  3.9721  0.0827  0.1683
EPO        22.2801  6.2076  0.1198  0.1762
CEPO       14.3873  2.0187  0.0121  0.0098
CFA        16.2265  4.1552  0.0573  0.0829
EPSEO      15.7192  3.5228  0.0525  0.0701

Table 8: Comparison of the mean and standard deviation of the SVM parameters on the UMIST face database among the nine optimization algorithms.

Algorithm  C mean   C std   c mean  c std
MFO        17.2782  4.2048  0.0804  0.1449
GWO        16.9556  3.7649  0.0773  0.1283
PSO        24.9031  7.1991  0.1439  0.1958
GA         25.3415  8.2315  0.1557  0.2261
CA         18.8737  4.0103  0.0762  0.1813
EPO        22.0217  5.9525  0.0973  0.1502
CEPO       12.9109  2.2081  0.0114  0.0101
CFA        15.2271  3.4594  0.0638  0.0822
EPSEO      15.9964  3.0182  0.0514  0.0769

Table 9: Comparison of the mean and standard deviation of the SVM parameters on the FERET face database among the nine optimization algorithms.

Algorithm  C mean   C std   c mean  c std
MFO        18.1928  4.1892  0.0619  0.1023
GWO        17.3209  3.9713  0.0557  0.0921
PSO        25.5827  6.9143  0.1401  0.1882
GA         26.6929  8.4939  0.1379  0.2273
CA         15.3462  5.0541  0.0924  0.1419
EPO        21.2679  5.6314  0.0846  0.1364
CEPO       11.1811  2.1192  0.0126  0.0087
CFA        15.0647  3.7672  0.0489  0.0748
EPSEO      14.4689  3.4295  0.0377  0.0661


Figure 8: Four different noise levels on a YALE database image. (a) 0%. (b) 5%. (c) 10%. (d) 15%.



verifies that the robustness of CEPO-SVM is superior to that of the other eight models.

6.5. Run Time Analysis. The average training times over thirty independent runs of the proposed CEPO-SVM and the eight competitor intelligent-algorithm-based SVMs are shown in Table 10. As Table 10 shows, on all four databases the average training time of CEPO-SVM exceeds that of the six single-algorithm models, i.e., MFO-SVM, GWO-SVM, PSO-SVM, GA-SVM, EPO-SVM, and CA-SVM, presumably because hybridizing two algorithms increases the training time. Although CEPO-SVM takes more time than these six single-algorithm models, its higher classification performance justifies the cost. Moreover, on all four databases the average training time of CEPO-SVM is less than that of the two hybrid-algorithm models, i.e., CFA-SVM and EPSEO-SVM, so CEPO-SVM also outperforms the hybrid competitors in run time.
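The run-time figures are straightforward to collect with a wall-clock timer. A hedged sketch of that bookkeeping, with a stand-in workload rather than actual SVM training, and fewer runs than the paper's thirty to keep the sketch fast:

```python
import time

def average_training_time(train_fn, runs=5):
    """Average wall-clock time of repeated training runs."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        train_fn()
        total += time.perf_counter() - start
    return total / runs

# A stand-in "training" workload; replace with the real model fit.
avg = average_training_time(lambda: sum(i * i for i in range(10000)))
print(avg >= 0.0)
```

`time.perf_counter()` is the standard choice here because it is monotonic and has the highest available resolution for interval timing.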

6.6. Limitation Analysis. Like other hybrid algorithms, although CEPO provides superior performance in face recognition, its larger amount of computation makes each iteration more expensive than in single algorithms. Besides, when CEPO is applied to very high-dimensional optimization problems, its performance may degrade.


Figure 9: Noise percentage versus accuracy on four databases. (a) YALE database. (b) ORL database. (c) UMIST database. (d) FERET database.

Table 10: Average training time (s) obtained by the nine models on four databases.

Model       YALE    ORL     UMIST    FERET
MFO-SVM     47.93   61.54    70.95   46.03
GWO-SVM     45.06   58.81    72.31   50.54
PSO-SVM     43.95   54.12    65.69   48.91
GA-SVM      69.71   82.73    98.03   79.42
EPO-SVM     40.68   47.49    68.55   42.87
CA-SVM      46.33   56.87    77.41   53.24
CEPO-SVM    59.76   79.23    95.76   67.34
CFA-SVM     78.25   81.96   112.87   84.59
EPSEO-SVM   82.43   83.15   107.43   86.77


7. Conclusions

In this paper, drawing on the respective advantages of EPO and CA, we propose a hybrid metaheuristic algorithm named CEPO, which embeds EPO in the framework provided by CA so that the two algorithms reinforce each other and compensate for each other's weaknesses. The proposed algorithm has been evaluated on six benchmark test functions against eight state-of-the-art algorithms. Furthermore, we apply this hybrid metaheuristic algorithm to the automatic learning of the SVM parameters to solve the classification problem of face recognition. The proposed classifier model has been tested on four commonly and internationally used face databases against models based on the eight state-of-the-art algorithms. The experimental results show that the proposed model performs better in terms of accuracy, convergence rate, stability, robustness, and run time.

We suggest future work in several directions. First, the proposed model has only been evaluated on four face databases; other face databases should be tested to extend the current work. Second, the proposed hybrid algorithm should be applied to other classifiers for face recognition. Lastly, the proposed hybrid algorithm should be applied to other optimization problems, such as digital filter design, cognitive radio spectrum allocation, and image segmentation.

List of Definitions of Partial Parameters

x_i: The position of the ith emperor penguin in the D-dimensional search space
s^t: Situational knowledge component
N_j^t: Normative knowledge in the jth dimension in the tth generation
I_j^t: The interval of normative knowledge in the jth dimension
l_j^t: The lower boundary in the jth dimension in the tth generation
L_j^t: The objective value of the lower boundary l_j^t of the jth parameter
u_j^t: The upper boundary in the jth dimension in the tth generation
U_j^t: The objective value of the upper boundary u_j^t of the jth parameter
D_ep^t: The distance between the emperor penguin and the optimal solution
M: The movement parameter
c: The scalar
C: The penalty parameter
ξ_h: The slack variables
a_h: The training samples
α_h: The Lagrange multipliers
r: The scale of the Gabor filter
o: The orientation of the Gabor filter
Q(δ): The concatenated column vector
Σ_Q(δ): The covariance matrix of the augmented feature vector Q(δ)
Ω: The orthogonal eigenvector matrix
Λ: The diagonal eigenvalue matrix

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant no. 61571149), the Heilongjiang Province Key Laboratory Fund of High Accuracy Satellite Navigation and Marine Application Laboratory (Grant no. HKL-2020-Y01), and the Initiation Fund for Postdoctoral Research in Heilongjiang Province (Grant no. LBH-Q19098).

References

[1] S. Jiang, H. Mao, Z. Ding, and Y. Fu, "Deep decision tree transfer boosting," IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 2, pp. 383–395, 2020.
[2] Y. Xia and J. Wang, "A bi-projection neural network for solving constrained quadratic optimization problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 2, pp. 214–224, 2016.
[3] S. Zhang, X. Li, M. Zong, X. Zhu, and R. Wang, "Efficient kNN classification with different numbers of nearest neighbors," IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 5, pp. 1774–1785, 2018.
[4] A. Mathur and G. M. Foody, "Multiclass and binary SVM classification: implications for training and classification users," IEEE Geoscience and Remote Sensing Letters, vol. 5, no. 2, pp. 241–245, 2008.
[5] J. Ruan, H. Jiang, X. Li, Y. Shi, F. T. S. Chan, and W. Rao, "A granular GA-SVM predictor for big data in agricultural cyber-physical systems," IEEE Transactions on Industrial Informatics, vol. 15, no. 12, pp. 6510–6521, 2019.
[6] F. Melgani and Y. Bazi, "Classification of electrocardiogram signals with support vector machines and particle swarm optimization," IEEE Transactions on Information Technology in Biomedicine, vol. 12, no. 5, pp. 667–677, 2008.
[7] L. Wanling, W. Zhensheng, and S. Xiangjun, "Parameters optimization of support vector machine based on simulated annealing and improved QPSO," in Proceedings of the 2012 International Conference on Industrial Control and Electronics Engineering, pp. 1089–1091, Xi'an, China, August 2012.
[8] S. Mirjalili, S. M. Mirjalili, and A. Lewis, "Grey wolf optimizer," Advances in Engineering Software, vol. 69, pp. 46–61, 2014.
[9] S. Mirjalili, "Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm," Knowledge-Based Systems, vol. 89, pp. 228–249, 2015.
[10] S. Kaur, L. K. Awasthi, A. L. Sangal, et al., "Tunicate swarm algorithm: a new bio-inspired based metaheuristic paradigm for global optimization," Engineering Applications of Artificial Intelligence, vol. 90, Article ID 103541, 2020.
[11] M. Dehghani, Z. Montazeri, O. P. Malik, et al., "A new methodology called dice game optimizer for capacitor placement in distribution systems," Electrical Engineering & Electromechanics, vol. 1, pp. 61–64, 2020.
[12] G. Dhiman and V. Kumar, "Emperor penguin optimizer: a bio-inspired algorithm for engineering problems," Knowledge-Based Systems, vol. 159, pp. 20–50, 2018.
[13] S. K. Baliarsingh, W. Ding, S. Vipsita, et al., "A memetic algorithm using emperor penguin and social engineering optimization for medical data classification," Applied Soft Computing, vol. 85, 2019.
[14] R. G. Reynolds, "An introduction to cultural algorithms," in Proceedings of the Third Annual Conference on Evolutionary Programming, pp. 131–139, San Diego, CA, USA, January 1994.
[15] C. Lin, C. Chen, and C. Lin, "A hybrid of cooperative particle swarm optimization and cultural algorithm for neural fuzzy networks and its prediction applications," IEEE Transactions on Systems, Man, and Cybernetics, vol. 39, no. 1, pp. 55–68, 2009.
[16] H. Gao, C. Xu, and C. Li, "Quantum-inspired cultural bacterial foraging algorithm for direction finding of impulse noise," International Journal of Innovative Computing and Applications, vol. 6, no. 1, pp. 44–54, 2014.
[17] H. Gao and M. Diao, "Cultural firework algorithm and its application for digital filters design," International Journal of Modelling, Identification and Control, vol. 14, no. 4, pp. 324–331, 2011.
[18] C. Liu, "Gabor-based kernel PCA with fractional power polynomial models for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 572–581, 2004.
[19] J. Ilonen, J.-K. Kamarainen, and H. Kalviainen, "Efficient computation of Gabor features," Research Report 100, Lappeenranta University of Technology, Department of Information Technology, Lappeenranta, Finland, 2005.
[20] C. Liu and H. Wechsler, "Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition," IEEE Transactions on Image Processing, vol. 11, no. 4, pp. 467–476, 2002.
[21] M. D. Farrell and R. M. Mersereau, "On the impact of PCA dimension reduction for hyperspectral detection of difficult targets," IEEE Geoscience and Remote Sensing Letters, vol. 2, no. 2, pp. 192–195, 2005.
[22] B. Fei and J. Liu, "Binary tree of SVM: a new fast multiclass training and classification algorithm," IEEE Transactions on Neural Networks, vol. 17, no. 3, pp. 696–704, 2006.


References

[1] S Jiang H Mao Z Ding and Y Fu ldquoDeep decision treetransfer boostingrdquo IEEE Transactions on Neural Networks andLearning Systems vol 31 no 2 pp 383ndash395 2020

[2] Y Xia and J Wang ldquoA bi-projection neural network forsolving constrained quadratic optimization problemsrdquo IEEETransactions on Neural Networks and Learning Systemsvol 27 no 2 pp 214ndash224 2016

[3] S Zhang X Li M Zong X Zhu and RWang ldquoEfficient kNNclassification with different numbers of nearest neighborsrdquoIEEE Transactions on Neural Networks and Learning Systemsvol 29 no 5 pp 1774ndash1785 2018

[4] A Mathur and G M Foody ldquoMulticlass and binary SVMclassification implications for training and classificationusersrdquo IEEE Geoscience and Remote Sensing Letters vol 5no 2 pp 241ndash245 2008

[5] J Ruan H Jiang X Li Y Shi F T S Chan and W Rao ldquoAgranular GA-SVM predictor for big data in agricultural cyber-physical systemsrdquo IEEE Transactions on Industrial Infor-matics vol 15 no 12 pp 6510ndash6521 2019

[6] F Melgani and Y Bazi ldquoClassification of electrocardiogramsignals with support vector machines and particle swarmoptimizationrdquo IEEE Transactions on Information Technologyin Biomedicine vol 12 no 5 pp 667ndash677 2008

[7] L Wanling W Zhensheng and S Xiangjun ldquoParametersoptimization of support vector machine based on simulatedannealing and improved QPSOrdquo in Proceedings of the 2012International Conference on Industrial Control and ElectronicsEngineering pp 1089ndash1091 Xirsquoan China August 2012

[8] S Mirjalili S M Mirjalili S M Mirjalili and A Lewis ldquoGreywolf optimizerrdquo Advances in Engineering Software vol 69pp 46ndash61 2014

[9] S Mirjalili ldquoMoth-flame optimization algorithm a novelnature-inspired heuristic paradigmrdquo Knowledge-based Sys-tems vol 89 pp 228ndash249 2015

[10] S Kaur K Lalit A L S Awasthi et al ldquoTunicate swarmalgorithm a new bio-inspired based metaheuristic paradigmfor global optimizationrdquo Engineering Applications of ArtificialIntelligence vol 90 Article ID 103541 2020

[11] M Dehghani Z Montazeri O P Malik et al ldquoA newmethodology called dice game optimizer for capacitorplacement in distribution systemsrdquo Electrical Engineering ampElectromechanics vol 1 pp 61ndash64 2020

Mathematical Problems in Engineering 15

[12] G Dhiman and V Kumar ldquoEmperor penguin optimizer abio-inspired algorithm for engineering problemsrdquo Knowl-edge-Based Systems vol 159 pp 20ndash50 2018

[13] S Kumar Baliarsingh W Ding S Vipsita et al ldquoA memeticalgorithm using emperor penguin and social engineeringoptimization for medical data classificationrdquo Applied SoftComputing vol 85 2019

[14] Reynolds and G Robert ldquoAn introduction to cultural algo-rithmsrdquo in Proceedings of the 6ird Annual Conference onEvolutionary Programming pp 131ndash139 San Diego CAUSA January 1994

[15] C Lin C Chen and C Lin ldquoA hybrid of cooperative particleswarm optimization and cultural algorithm for neural fuzzynetworks and its prediction applicationsrdquo IEEE Transactionson Systems Man and Cybernetics vol 39 no 1 pp 55ndash682009

[16] H Gao C Xu and C Li ldquoQuantum-inspired cultural bac-terial foraging algorithm for direction finding of impulsenoiserdquo International Journal of Innovative Computing andApplications vol 6 no 1 pp 44ndash54 2014

[17] H Gao and M Diao ldquoCultural firework algorithm and itsapplication for digital filters designrdquo International Journal ofModelling Identification and Control vol 14 no 4pp 324ndash331 2011

[18] C Liu ldquoGabor-based kernel PCA with fractional powerpolynomial models for face recognitionrdquo IEEE Transactionson Pattern Analysis and Machine Intelligence vol 26 no 5pp 572ndash581 2004

[19] J Ilonen J-K Kamarainen and H Kalviainen ldquoEfficientcomputation of gabor featuresrdquo Research Report 100 Lap-peenranta University of Technology Department of Infor-mation Technology Lappeenranta Finland 2005

[20] C Liu and H Wechsler ldquoGabor feature based classificationusing the enhanced Fisher linear discriminant model for facerecognitionrdquo IEEE Transactions on Image Processing APublication of the IEEE Signal Processing Society vol 11 no 4pp 467ndash476 2002

[21] M D Farrell and R M Mersereau ldquoOn the impact of PCAdimension reduction for hyperspectral detection of difficulttargetsrdquo IEEE Geoscience and Remote Sensing Letters vol 2no 2 pp 192ndash195 2005

[22] B Fei and J Jinbai Liu ldquoBinary tree of SVM a new fastmulticlass training and classification algorithmrdquo IEEETransactions on Neural Networks vol 17 no 3 pp 696ndash7042006

16 Mathematical Problems in Engineering

Page 10: Cultural Emperor Penguin Optimizer and Its Application for ...2020/06/11  · characteristics of spatial locality, spatial frequency, and orientationselectivity. Let Y(z) represent

seen after the 16th, 22nd, 24th, 19th, 18th, 20th, 18th, 21st, and 25th iterations using CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM, respectively. Clearly, CEPO-SVM converges fastest among the nine models, which verifies that EPO, with the help of the embedded framework provided by CA, achieves a remarkably higher convergence rate. Therefore, the parameters optimized by CEPO are better suited to improving the classification performance of SVM.

6.3. Stability of CEPO-SVM Comparison Experiment. For all nine models, stability is measured by the mean and standard deviation of the penalty parameter C and the kernel parameter c. In this experiment, each model is run thirty times. Tables 6-9 compare the mean and standard deviation of the parameters among the nine models on the four face databases. As Tables 6-9 show, the mean of the penalty parameter C in CEPO-SVM is smaller than that of the other models on all four face databases, which reduces the risk of overfitting in SVM. CEPO-SVM also provides the lowest standard deviation of the penalty parameter C among all nine models on all four databases, which verifies that the stability of CEPO-SVM is better. For all four databases, the mean and standard deviation of the kernel parameter c are close among the nine models; however, CEPO-SVM still has the minimum standard deviation of the kernel parameter c. Clearly, CEPO is more stable in optimizing the parameters of SVM for face recognition.
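The stability statistics reported in Tables 6-9 follow a standard computation. The sketch below is our illustration, not the authors' code; the function name and the three-run example list are assumptions (the paper uses thirty runs):

```python
import statistics

def parameter_stability(runs):
    """runs: list of (C, c) pairs, one per independent optimization run.
    Returns the mean and sample standard deviation of each SVM parameter,
    in the style of Tables 6-9."""
    Cs = [C for C, _ in runs]
    cs = [c for _, c in runs]
    return {
        "C_mean": statistics.mean(Cs),
        "C_std": statistics.stdev(Cs),
        "c_mean": statistics.mean(cs),
        "c_std": statistics.stdev(cs),
    }

# Hypothetical example with three runs instead of thirty:
stats = parameter_stability([(13.0, 0.010), (13.4, 0.012), (13.2, 0.011)])
```

A small standard deviation of C across repeated runs is what the paper reads as evidence that the optimizer converges to consistent parameter settings.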

6.4. Robustness of CEPO-SVM Comparison Experiment. To prove the robustness of CEPO-SVM, salt-and-pepper noise is added to each image of the original four databases. The resulting noise databases are used for face recognition

Table 3. The details of six benchmark test functions.

No. | Function      | Formulation                                                                                              | D  | Search range
F1  | Sphere        | $f(x) = \sum_{i=1}^{D} x_i^2$                                                                            | 10 | [-100, 100]
F2  | Schwefel 2.22 | $f(x) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i|$                                                    | 10 | [-10, 10]
F3  | Rastrigin     | $f(x) = \sum_{i=1}^{D} (x_i^2 - 10\cos(2\pi x_i) + 10)$                                                  | 10 | [-5.12, 5.12]
F4  | Griewank      | $f(x) = \sum_{i=1}^{D} x_i^2/4000 - \prod_{i=1}^{D} \cos(x_i/\sqrt{i}) + 1$                              | 10 | [-600, 600]
F5  | Ackley        | $f(x) = -20\exp(-0.2\sqrt{(1/n)\sum_{i=1}^{D} x_i^2}) - \exp((1/n)\sum_{i=1}^{D}\cos(2\pi x_i)) + 20 + e$ | 10 | [-32, 32]
F6  | Schaffer      | $f(x) = 0.5 + (\sin^2(\sqrt{x_1^2 - x_2^2}) - 0.5)/(1 + 0.001(x_1^2 + x_2^2))^2$                         | 2  | [-100, 100]
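The six test functions of Table 3 are easy to implement directly. The following Python sketch is our illustration, not code from the paper; in particular, the Schaffer variant uses the two-variable form reconstructed above, with an absolute value added under the square root as an assumption to keep it defined for all inputs:

```python
import math

def sphere(x):
    return sum(xi ** 2 for xi in x)

def schwefel_2_22(x):
    return sum(abs(xi) for xi in x) + math.prod(abs(xi) for xi in x)

def rastrigin(x):
    return sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def griewank(x):
    s = sum(xi ** 2 for xi in x) / 4000
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return s - p + 1

def ackley(x):
    n = len(x)
    a = -20 * math.exp(-0.2 * math.sqrt(sum(xi ** 2 for xi in x) / n))
    b = -math.exp(sum(math.cos(2 * math.pi * xi) for xi in x) / n)
    return a + b + 20 + math.e

def schaffer(x1, x2):
    # abs() is our guard for x2 > x1; the reconstructed formula uses x1^2 - x2^2
    num = math.sin(math.sqrt(abs(x1 ** 2 - x2 ** 2))) ** 2 - 0.5
    den = (1 + 0.001 * (x1 ** 2 + x2 ** 2)) ** 2
    return 0.5 + num / den
```

All six functions attain their global minimum of 0 at the origin, which is why small mean and standard deviation values in Table 4 indicate better optimizers.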

Table 4. The results of benchmark test functions by nine algorithms.

Algorithm |      | F1       | F2       | F3       | F4       | F5       | F6
MFO       | Mean | 5.28E-11 | 1.69E-06 | 7.21E-10 | 8.38E-11 | 2.67E-12 | 5.09E-13
          | Std  | 4.19E-12 | 3.77E-06 | 4.93E-11 | 3.41E-11 | 7.19E-12 | 4.81E-12
GWO       | Mean | 7.03E-13 | 4.43E-05 | 8.48E-12 | 9.76E-09 | 3.83E-13 | 2.92E-14
          | Std  | 6.92E-13 | 7.82E-05 | 3.12E-12 | 4.28E-09 | 9.06E-12 | 6.37E-14
PSO       | Mean | 6.36E-06 | 6.14E-04 | 5.56E-04 | 2.07E-05 | 2.98E-07 | 8.14E-06
          | Std  | 4.23E-05 | 8.25E-03 | 1.79E-04 | 6.23E-05 | 5.24E-06 | 9.93E-07
GA        | Mean | 2.55E-07 | 2.47E-04 | 5.32E-05 | 4.95E-05 | 1.56E-08 | 3.38E-07
          | Std  | 3.14E-07 | 8.05E-04 | 3.87E-05 | 1.69E-05 | 6.06E-08 | 2.72E-07
EPO       | Mean | 8.57E-06 | 9.14E-05 | 3.52E-07 | 6.89E-07 | 5.83E-09 | 4.02E-08
          | Std  | 4.87E-06 | 2.43E-05 | 4.99E-07 | 4.27E-07 | 7.18E-09 | 5.39E-08
CA        | Mean | 4.13E-12 | 1.49E-05 | 4.41E-09 | 6.65E-12 | 2.33E-07 | 7.53E-11
          | Std  | 3.78E-13 | 1.36E-05 | 2.23E-10 | 4.89E-13 | 2.68E-08 | 6.21E-09
CEPO      | Mean | 1.28E-27 | 4.15E-14 | 6.54E-26 | 2.13E-21 | 2.39E-24 | 7.74E-29
          | Std  | 4.63E-27 | 2.39E-14 | 7.16E-26 | 1.49E-21 | 4.76E-24 | 3.22E-29
CFA       | Mean | 4.23E-23 | 3.49E-10 | 2.37E-19 | 8.01E-17 | 5.41E-20 | 1.19E-24
          | Std  | 1.89E-22 | 8.83E-10 | 9.44E-19 | 2.41E-17 | 2.93E-19 | 4.32E-23
EPSEO     | Mean | 8.67E-24 | 7.25E-11 | 3.91E-20 | 6.61E-18 | 4.25E-21 | 8.81E-22
          | Std  | 5.42E-24 | 9.41E-11 | 8.23E-20 | 3.93E-18 | 7.89E-21 | 6.67E-21

Table 5. Average recognition rates (%) obtained by nine models on four databases.

Model     | YALE | ORL  | UMIST | FERET
MFO-SVM   | 96.6 | 96.8 | 98.0  | 97.7
GWO-SVM   | 97.1 | 97.0 | 96.5  | 98.2
PSO-SVM   | 96.9 | 96.5 | 97.1  | 97.2
GA-SVM    | 97.4 | 97.5 | 97.6  | 97.4
EPO-SVM   | 97.6 | 97.2 | 97.9  | 97.6
CA-SVM    | 97.8 | 97.4 | 97.7  | 97.4
CEPO-SVM  | 98.1 | 97.8 | 98.4  | 98.5
CFA-SVM   | 97.9 | 97.7 | 98.3  | 98.3
EPSEO-SVM | 97.7 | 97.6 | 98.1  | 98.0

10 Mathematical Problems in Engineering

by CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM. For each noise database, we selected 70% of the noise samples as training samples and the remaining 30% as testing samples. Salt-and-pepper noise appears as random black or white pixels in an image. Four different noise levels on the YALE database, namely 0%, 5%, 10%, and 15%, are shown in Figure 8. From Figure 8 we can see that, as the noise level increases, the face image becomes more blurred. Figure 9 shows the detailed recognition rates of the different models on the four databases. As Figure 9 shows, CEPO-SVM obtains the highest recognition rate at all four noise levels on all four databases in comparison with the other eight models. Even when the face image is seriously polluted by salt-and-pepper noise of up to 15%, the recognition rate of CEPO-SVM can still attain

Figure 7. Number of iterations versus accuracy on four databases: (a) YALE database; (b) ORL database; (c) UMIST database; (d) FERET database. Each panel plots accuracy (%) over 100 iterations for CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM.


82.5% on the YALE database, 83.5% on the ORL database, 81.1% on the UMIST database, and 82.1% on the FERET database, which far exceeds that of the other eight models. Besides, as the noise level increases, the recognition rate of CEPO-SVM on the four databases decreases more slowly than that of the other eight models. Clearly, the results show that CEPO-SVM not only attains good noise-free face recognition accuracy but also achieves good accuracy under noise conditions, which

Table 6. Comparison of mean and standard deviation of parameters on the YALE face database among nine optimization algorithms.

Algorithm | C (Mean) | C (Std) | c (Mean) | c (Std)
MFO       | 18.3721  | 4.4921  | 0.0633   | 0.1109
GWO       | 16.8983  | 4.0748  | 0.0518   | 0.0978
PSO       | 25.2319  | 7.4295  | 0.1292   | 0.1753
GA        | 28.3684  | 8.7016  | 0.1449   | 0.2387
CA        | 19.1670  | 4.8603  | 0.0672   | 0.1283
EPO       | 22.6792  | 5.7065  | 0.0812   | 0.1456
CEPO      | 13.2441  | 2.4982  | 0.0100   | 0.0112
CFA       | 15.4039  | 3.0409  | 0.0349   | 0.0778
EPSEO     | 15.6382  | 2.7636  | 0.0426   | 0.0692

Table 7. Comparison of mean and standard deviation of parameters on the ORL face database among nine optimization algorithms.

Algorithm | C (Mean) | C (Std) | c (Mean) | c (Std)
MFO       | 18.3921  | 4.7817  | 0.0965   | 0.1143
GWO       | 19.0418  | 5.1526  | 0.0883   | 0.1277
PSO       | 26.1129  | 6.4990  | 0.1620   | 0.2563
GA        | 27.5823  | 8.0652  | 0.1953   | 0.2497
CA        | 19.9301  | 3.9721  | 0.0827   | 0.1683
EPO       | 22.2801  | 6.2076  | 0.1198   | 0.1762
CEPO      | 14.3873  | 2.0187  | 0.0121   | 0.0098
CFA       | 16.2265  | 4.1552  | 0.0573   | 0.0829
EPSEO     | 15.7192  | 3.5228  | 0.0525   | 0.0701

Table 8. Comparison of mean and standard deviation of parameters on the UMIST face database among nine optimization algorithms.

Algorithm | C (Mean) | C (Std) | c (Mean) | c (Std)
MFO       | 17.2782  | 4.2048  | 0.0804   | 0.1449
GWO       | 16.9556  | 3.7649  | 0.0773   | 0.1283
PSO       | 24.9031  | 7.1991  | 0.1439   | 0.1958
GA        | 25.3415  | 8.2315  | 0.1557   | 0.2261
CA        | 18.8737  | 4.0103  | 0.0762   | 0.1813
EPO       | 22.0217  | 5.9525  | 0.0973   | 0.1502
CEPO      | 12.9109  | 2.2081  | 0.0114   | 0.0101
CFA       | 15.2271  | 3.4594  | 0.0638   | 0.0822
EPSEO     | 15.9964  | 3.0182  | 0.0514   | 0.0769

Table 9. Comparison of mean and standard deviation of parameters on the FERET face database among nine optimization algorithms.

Algorithm | C (Mean) | C (Std) | c (Mean) | c (Std)
MFO       | 18.1928  | 4.1892  | 0.0619   | 0.1023
GWO       | 17.3209  | 3.9713  | 0.0557   | 0.0921
PSO       | 25.5827  | 6.9143  | 0.1401   | 0.1882
GA        | 26.6929  | 8.4939  | 0.1379   | 0.2273
CA        | 15.3462  | 5.0541  | 0.0924   | 0.1419
EPO       | 21.2679  | 5.6314  | 0.0846   | 0.1364
CEPO      | 11.1811  | 2.1192  | 0.0126   | 0.0087
CFA       | 15.0647  | 3.7672  | 0.0489   | 0.0748
EPSEO     | 14.4689  | 3.4295  | 0.0377   | 0.0661


Figure 8. Four different levels of noise image on the YALE database: (a) 0%; (b) 5%; (c) 10%; (d) 15%.
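The salt-and-pepper corruption used to build the noise databases can be sketched as follows. This is our illustrative implementation, not the authors' code; it assumes 8-bit grayscale images, and the function name and seeding are ours:

```python
import random

def add_salt_pepper(image, noise_pct, seed=None):
    """Flip `noise_pct` percent of the pixels of a grayscale image
    (a list of rows of 0-255 values) to black (0, pepper) or white
    (255, salt). Returns a new image; the input is left intact."""
    rng = random.Random(seed)
    rows, cols = len(image), len(image[0])
    noisy = [row[:] for row in image]  # deep-enough copy of the rows
    coords = [(r, c) for r in range(rows) for c in range(cols)]
    n_noisy = int(rows * cols * noise_pct / 100)
    for r, c in rng.sample(coords, n_noisy):
        noisy[r][c] = rng.choice((0, 255))
    return noisy

# 15% noise on a flat gray 10x10 test image:
clean = [[128] * 10 for _ in range(10)]
noisy = add_salt_pepper(clean, 15, seed=42)
```

Sampling pixel coordinates without replacement makes the corrupted fraction exactly the requested percentage, matching the 0%, 5%, 10%, and 15% levels shown in Figure 8.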



verifies that the robustness of CEPO-SVM is superior to that of the other eight models.

6.5. Run Time Analysis. The average training times over thirty independent runs of the proposed CEPO-SVM and the eight competing intelligent-algorithm-based SVMs are shown in Table 10. As Table 10 shows, on all four databases the average training time of the proposed CEPO-SVM exceeds that of the six models based on single algorithms, i.e., MFO-SVM, GWO-SVM, PSO-SVM, GA-SVM, EPO-SVM, and CA-SVM. This is likely because hybridizing two algorithms increases the training time. Although CEPO-SVM takes more time than these six single-algorithm models, it can be considered better because of its higher classification performance. However, on all four databases, the average training time of the proposed CEPO-SVM is less than that of the two models based on hybrid algorithms, i.e., CFA-SVM and EPSEO-SVM. Clearly, CEPO-SVM performs better than these two hybrid-algorithm models.
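The thirty-run averages in Table 10 follow the usual wall-clock measurement pattern. A minimal harness, as a sketch under the assumption that a `train` callable stands in for fitting one SVM model (neither name comes from the paper), could look like:

```python
import time

def average_training_time(train, repeats=30):
    """Average wall-clock time of `train()` over `repeats` independent
    runs, in the style of the thirty-run averages in Table 10."""
    total = 0.0
    for _ in range(repeats):
        start = time.perf_counter()
        train()  # e.g. one full optimizer-plus-SVM training pass
        total += time.perf_counter() - start
    return total / repeats

# Hypothetical stand-in workload instead of a real SVM fit:
t = average_training_time(lambda: sum(i * i for i in range(10_000)), repeats=5)
```

`time.perf_counter()` is the appropriate clock here: it is monotonic and high-resolution, unlike `time.time()`, which can jump with system clock adjustments.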

6.6. Limitation Analysis. Like other hybrid algorithms, although CEPO provides superior performance in face recognition, it requires a large amount of computation, so the proposed CEPO costs more time per iteration than single algorithms. Besides, when CEPO is applied to very high-dimensional optimization problems, its performance may degrade.

Figure 9. Noise percentages versus accuracy on four databases: (a) YALE database; (b) ORL database; (c) UMIST database; (d) FERET database. Each panel plots accuracy (%) against noise percentages of 0%, 5%, 10%, and 15% for CEPO-SVM, PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM.

Table 10. Average training time (s) obtained by nine models on four databases.

Model     | YALE  | ORL   | UMIST  | FERET
MFO-SVM   | 47.93 | 61.54 | 70.95  | 46.03
GWO-SVM   | 45.06 | 58.81 | 72.31  | 50.54
PSO-SVM   | 43.95 | 54.12 | 65.69  | 48.91
GA-SVM    | 69.71 | 82.73 | 98.03  | 79.42
EPO-SVM   | 40.68 | 47.49 | 68.55  | 42.87
CA-SVM    | 46.33 | 56.87 | 77.41  | 53.24
CEPO-SVM  | 59.76 | 79.23 | 95.76  | 67.34
CFA-SVM   | 78.25 | 81.96 | 112.87 | 84.59
EPSEO-SVM | 82.43 | 83.15 | 107.43 | 86.77


7. Conclusions

In this paper, drawing on the respective advantages of EPO and CA, we propose a hybrid metaheuristic algorithm named CEPO, in which the embedded framework provided by CA lets the two algorithms reinforce each other and overcome each other's weaknesses. The proposed algorithm has been evaluated on six benchmark test functions against eight state-of-the-art algorithms. Furthermore, we apply this hybrid metaheuristic algorithm to the automatic learning of the parameters of SVM to solve the classification problem of face recognition. The proposed classifier model has been tested on four commonly and internationally used face databases by comparing it with models based on eight other state-of-the-art algorithms. The experimental results show that the proposed model performs better in terms of accuracy, convergence rate, stability, robustness, and run time.

We suggest future work in several directions. First, the proposed model has only been evaluated on four face databases; other new face databases should be tested to extend the current work. Second, the proposed hybrid algorithm should be applied to other classifiers for face recognition. Lastly, the proposed hybrid algorithm should be applied to other optimization problems such as digital filter design, cognitive radio spectrum allocation, and image segmentation.

List of Definitions of Partial Parameters

x_i: The position of the ith emperor penguin in the D-dimensional search space
s^t: The situational knowledge component
N_j^t: The normative knowledge in the jth dimension in the tth generation
I_j^t: The interval of normative knowledge in the jth dimension
l_j^t: The lower boundary in the jth dimension in the tth generation
L_j^t: The objective value of the lower boundary l_j^t of the jth parameter
u_j^t: The upper boundary in the jth dimension in the tth generation
U_j^t: The objective value of the upper boundary u_j^t of the jth parameter
D_ep^t: The distance between the emperor penguin and the optimal solution
M: The movement parameter
c: The scalar kernel parameter
C: The penalty parameter
ξ_h: The slack (relaxing) variables
a_h: The training samples
α_h: The Lagrange multipliers
r: The scale of the Gabor filter
o: The orientation of the Gabor filter
Q(δ): The concatenated column vector
Σ_Q(δ): The covariance matrix of the augmented feature vector Q(δ)
Ω: The orthogonal eigenvector matrix
Λ: The diagonal eigenvalue matrix

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant No. 61571149), the Heilongjiang Province Key Laboratory Fund of High Accuracy Satellite Navigation and Marine Application Laboratory (Grant No. HKL-2020-Y01), and the Initiation Fund for Postdoctoral Research in Heilongjiang Province (Grant No. LBH-Q19098).




References

[1] S Jiang H Mao Z Ding and Y Fu ldquoDeep decision treetransfer boostingrdquo IEEE Transactions on Neural Networks andLearning Systems vol 31 no 2 pp 383ndash395 2020

[2] Y Xia and J Wang ldquoA bi-projection neural network forsolving constrained quadratic optimization problemsrdquo IEEETransactions on Neural Networks and Learning Systemsvol 27 no 2 pp 214ndash224 2016

[3] S Zhang X Li M Zong X Zhu and RWang ldquoEfficient kNNclassification with different numbers of nearest neighborsrdquoIEEE Transactions on Neural Networks and Learning Systemsvol 29 no 5 pp 1774ndash1785 2018

[4] A Mathur and G M Foody ldquoMulticlass and binary SVMclassification implications for training and classificationusersrdquo IEEE Geoscience and Remote Sensing Letters vol 5no 2 pp 241ndash245 2008

[5] J Ruan H Jiang X Li Y Shi F T S Chan and W Rao ldquoAgranular GA-SVM predictor for big data in agricultural cyber-physical systemsrdquo IEEE Transactions on Industrial Infor-matics vol 15 no 12 pp 6510ndash6521 2019

[6] F Melgani and Y Bazi ldquoClassification of electrocardiogramsignals with support vector machines and particle swarmoptimizationrdquo IEEE Transactions on Information Technologyin Biomedicine vol 12 no 5 pp 667ndash677 2008

[7] L Wanling W Zhensheng and S Xiangjun ldquoParametersoptimization of support vector machine based on simulatedannealing and improved QPSOrdquo in Proceedings of the 2012International Conference on Industrial Control and ElectronicsEngineering pp 1089ndash1091 Xirsquoan China August 2012

[8] S Mirjalili S M Mirjalili S M Mirjalili and A Lewis ldquoGreywolf optimizerrdquo Advances in Engineering Software vol 69pp 46ndash61 2014

[9] S Mirjalili ldquoMoth-flame optimization algorithm a novelnature-inspired heuristic paradigmrdquo Knowledge-based Sys-tems vol 89 pp 228ndash249 2015

[10] S Kaur K Lalit A L S Awasthi et al ldquoTunicate swarmalgorithm a new bio-inspired based metaheuristic paradigmfor global optimizationrdquo Engineering Applications of ArtificialIntelligence vol 90 Article ID 103541 2020

[11] M Dehghani Z Montazeri O P Malik et al ldquoA newmethodology called dice game optimizer for capacitorplacement in distribution systemsrdquo Electrical Engineering ampElectromechanics vol 1 pp 61ndash64 2020

Mathematical Problems in Engineering 15

[12] G Dhiman and V Kumar ldquoEmperor penguin optimizer abio-inspired algorithm for engineering problemsrdquo Knowl-edge-Based Systems vol 159 pp 20ndash50 2018

[13] S Kumar Baliarsingh W Ding S Vipsita et al ldquoA memeticalgorithm using emperor penguin and social engineeringoptimization for medical data classificationrdquo Applied SoftComputing vol 85 2019

[14] Reynolds and G Robert ldquoAn introduction to cultural algo-rithmsrdquo in Proceedings of the 6ird Annual Conference onEvolutionary Programming pp 131ndash139 San Diego CAUSA January 1994

[15] C Lin C Chen and C Lin ldquoA hybrid of cooperative particleswarm optimization and cultural algorithm for neural fuzzynetworks and its prediction applicationsrdquo IEEE Transactionson Systems Man and Cybernetics vol 39 no 1 pp 55ndash682009

[16] H Gao C Xu and C Li ldquoQuantum-inspired cultural bac-terial foraging algorithm for direction finding of impulsenoiserdquo International Journal of Innovative Computing andApplications vol 6 no 1 pp 44ndash54 2014

[17] H Gao and M Diao ldquoCultural firework algorithm and itsapplication for digital filters designrdquo International Journal ofModelling Identification and Control vol 14 no 4pp 324ndash331 2011

[18] C Liu ldquoGabor-based kernel PCA with fractional powerpolynomial models for face recognitionrdquo IEEE Transactionson Pattern Analysis and Machine Intelligence vol 26 no 5pp 572ndash581 2004

[19] J Ilonen J-K Kamarainen and H Kalviainen ldquoEfficientcomputation of gabor featuresrdquo Research Report 100 Lap-peenranta University of Technology Department of Infor-mation Technology Lappeenranta Finland 2005

[20] C Liu and H Wechsler ldquoGabor feature based classificationusing the enhanced Fisher linear discriminant model for facerecognitionrdquo IEEE Transactions on Image Processing APublication of the IEEE Signal Processing Society vol 11 no 4pp 467ndash476 2002

[21] M D Farrell and R M Mersereau ldquoOn the impact of PCAdimension reduction for hyperspectral detection of difficulttargetsrdquo IEEE Geoscience and Remote Sensing Letters vol 2no 2 pp 192ndash195 2005

[22] B Fei and J Jinbai Liu ldquoBinary tree of SVM a new fastmulticlass training and classification algorithmrdquo IEEETransactions on Neural Networks vol 17 no 3 pp 696ndash7042006

16 Mathematical Problems in Engineering

Page 12: Cultural Emperor Penguin Optimizer and Its Application for ...2020/06/11  · characteristics of spatial locality, spatial frequency, and orientationselectivity. Let Y(z) represent

CEPO-SVM achieves 82.5% on the YALE database, 83.5% on the ORL database, 81.1% on the UMIST database, and 82.1% on the FERET database, which far exceeds the accuracy of the other eight models. Besides, as the noise level increases, the recognition rate of CEPO-SVM on the four databases decreases more slowly than that of the other eight models. Obviously, the results show that CEPO-SVM can not only attain good noise-free face recognition accuracy but also achieve good accuracy under noisy conditions.

Table 7. Comparison of mean and standard deviation of parameters on the ORL face database among nine optimization algorithms.

Algorithm | C (mean) | C (std) | c (mean) | c (std)
MFO       | 18.3921  | 4.7817  | 0.0965   | 0.1143
GWO       | 19.0418  | 5.1526  | 0.0883   | 0.1277
PSO       | 26.1129  | 6.4990  | 0.1620   | 0.2563
GA        | 27.5823  | 8.0652  | 0.1953   | 0.2497
CA        | 19.9301  | 3.9721  | 0.0827   | 0.1683
EPO       | 22.2801  | 6.2076  | 0.1198   | 0.1762
CEPO      | 14.3873  | 2.0187  | 0.0121   | 0.0098
CFA       | 16.2265  | 4.1552  | 0.0573   | 0.0829
EPSEO     | 15.7192  | 3.5228  | 0.0525   | 0.0701

Table 6. Comparison of mean and standard deviation of parameters on the YALE face database among nine optimization algorithms.

Algorithm | C (mean) | C (std) | c (mean) | c (std)
MFO       | 18.3721  | 4.4921  | 0.0633   | 0.1109
GWO       | 16.8983  | 4.0748  | 0.0518   | 0.0978
PSO       | 25.2319  | 7.4295  | 0.1292   | 0.1753
GA        | 28.3684  | 8.7016  | 0.1449   | 0.2387
CA        | 19.1670  | 4.8603  | 0.0672   | 0.1283
EPO       | 22.6792  | 5.7065  | 0.0812   | 0.1456
CEPO      | 13.2441  | 2.4982  | 0.0100   | 0.0112
CFA       | 15.4039  | 3.0409  | 0.0349   | 0.0778
EPSEO     | 15.6382  | 2.7636  | 0.0426   | 0.0692

Table 8. Comparison of mean and standard deviation of parameters on the UMIST face database among nine optimization algorithms.

Algorithm | C (mean) | C (std) | c (mean) | c (std)
MFO       | 17.2782  | 4.2048  | 0.0804   | 0.1449
GWO       | 16.9556  | 3.7649  | 0.0773   | 0.1283
PSO       | 24.9031  | 7.1991  | 0.1439   | 0.1958
GA        | 25.3415  | 8.2315  | 0.1557   | 0.2261
CA        | 18.8737  | 4.0103  | 0.0762   | 0.1813
EPO       | 22.0217  | 5.9525  | 0.0973   | 0.1502
CEPO      | 12.9109  | 2.2081  | 0.0114   | 0.0101
CFA       | 15.2271  | 3.4594  | 0.0638   | 0.0822
EPSEO     | 15.9964  | 3.0182  | 0.0514   | 0.0769

Table 9. Comparison of mean and standard deviation of parameters on the FERET face database among nine optimization algorithms.

Algorithm | C (mean) | C (std) | c (mean) | c (std)
MFO       | 18.1928  | 4.1892  | 0.0619   | 0.1023
GWO       | 17.3209  | 3.9713  | 0.0557   | 0.0921
PSO       | 25.5827  | 6.9143  | 0.1401   | 0.1882
GA        | 26.6929  | 8.4939  | 0.1379   | 0.2273
CA        | 15.3462  | 5.0541  | 0.0924   | 0.1419
EPO       | 21.2679  | 5.6314  | 0.0846   | 0.1364
CEPO      | 11.1811  | 2.1192  | 0.0126   | 0.0087
CFA       | 15.0647  | 3.7672  | 0.0489   | 0.0748
EPSEO     | 14.4689  | 3.4295  | 0.0377   | 0.0661
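Tables 6–9 report the means and standard deviations of the two parameters every competing optimizer must tune: the penalty parameter C and the kernel parameter c. All nine optimizers share the same outer structure: repeatedly propose a (C, c) pair inside fixed bounds and keep the pair with the best fitness. A minimal, hypothetical sketch of that loop — using a stand-in surrogate fitness instead of actual SVM training, and plain log-uniform random search instead of any of the paper's algorithms — might look like:

```python
import math
import random

def fitness(C, c):
    # Stand-in surrogate for the real fitness (which would train an SVM with
    # parameters (C, c) and return its cross-validated error). Its minimum is
    # placed near C = 14, c = 0.01, roughly where Tables 6-9 put CEPO's optima.
    return (math.log10(C) - math.log10(14.0)) ** 2 + (math.log10(c) + 2.0) ** 2

def random_search(bounds, iters=2000, seed=0):
    """Search the (C, c) box for the lowest-fitness pair."""
    rng = random.Random(seed)
    (C_lo, C_hi), (c_lo, c_hi) = bounds
    best = None
    for _ in range(iters):
        # Sample log-uniformly: both parameters span several orders of magnitude.
        C = 10 ** rng.uniform(math.log10(C_lo), math.log10(C_hi))
        c = 10 ** rng.uniform(math.log10(c_lo), math.log10(c_hi))
        f = fitness(C, c)
        if best is None or f < best[0]:
            best = (f, C, c)
    return best

best_f, best_C, best_c = random_search(((0.1, 100.0), (1e-4, 1.0)))
```

Swapping `random_search` for PSO, GA, EPO, or CEPO changes only how candidate pairs are proposed; the fitness evaluation is identical across all nine models.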

12 Mathematical Problems in Engineering

Figure 8. Four different levels of noise applied to a YALE database image: (a) 0%; (b) 5%; (c) 10%; (d) 15%.

[Figure 9, panels (a) and (b): accuracy (%) versus noise percentage (0–15%) on the YALE and ORL databases, comparing CEPO-SVM with PSO-SVM, GA-SVM, EPO-SVM, CA-SVM, MFO-SVM, GWO-SVM, CFA-SVM, and EPSEO-SVM.]

This verifies that the robustness of CEPO-SVM is superior to that of the other eight models.
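The robustness experiment above corrupts a fixed percentage of pixels (0%, 5%, 10%, and 15%, as in Figure 8). The paper does not spell out the noise model, so the sketch below assumes salt-and-pepper noise, a common choice for this kind of test:

```python
import random

def add_salt_pepper(image, percent, seed=0):
    """Corrupt `percent`% of pixels with salt (255) or pepper (0) noise.

    `image` is a list of rows of grayscale values in [0, 255]. The noise
    model is an assumption; the paper only specifies the noise percentages.
    """
    rng = random.Random(seed)
    rows, cols = len(image), len(image[0])
    noisy = [row[:] for row in image]  # leave the input image untouched
    n_corrupt = round(rows * cols * percent / 100.0)
    # Corrupt distinct pixel positions; each becomes salt or pepper at random.
    positions = [(r, c) for r in range(rows) for c in range(cols)]
    for r, c in rng.sample(positions, n_corrupt):
        noisy[r][c] = 255 if rng.random() < 0.5 else 0
    return noisy

img = [[128] * 10 for _ in range(10)]   # flat 10x10 test image
noisy = add_salt_pepper(img, 15)        # 15% noise, as in Figure 8(d)
changed = sum(1 for r in range(10) for c in range(10) if noisy[r][c] != 128)
```

Applying this at each noise level to every test image and re-running the classifier reproduces the kind of accuracy-versus-noise curves shown in Figure 9.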

6.5. Run Time Analysis. The average training times over thirty independent runs of the proposed CEPO-SVM and the eight competitor intelligent-algorithm-based SVMs are shown in Table 10. As Table 10 shows, on all four databases the average training time of the proposed CEPO-SVM exceeds that of the six single-algorithm models, i.e., MFO-SVM, GWO-SVM, PSO-SVM, GA-SVM, EPO-SVM, and CA-SVM. This may be because hybridizing two algorithms increases the training time. Although CEPO-SVM takes more time than these six single-algorithm models, it can still be considered better because of its higher classification performance. However, on all four databases the average training time of the proposed CEPO-SVM is less than that of the two hybrid-algorithm models, i.e., CFA-SVM and EPSEO-SVM. Clearly, CEPO-SVM outperforms these two hybrid-algorithm models.

6.6. Limitation Analysis. Like other hybrid algorithms, although CEPO delivers superior face recognition performance, its larger computational load makes each iteration more expensive than in single algorithms. Besides, when CEPO is applied to very high-dimensional optimization problems, its performance may degrade.

[Figure 9, panels (c) and (d): accuracy (%) versus noise percentage (0–15%) on the UMIST and FERET databases for the same nine models.]

Figure 9. Noise percentage versus accuracy on four databases: (a) YALE database; (b) ORL database; (c) UMIST database; (d) FERET database.

Table 10. Average training time (s) obtained by the nine models on the four databases.

Model      | YALE  | ORL   | UMIST  | FERET
MFO-SVM    | 47.93 | 61.54 | 70.95  | 46.03
GWO-SVM    | 45.06 | 58.81 | 72.31  | 50.54
PSO-SVM    | 43.95 | 54.12 | 65.69  | 48.91
GA-SVM     | 69.71 | 82.73 | 98.03  | 79.42
EPO-SVM    | 40.68 | 47.49 | 68.55  | 42.87
CA-SVM     | 46.33 | 56.87 | 77.41  | 53.24
CEPO-SVM   | 59.76 | 79.23 | 95.76  | 67.34
CFA-SVM    | 78.25 | 81.96 | 112.87 | 84.59
EPSEO-SVM  | 82.43 | 83.15 | 107.43 | 86.77


7. Conclusions

In this paper, drawing on the respective advantages of EPO and CA, we propose a hybrid metaheuristic algorithm named CEPO, in which the embedded framework provided by CA lets the two algorithms reinforce each other and compensate for each other's weaknesses. The proposed algorithm has been evaluated on six benchmark test functions against eight state-of-the-art algorithms. Furthermore, we apply this hybrid metaheuristic to the automatic learning of the parameters of SVM to solve the classification problem of face recognition. The proposed classifier model has been tested on four commonly and internationally used face databases, in comparison with models based on eight other state-of-the-art algorithms. The experimental results show that the proposed model performs better in terms of accuracy, convergence rate, stability, robustness, and run time.

We suggest future work in several directions. First, the proposed model has only been evaluated on four face databases; other, newer face databases should be tested to extend the current work. Second, the proposed hybrid algorithm should be applied to other classifiers for face recognition. Lastly, the proposed hybrid algorithm should be applied to other optimization problems, such as digital filter design, cognitive radio spectrum allocation, and image segmentation.

List of Definitions of Partial Parameters

x_i:      The position of the ith emperor penguin in the D-dimensional search space
s^t:      Situational knowledge component
N_j^t:    Normative knowledge in the jth dimension and the tth generation
I_j^t:    The interval of normative knowledge in the jth dimension
l_j^t:    The lower boundary in the jth dimension and the tth generation
L_j^t:    The objective value of the lower boundary l_j^t of the jth parameter
u_j^t:    The upper boundary in the jth dimension and the tth generation
U_j^t:    The objective value of the upper boundary u_j^t of the jth parameter
D_ep^t:   The distance between the emperor penguin and the optimal solution
M:        The movement parameter
c:        The scalar
C:        The penalty parameter
ξ_h:      The relaxing variables
a_h:      The training samples
α_h:      The Lagrange multipliers
r:        The scale of the Gabor filter
o:        The orientation of the Gabor filter
Q(δ):     The concatenated column vector
Σ_Q(δ):   The covariance matrix of the augmented feature vector Q(δ)
Ω:        The orthogonal eigenvector matrix
Λ:        The diagonal eigenvalue matrix
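The normative-knowledge symbols l_j^t, L_j^t, u_j^t, and U_j^t follow the standard cultural algorithm convention: an accepted individual tightens (or widens) the per-dimension interval I_j^t depending on where it falls and how good its objective value is. A minimal sketch of that boundary update for a single dimension, assuming minimization (the exact CEPO update rule is not reproduced here):

```python
def update_normative(lower, L_val, upper, U_val, x_j, f_x):
    """One-dimensional normative-knowledge update (minimization convention).

    lower / upper: current boundaries l_j^t / u_j^t of the interval I_j^t.
    L_val / U_val: objective values associated with those boundaries.
    x_j, f_x:      an accepted individual's jth coordinate and objective value.
    Returns the updated (lower, L_val, upper, U_val).
    """
    # The lower boundary moves to x_j if the individual lies at or below it,
    # or if it carries a better (smaller) objective value than the boundary's.
    if x_j <= lower or f_x < L_val:
        lower, L_val = x_j, f_x
    # Symmetric rule for the upper boundary.
    if x_j >= upper or f_x < U_val:
        upper, U_val = x_j, f_x
    return lower, L_val, upper, U_val

# Example: an individual at x_j = 4.0 with objective 1.0 raises the lower
# boundary; the upper boundary keeps its already-better objective value 0.5.
l, L_val, u, U_val = update_normative(0.0, 5.0, 10.0, 0.5, 4.0, 1.0)
```

Applied per dimension over the accepted individuals of each generation, this is how the belief space steers EPO's search in the hybrid.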

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant no. 61571149), the Heilongjiang Province Key Laboratory Fund of High Accuracy Satellite Navigation and Marine Application Laboratory (Grant no. HKL-2020-Y01), and the Initiation Fund for Postdoctoral Research in Heilongjiang Province (Grant no. LBH-Q19098).


Page 13: Cultural Emperor Penguin Optimizer and Its Application for ...2020/06/11  · characteristics of spatial locality, spatial frequency, and orientationselectivity. Let Y(z) represent

(a) (b) (c) (d)

Figure 8 Four different levels of noise image on YALE database (a) 0 (b) 5 (c) 10 (d) 15

0 5 10 15Noise precentages ()

65

70

75

80

85

90

95

100

Acc

urac

y (

)

CEPO-SVMPSO-SVMGA-SVMEPO-SVMCA-SVM

MFO-SVMGWO-SVMCFA-SVMEPSEO-SVM

(a)

0 5 10 15Noise percentages ()

75

80

85

90

95

100

Acc

urac

y (

)

CEPO-SVMPSO-SVMGA-SVMEPO-SVMCA-SVM

MFO-SVMGWO-SVMCFA-SVMEPSEO-SVM

(b)

Figure 9 Continued

Mathematical Problems in Engineering 13

verifies that robustness of CEPO-SVM is superior to theother eight models

65 Run Time Analysis )e average training time of thirtyindependent runs by the proposed CEPO-SVM and eightcompetitor intelligent algorithm-based SVMs are shown inTable 10 As we can see from Table 10 in case of all fourdatabases the average training time by the proposed CEPO-SVM is more than that of six models with single algorithmsie MFO-SVM GWO-SVM PSO-SVM GA-SVM EPO-SVM and CA-SVM )is may be because hybrid of two al-gorithms increases the training time Although CEPO-SVMtakes more time than these six models with single algo-rithms it can be considered better than these six models

because of its high classification performance However forall four databases the average training time by the pro-posed CEPO-SVM is less than that of two models withhybrid algorithms ie CFA-SVM and EPSEO-SVM Ob-viously CEPO-SVM has better performance than these twomodels with hybrid algorithms

66 Limitation Analysis Like other hybrid algorithms al-though CEPO provides superior performance in face rec-ognition due to large amount of computation the proposedCEPO will cost more time for iterations compared withsingle algorithms Besides when CEPO is applied to solvesuper-dimensional optimization problems the performancemay be reduced

70

75

80

85

90

95

100

Acc

urac

y (

)

0 5 10 15Noise percentages ()

CEPO-SVMPSO-SVMGA-SVMEPO-SVMCA-SVM

MFO-SVMGWO-SVMCFA-SVMEPSEO-SVM

(c)

70

75

80

85

90

95

100

Acc

urac

y (

)

0 5 10 15Noise percentages ()

CEPO-SVMPSO-SVMGA-SVMEPO-SVMCA-SVM

MFO-SVMGWO-SVMCFA-SVMEPSEO-SVM

(d)

Figure 9 Noise percentages versus accuracy on four databases (a) YALE database (b) ORL database (c) UMIST database (d) FERETdatabase

Table 10 Average training time (s) obtained by nine models on four databases

Training time (s)Databases

YALE ORL UMIST FERETMFO-SVM 4793 6154 7095 4603GWO-SVM 4506 5881 7231 5054PSO-SVM 4395 5412 6569 4891GA-SVM 6971 8273 9803 7942EPO-SVM 4068 4749 6855 4287CA-SVM 4633 5687 7741 5324CEPO-SVM 5976 7923 9576 6734CFA-SVM 7825 8196 11287 8459EPSEO-SVM 8243 8315 10743 8677

14 Mathematical Problems in Engineering

7 Conclusions

In this paper with respective advantages of EPO and CA wepropose a hybrid metaheuristic algorithm named CEPO withthe help of the embedded framework provided by CA topromote each other and overcome each otherrsquos self-defects)e proposed algorithm has been experimented on sixbenchmark test functions compared with other eight state-of-the-art algorithms Furthermore we apply this hybrid met-aheuristic algorithm to automatic learning of the networkparameters of SVM to solve the classification problem for facerecognition )e proposed classifier model has been tested onfour face databases that are used commonly and interna-tionally by comparing with models with other eight state-of-the-art algorithms )e experimental results show that theproposed model has better performance in terms of accuracyconvergence rate stability robustness and run time

We provide future works in different directions Firstthe proposed model has only been experimented on fourface databases however other new face databases should betested to extend the current work Second the proposedhybrid algorithm should be applied to other classifiers forface recognition Lastly the proposed hybrid algorithmshould be applied to solve other optimization problemssuch as digital filters design cognitive radio spectrum al-location and image segmentation

List of Definitions of Partial Parameters

xi )e position of the ith emperor penguin in theD-dimensional search space

st Situational knowledge componentNt

j Normative knowledge in the jth dimension and inthe tth generation

Itj )e interval of normative knowledge in the jth

dimensionltj )e lower boundary in the jth dimension and in

the tth generationLt

j )e objective value of the lower boundary ltj of thejth parameter

utj )e higher boundary in the jth dimension and in

the tth generationUt

j )e objective value of the upper boundary utj of the

jth parameterDt

ep )e distance between the emperor penguin and theoptimal solution

M )e movement parameterc )e scalarC )e penalty parameterξh )e relaxing variablesah )e training samplesαh )e Lagrange multipliersr )e scale of Gabor filtero )e orientation of Gabor filterQ(δ) )e concatenated column vector1113936 Q(δ) )e covariance matrix of the augmented feature

vector Q(δ)

Ω )e orthogonal eigenvector matrixΛ )e diagonal eigenvalue matrix

Data Availability

)e data used to support the findings of this study are in-cluded within the article

Conflicts of Interest

)e authors declare that they have no conflicts of interest

Acknowledgments

)is research was supported by the National Natural ScienceFoundation of China (Grant no 61571149) HeilongjiangProvince Key Laboratory Fund of High Accuracy SatelliteNavigation and Marine Application Laboratory (Grant noHKL-2020-Y01) and Initiation Fund for Postdoctoral Re-search in Heilongjiang Province (Grant no LBH-Q19098)

References

[1] S Jiang H Mao Z Ding and Y Fu ldquoDeep decision treetransfer boostingrdquo IEEE Transactions on Neural Networks andLearning Systems vol 31 no 2 pp 383ndash395 2020

[2] Y Xia and J Wang ldquoA bi-projection neural network forsolving constrained quadratic optimization problemsrdquo IEEETransactions on Neural Networks and Learning Systemsvol 27 no 2 pp 214ndash224 2016

[3] S Zhang X Li M Zong X Zhu and RWang ldquoEfficient kNNclassification with different numbers of nearest neighborsrdquoIEEE Transactions on Neural Networks and Learning Systemsvol 29 no 5 pp 1774ndash1785 2018

[4] A Mathur and G M Foody ldquoMulticlass and binary SVMclassification implications for training and classificationusersrdquo IEEE Geoscience and Remote Sensing Letters vol 5no 2 pp 241ndash245 2008

[5] J Ruan H Jiang X Li Y Shi F T S Chan and W Rao ldquoAgranular GA-SVM predictor for big data in agricultural cyber-physical systemsrdquo IEEE Transactions on Industrial Infor-matics vol 15 no 12 pp 6510ndash6521 2019

[6] F Melgani and Y Bazi ldquoClassification of electrocardiogramsignals with support vector machines and particle swarmoptimizationrdquo IEEE Transactions on Information Technologyin Biomedicine vol 12 no 5 pp 667ndash677 2008

[7] L Wanling W Zhensheng and S Xiangjun ldquoParametersoptimization of support vector machine based on simulatedannealing and improved QPSOrdquo in Proceedings of the 2012International Conference on Industrial Control and ElectronicsEngineering pp 1089ndash1091 Xirsquoan China August 2012

[8] S Mirjalili S M Mirjalili S M Mirjalili and A Lewis ldquoGreywolf optimizerrdquo Advances in Engineering Software vol 69pp 46ndash61 2014

[9] S Mirjalili ldquoMoth-flame optimization algorithm a novelnature-inspired heuristic paradigmrdquo Knowledge-based Sys-tems vol 89 pp 228ndash249 2015

[10] S Kaur K Lalit A L S Awasthi et al ldquoTunicate swarmalgorithm a new bio-inspired based metaheuristic paradigmfor global optimizationrdquo Engineering Applications of ArtificialIntelligence vol 90 Article ID 103541 2020

[11] M Dehghani Z Montazeri O P Malik et al ldquoA newmethodology called dice game optimizer for capacitorplacement in distribution systemsrdquo Electrical Engineering ampElectromechanics vol 1 pp 61ndash64 2020

Mathematical Problems in Engineering 15

[12] G Dhiman and V Kumar ldquoEmperor penguin optimizer abio-inspired algorithm for engineering problemsrdquo Knowl-edge-Based Systems vol 159 pp 20ndash50 2018

[13] S Kumar Baliarsingh W Ding S Vipsita et al ldquoA memeticalgorithm using emperor penguin and social engineeringoptimization for medical data classificationrdquo Applied SoftComputing vol 85 2019

[14] Reynolds and G Robert ldquoAn introduction to cultural algo-rithmsrdquo in Proceedings of the 6ird Annual Conference onEvolutionary Programming pp 131ndash139 San Diego CAUSA January 1994

[15] C Lin C Chen and C Lin ldquoA hybrid of cooperative particleswarm optimization and cultural algorithm for neural fuzzynetworks and its prediction applicationsrdquo IEEE Transactionson Systems Man and Cybernetics vol 39 no 1 pp 55ndash682009

[16] H Gao C Xu and C Li ldquoQuantum-inspired cultural bac-terial foraging algorithm for direction finding of impulsenoiserdquo International Journal of Innovative Computing andApplications vol 6 no 1 pp 44ndash54 2014

[17] H Gao and M Diao ldquoCultural firework algorithm and itsapplication for digital filters designrdquo International Journal ofModelling Identification and Control vol 14 no 4pp 324ndash331 2011

[18] C Liu ldquoGabor-based kernel PCA with fractional powerpolynomial models for face recognitionrdquo IEEE Transactionson Pattern Analysis and Machine Intelligence vol 26 no 5pp 572ndash581 2004

[19] J Ilonen J-K Kamarainen and H Kalviainen ldquoEfficientcomputation of gabor featuresrdquo Research Report 100 Lap-peenranta University of Technology Department of Infor-mation Technology Lappeenranta Finland 2005

[20] C Liu and H Wechsler ldquoGabor feature based classificationusing the enhanced Fisher linear discriminant model for facerecognitionrdquo IEEE Transactions on Image Processing APublication of the IEEE Signal Processing Society vol 11 no 4pp 467ndash476 2002

[21] M D Farrell and R M Mersereau ldquoOn the impact of PCAdimension reduction for hyperspectral detection of difficulttargetsrdquo IEEE Geoscience and Remote Sensing Letters vol 2no 2 pp 192ndash195 2005

[22] B Fei and J Jinbai Liu ldquoBinary tree of SVM a new fastmulticlass training and classification algorithmrdquo IEEETransactions on Neural Networks vol 17 no 3 pp 696ndash7042006

16 Mathematical Problems in Engineering

Page 14: Cultural Emperor Penguin Optimizer and Its Application for ...2020/06/11  · characteristics of spatial locality, spatial frequency, and orientationselectivity. Let Y(z) represent

verifies that robustness of CEPO-SVM is superior to theother eight models

65 Run Time Analysis )e average training time of thirtyindependent runs by the proposed CEPO-SVM and eightcompetitor intelligent algorithm-based SVMs are shown inTable 10 As we can see from Table 10 in case of all fourdatabases the average training time by the proposed CEPO-SVM is more than that of six models with single algorithmsie MFO-SVM GWO-SVM PSO-SVM GA-SVM EPO-SVM and CA-SVM )is may be because hybrid of two al-gorithms increases the training time Although CEPO-SVMtakes more time than these six models with single algo-rithms it can be considered better than these six models

because of its high classification performance However forall four databases the average training time by the pro-posed CEPO-SVM is less than that of two models withhybrid algorithms ie CFA-SVM and EPSEO-SVM Ob-viously CEPO-SVM has better performance than these twomodels with hybrid algorithms

66 Limitation Analysis Like other hybrid algorithms al-though CEPO provides superior performance in face rec-ognition due to large amount of computation the proposedCEPO will cost more time for iterations compared withsingle algorithms Besides when CEPO is applied to solvesuper-dimensional optimization problems the performancemay be reduced

70

75

80

85

90

95

100

Acc

urac

y (

)

0 5 10 15Noise percentages ()

CEPO-SVMPSO-SVMGA-SVMEPO-SVMCA-SVM

MFO-SVMGWO-SVMCFA-SVMEPSEO-SVM

(c)

70

75

80

85

90

95

100

Acc

urac

y (

)

0 5 10 15Noise percentages ()

CEPO-SVMPSO-SVMGA-SVMEPO-SVMCA-SVM

MFO-SVMGWO-SVMCFA-SVMEPSEO-SVM

(d)

Figure 9 Noise percentages versus accuracy on four databases (a) YALE database (b) ORL database (c) UMIST database (d) FERETdatabase

Table 10 Average training time (s) obtained by nine models on four databases

Training time (s)Databases

YALE ORL UMIST FERETMFO-SVM 4793 6154 7095 4603GWO-SVM 4506 5881 7231 5054PSO-SVM 4395 5412 6569 4891GA-SVM 6971 8273 9803 7942EPO-SVM 4068 4749 6855 4287CA-SVM 4633 5687 7741 5324CEPO-SVM 5976 7923 9576 6734CFA-SVM 7825 8196 11287 8459EPSEO-SVM 8243 8315 10743 8677

14 Mathematical Problems in Engineering

7 Conclusions

In this paper with respective advantages of EPO and CA wepropose a hybrid metaheuristic algorithm named CEPO withthe help of the embedded framework provided by CA topromote each other and overcome each otherrsquos self-defects)e proposed algorithm has been experimented on sixbenchmark test functions compared with other eight state-of-the-art algorithms Furthermore we apply this hybrid met-aheuristic algorithm to automatic learning of the networkparameters of SVM to solve the classification problem for facerecognition )e proposed classifier model has been tested onfour face databases that are used commonly and interna-tionally by comparing with models with other eight state-of-the-art algorithms )e experimental results show that theproposed model has better performance in terms of accuracyconvergence rate stability robustness and run time

We suggest future work in several directions. First, the proposed model has only been tested on four face databases; other new face databases should be tested to extend the current work. Second, the proposed hybrid algorithm should be applied to other classifiers for face recognition. Lastly, the proposed hybrid algorithm should be applied to other optimization problems, such as digital filter design, cognitive radio spectrum allocation, and image segmentation.

List of Definitions of Partial Parameters

x_i: The position of the ith emperor penguin in the D-dimensional search space
s^t: Situational knowledge component
N_j^t: Normative knowledge in the jth dimension and in the tth generation
I_j^t: The interval of normative knowledge in the jth dimension
l_j^t: The lower boundary in the jth dimension and in the tth generation
L_j^t: The objective value of the lower boundary l_j^t of the jth parameter
u_j^t: The upper boundary in the jth dimension and in the tth generation
U_j^t: The objective value of the upper boundary u_j^t of the jth parameter
D_ep^t: The distance between the emperor penguin and the optimal solution
M: The movement parameter
c: The scalar
C: The penalty parameter
ξ_h: The relaxing variables
a_h: The training samples
α_h: The Lagrange multipliers
r: The scale of the Gabor filter
o: The orientation of the Gabor filter
Q(δ): The concatenated column vector
Σ_Q(δ): The covariance matrix of the augmented feature vector Q(δ)
Ω: The orthogonal eigenvector matrix
Λ: The diagonal eigenvalue matrix
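As a concrete illustration of the last three symbols, the covariance matrix Σ_Q(δ) of the feature vectors factors as Σ = Ω Λ Ω^T, and projecting each sample onto the leading eigenvectors in Ω gives the PCA dimension reduction applied before classification. The sketch below works a 2-D toy case in closed form; the data and dimensionality are invented for illustration and stand in for the much higher-dimensional Gabor feature vectors.

```python
import math

# Hypothetical toy 2-D "feature vectors" standing in for the augmented
# Gabor feature vectors Q(delta).
X = [(2.0, 0.3), (0.1, 1.0), (-2.0, -0.3), (-0.1, -1.0)]

n = len(X)
mean = [sum(v[i] for v in X) / n for i in range(2)]

# Sample covariance matrix Sigma = [[a, b], [b, d]].
def cov(i, j):
    return sum((v[i] - mean[i]) * (v[j] - mean[j]) for v in X) / (n - 1)

a, b, d = cov(0, 0), cov(0, 1), cov(1, 1)

# Closed-form eigendecomposition of the symmetric 2x2 matrix:
# Sigma = Omega * Lambda * Omega^T, with Lambda diagonal (eigenvalues)
# and Omega orthogonal (unit eigenvectors as columns).
tr, det = a + d, a * d - b * b
disc = math.sqrt(tr * tr / 4.0 - det)
lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc  # lam1 >= lam2

# Unit eigenvector for the leading eigenvalue lam1.
if abs(b) > 1e-12:
    v1 = (lam1 - d, b)
else:
    v1 = (1.0, 0.0) if a >= d else (0.0, 1.0)
norm = math.hypot(*v1)
v1 = (v1[0] / norm, v1[1] / norm)

# PCA reduction to one dimension: project each centred sample onto v1.
projected = [(x - mean[0]) * v1[0] + (y - mean[1]) * v1[1] for x, y in X]
print(lam1, lam2, projected)
```

In practice the covariance matrix is large and the decomposition is computed numerically, keeping only the eigenvectors whose eigenvalues carry most of the variance.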

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant no. 61571149), the Heilongjiang Province Key Laboratory Fund of High Accuracy Satellite Navigation and Marine Application Laboratory (Grant no. HKL-2020-Y01), and the Initiation Fund for Postdoctoral Research in Heilongjiang Province (Grant no. LBH-Q19098).

References

[1] S. Jiang, H. Mao, Z. Ding, and Y. Fu, "Deep decision tree transfer boosting," IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 2, pp. 383–395, 2020.

[2] Y. Xia and J. Wang, "A bi-projection neural network for solving constrained quadratic optimization problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 2, pp. 214–224, 2016.

[3] S. Zhang, X. Li, M. Zong, X. Zhu, and R. Wang, "Efficient kNN classification with different numbers of nearest neighbors," IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 5, pp. 1774–1785, 2018.

[4] A. Mathur and G. M. Foody, "Multiclass and binary SVM classification: implications for training and classification users," IEEE Geoscience and Remote Sensing Letters, vol. 5, no. 2, pp. 241–245, 2008.

[5] J. Ruan, H. Jiang, X. Li, Y. Shi, F. T. S. Chan, and W. Rao, "A granular GA-SVM predictor for big data in agricultural cyber-physical systems," IEEE Transactions on Industrial Informatics, vol. 15, no. 12, pp. 6510–6521, 2019.

[6] F. Melgani and Y. Bazi, "Classification of electrocardiogram signals with support vector machines and particle swarm optimization," IEEE Transactions on Information Technology in Biomedicine, vol. 12, no. 5, pp. 667–677, 2008.

[7] L. Wanling, W. Zhensheng, and S. Xiangjun, "Parameters optimization of support vector machine based on simulated annealing and improved QPSO," in Proceedings of the 2012 International Conference on Industrial Control and Electronics Engineering, pp. 1089–1091, Xi'an, China, August 2012.

[8] S. Mirjalili, S. M. Mirjalili, and A. Lewis, "Grey wolf optimizer," Advances in Engineering Software, vol. 69, pp. 46–61, 2014.

[9] S. Mirjalili, "Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm," Knowledge-Based Systems, vol. 89, pp. 228–249, 2015.

[10] S. Kaur, L. K. Awasthi, A. L. Sangal, et al., "Tunicate swarm algorithm: a new bio-inspired based metaheuristic paradigm for global optimization," Engineering Applications of Artificial Intelligence, vol. 90, Article ID 103541, 2020.

[11] M. Dehghani, Z. Montazeri, O. P. Malik, et al., "A new methodology called dice game optimizer for capacitor placement in distribution systems," Electrical Engineering & Electromechanics, vol. 1, pp. 61–64, 2020.


[12] G. Dhiman and V. Kumar, "Emperor penguin optimizer: a bio-inspired algorithm for engineering problems," Knowledge-Based Systems, vol. 159, pp. 20–50, 2018.

[13] S. K. Baliarsingh, W. Ding, S. Vipsita, et al., "A memetic algorithm using emperor penguin and social engineering optimization for medical data classification," Applied Soft Computing, vol. 85, 2019.

[14] R. G. Reynolds, "An introduction to cultural algorithms," in Proceedings of the Third Annual Conference on Evolutionary Programming, pp. 131–139, San Diego, CA, USA, January 1994.

[15] C. Lin, C. Chen, and C. Lin, "A hybrid of cooperative particle swarm optimization and cultural algorithm for neural fuzzy networks and its prediction applications," IEEE Transactions on Systems, Man, and Cybernetics, vol. 39, no. 1, pp. 55–68, 2009.

[16] H. Gao, C. Xu, and C. Li, "Quantum-inspired cultural bacterial foraging algorithm for direction finding of impulse noise," International Journal of Innovative Computing and Applications, vol. 6, no. 1, pp. 44–54, 2014.

[17] H. Gao and M. Diao, "Cultural firework algorithm and its application for digital filters design," International Journal of Modelling, Identification and Control, vol. 14, no. 4, pp. 324–331, 2011.

[18] C. Liu, "Gabor-based kernel PCA with fractional power polynomial models for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 572–581, 2004.

[19] J. Ilonen, J.-K. Kamarainen, and H. Kalviainen, "Efficient computation of Gabor features," Research Report 100, Lappeenranta University of Technology, Department of Information Technology, Lappeenranta, Finland, 2005.

[20] C. Liu and H. Wechsler, "Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition," IEEE Transactions on Image Processing, vol. 11, no. 4, pp. 467–476, 2002.

[21] M. D. Farrell and R. M. Mersereau, "On the impact of PCA dimension reduction for hyperspectral detection of difficult targets," IEEE Geoscience and Remote Sensing Letters, vol. 2, no. 2, pp. 192–195, 2005.

[22] B. Fei and J. Liu, "Binary tree of SVM: a new fast multiclass training and classification algorithm," IEEE Transactions on Neural Networks, vol. 17, no. 3, pp. 696–704, 2006.
