

Advanced Particle Swarm Optimization-Based PID Controller Parameters Tuning

Abolfazl Jalilvand, Department of Electrical Engineering, Zanjan University, Zanjan, Iran, [email protected]
Ali Kimiyaghalam, Department of Electrical Engineering, Zanjan University, Zanjan, Iran, [email protected]
Ahmad Ashouri, Department of Electrical Engineering, Zanjan University, Zanjan, Iran, [email protected]
Meisam Mahdavi, Department of Electrical Engineering, Zanjan University, Zanjan, Iran, [email protected]

Abstract- PID parameter optimization is an important problem in the control field. Particle swarm optimization (PSO) is a powerful stochastic evolutionary algorithm used to find the global optimum solution in a search space. However, it has been observed that the standard PSO algorithm suffers from premature and local convergence when solving complex optimization problems. To resolve this problem, an advanced particle swarm optimization (APSO) is proposed in this paper; the new algorithm accelerates the searching speed, and hence the convergence, of the original PSO. The algorithms are simulated in MATLAB. The simulation results show that the PID controller tuned with APSO has a fast convergence rate and better dynamic performance.

Keywords- Advanced PSO Algorithm; Genetic Algorithm; Parameter Optimization; PID Parameters Tuning.

I. INTRODUCTION

The Proportional-Integral-Derivative (PID) controller is one of the earliest control techniques and is still widely used in industry because of its easy implementation, robust performance, and the simple physical meaning of its parameters. To achieve appropriate closed-loop performance, the three parameters of the PID controller must be tuned [1-3]. Tuning methods for PID parameters are classified into traditional and intelligent methods. With conventional methods such as Ziegler-Nichols [4] and simplex methods [5] it is hard to determine optimal PID parameters, and they usually do not yield good tuning, i.e. they produce surge and large overshoot. Recently, intelligent approaches such as the genetic algorithm [6-8], particle swarm optimization [9] and the artificial fish swarm algorithm [2] have been proposed for PID optimization; among them, the genetic algorithm (GA) has received much interest and has been applied successfully to the optimal tuning of PID controller parameters [10].

The genetic algorithm may not be efficient for solving some complex optimization problems. This degradation in efficiency is especially apparent in applications where the parameters being optimized are highly correlated [11]. Particle swarm optimization (PSO) is an evolutionary computation technique developed by Kennedy and Eberhart in 1995. It finds the global optimum solution in a search space through the interactions of individuals in a swarm of particles [12]. Compared with the genetic algorithm, PSO is characterized by a simple concept, easy implementation, and good computational efficiency [13]. However, the standard PSO algorithm also has some disadvantages, such as a premature convergence phenomenon similar to that of the GA [14]. Although improved methods, such as enlarging the swarm scale and dynamically adjusting the inertia weight factor, can improve the optimization performance to some extent, their convergence speed is slow. In this paper an advanced particle swarm optimization (APSO) is proposed which has a better searching speed than the original PSO algorithm. This technique replaces the original constant terms with adaptively changing terms, so that the parameters of the original PSO algorithm change with the convergence rate, as reflected by the cost function. As a result, the searching speed of the advanced method is much faster than that of the original method. APSO is indeed more efficient in improving the searching capability and the convergence characteristics. The simulation results show that the PID controller tuned with the APSO algorithm has better optimization performance than with the GA.

978-1-4244-2824-3/08/$25.00 ©2008 IEEE

This paper is organized as follows: the controller design of PD, PI and PID is presented in Sec. 2. The fitness function and the genetic algorithm are described in Secs. 3 and 4, respectively. Particle swarm optimization and the advanced PSO algorithm are explained in Secs. 5 and 6, respectively. Sec. 7 describes the simulation results. Finally, conclusions are presented in Sec. 8.

II. DESIGN OF PI, PD AND PID CONTROLLERS

To achieve a balance among the control characteristics (response speed, settling time, and a proper overshoot rate, all of which guarantee system stability), the PID controller is employed. Application of the PID controller involves choosing the parameters kp, kI and kD that provide satisfactory closed-loop performance. The basic method is trial and error, which is time consuming.

Proceedings of the 12th IEEE International Multitopic Conference, December 23-24, 2008

A. PD Controller Design

In this controller, both the error and its derivative are employed for control. The compensating transfer function of this type of controller is as follows:

C_PD(s) = kp + kD s    (1)

By using the above equation, it is observed that the PD controller is equivalent to adding a simple zero at s = -kp/kD to the open-loop transfer function, which improves the transient response. From another point of view, the PD controller can be employed to predict large errors and try to correct them before they happen.

B. PI Controller Design

The PI controller has wide application in process control and adjustment systems. Integral controllers perform their correction over the course of time; this controller increases the system order and reduces the steady-state error. In the PI controller, the error and its integral are employed for control, and the compensating transfer function is as follows:

C_PI(s) = kp + kI/s    (2)

When the desired closed-loop pole s1 is real, the zero of the PD controller (s = -kp/kD) and the zero of the PI controller (s = -kI/kp) are fixed, and the gains are then obtained to satisfy the angle and magnitude criteria. In the design of the PID controller, the value of kI is chosen to reach the intended steady-state error.

PID control is a linear control methodology with a very simple control structure (see Fig. 1). PID controllers operate directly on the error signal, which is the difference between the desired output and the actual output, and generate the actuation signal that drives the plant. This type of controller has three basic terms: proportional action, in which the actuation signal is proportional to the error signal; integral action, where the actuation signal is proportional to the time integral of the error signal; and derivative action, where the actuation signal is proportional to the time derivative of the error signal. In PID controller design, the gains kp, kI and kD that bring the closed-loop feedback system to the desired response in the least time must be determined, which requires a long range of trial and error.

To design a particular control loop, the values of the three constants (kp, kI and kD) have to be adjusted so that the control input provides acceptable performance from the plant. To get a first approach to an acceptable solution, several controller design methods can be applied: for example, classical control methods in the frequency domain, or automatic methods like Ziegler-Nichols, the best known of all PID tuning methodologies. Although these methods provide a first approximation, the resulting response usually needs further manual retuning by the designer before implementation.
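The three-term law just described can be sketched as a simple discrete-time controller. The snippet below is an illustrative Python fragment (the paper's simulations were done in MATLAB), with arbitrary gains rather than the tuned values of Table 1:

```python
class PID:
    """Discrete three-term controller: u = kp*e + kI*integral(e) + kD*de/dt,
    using rectangular integration and a backward-difference derivative."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.e_prev = None

    def step(self, e):
        self.integral += e * self.dt
        de = 0.0 if self.e_prev is None else (e - self.e_prev) / self.dt
        self.e_prev = e
        return self.kp * e + self.ki * self.integral + self.kd * de

# Constant error of 1.0: the integral term grows each step
ctrl = PID(kp=2.0, ki=1.0, kd=0.5, dt=0.1)
u1 = ctrl.step(1.0)   # 2.0 + 1.0*0.1 + 0 = 2.1
u2 = ctrl.step(1.0)   # 2.0 + 1.0*0.2 + 0 = 2.2
```

Rectangular integration and a backward difference are the simplest discretizations; practical implementations usually add derivative filtering and anti-windup on top of this skeleton.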

C. PID Controller Design

One of the most common controlling devices in the market is the PID controller. The positioning of the controller in a system is depicted in Fig. 1.

Figure 1. PID controller positioning in a system.

Different compositions of the proportional, integral and derivative actions give different behavior. The duty of the control engineer is to adjust the gain coefficients to attain error reduction and a fast dynamic response simultaneously. The transfer function of the PID controller is defined as follows:

C(s) = kp + kI/s + kD s    (3)

For the PI and PD controllers, the gain that is not required is set to zero in the PID controller. The pole-placement relations above can be applied directly when the dominant closed-loop pole s1 is complex.

III. FITNESS FUNCTION

The fitness function must be properly defined. As usual, control tuning has to achieve different types of specifications, such as:

1) Obtaining dynamic performance, evaluated by:
a) minimization of a performance index such as the IAE (integral of the absolute value of the error);
b) time-domain specifications.
2) Obtaining robustness properties:
a) robustness against model error;
b) robustness against input noise.

The integral error is usually used as the performance index in PID parameter tuning, while the ITAE is often used in optimal analysis and design. In this study, the fitness function is defined as follows:

F = ∫_0^∞ t |e(t)| dt    (4)

where F and e(t) are the fitness function and the error, respectively.

At first, the PID parameters are evaluated roughly using a conventional tuning method to get a smaller search space; for example, kp = 2.6, kI = 0.3 and kD = 0.6 can be obtained with the Ziegler-Nichols experiential method [4]. A larger parameter search space is then taken around these values: 0 ≤ kp ≤ 7.8, 0 ≤ kI ≤ 0.9 and 0 ≤ kD ≤ 1.8.
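As a sketch, the ITAE index of Eq. (4) can be approximated numerically from a sampled error signal. This Python fragment (the function name is ours, not the paper's) truncates the integral at a finite horizon and uses the trapezoidal rule:

```python
import numpy as np

def itae(t, e):
    """ITAE index of Eq. (4), F = integral of t*|e(t)| dt,
    approximated with the trapezoidal rule on sampled data."""
    return np.trapz(t * np.abs(e), t)

# Error decaying as e(t) = exp(-t) on [0, 10]:
# the exact value is 1 - 11*exp(-10), about 0.9995
t = np.linspace(0.0, 10.0, 10001)
F = itae(t, np.exp(-t))
```

In a tuning loop, e(t) would come from a simulation of the closed-loop step response for each candidate (kp, kI, kD).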

IV. GENETIC ALGORITHM

The genetic algorithm is a random search method that can be used to solve non-linear systems of equations and to optimize complex problems. The basis of this algorithm is the selection of individuals. It does not need a good initial estimate to solve a problem; in other words, the solution of a complex problem can be started from weak initial estimates that are then corrected through an evolutionary process of fitness. This algorithm can be used to solve many problems, such as PID parameter optimization [15, 16].

GA evolves new generations of individuals by using knowledge from previous generations. The fundamental principle of GA is that chromosomes containing blocks of genetic information that appear in the optimal solution will increase in frequency if the opportunity of reproduction of each chromosome is related, in some way, to its fitness value. Thus, GA is both an explorative and an exploitative method for solving problems that are not tractable by traditional methods.

A typical example occurs when a potential solution of a problem is represented as a set of parameters, which in their turn are represented by strings of characters. Here, the n-dimensional decision vector X is denoted by n marks x_i as X = x_1 x_2 ... x_n. Each x_i is called a gene, and X is one chromosome or individual, consisting of n genes. This process is called coding. The operation object is a population of M chromosomes. Genetic operations are applied to simulate the evolution mechanism of the individuals of the initial population: the individuals with higher fitness values are passed down to the next generation, and after a series of evolutions one or more individuals constitute the optimal solutions.

    GA generally includes the three fundamental geneticoperators of reproduction, crossover and mutation. Theseoperators conduct the chromosomes toward better fitness.

    Selection operator selects the chromosome in thepopulation for reproduction. The more fit the chromosome, thehigher its probability of being selected for reproduction. Thus,selection is based on the survival-of-the-fittest strategy, but thekey idea is to select the better individuals of the population, asin tournament selection, where the participants compete witheach other to remain in the population. After selection of thepairs of parent strings, the crossover operator is applied to eachof these pairs.

The crossover operator involves the swapping of genetic material (bit values) between the two parent strings. Based on a predefined probability, known as the crossover probability, an even number of chromosomes are chosen randomly, and a random position is then chosen for each pair of the chosen chromosomes. The two chromosomes of each pair swap their genes after that random position. In this work, crossover is used with a probability of 0.7.

The individuals (children) resulting from each crossover operation are subjected to the mutation operator in the final step of forming the new generation. The mutation operator enhances the ability of the GA to find a near-optimal solution by maintaining a sufficient level of genetic variety in the population, which is needed to make sure that the entire solution space is used in the search for the best solution. In a sense, it serves as an insurance policy: it helps prevent the loss of genetic material. In this work, mutation is used with a probability of 0.1 per bit.
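A minimal Python sketch of such a GA, using the operator settings above (binary tournament selection, one-point crossover with probability 0.7, bitwise mutation with probability 0.1 per bit) and the search ranges of Sec. III. The cost function here is a hypothetical quadratic surrogate standing in for the ITAE evaluation of the closed loop:

```python
import random

random.seed(0)

BITS = 10                                       # bits per encoded parameter
RANGES = [(0.0, 7.8), (0.0, 0.9), (0.0, 1.8)]   # kp, kI, kD bounds (Sec. III)

def decode(chrom):
    """Map a 30-bit chromosome (Fig. 2 style) to real (kp, kI, kD)."""
    out = []
    for i, (lo, hi) in enumerate(RANGES):
        bits = chrom[i * BITS:(i + 1) * BITS]
        frac = int("".join(map(str, bits)), 2) / (2 ** BITS - 1)
        out.append(lo + frac * (hi - lo))
    return out

def cost(chrom):
    """Surrogate cost (hypothetical): squared distance to a target gain set.
    In the paper the cost would be the ITAE of the closed-loop response."""
    kp, ki, kd = decode(chrom)
    return (kp - 4.148) ** 2 + (ki - 0.899) ** 2 + (kd - 1.330) ** 2

def tournament(pop):
    """Binary tournament: the fitter of two random individuals survives."""
    a, b = random.sample(pop, 2)
    return a if cost(a) < cost(b) else b

def evolve(pop, pc=0.7, pm=0.1):
    """One generation: selection, one-point crossover (p=0.7),
    then bitwise mutation (p=0.1 per bit), as in the paper."""
    nxt = []
    while len(nxt) < len(pop):
        p1, p2 = tournament(pop)[:], tournament(pop)[:]
        if random.random() < pc:
            cut = random.randrange(1, len(p1))
            p1[cut:], p2[cut:] = p2[cut:], p1[cut:]
        nxt += [[b ^ (random.random() < pm) for b in c] for c in (p1, p2)]
    return nxt[:len(pop)]

pop = [[random.randint(0, 1) for _ in range(3 * BITS)] for _ in range(30)]
best = min(pop, key=cost)
for _ in range(30):                 # 30 generations, as in Sec. VII
    pop = evolve(pop)
    cand = min(pop, key=cost)
    if cost(cand) < cost(best):     # keep a record of the best ever seen
        best = cand
kp, ki, kd = decode(best)
```

The 0.1-per-bit mutation rate keeps diversity high, which is why the best-so-far individual is recorded explicitly rather than assumed to survive in the population.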

Since the goal is the optimization of the PID parameters (kp, kI and kD), the chromosome is structured as follows:

    Figure 2. Chromosome structure.

    Also, for more details, the flowchart of the proposed GA isshown in Fig. 3.

The selection operator chooses the best chromosomes, whose number equals the number of chromosomes in the initial population.

Figure 3. Flowchart of the proposed GA.

V. PARTICLE SWARM OPTIMIZATION ALGORITHM

The PSO algorithm was introduced by Eberhart and Kennedy in 1995 [15]. The original PSO was inspired by the behavior of a flock of birds or a school of fish during their food-searching activities. PSO is believed to be effective for multi-dimensional, linear and nonlinear problems.

In PSO, each particle i has a position vector X_i = (x_i1, ..., x_iD) and a velocity vector V_i = (v_i1, ..., v_iD) in the D-dimensional search space. Through the cost function being optimized, the best position of each particle and of the whole swarm (group) are recorded as Pbest_i = (pbest_i1, ..., pbest_iD) and Pbest_g = (pbest_g1, ..., pbest_gD), respectively [12, 17]. The following equations are used to calculate the new velocities and positions of the particles for the next fitness evaluation:

v_id(t+1) = ω v_id(t) + c1 r1 (pbest_id - x_id(t)) + c2 r2 (pbest_gd - x_id(t))    (5)

x_id(t+1) = x_id(t) + v_id(t+1),    i = 1, 2, ..., n,    d = 1, 2, ..., D    (6)

where n is the number of particles in the swarm and D is the dimension of the search space, t is the iteration number, c1 and c2 are the acceleration constants, r1 and r2 are uniformly distributed random numbers between 0 and 1, and ω is the inertia weight factor. v_id(t) is the current velocity and x_id(t) the current position of the i-th particle in the d-th dimension; pbest_id is the best position of the i-th particle and pbest_gd the best position of the group. The first term of Eq. (5), ω v_id(t), lets the particles roam in the search space. The second term, c1 r1 (pbest_id - x_id(t)), represents the individual movement, and the third term, c2 r2 (pbest_gd - x_id(t)), represents the social behavior in finding the global best solution. v_id(t) is limited by -v_dmax ≤ v_id ≤ v_dmax, where v_dmax is proportional to the velocity of convergence toward the best solution. For c1 and c2, lower values let particles move away from past target regions, while higher values pull them toward past target regions. In past experiments on PSO, c1 and c2 were often set to 2, and ω was not considered in the early versions of the algorithm. However, ω affects the number of iterations needed to find an optimal solution: if ω is high, the convergence is fast but the solution may fall into a local minimum; if it is low, the number of iterations increases. Therefore, ω should be high in the first stage and then decrease gradually:

ω = ω_max - (ω_max - ω_min) / (k × iter_max) × iter    (7)

where ω_max is the initial (maximum) weight factor, iter_max is the maximum number of iterations, and k is a constant. Usually, ω_max and ω_min are 0.9 and 0.4, respectively, and k is adjusted around 1. The whole operation process of the PSO algorithm is shown in Fig. 4.

VI. ADVANCED PSO ALGORITHM

The PSO algorithm has been recognized for its simplicity of implementation and its ability to quickly converge to a solution [15, 16]. A high searching speed is essential in determining the proper parameters when many iterations are involved. Consequently, an advanced PSO algorithm is proposed in this study. This technique introduces adaptively changing terms, so that the parameters of the original PSO algorithm change according to the convergence rate, as reflected by the cost function. The original PSO is changed as follows:

r1 = (1 - fbest_id / f_id) + rand    (8)

r2 = (1 - fbest_gd / f_id) + rand    (9)

where fbest_id and fbest_gd are the cost function values at the best position of each particle and of the whole swarm, respectively, f_id is the cost function value at the present position, and rand is a random value between 0 and 1. r1 influences the movement of the second (individual) term as a weight factor. In the early searching stage, the difference between fbest_id and f_id is relatively larger than in the last stage; accordingly, the value of (1 - fbest_id / f_id) is also larger than in the last stage. As an individual particle approaches its individual best position, its movement gradually slows down, so we can expect faster convergence than with the original algorithm. r2 has an analogous effect on the movement of the third (group) term.
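The update rules of Eqs. (5)-(7) can be sketched as follows. This is an illustrative Python version (the paper used MATLAB), exercised on a simple sphere cost rather than the ITAE index; the velocity clamp of 20% of the range is a common choice, not a value from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(cost, bounds, n=20, iters=60, c1=2.0, c2=2.0,
        w_max=0.9, w_min=0.4, k=1.0):
    """Standard PSO: velocity/position updates of Eqs. (5)-(6) with the
    linearly decreasing inertia weight of Eq. (7)."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n, lo.size))
    v = np.zeros_like(x)
    vmax = 0.2 * (hi - lo)                      # velocity clamp (common choice)
    pbest = x.copy()
    pcost = np.array([cost(p) for p in x])
    gbest = pbest[pcost.argmin()].copy()
    for t in range(iters):
        w = w_max - t * (w_max - w_min) / (k * iters)              # Eq. (7)
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (5)
        v = np.clip(v, -vmax, vmax)
        x = np.clip(x + v, lo, hi)                                 # Eq. (6)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, pcost.min()

# Toy check: recover a target gain set on a sphere cost within the
# kp, kI, kD bounds of Sec. III
bounds = np.array([[0.0, 7.8], [0.0, 0.9], [0.0, 1.8]])
target = np.array([2.6, 0.3, 0.6])
gbest, f = pso(lambda p: float(np.sum((p - target) ** 2)), bounds)
```

Swapping the sphere cost for a closed-loop ITAE evaluation turns this into the PID tuning loop described in Sec. VII.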

Because fbest_gd is supposed to be the optimal, lowest value among all the particles' cost values, Eqs. (8) and (9) are well defined. If the particles converge to the optimal value, fbest_id and f_id will approach the same value, fbest_gd:

lim (t→t_max) fbest_id = lim (t→t_max) f_id = fbest_gd    (10)

Therefore, the factors (1 - fbest_id / f_id) and (1 - fbest_gd / f_id) become zero,

lim (t→t_max) (1 - fbest_id / f_id) = lim (t→t_max) (1 - fbest_gd / f_id) = 0    (11)

so that the second and third terms move slowly. This yields the fast searching.

VII. SIMULATION RESULTS

In this paper, the three proposed algorithms are applied to a single-loop PID parameter optimization system and the obtained results are compared with each other. The transfer function of the controlled plant model is defined as follows:

G(s) = 20 / (1.5 s^2 + 4.5 s + 1)    (12)

    Figure 5. PID controller with APSO.

1) A particle with three parameters (kp, kI, kD) is determined.
2) The initial population is constructed randomly.
3) The initial positions and velocities of the particles are generated randomly.
4) The best position of each individual i, Pbest_i, and the best position of the entire swarm, Pbest_g, are calculated.
5) New velocities and positions of the particles for the next fitness evaluation are calculated by Eqs. (5) and (6).

Here, u is the input signal and y is the output signal; the difference between u and y is named e(t), which is operated on by the PID controller to generate the control signal.

The single-loop PID parameter tuning for the above-mentioned case-study system is accomplished by the genetic algorithm, the PSO algorithm and the APSO algorithm. The obtained parameters are given in Table 1.

TABLE 1. OBTAINED PARAMETERS BY THE PROPOSED METHODS

Parameters   Genetic Algorithm   PSO Algorithm   Advanced PSO Algorithm
kp           3                   3.8783          4.1480
kI           0.6606              0.8474          0.8991
kD           0.9730              1.2251          1.3305

The corresponding performance index curves and the step response curves of the GA and APSO methods are shown in Figs. 7, 8 and 9, respectively. Since the step response of the PSO approach is similar to that of the APSO algorithm, the related curve is not presented here.

Fig. 6 shows the change of the fitness values during the optimization process with the three proposed methods. It is clear from the convergence curve of the APSO method that the fitness function is better optimized compared with the PSO and GA methods.
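As a rough sketch of the kind of closed-loop simulation behind such curves, the fragment below integrates the plant of Eq. (12) under PID control with forward Euler, using the APSO gains of Table 1. This is an assumption-laden illustration (step sizes, horizon, and discretization are ours), not the paper's simulation code:

```python
import numpy as np

def step_response(kp, ki, kd, T=30.0, dt=1e-3):
    """Unit-step response of the Fig. 5 loop: plant
    G(s) = 20 / (1.5 s^2 + 4.5 s + 1) of Eq. (12) under PID control,
    integrated with forward Euler."""
    n = int(T / dt)
    y, yd = 0.0, 0.0          # plant output and its derivative
    ei, e_prev = 0.0, 1.0     # error integral; e(0) = 1 - y(0) = 1
    out = np.empty(n)
    for i in range(n):
        e = 1.0 - y
        ei += e * dt
        u = kp * e + ki * ei + kd * (e - e_prev) / dt
        e_prev = e
        ydd = (20.0 * u - 4.5 * yd - y) / 1.5   # from Eq. (12)
        yd += ydd * dt
        y += yd * dt
        out[i] = y
    return out

# APSO-tuned gains from Table 1: the output settles at the setpoint
y = step_response(4.1480, 0.8991, 1.3305)
```

With the integral action present, the steady-state output reaches the unit setpoint; an ITAE evaluation of this response is what a tuning loop would minimize.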

    Figure 4. PSO algorithm operating process.

Here t_max is the iteration number at convergence. Note that c1 and c2 in the advanced PSO take values different from the original PSO's constants: in the original, c1 and c2 are usually 2, but the advanced PSO picks about 0.5, based on experimental results.
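The adaptive coefficients of Eqs. (8)-(9), and their limit behavior in Eqs. (10)-(11), can be illustrated in a few lines (a sketch assuming positive, minimized cost values with fbest ≤ f):

```python
import random

random.seed(3)

def adaptive_factor(fbest, f_now):
    """Deterministic part of Eqs. (8)-(9): 1 - fbest/f_now.
    Assumes positive, minimized cost values with fbest <= f_now."""
    return 1.0 - fbest / f_now

def apso_r(fbest_id, fbest_gd, f_id):
    """Adaptive r1, r2 of Eqs. (8)-(9): factor plus a uniform random term."""
    r1 = adaptive_factor(fbest_id, f_id) + random.random()
    r2 = adaptive_factor(fbest_gd, f_id) + random.random()
    return r1, r2

# Early stage: current cost far above the particle's best -> large factor,
# so the individual and social terms still move strongly
early = adaptive_factor(2.0, 10.0)   # 0.8
# Converged stage, Eq. (10): f_id -> fbest_id, so the factor -> 0, Eq. (11)
late = adaptive_factor(2.0, 2.0)     # 0.0
```

Combined with the smaller constants c1 = c2 ≈ 0.5 noted above, the update terms shrink automatically as particles approach their best positions, which is the mechanism behind the faster APSO convergence.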


Only the application to a PID industrial controller is shown, because it is one of the most important basic controllers. However, this technique can be applied to many linear and non-linear controllers. It is also possible to extend the application to multivariable control by simply adapting the index function.


The number of generations for all the mentioned algorithms is set to 30, which is relatively small for parameter optimization with the APSO algorithm. Accordingly, if the number of generations or iterations increases, the running time of APSO decreases considerably in comparison with GA and PSO.

This characteristic suits the requirements of local on-line PID parameter tuning, which can compensate for the flaws of conventional PID parameter tuning methods.

[Figure 6. Fitness value curves of the APSO, PSO and GA methods during the optimization process.]

[12] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proc. IEEE International Conference on Neural Networks, Perth, Australia, vol. 4, pp. 1942-1948, 1995.

[13] Z. L. Gaing, "A particle swarm optimization approach for design of PID controller in AVR system," IEEE Trans. Energy Conversion, vol. 19, no. 2, pp. 384-391, 2004.

[14] Z. S. Lu and Z. R. Hou, "Particle swarm optimization with adaptive mutation," Acta Electronica Sinica, vol. 32, no. 3, pp. 416-420, 2004.

[15] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proc. IEEE International Conference on Neural Networks, vol. 4, pp. 1942-1948, 1995.

[16] J. H. Lin and T. Y. Cheng, "Dynamic clustering using support vector learning with particle swarm optimization," in Proc. 18th International Conference on Systems Engineering, pp. 45-56, 2005.