Particle Swarm Optimization
TRANSCRIPT
Group Member Details
NAME | UNIVERSITY ROLL NO | COLLEGE ROLL NO
Ananga Mohan Chatterjee | 11500112046 | 142212
Aniket Anand | 11500112047 | 142213
Madhuja Roy | 11500112078 | 142244
Mahesh Tibrewal | 11500112079 | 142245
What is Swarm Intelligence?
The term swarm is used to represent an aggregation of animals or insects which work collectively to accomplish their day-to-day tasks in an intelligent and efficient manner.
SI systems are typically made up of a population of simple agents interacting locally with one another and with their environment.
Natural examples of SI include ant colonies, bird flocking, animal herding, bacterial growth, and fish schooling.
Origin of Particle Swarm Optimization
Particle Swarm Optimization (PSO) is a population-based stochastic optimization technique developed by Dr. Russell C. Eberhart and Dr. James Kennedy in 1995, inspired by the social behaviour of bird flocking or fish schooling.
[Photos: Dr. Russell C. Eberhart and Dr. James Kennedy]
Origin of Particle Swarm Optimization (contd.)
Dr. Eberhart and Dr. Kennedy were inspired by the flocking and schooling patterns of birds and fish. Originally, the two started out developing computer software simulations of birds flocking around food sources, and later realized how well their algorithms worked on optimization problems.
Concept of Particle Swarm Optimization
PSO is an artificial intelligence (AI) technique that can be used to find approximate solutions to extremely difficult or impossible numeric maximization and minimization problems.
In PSO, a swarm of n individuals communicate either directly or indirectly with one another about search directions (gradients).
It is a simple algorithm that is easy to implement, with few parameters to adjust, mainly those governing the velocity.
Parameters in PSO
• The population is initialized by assigning random positions (Xi) and velocities (Vi); potential solutions are then flown through hyperspace.
• Each particle keeps track of its "best" (highest-fitness) position in hyperspace. This is called pbest.
• At each time step, each particle stochastically accelerates towards its pbest and gbest (or lbest).
o "pbest": the best position found by an individual particle.
o "gbest": the best position found in the whole group.
o "lbest": the best position found in the particle's neighbourhood.
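The initialization step described above can be sketched as follows. This is an illustrative sketch only: the dimensionality, swarm size, bounds, and the x^2-style fitness used to pick gbest are invented for the example, not taken from the slides.

```python
import random

# Hypothetical settings for a tiny demonstration swarm.
DIM, SWARM_SIZE, BOUND = 2, 5, 10.0

random.seed(42)
swarm = []
for _ in range(SWARM_SIZE):
    position = [random.uniform(-BOUND, BOUND) for _ in range(DIM)]  # Xi
    velocity = [random.uniform(-1.0, 1.0) for _ in range(DIM)]      # Vi
    swarm.append({
        "position": position,
        "velocity": velocity,
        "pbest": list(position),  # best position this particle has seen so far
    })

# gbest: the best pbest in the whole group (here judged by sum of squares,
# an assumed toy fitness where lower is better).
gbest = min((p["pbest"] for p in swarm), key=lambda x: sum(v * v for v in x))
```

At initialization each particle's pbest is simply its starting position; pbest and gbest only begin to differ from the current positions once the swarm starts moving.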
Flowchart
The PSO flowchart proceeds as follows:
1. Initialize particles.
2. Calculate the fitness value for each particle.
3. If the current fitness value is better than pBest, assign the current fitness as the new pBest; otherwise keep the previous pBest.
4. Assign the best particle's pBest value to gBest.
5. Calculate the velocity for each particle.
6. Use each particle's velocity value to update its position.
7. If the target or the maximum number of epochs is reached, end; otherwise repeat from step 2.
[Figure: swarm of particles moving from an initial position towards a target position]
Mathematical Approach
Equations:
Vi(t+1) = Vi(t) + C1 · rand(.) · (Pbest,i − Xi(t)) + C2 · Rand(.) · (Gbest − Xi(t))   ... (1)
Xi(t+1) = Xi(t) + Vi(t+1)   ... (2)
where:
Vi = [vi1, vi2, ..., vin] is the velocity of particle i.
Xi = [xi1, xi2, ..., xin] represents the position of particle i.
Pbest: the best previous position of particle i (i.e., local-best position or its experience).
Gbest: the best position among all particles in the population X = [X1, X2, ..., XN] (i.e., global-best position).
Rand(.) and rand(.): two random variables uniformly distributed in [0, 1].
C1 and C2: positive numbers called acceleration coefficients that guide each particle toward the individual best and the swarm best positions, respectively.
PSO Pseudo Code
For each particle:
    Initialize particle
End
Do:
    For each particle:
        Calculate fitness value
        If the fitness value is better than the best fitness value (pBest) in history:
            Set current value as the new pBest
    End
    For each particle:
        Find, in the particle's neighborhood, the particle with the best fitness
        Calculate particle velocity according to the velocity equation (1)
        Apply the velocity constriction
        Update particle position according to the position equation (2)
        Apply the position constriction
    End
While maximum iterations or minimum error criteria is not attained
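The pseudo code above can be turned into a short runnable program. The sketch below is one possible gbest-PSO implementation under assumed settings (sphere test function, inertia weight w = 0.7, c1 = c2 = 1.5, 20 particles, 200 iterations); none of these values come from the slides, and the velocity/position constrictions are omitted for brevity.

```python
import random

def sphere(x):
    """Toy objective to minimise: f(x) = sum of squares, optimum 0 at the origin."""
    return sum(v * v for v in x)

def pso(f, dim=2, n=20, iters=200, bound=10.0, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal gbest PSO following the pseudo code (illustrative sketch)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-bound, bound) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [list(x) for x in X]                 # pbest positions
    Pf = [f(x) for x in X]                   # pbest fitness values
    g = min(range(n), key=lambda i: Pf[i])   # index of the gbest particle
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (P[g][d] - X[i][d]))
                X[i][d] += V[i][d]           # position update
            fx = f(X[i])
            if fx < Pf[i]:                   # new pbest found
                Pf[i], P[i] = fx, list(X[i])
                if fx < Pf[g]:               # new gbest found
                    g = i
    return P[g], Pf[g]

best_x, best_f = pso(sphere)
```

On this smooth, unimodal test function the swarm reliably collapses onto the origin well within 200 iterations.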
Modifications in PSO structure
1. Selection of maximum velocity: The velocities may become too high, so that the particles become uncontrolled and leave the search space. Therefore, velocities are bounded to a maximum value Vmax, that is:
   if vid > Vmax then vid = Vmax; if vid < −Vmax then vid = −Vmax
2. Adding inertia weight: A new parameter w, named the inertia weight, is added to the PSO in order to better control the scope of the search. Eq. (1) thus becomes:
   Vi(t+1) = w · Vi(t) + C1 · rand(.) · (Pbest,i − Xi(t)) + C2 · Rand(.) · (Gbest − Xi(t))
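Both modifications fit in a few lines. The sketch below shows the velocity clamp and the inertia-weighted velocity update; the concrete values Vmax = 4.0 and w = 0.7 are assumptions for the example, not values from the slides.

```python
import random

VMAX = 4.0  # assumed bound for the example

def clamp(vd, vmax=VMAX):
    """Modification 1: bound a velocity component to [-Vmax, Vmax]."""
    return max(-vmax, min(vmax, vd))

def velocity_with_inertia(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """Modification 2: inertia-weighted velocity update, then clamping."""
    return [clamp(w * v[d]
                  + c1 * random.random() * (pbest[d] - x[d])
                  + c2 * random.random() * (gbest[d] - x[d]))
            for d in range(len(x))]
```

With w < 1 the contribution of the old velocity decays over time, shifting the swarm from exploration towards exploitation.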
Modifications in PSO structure (Contd.)
3. Constriction factor:
If the algorithm runs without restraining the velocity, the system explodes after a few iterations. So, a constriction coefficient is introduced in order to control the convergence properties.
With the constriction factor, the PSO equation for computing the velocity is:
   Vi(t+1) = 𝜒 · [Vi(t) + C1 · rand(.) · (Pbest,i − Xi(t)) + C2 · Rand(.) · (Gbest − Xi(t))]   ... (3)
   𝜒 = 2 / |2 − C − √(C² − 4C)|, where C = C1 + C2 > 4   ... (4)
Note that:
• if C = 5, then 𝜒 = 0.38 from Eq. (4), which causes a very pronounced damping effect.
• But if C is set to 4.1, then 𝜒 is 0.729, which works well.
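The constriction coefficient from Eq. (4) is easy to check numerically; the values quoted above follow directly:

```python
import math

def constriction(C):
    """Clerc-style constriction coefficient, valid for C = c1 + c2 > 4."""
    return 2.0 / abs(2.0 - C - math.sqrt(C * C - 4.0 * C))

chi_strong = constriction(5.0)   # about 0.382: pronounced damping
chi_common = constriction(4.1)   # about 0.729: the commonly used value
```

Note that at exactly C = 4 the square root vanishes and the formula gives 𝜒 = 1 (no constriction), which is why C is typically chosen slightly above 4.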
Population Topology
The pattern of connectedness between individuals is like a social network. The connection pattern controls how solutions can flow through the solution space.
[Figure: PSO gbest and lbest topologies]
Effect of Re-Initialization
Among the three algorithms (GA, PSO and DE), PSO has a higher tendency to cluster rapidly, and the swarm may quickly become stagnant. To remedy this drawback, several sub-grouping approaches have been proposed to reduce the dominant influence of the global best particle. A much simpler and frequently used alternative is to simply keep the global best particle and regenerate all or part of the remaining particles. This has the effect of generating a new swarm but with the global best as one of the particles; this process is called re-initialization.
In GA, the clustering is less obvious, but it is often found that the top part of the population may look similar, and re-initialization can also inject randomness into the population to improve diversity.
In DE, the clustering is the least pronounced, and re-initialization has the least effect.
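The re-initialization idea, keep the global best, regenerate everyone else, can be sketched in a few lines. The representation (particles as plain coordinate lists) and the bound are assumptions for the example:

```python
import random

def reinitialize(swarm, gbest_index, bound=10.0, rng=random):
    """Keep the global best particle unchanged and regenerate every other
    particle at a fresh random position inside the search bounds."""
    dim = len(swarm[0])
    return [list(swarm[i]) if i == gbest_index
            else [rng.uniform(-bound, bound) for _ in range(dim)]
            for i in range(len(swarm))]
```

The result is effectively a new swarm whose diversity has been restored, but which still contains the best solution found so far.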
Effect of Local Search
In GA, the density of the solution space is lower, so it is often found that the GA operators cannot produce all potential solutions. A popular fix is the use of local search to see if a better solution can be found around the solutions produced by the GA operators. The local search process is often time-consuming, and applying it over the whole population could lead to a long solution time.
For PSO, the best particle has a dominant influence over the whole swarm, and a time-saving strategy is to apply local search only to the best particle; this can lead to solution improvement with a shorter solution time. This strategy was demonstrated to be highly effective for job shop scheduling in Pratchayaborirak and Kachitvichyanukul (2011).
The same strategy may not yield the same effect in DE, since the best particle does not have a dominant influence on the population of solutions.
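One simple way to realize "local search on the best particle only" is random-restart hill climbing around the best position. This is a hypothetical sketch, not the specific local search used in the cited work; the step size and number of trials are invented for the example.

```python
import random

def local_search_best(best_x, f, step=0.1, tries=20, rng=random):
    """Sample small random perturbations around the best particle and
    keep any candidate that improves the fitness (lower is better)."""
    best_f = f(best_x)
    for _ in range(tries):
        cand = [v + rng.uniform(-step, step) for v in best_x]
        cf = f(cand)
        if cf < best_f:
            best_x, best_f = cand, cf
    return best_x, best_f
```

Because only improvements are accepted, the returned fitness is never worse than the starting fitness, and because only one particle is searched, the extra cost per iteration stays small.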
Effect of Subgrouping
The use of sub-grouping of a homogeneous population to improve solution quality has been demonstrated in GA and PSO.
This sub-grouping allows some groups of solutions to be freed from the influence of the dominant solutions, and thus the group may be searching in a different area of the solution space and improve the exploration aspect of the algorithms.
For DE, the best particle has little influence on the perturbation process, so it is rational to presume that sub-grouping with a homogeneous population may have limited effect on the solution quality of DE.
Qualitative comparison of GA, PSO and DE
Criterion | GA | PSO | DE
Requires ranking of solutions | Yes | No | No
Influence of population size on solution time | Exponential | Linear | Linear
Influence of best solution on population | Medium | Most | Less
Average fitness cannot be worse | False | False | True
Tendency for premature convergence | Medium | High | Low
Continuity (density) of search space | Less | More | More
Ability to reach good solution without local search | Less | More | More
Homogeneous sub-grouping improves convergence | Yes | Yes | N/A
Neural Network (NN) Training using PSO
A neural network is a complex function that accepts some numeric inputs and generates some numeric outputs.
The best way to get an idea of what training a neural network using PSO is like is to look at a program that creates a neural network predictor for a set of Iris flowers, where the goal is to predict the species based on sepal length and width, and petal length and width.
The program uses an artificially small, 30-item subset of a famous 150-item benchmark data set called Fisher's "Iris Data".
A 4-input, 6-hidden, 3-output neural network is instantiated. A fully connected 4-6-3 neural network will have (4 * 6) + (6 * 3) + (6 + 3) = 51 weights and bias values.
A swarm consisting of 12 virtual particles attempts to find the set of neural network weights and bias values in a maximum of 700 iterations.
After PSO training has completed, the 51 values of the best weights and biases that were found are displayed. Using those weights and biases, when the neural network is fed the six training items, the network correctly classifies 5/6 = 0.8333 of the items.
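When PSO trains a neural network, each particle's position is simply a flat vector of all weights and biases. The sketch below checks the 51-value count for the 4-6-3 network and shows one plausible way such a vector could be sliced into weight groups; the slicing order is an assumption for illustration, as actual layouts vary by implementation.

```python
def weight_count(n_in, n_hidden, n_out):
    """Number of weights and biases in a fully connected
    n_in - n_hidden - n_out network with one hidden layer."""
    return (n_in * n_hidden) + (n_hidden * n_out) + (n_hidden + n_out)

def decode(particle, n_in=4, n_hidden=6, n_out=3):
    """Slice a particle's flat position vector into input-to-hidden weights,
    hidden-to-output weights, hidden biases and output biases
    (one assumed layout; real implementations may order these differently)."""
    a = n_in * n_hidden          # end of input-to-hidden weights
    b = a + n_hidden * n_out     # end of hidden-to-output weights
    c = b + n_hidden             # end of hidden biases
    return particle[:a], particle[a:b], particle[b:c], particle[c:]
```

For the 4-6-3 network this gives 24 + 18 + 6 + 3 = 51 values per particle, so the swarm is effectively searching a 51-dimensional space.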
Mobile Robot Navigation Using Particle Swarm Optimization and Adaptive NN
Improved particle swarm optimization (PSO) is used to optimize the path of a mobile robot through an environment containing static obstacles.
Relative to many optimization methods that produce non-smooth paths, the PSO method can generate smooth paths, which are preferable for designing continuous control technologies to realize path following with mobile robots.
To reduce the computational cost of optimization, a stochastic PSO (S-PSO) with high exploration ability is developed, so that a swarm with a small size can accomplish path planning.
Simulation results validate the proposed approach.
Hybridization of PSO with Other Evolutionary Techniques
A popular research trend is to merge or combine PSO with other techniques, especially other evolutionary computation techniques such as selection, cross-over and mutation. Some improved and extended PSO methods:
• Improved PSO (IPSO): uses a combination of chaotic sequences, conventional linearly decreasing inertia weights and a crossover operation to increase both the exploration and exploitation capability of PSO.
• Modified PSO (MPSO): a mechanism to cope with equality and inequality constraints. Furthermore, a dynamic search-space reduction strategy is employed to accelerate the optimization process.
• New PSO (NPSO): the particle is modified in order to remember its worst position. This modification improves its ability to explore the search space effectively.
• Improved Coordinated Aggregation based PSO (ICA-PSO): each particle in the swarm retains a memory of its best position ever encountered, and is attracted only by particles with better achievements than its own, with the exception of the particle with the best achievement, which moves randomly.
• Hybrid PSO with Sequential Quadratic Programming (PSO-SQP): the SQP method seems to be the best nonlinear programming method for constrained optimization, outperforming every other nonlinear programming method in terms of efficiency, accuracy, and percentage of successful solutions over a large number of test problems.
Conclusion
◊ The process by which the PSO algorithm finds optimal values follows the working of an animal society which has no leader.
◊ Particle swarm optimization consists of a swarm of particles, where each particle represents a potential solution.
◊ A particle moves through a multidimensional search space to find the best position in that space (the best position may correspond to a maximum or a minimum value).
◊ A constraint to keep in mind is that the velocity should have an optimum value: if it is too low, convergence will be too slow, and if it is too high, the method becomes unstable.
Acknowledgement
We would like to express our gratitude to all the respected faculty members of our department for providing us with this opportunity of giving a presentation on a topic which was interesting to research. We thank our seniors for their able guidance and support in completing the presentation. Finally, a word of thanks to all those who were directly or indirectly involved in this presentation.
References
www.wikipedia.org
www.swarmintelligence.com
www.visualstudiomagazine.com
www.youtube.com
www.academia.edu
msdn.microsoft.com
Introduction to Particle Swarm Optimization by Rajib Kumar Bhattacharjee, Department of Civil Engineering, IIT Guwahati
Paper on Particle Swarm Optimization by Gerhard Venter, Vanderplaats Research and Development Inc.