Particle swarm optimization (PSO) is a population-based
stochastic optimization technique developed by
Dr. Eberhart and Dr. Kennedy in 1995, inspired by the
social behavior of bird flocking and fish schooling.
PSO shares many similarities with evolutionary
computation techniques such as Genetic Algorithm
(GA). The system is initialized with a population of
random solutions and searches for optima by
updating generations. However, unlike GA, PSO has
no evolution operators such as crossover and
mutation. In PSO, the potential solutions, called
particles, fly through the problem space by following
the current optimum particles.
Each particle keeps track of its coordinates in the
problem space which are associated with the best
solution (fitness) it has achieved so far. (The fitness
value is also stored.) This value is called pbest.
Another "best" value tracked by the particle
swarm optimizer is the best value obtained so far by
any particle in the neighborhood of the particle. This
location is called lbest. When a particle takes all the
population as its topological neighbors, the best value
is a global best and is called gbest. The particle
swarm optimization concept consists of, at each time
step, changing the velocity of (accelerating) each
particle toward its pbest and lbest locations (local
version of PSO). Acceleration is weighted by a
random term, with separate random numbers being
generated for acceleration toward pbest and lbest
locations.
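The update step described above can be sketched as follows. This is an illustrative sketch of the global-best (gbest) variant rather than the local (lbest) version; the inertia weight w and the acceleration coefficients c1 and c2 use commonly cited default values that are assumptions here, not values given in the text.

```python
import random

def pso_step(particles, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO time step over a swarm stored as lists of coordinate lists.

    Each particle's velocity is changed toward its pbest location and the
    swarm's gbest location, with separate random numbers generated for
    each acceleration term, as the text describes. Updates are in place.
    """
    for i, x in enumerate(particles):
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - x[d])
                                + c2 * r2 * (gbest[d] - x[d]))
            x[d] += velocities[i][d]
```

In a full optimizer this step would be repeated for many iterations, with pbest and gbest refreshed after each step whenever a particle finds a better fitness value.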
In the past several years, PSO has been successfully
applied in many research and application areas. It has
been demonstrated that PSO can obtain good results
faster and more cheaply than some other methods.
Another reason that PSO is attractive is that there are
few parameters to adjust. One version, with slight
variations, works well in a wide variety of
applications. Particle swarm optimization has been
used both for general approaches applicable across a
wide range of problems and for applications focused
on a specific requirement.
a. Artificial neural network and PSO
An artificial neural network (ANN) is an analysis
paradigm loosely modeled on the brain, and the
back-propagation algorithm is one of the most
popular methods for training an ANN.
Recently there have been significant research efforts
to apply evolutionary computation (EC) techniques
for the purposes of evolving one or more aspects of
artificial neural networks. Evolutionary computation
methodologies have been applied to three main
attributes of neural networks: network connection
weights, network architecture (network topology,
transfer function), and network learning algorithms.
Most of the work involving the evolution of ANNs has
focused on the network weights and topological
structure. Usually the weights and/or topological
structure are encoded as a chromosome in GA. The
selection of fitness function depends on the research
goals. For a classification problem, the rate of
misclassified patterns can be viewed as the fitness value.
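A fitness function of this kind can be sketched in a few lines. The callable `predict` below is a hypothetical stand-in for the evolved network, since the text does not fix a particular ANN API; any function mapping an input pattern to a class label would do.

```python
def misclassification_fitness(predict, patterns, targets):
    """Fitness of a candidate network on a classification task:
    the fraction of misclassified patterns (lower is better).

    predict  -- callable mapping one input pattern to a class label
    patterns -- list of input patterns
    targets  -- list of true class labels, aligned with patterns
    """
    errors = sum(1 for p, t in zip(patterns, targets) if predict(p) != t)
    return errors / len(patterns)
```

An evolutionary optimizer (GA or PSO) would decode each chromosome into network weights, build `predict` from those weights, and minimize this value.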
The advantage of EC is:
1. EC can be used in cases where the processing-element
(PE) transfer functions are non-differentiable and no
gradient information is available.
The disadvantages are: