
Particle swarm optimization

In computer science, particle swarm optimization (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulae over the particle's position and velocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.

PSO is originally attributed to Kennedy, Eberhart and Shi and was first intended for simulating social behaviour, as a stylized representation of the movement of organisms in a bird flock or fish school. The algorithm was simplified and it was observed to be performing optimization. The book by Kennedy and Eberhart describes many philosophical aspects of PSO and swarm intelligence. An extensive survey of PSO applications has been made by Poli. Recently, a comprehensive review of theoretical and experimental works on PSO has been published by Bonyadi and Michalewicz.

PSO is a metaheuristic, as it makes few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics such as PSO do not guarantee that an optimal solution is ever found. More specifically, PSO does not use the gradient of the problem being optimized, which means PSO does not require that the optimization problem be differentiable, as is required by classic optimization methods such as gradient descent and quasi-Newton methods.

Contents

  • 1 Algorithm
  • 2 Parameter selection
  • 3 Neighbourhoods and topologies
  • 4 Inner workings
    • 4.1 Convergence
    • 4.2 Biases
  • 5 Variants
    • 5.1 Hybridization
    • 5.2 Alleviate premature convergence
    • 5.3 Simplifications
    • 5.4 Multi-objective optimization
    • 5.5 Binary, discrete, and combinatorial
  • 6 See also
  • 7 References
  • 8 External links

Algorithm

A basic variant of the PSO algorithm works by having a population (called a swarm) of candidate solutions (called particles). These particles are moved around in the search-space according to a few simple formulae. The movements of the particles are guided by their own best known position in the search-space as well as the entire swarm's best known position. When improved positions are being discovered, these will then come to guide the movements of the swarm. The process is repeated, and by doing so it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered.

Formally, let f: ℝⁿ → ℝ be the cost function which must be minimized. The function takes a candidate solution as an argument in the form of a vector of real numbers and produces a real number as output, which indicates the objective function value of the given candidate solution. The gradient of f is not known. The goal is to find a solution a for which f(a) ≤ f(b) for all b in the search-space, which would mean a is the global minimum. Maximization can be performed by considering the function h = −f instead.

Let S be the number of particles in the swarm, each having a position xi ∈ ℝⁿ in the search-space and a velocity vi ∈ ℝⁿ. Let pi be the best known position of particle i and let g be the best known position of the entire swarm. A basic PSO algorithm is then:

for each particle i = 1, ..., S do
    Initialize the particle's position with a uniformly distributed random vector: xi ~ U(blo, bup)
    Initialize the particle's best known position to its initial position: pi ← xi
    if f(pi) < f(g) then
        update the swarm's best known position: g ← pi
    Initialize the particle's velocity: vi ~ U(−|bup − blo|, |bup − blo|)
while a termination criterion is not met do:
    for each particle i = 1, ..., S do
        for each dimension d = 1, ..., n do
            Pick random numbers: rp, rg ~ U(0, 1)
            Update the particle's velocity: vi,d ← ω vi,d + φp rp (pi,d − xi,d) + φg rg (gd − xi,d)
        Update the particle's position: xi ← xi + vi
        if f(xi) < f(pi) then
            Update the particle's best known position: pi ← xi
            if f(pi) < f(g) then
                Update the swarm's best known position: g ← pi

The values blo and bup are respectively the lower and upper boundaries of the search-space. The termination criterion can be the number of iterations performed, or finding a solution with an adequate objective function value. The parameters ω, φp, and φg are selected by the practitioner and control the behaviour and efficacy of the PSO method; see below.
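
To make the pseudocode concrete, below is a minimal Python sketch of this basic global-best PSO, using the sphere function as an illustrative cost function. The function name pso and the default parameter values are illustrative choices, not part of any standard implementation.

    import numpy as np

    def pso(f, blo, bup, n, S=30, omega=0.729, phi_p=1.49, phi_g=1.49,
            max_iter=1000, rng=None):
        """Minimize f over the box [blo, bup]^n with a basic global-best PSO."""
        rng = np.random.default_rng() if rng is None else rng
        x = rng.uniform(blo, bup, (S, n))                 # initial positions
        v = rng.uniform(-(bup - blo), bup - blo, (S, n))  # initial velocities
        p = x.copy()                                      # personal best positions
        fp = np.array([f(xi) for xi in x])                # personal best values
        g, fg = p[fp.argmin()].copy(), fp.min()           # swarm best
        for _ in range(max_iter):
            rp, rg = rng.random((S, n)), rng.random((S, n))
            v = omega * v + phi_p * rp * (p - x) + phi_g * rg * (g - x)
            x = x + v
            fx = np.array([f(xi) for xi in x])
            better = fx < fp
            p[better], fp[better] = x[better], fx[better]
            if fp.min() < fg:
                g, fg = p[fp.argmin()].copy(), fp.min()
        return g, fg

    # Example: minimize the 5-dimensional sphere function.
    best_x, best_f = pso(lambda z: float(np.sum(z ** 2)), blo=-10.0, bup=10.0, n=5)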

Parameter selection

Performance landscape showing how a simple PSO variant performs in aggregate on several benchmark problems when varying two PSO parameters.

The choice of PSO parameters can have a large impact on optimization performance. Selecting PSO parameters that yield good performance has therefore been the subject of much research.

The PSO parameters can also be tuned by using another overlaying optimizer, a concept known as meta-optimization. Parameters have also been tuned for various optimization scenarios.
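
For orientation, two parameter settings that recur in the literature are sketched below: the constriction-derived values associated with Clerc and Kennedy's analysis, and the early inertia-weight values of Shi and Eberhart. The dictionary and its keys are illustrative labels; treat the values as starting points rather than universally good choices.

    # Frequently cited PSO parameter settings (labels are illustrative).
    PARAM_SETS = {
        # Constriction-equivalent values from Clerc and Kennedy's analysis.
        "constriction": {"omega": 0.7298, "phi_p": 1.49618, "phi_g": 1.49618},
        # Early inertia-weight suggestion (omega is often decreased over time).
        "inertia": {"omega": 0.9, "phi_p": 2.0, "phi_g": 2.0},
    }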

Neighbourhoods and topologies

The topology of the swarm defines the subset of particles with which each particle can exchange information. The basic version of the algorithm uses the global topology as the swarm communication structure. This topology allows all particles to communicate with all the other particles, thus the whole swarm shares the same best position g from a single particle. However, this approach might lead the swarm to be trapped in a local minimum, thus different topologies have been used to control the flow of information among particles. For instance, in local topologies, particles only share information with a subset of particles. This subset can be a geometrical one – for example "the m nearest particles" – or, more often, a social one, i.e. a set of particles that does not depend on any distance. In such cases, the PSO variant is said to be local best (vs global best for the basic PSO).

A commonly used swarm topology is the ring, in which each particle has just two neighbours, but there are many others. The topology is not necessarily static. In fact, since the topology is related to the diversity of communication of the particles, some efforts have been made to create adaptive topologies (SPSO, stochastic star, TRIBES, Cyber Swarm, and C-PSO).
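
As an illustration of a local-best variant, the sketch below computes, for each particle, the best personal-best position within its ring neighbourhood; this local best then replaces g in that particle's velocity update. The helper name is illustrative.

    import numpy as np

    def ring_local_best(p, fp):
        """Return, for each particle i, the best personal-best position among
        its ring neighbourhood {i-1, i, i+1} (indices wrap around)."""
        S = len(p)
        local = np.empty_like(p)
        for i in range(S):
            neigh = [(i - 1) % S, i, (i + 1) % S]
            local[i] = p[min(neigh, key=lambda j: fp[j])]
        return local  # use local[i] in place of g in particle i's update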

Inner workings

There are several schools of thought as to why and how the PSO algorithm can perform optimization.

A common belief amongst researchers is that the swarm behaviour varies between exploratory behaviour, that is, searching a broader region of the search-space, and exploitative behaviour, that is, a locally oriented search so as to get closer to a (possibly local) optimum. This school of thought has been prevalent since the inception of PSO, and contends that the PSO algorithm and its parameters must be chosen so as to properly balance between exploration and exploitation to avoid premature convergence to a local optimum yet still ensure a good rate of convergence to the optimum. This belief is the precursor of many PSO variants; see below.

Another school of thought is that the behaviour of a PSO swarm is not well understood in terms of how it affects actual optimization performance, especially for higher-dimensional search-spaces and optimization problems that may be discontinuous, noisy, and time-varying. This school of thought merely tries to find PSO algorithms and parameters that cause good performance regardless of how the swarm behaviour can be interpreted in relation to, e.g., exploration and exploitation. Such studies have led to the simplification of the PSO algorithm; see below.

Convergence

In relation to PSO, the word convergence typically refers to two different definitions:

  • Convergence of the sequence of solutions (also known as stability analysis), in which all particles have converged to a point in the search-space, which may or may not be the optimum;
  • Convergence to a local optimum, where all personal bests p, or alternatively the swarm's best known position g, approach a local optimum of the problem, regardless of how the swarm behaves.

Convergence of the sequence of solutions has been investigated for PSO. These analyses have resulted in guidelines for selecting PSO parameters that are believed to cause convergence to a point and prevent divergence of the swarm's particles (particles do not move unboundedly and will converge to somewhere). However, the analyses were criticized by Pedersen for being oversimplified, as they assume the swarm has only one particle, that it does not use stochastic variables, and that the points of attraction, that is, the particle's best known position p and the swarm's best known position g, remain constant throughout the optimization process. However, it was shown that these simplifications do not affect the boundaries found by these studies for parameters where the swarm is convergent.
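
The guidelines from these simplified analyses are often summarized as a region in parameter space; a frequently quoted form of the condition for the deterministic, single-particle model is sketched below. This is an illustrative simplification of the cited analyses, not a guarantee for the full stochastic algorithm.

    def in_classical_convergence_region(omega, phi_p, phi_g):
        """Commonly quoted stability region for the simplified deterministic
        PSO model: |omega| < 1 and 0 < phi_p + phi_g < 2 * (1 + omega).
        (Assumption: this is the textbook summary of the cited analyses.)"""
        phi = phi_p + phi_g
        return abs(omega) < 1 and 0 < phi < 2 * (1 + omega)

    # The widely used values (0.7298, 1.49618, 1.49618) lie inside this region.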

Convergence to a local optimum has also been analyzed for PSO. It has been proven that PSO needs some modification to guarantee finding a local optimum.

Determining the convergence capabilities of different PSO algorithms and parameters therefore still depends on empirical results. One attempt at addressing this issue is the development of an "orthogonal learning" strategy for an improved use of the information already existing in the relationship between p and g, so as to form a leading converging exemplar that is effective with any PSO topology. The aims are to improve the performance of PSO overall, including faster global convergence, higher solution quality, and stronger robustness. However, such studies do not provide theoretical evidence to actually prove their claims.

Biases

As the basic PSO works dimension by dimension, the solution point is more easily found when it lies on an axis of the search space or on a diagonal, and even more easily if it is right at the centre.

One approach is to modify the algorithm so that it is no longer sensitive to the system of coordinates. Note that some of these methods have a higher computational complexity (in O(n²), where n is the number of dimensions), which makes the algorithm very slow for large-scale optimization.

The only currently existing PSO variant that is not sensitive to the rotation of the coordinates while being locally convergent was proposed in 2014. The method has shown very good performance on many benchmark problems, and its rotation invariance and local convergence have been mathematically proven.

Variants

Numerous variants of even a basic PSO algorithm are possible. For example, there are different ways to initialize the particles and velocities (e.g. start with zero velocities instead), how to dampen the velocity, whether to update pi and g only after the entire swarm has been updated, and so on. Some of these choices and their possible performance impact have been discussed in the literature.
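
Two of these choices, sketched against the implementation given earlier (the values are illustrative):

    import numpy as np

    S, n = 30, 5  # illustrative swarm size and dimensionality

    # (a) Initialize with zero velocities instead of random ones.
    v = np.zeros((S, n))

    # (b) Synchronous update: evaluate the whole swarm first, then refresh
    #     p and g once per iteration rather than after each particle's move.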

A series of standard implementations have been created by leading researchers, "intended for use both as a baseline for performance testing of improvements to the technique, as well as to represent PSO to the wider optimization community. Having a well-known, strictly-defined standard algorithm provides a valuable point of comparison which can be used throughout the field of research to better test new advances." The latest is Standard PSO 2011 (SPSO-2011).

Hybridization

New and more sophisticated PSO variants are also continually being introduced in an attempt to improve optimization performance. There are certain trends in that research; one is to make a hybrid optimization method using PSO combined with other optimizers, e.g., combining PSO with biogeography-based optimization, or incorporating an effective learning method.

Alleviate premature convergence

Another research trend is to try to alleviate premature convergence (that is, optimization stagnation), e.g. by reversing or perturbing the movement of the PSO particles. Another approach to deal with premature convergence is the use of multiple swarms (multi-swarm optimization). The multi-swarm approach can also be used to implement multi-objective optimization. Finally, there are developments in adapting the behavioural parameters of PSO during optimization.

Simplifications

Another school of thought is that PSO should be simplified as much as possible without impairing its performance; a general concept often referred to as Occam's razor. Simplifying PSO was originally suggested by Kennedy and has since been studied more extensively, where it appeared that optimization performance was improved, the parameters were easier to tune, and they performed more consistently across different optimization problems.

Another argument in favour of simplifying PSO is that metaheuristics can only have their efficacy demonstrated empirically, by doing computational experiments on a finite number of optimization problems. This means a metaheuristic such as PSO cannot be proven correct, and this increases the risk of making errors in its description and implementation. A good example of this is a study that presented a promising variant of a genetic algorithm (another popular metaheuristic), which was later found to be defective: it was strongly biased in its optimization search towards similar values for different dimensions in the search space, which happened to be the optimum of the benchmark problems considered. This bias was caused by a programming error and has since been fixed.

Initialization of velocities may require extra inputs. The Bare Bones PSO variant was proposed in 2003 by James Kennedy, and does not need to use velocity at all.
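
A minimal sketch of the Bare Bones update as it is usually described: each coordinate of a particle's new position is sampled from a Gaussian centred midway between the particle's personal best and the swarm best, with standard deviation equal to their separation. The helper name is illustrative.

    import numpy as np

    def bare_bones_step(p, g, rng=None):
        """One Bare Bones PSO move: sample the new position coordinate-wise
        from N(mean=(p_i + g)/2, std=|p_i - g|). No velocities are used."""
        rng = np.random.default_rng() if rng is None else rng
        return rng.normal((p + g) / 2.0, np.abs(p - g))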

Another simpler variant is the accelerated particle swarm optimization (APSO), which also does not need to use velocity and can speed up convergence in many applications. A simple demo code of APSO is available.
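
As commonly formulated by Yang, the accelerated variant moves each particle directly toward the global best plus a shrinking random perturbation; a sketch with illustrative parameter values:

    import numpy as np

    def apso_step(x, g, alpha, beta=0.5, rng=None):
        """One accelerated-PSO move, as commonly formulated:
        x <- (1 - beta) * x + beta * g + alpha * epsilon, epsilon ~ N(0, 1),
        where alpha is typically decreased over the iterations."""
        rng = np.random.default_rng() if rng is None else rng
        return (1 - beta) * x + beta * g + alpha * rng.standard_normal(x.shape)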

Multi-objective optimization

PSO has also been applied to multi-objective problems, in which the objective function comparison takes Pareto dominance into account when moving the PSO particles, and non-dominated solutions are stored so as to approximate the Pareto front.
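
The comparison at the heart of such variants is Pareto dominance; a minimal illustrative predicate for minimization problems:

    def dominates(fa, fb):
        """True if objective vector fa Pareto-dominates fb (minimization):
        fa is no worse in every objective and strictly better in at least one."""
        return (all(a <= b for a, b in zip(fa, fb))
                and any(a < b for a, b in zip(fa, fb)))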

Binary, discrete, and combinatorial

As the PSO equations given above work on real numbers, a commonly used method to solve discrete problems is to map the discrete search space to a continuous domain, apply a classical PSO, and then map the result back. Such a mapping can be very simple (for example, by just using rounded values) or more sophisticated.
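
A minimal illustration of the simple round-off mapping: the particles move in continuous space, but the cost function is evaluated on the rounded, discrete candidate. The wrapper name is illustrative.

    import numpy as np

    def discretized(f):
        """Wrap a discrete cost function f so a continuous PSO can optimize it:
        particles move in R^n, but f sees the rounded integer candidate."""
        return lambda x: f(np.rint(x).astype(int))

    # e.g. pso(discretized(my_integer_cost), blo=0, bup=10, n=4), reusing the
    # pso() sketch from the Algorithm section (my_integer_cost is hypothetical).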

However, it can be noted that the equations of movement make use of operators that perform four actions:

  • computing the difference of two positions (the result is a velocity, or more precisely a displacement)
  • multiplying a velocity by a numerical coefficient
  • adding two velocities
  • applying a velocity to a position

Usually a position and a velocity are represented by n real numbers, and these operators are simply −, ×, +, and again +. But all these mathematical objects can be defined in a completely different way, in order to cope with binary problems (or, more generally, discrete ones), or even combinatorial ones. One approach is to redefine the operators based on sets.
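
For the binary case, one classical approach (the discrete binary PSO of Kennedy and Eberhart, cited below) keeps the usual real-valued velocity update but reinterprets each velocity component, through a sigmoid, as the probability that the corresponding bit is 1. A sketch:

    import numpy as np

    def binary_positions(v, rng=None):
        """Sample bit positions from real-valued velocities: each bit is set
        to 1 with probability sigmoid(v), per the discrete binary PSO."""
        rng = np.random.default_rng() if rng is None else rng
        prob = 1.0 / (1.0 + np.exp(-v))
        return (rng.random(v.shape) < prob).astype(int)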

See also

  • Bees algorithm / Artificial bee colony algorithm
  • Derivative-free optimization
  • Multi-swarm optimization
  • Particle filter
  • Swarm intelligence
  • Fish School Search

References

  1. Kennedy, J.; Eberhart, R. (1995). "Particle Swarm Optimization". Proceedings of IEEE International Conference on Neural Networks. IV. pp. 1942–1948. doi:10.1109/ICNN.1995.488968.
  2. Shi, Y.; Eberhart, R. C. (1998). "A modified particle swarm optimizer". Proceedings of IEEE International Conference on Evolutionary Computation. pp. 69–73.
  3. Kennedy, J. (1997). "The particle swarm: social adaptation of knowledge". Proceedings of IEEE International Conference on Evolutionary Computation. pp. 303–308.
  4. Kennedy, J.; Eberhart, R. C. (2001). Swarm Intelligence. Morgan Kaufmann. ISBN 1-55860-595-9.
  5. Poli, R. (2007). "An analysis of publications on particle swarm optimisation applications". Technical Report CSM-469. Department of Computer Science, University of Essex, UK.
  6. Poli, R. (2008). "Analysis of the publications on the applications of particle swarm optimisation". Journal of Artificial Evolution and Applications. 2008: 1–10. doi:10.1155/2008/685175.
  7. Bonyadi, M. R.; Michalewicz, Z. (2016). "Particle swarm optimization for single objective continuous space problems: a review". Evolutionary Computation (in press).
  8. Zhang, Y. (2015). "A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications". Mathematical Problems in Engineering. 2015: 931256.
  9. Clerc, M. (2012). "Standard Particle Swarm Optimisation". HAL open access archive.
  10. Bratton, Daniel; Kennedy, James (2007). "Defining a Standard for Particle Swarm Optimization". Proceedings of the 2007 IEEE Swarm Intelligence Symposium (SIS 2007).
  11. Taherkhani, M.; Safabakhsh, R. (2016). "A novel stability-based adaptive inertia weight for particle swarm optimization". Applied Soft Computing. 38: 281–295. doi:10.1016/j.asoc.2015.10.004.
  12. Shi, Y.; Eberhart, R. C. (1998). "Parameter selection in particle swarm optimization". Proceedings of Evolutionary Programming VII (EP98). pp. 591–600.
  13. Eberhart, R. C.; Shi, Y. (2000). "Comparing inertia weights and constriction factors in particle swarm optimization". Proceedings of the Congress on Evolutionary Computation. 1. pp. 84–88.
  14. Carlisle, A.; Dozier, G. (2001). "An Off-The-Shelf PSO". Proceedings of the Particle Swarm Optimization Workshop. pp. 1–6.
  15. van den Bergh, F. (2001). An Analysis of Particle Swarm Optimizers (PhD thesis). University of Pretoria, Faculty of Natural and Agricultural Science.
  16. Clerc, M.; Kennedy, J. (2002). "The particle swarm - explosion, stability, and convergence in a multidimensional complex space". IEEE Transactions on Evolutionary Computation. 6 (1): 58–73. doi:10.1109/4235.985692.
  17. Trelea, I. C. (2003). "The Particle Swarm Optimization Algorithm: convergence analysis and parameter selection". Information Processing Letters. 85 (6): 317–325. doi:10.1016/S0020-0190(02)00447-7.
  18. Bratton, D.; Blackwell, T. (2008). "A Simplified Recombinant PSO". Journal of Artificial Evolution and Applications.
  19. Evers, G. (2009). An Automatic Regrouping Mechanism to Deal with Stagnation in Particle Swarm Optimization (Master's thesis). The University of Texas - Pan American, Department of Electrical Engineering.
  20. Meissner, M.; Schmuker, M.; Schneider, G. (2006). "Optimized Particle Swarm Optimization (OPSO) and its application to artificial neural network training". BMC Bioinformatics. 7 (1): 125. doi:10.1186/1471-2105-7-125. PMC 1464136. PMID 16529661.
  21. Pedersen, M. E. H. (2010). Tuning & Simplifying Heuristical Optimization (PhD thesis). University of Southampton, School of Engineering Sciences, Computational Engineering and Design Group.
  22. Pedersen, M. E. H.; Chipperfield, A. J. (2010). "Simplifying particle swarm optimization". Applied Soft Computing. 10 (2): 618–628. doi:10.1016/j.asoc.2009.08.029.
  23. Pedersen, M. E. H. (2010). "Good parameters for particle swarm optimization". Technical Report HL1001. Hvass Laboratories.
  24. Kennedy, J.; Mendes, R. (2002). "Population structure and particle swarm performance". Proceedings of the 2002 Congress on Evolutionary Computation (CEC'02). doi:10.1109/CEC.2002.1004493.
  25. Mendes, R. (2004). Population Topologies and Their Influence in Particle Swarm Performance (PhD thesis). Universidade do Minho.
  26. Suganthan, Ponnuthurai N. (1999). "Particle swarm optimiser with neighbourhood operator". Proceedings of the 1999 Congress on Evolutionary Computation (CEC 99). Vol. 3. IEEE.
  27. Oliveira, M.; Pinheiro, D.; Andrade, B.; Bastos-Filho, C.; Menezes, R. (2016). "Communication Diversity in Particle Swarm Optimizers". International Conference on Swarm Intelligence. doi:10.1007/978-3-319-44427-7_7.
  28. SPSO, Particle Swarm Central.
  29. Miranda, V.; Keko, H.; Duque, Á. J. (2008). "Stochastic Star Communication Topology in Evolutionary Particle Swarms (EPSO)". International Journal of Computational Intelligence Research (IJCIR). 4 (2): 105–116.
  30. Clerc, M. (2006). Particle Swarm Optimization. ISTE (International Scientific and Technical Encyclopedia).
  31. Yin, P.; Glover, F.; Laguna, M.; Zhu, J. (2011). "A Complementary Cyber Swarm Algorithm". International Journal of Swarm Intelligence Research (IJSIR). 2 (2): 22–41.
  32. Elshamy, W.; Rashad, H.; Bahgat, A. (2007). "Clubs-based Particle Swarm Optimization". IEEE Swarm Intelligence Symposium 2007 (SIS 2007). Honolulu, HI. pp. 289–296.
  33. Cleghorn, Christopher W. (2014). "Particle Swarm Convergence: Standardized Analysis and Topological Influence". Swarm Intelligence Conference.
  34. Van den Bergh, F. "A convergence proof for the particle swarm optimiser". Fundamenta Informaticae.
  35. Bonyadi, Mohammad Reza; Michalewicz, Z. (2014). "A locally convergent rotationally invariant particle swarm optimization algorithm". Swarm Intelligence. 8 (3): 159–198. doi:10.1007/s11721-014-0095-1.
  36. Zhan, Z.-H.; Zhang, J.; Li, Y.; Shi, Y.-H. (2011). "Orthogonal Learning Particle Swarm Optimization". IEEE Transactions on Evolutionary Computation. 15 (6): 832–847. doi:10.1109/TEVC.2010.2052054.
  37. Monson, C. K.; Seppi, K. D. (2005). "Exposing Origin-Seeking Bias in PSO". Proceedings of GECCO'05. pp. 241–248.
  38. Spears, W. M.; Green, D. T.; Spears, D. F. (2010). "Biases in Particle Swarm Optimization". International Journal of Swarm Intelligence Research. 1 (2): 34–57.
  39. Wilke, D. N.; Kok, S.; Groenwold, A. A. (2007). "Comparison of linear and classical velocity update rules in particle swarm optimization: notes on scale and frame invariance". International Journal for Numerical Methods in Engineering. 70: 985–1008.
  40. SPSO 2011, Particle Swarm Central.
  41. Bonyadi, Mohammad Reza; Michalewicz, Z. (2014). "SPSO 2011: analysis of stability, local convergence, and rotation sensitivity". Proceedings of GECCO 2014 (best paper award in the ACSI track): 9–16.
  42. Bonyadi, Mohammad Reza; Michalewicz, Z. (2014). "An analysis of the velocity updating rule of the particle swarm optimization algorithm". Journal of Heuristics. 20 (4): 417–452. doi:10.1007/s10732-014-9245-2.
  43. Zambrano-Bigiarini, M.; Clerc, M.; Rojas, R. (2013). "Standard Particle Swarm Optimisation 2011 at CEC-2013: A baseline for future PSO improvements". 2013 IEEE Congress on Evolutionary Computation (CEC).
  44. Lovbjerg, M.; Krink, T. (2002). "The LifeCycle Model: combining particle swarm optimisation, genetic algorithms and hillclimbers". Proceedings of Parallel Problem Solving from Nature VII (PPSN). pp. 621–630.
  45. Niknam, T.; Amiri, B. (2010). "An efficient hybrid approach based on PSO, ACO and k-means for cluster analysis". Applied Soft Computing. 10 (1): 183–197. doi:10.1016/j.asoc.2009.07.001.
  46. Zhang, Wen-Jun; Xie, Xiao-Feng (2003). "DEPSO: hybrid particle swarm with differential evolution operator". IEEE International Conference on Systems, Man, and Cybernetics (SMCC), Washington, DC, USA. pp. 3816–3821.
  47. Zhang, Y.; Wang, S. (2015). "Pathological Brain Detection in Magnetic Resonance Imaging Scanning by Wavelet Entropy and Hybridization of Biogeography-based Optimization and Particle Swarm Optimization". Progress in Electromagnetics Research. 152: 41–58.
  48. Lovbjerg, M.; Krink, T. (2002). "Extending Particle Swarm Optimisers with Self-Organized Criticality". Proceedings of the Fourth Congress on Evolutionary Computation (CEC). 2. pp. 1588–1593.
  49. Xinchao, Z. (2010). "A perturbed particle swarm algorithm for numerical optimization". Applied Soft Computing. 10 (1): 119–124. doi:10.1016/j.asoc.2009.06.010.
  50. Xie, Xiao-Feng; Zhang, Wen-Jun; Yang, Zhi-Lian (2002). "A dissipative particle swarm optimization". Congress on Evolutionary Computation (CEC), Honolulu, HI, USA. pp. 1456–1461.
  51. Cheung, N. J.; Ding, X.-M.; Shen, H.-B. (2013). "OptiFel: A Convergent Heterogeneous Particle Swarm Optimization Algorithm for Takagi-Sugeno Fuzzy Modeling". IEEE Transactions on Fuzzy Systems. doi:10.1109/TFUZZ.2013.2278972.
  52. Nobile, M.; Besozzi, D.; Cazzaniga, P.; Mauri, G.; Pescini, D. (2012). "A GPU-Based Multi-Swarm PSO Method for Parameter Estimation in Stochastic Biological Systems Exploiting Discrete-Time Target Series". Evolutionary Computation, Machine Learning and Data Mining in Bioinformatics. Lecture Notes in Computer Science. 7264. pp. 74–85.
  53. Zhan, Z.-H.; Zhang, J.; Li, Y.; Chung, H. S.-H. (2009). "Adaptive Particle Swarm Optimization". IEEE Transactions on Systems, Man, and Cybernetics. 39 (6): 1362–1381. doi:10.1109/TSMCB.2009.2015956.
  54. Yang, X. S. (2008). Nature-Inspired Metaheuristic Algorithms. Luniver Press. ISBN 978-1-905986-10-1.
  55. Tu, Z.; Lu, Y. (2004). "A robust stochastic genetic algorithm (StGA) for global numerical optimization". IEEE Transactions on Evolutionary Computation. 8 (5): 456–470. doi:10.1109/TEVC.2004.831258.
  56. Tu, Z.; Lu, Y. (2008). "Corrections to 'A Robust Stochastic Genetic Algorithm (StGA) for Global Numerical Optimization'". IEEE Transactions on Evolutionary Computation. 12 (6): 781. doi:10.1109/TEVC.2008.926734.
  57. Kennedy, James (2003). "Bare Bones Particle Swarms". Proceedings of the 2003 IEEE Swarm Intelligence Symposium.
  58. Yang, X. S.; Deb, S.; Fong, S. (2011). "Accelerated particle swarm optimization and support vector machine for business optimization and applications". NDT 2011. Springer CCIS 136. pp. 53–66.
  59. http://www.mathworks.com/matlabcentral/fileexchange/?term=APSO
  60. Parsopoulos, K.; Vrahatis, M. (2002). "Particle swarm optimization method in multiobjective problems". Proceedings of the ACM Symposium on Applied Computing (SAC). pp. 603–607.
  61. Coello Coello, C.; Salazar Lechuga, M. (2002). "MOPSO: A Proposal for Multiple Objective Particle Swarm Optimization". Congress on Evolutionary Computation (CEC'2002). pp. 1051–1056.
  62. Roy, R.; Dehuri, S.; Cho, S. B. (2012). "A Novel Particle Swarm Optimization Algorithm for Multi-Objective Combinatorial Optimization Problem". International Journal of Applied Metaheuristic Computing (IJAMC). 2 (4): 41–57.
  63. Kennedy, J.; Eberhart, R. C. (1997). "A discrete binary version of the particle swarm algorithm". Conference on Systems, Man, and Cybernetics. Piscataway, NJ: IEEE Service Center. pp. 4104–4109.
  64. Clerc, M. (2004). "Discrete Particle Swarm Optimization, illustrated by the Traveling Salesman Problem". New Optimization Techniques in Engineering. Springer. pp. 219–239.
  65. Clerc, M. (2005). Binary Particle Swarm Optimisers: toolbox, derivations, and mathematical insights. Open Archive HAL.
  66. Jarboui, B.; Damak, N.; Siarry, P.; Rebai, A. R. (2008). "A combinatorial particle swarm optimization for solving multi-mode resource-constrained project scheduling problems". Applied Mathematics and Computation. pp. 299–308.
  67. Chen, Wei-neng; Zhang, Jun (2010). "A novel set-based particle swarm optimization method for discrete optimization problem". IEEE Transactions on Evolutionary Computation. 14 (2): 278–300. doi:10.1109/tevc.2009.2030331.

External links

  • Particle Swarm Central is a repository for information on PSO. Several source codes are freely available.
  • A brief video of particle swarms optimizing three benchmark functions.
  • Simulation of PSO convergence in a two-dimensional space (Matlab).
  • Applications of PSO.
  • Automatic Calibration of a Rainfall-Runoff Model Using a Fast and Elitist Multi-objective Particle Swarm Algorithm.
  • Particle Swarm Optimization (see and listen to Lecture 27).
  • Links to PSO source code.
