
Constrained Particle Swarm Optimization

version 1.31.4 (40.6 KB) by Sam
Implementation of a PSO algorithm with the same syntax as the Genetic Algorithm Toolbox.


Updated 05 Nov 2018

From GitHub


Previously titled "Another Particle Swarm Toolbox"

Particle swarm optimization (PSO) is a derivative-free global optimum solver. It is inspired by the surprisingly organized behaviour of large groups of simple animals, such as flocks of birds, schools of fish, or swarms of locusts. The individual creatures, or "particles", in this algorithm are primitive, knowing only four things: (1) their current location in the search space, (2) their fitness value at that location, (3) their previous personal best location, and (4) the best location found so far by any particle in the "swarm". There are no gradients or Hessians to calculate. Each particle continually adjusts its speed and trajectory in the search space based on this information, moving closer to the global optimum with each iteration. As seen in nature, this computational swarm displays a remarkable level of coherence and coordination despite the simplicity of its individual particles.
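As a rough sketch of the idea (using the canonical inertia-weight update from the PSO literature; the toolbox's internal update rule, parameter defaults, and constraint handling may differ), a minimal swarm minimizing a 2-D sphere function looks like this:

```matlab
% Minimal PSO sketch on a 2-D sphere function. Illustrative only: the
% toolbox's actual update rule, defaults, and constraint handling differ.
fitness = @(x) sum(x.^2, 2) ;            % vectorized: one row per particle
n = 20 ; d = 2 ;                         % swarm size, problem dimension
x = 4*rand(n, d) - 2 ;                   % random positions in [-2, 2]
v = zeros(n, d) ;                        % initial velocities
pBest = x ; pBestF = fitness(x) ;        % personal best locations/values
[gBestF, gi] = min(pBestF) ;             % swarm (global) best
gBest = pBest(gi, :) ;
w = 0.7 ; c1 = 1.5 ; c2 = 1.5 ;          % inertia, cognitive, social weights
for k = 1:100
    % Each particle is pulled toward its own best and the swarm's best.
    v = w*v + c1*rand(n,d).*(pBest - x) + c2*rand(n,d).*(gBest - x) ;
    x = x + v ;
    f = fitness(x) ;
    better = f < pBestF ;                % update personal bests
    pBest(better, :) = x(better, :) ;
    pBestF(better) = f(better) ;
    [fmin, gi] = min(pBestF) ;           % update global best
    if fmin < gBestF
        gBestF = fmin ; gBest = pBest(gi, :) ;
    end
end
```

The four pieces of information listed above appear directly in the velocity update: the particle's position and fitness, its personal best, and the swarm best.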

Ease of Use
If you are already using the Genetic Algorithm (GA) included with MATLAB's Global Optimization Toolbox, this PSO toolbox will save you a great deal of time. It can be called from the MATLAB command line using the same syntax as the GA, with some additional options specific to PSO, which allows a high degree of code reuse between the PSO toolbox and the GA toolbox. Certain GA-specific parameters, such as crossover and mutation functions, naturally do not apply to the PSO algorithm. However, many commonly used Genetic Algorithm Toolbox options may be used interchangeably with PSO, since both are iterative population-based solvers. See >> help pso (from the ./psopt directory) for more details.
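As a sketch of the GA-style calling convention (the positional argument order below mirrors ga; see >> help pso for the authoritative signature):

```matlab
% GA-style call on Rosenbrock's function with bound constraints only.
% Argument order mirrors ga: pso(fitnessfcn, nvars, Aineq, bineq, Aeq, beq,
% LB, UB, nonlcon, options) -- see >> help pso for the exact signature.
fitnessfcn = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2 ;
nvars = 2 ;
LB = [-1.5, -2] ;
UB = [2, 2] ;
[x, fval] = pso(fitnessfcn, nvars, [], [], [], [], LB, UB) ;
```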

* Support for distributed computing using MATLAB's parallel computing toolbox.
* Full support for bounded, linear, and nonlinear constraints.
* Modular and customizable.
* Binary optimization. See PSOBINARY function for details.
* Vectorized fitness functions.
* Solver parameters controlled using 'options' structure similar to existing MATLAB optimization solvers.
* User-defined custom plots may be written using same template as GA plotting functions.
* Another optimization solver may be called as a "hybrid function" to refine PSO results.

A demo function is included, with a small library of test functions. To run the demo, from the psopt directory, call >> psodemo with no inputs or outputs.

Bug reports and feature requests are welcome.

Special thanks to the following people for contributing code and bug fixes:
* Ben Xin Kang of the University of Hong Kong
* Christian Hansen of the University of Hannover
* Erik Schreurs from the MATLAB Central community
* J. Oliver of Brigham Young University
* Michael Johnston of the IRIS toolbox
* Ziqiang (Kevin) Chen

References

* J Kennedy, RC Eberhart, YH Shi. Swarm Intelligence. Academic Press, 2001.
* Particle Swarm Optimization.
* RE Perez, K Behdinan. Particle swarm approach for structural design optimization. Computers and Structures 85 (2007) 1579–1588.
* SM Mikki, AA Kishk. Particle Swarm Optimization: A Physics-Based Approach. Morgan & Claypool, 2008.

Addendum A
Nonlinear inequality constraints in the form c(x) ≤ 0 and nonlinear equality constraints of the form ceq(x) = 0 have now been fully implemented. The 'penalize' constraint boundary enforcement method is now default. It has been redesigned and tested extensively, and should work with all types of constraints.

See the following document for the proper syntax for defining nonlinear constraint functions:
To see a demonstration of nonlinear inequality constraints using a quadrifolium overlaid on Rosenbrock's function, run PSODEMO and choose 'nonlinearconstrdemo' as the test function.
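As a sketch, nonlinear constraint functions follow the same two-output convention used by fmincon and ga (the constraint values below are made-up illustrations, not taken from the toolbox):

```matlab
% Nonlinear constraint function in the fmincon/ga convention (sketch).
% c(x) <= 0 collects the inequalities; ceq(x) = 0 collects the equalities.
function [c, ceq] = mycon(x)
    c   = x(1)^2 + x(2)^2 - 1 ;   % inequality: stay inside the unit disc
    ceq = x(1) - x(2) ;           % equality: x1 = x2
end
```

Such a function would then be passed as the nonlcon argument, e.g. pso(fitnessfcn, 2, [], [], [], [], LB, UB, @mycon).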

Addendum B
See the following guide in the GA toolbox documentation to get started on using the parallel computing toolbox.

Addendum C
If you are just starting out and hoping to learn to use this toolbox for work or school, here are some essential readings:
* MATLAB's Optimization Toolbox:
* MATLAB's Global Optimization Toolbox:
* MATLAB's Genetic Algorithm:

Addendum D
There is now a particle swarm optimizer included with the Global Optimization Toolbox. If you have a recent version of the Global Optimization Toolbox installed, you will need to set the path appropriately in your code to use this toolbox.

Cite As

S Chen. Constrained Particle Swarm Optimization (2009-2018). MATLAB File Exchange.

Comments and Ratings (252)


how do i run the code... :((

zakaria bellahcene

Error in pso (line 429)
state.Score(i) = fitnessfcn(state.Population(i,:)) ;

li Zeng


Hi sam
Thank you for your effort in putting this together. When I run it with my code, it doesn't advance past "Finding feasible initial positions". I downloaded version 1.31.4, and I wonder if this has been fixed there. Could you please advise?

Iurii Kazantcev

zhizhi liu

hi sam
thanks a million for your program. However, I have a problem with my model: it doesn't advance past "Finding feasible initial positions". I need help on how to add the initial position and assist the algorithm to run properly.

Hi Sam, thanks for this great work, please, how do we determine the next point for evaluation in pso, I mean a function like acquisition function in bayesian optimization.

liping ren


Nayan Rawat

What are the various options used?

Chandra Sekar S

I am not able to install this toolbox from the Get Add-Ons option. Can anyone suggest how to install this package?

shuai huang

xiangyue wang


Hey Sam, like Kevin wrote on 21 Oct 2017, I too am looking for the possibility of your code handling integer/discrete inputs. Do you have any plans to include this in a future release? For MATLAB's "ga" they solved it in the following way: (scroll to "Add Discrete Non-Integer Variable Constraints"). Thanks a lot!

Kingsley Cool

From my observation, when the bound constraints are violated, changing the 'ConstrBoundary' option to something other than its default of 'penalize' solves the problem. Despite this little bug, it's a powerful toolbox!

Jinghui HAN

Jinghui HAN

>> clear
>> fun = @(x) x(1)^2+x(2)^2+x(3)^2+x(4)^2

fun =

function_handle with value:


>> y = pso(fun, 4, [], [], [1 1 1 1], [100],[0 0 0 0],[] )


The value of the fitness function did not improve in the last 50 generations and maximum constraint violation is less than 1e-06, after 51 generations.

Final best point: [0 0 0 100]
How to fix this problem? Thanks.

Jinghui HAN

How to fix this problem? Thanks.

Rodney Ngone

Great work on this! However, I do agree with Allen Hu's observation that bound constraints are violated sometimes. Are you able to fix this?

Allen hu

Dear Sam
This code seems to violate the bound constraints sometimes. How can this bug be fixed?


Bug fix for fmincon not being properly detected when checking for a valid hybrid function. Thanks to Martin Hallmann from Germany for pointing this out.
Additionally, thanks to Luis Salinas San Martin and "M B" for pointing out the previous error regarding population initialization.


I have been occupied with my medical training. The errors described below relating to population initialization have been fixed in today's release.

Guido Francesco Frate

When using nonlinear constraints it issues the error signaled by the two commenters below (Luis Salinas and M B). Please fix it, or provide a guide to fixing it.

liruixin liruixin

liruixin liruixin

cannot deal with constraints

kaiheng sang

Bowen Zhang

I am trying to use this toolbox to solve the following problem:
fitnessfcn = @(x)(10*x(:,1)+15*x(:,2)+20*x(:,3)+0*x(:,4)+0*x(:,5)+0*x(:,6));
nvars = 6 ;
LB=[0 ;0 ;0 ;0 ;-2*pi;-2*pi];
UB=[0.20;0.19;0.05;0 ; 2*pi; 2*pi];
Aineq = [0 0 0 1 -1 0; 0 0 0 -1 1 0; 0 0 0 0 1 -1; 0 0 0 0 -1 1; 0 0 0 1 0 -1; 0 0 0 -1 0 1];
bineq = [1.25*0.2; 1.25*0.2; 2*0.25; 2*0.25; 1.25*0.4; 1.25*0.4];
Aeq = [1 0 0 1.5 -1 -0.5; 0 1 0 -1 4 -3; 0 0 1 -0.5 -3 3.5];
beq = [0.10;0.10;0.15]

when running the code then I got the following error message:

Error using optimset (line 249)
Unrecognized parameter name 'Simplex'. Please see the
options table in the documentation for a list of
acceptable option parameters. Note that some
parameters are only supported by OPTIMOPTIONS. Link to
options table

Error in psocheckinitialpopulation (line 25)
state.LinprogOptions =

Error in pso (line 338)
[state,options] =


Arash Rahmani

How can use Nonlinear equality constraints in PSO?


Error when starting demo and choosing nonlinearconstrdemo as function
Error using optimset (line 249)
Unrecognized parameter name 'Simplex'. Please see the options table in the documentation for a list of
acceptable option parameters. Note that some parameters are only supported by OPTIMOPTIONS. Link to
options table

Error in psocheckinitialpopulation (line 25)
state.LinprogOptions = optimset('Simplex','off',...

Error in pso (line 338)
[state,options] = psocheckinitialpopulation(state,...

Error in psodemo (line 72)
The code checker in the editor shows a 'red' error in 'psocheckinitialpopulation.m' : 'Simplex has been removed. Use 'interior-point' or 'dual-simplex' instead.

freed chen

This is a great job. It's easy to use with my problem.

But there are some shortcomings. When there are many decision variables, a larger population is needed to obtain better results, which greatly slows the calculation. In our case, we have 20 decision variables, set the PopulationSize to 2000, and it needs more than 20 s.

Are you familiar with GPU acceleration algorithms? If a CUDA-based acceleration method were used, I think the calculation speed would be greatly improved.

I hope that Sam will allow others to modify this toolkit, which would make the quality improve faster.


% Check bounds before proceeding
% ---------------------------------------------------------------------
if ~all([isempty([Aineq,bineq]), isempty([Aeq,beq]), ...
isempty([LB;UB]), isempty(nonlcon)])
state = boundcheckfcn(state,Aineq,bineq,Aeq,beq,LB,UB,nonlcon,...
options) ;
end % if ~isempty
% ---------------------------------------------------------------------

%----------added by Linfei Yin
for i=1:n
%----------added by Linfei Yin


Mugheera Malik

Sir can you please tell me how to put constraints in PSO ???

Ruby Bhatt

Hi Sam,

Have you performed Particle Swarm Optimization using Matrix Algorithm ?? If some work is being done in that direction, please share the code !!!!!! Thanks .....

shahzad hameed

Peter Peter

nazli melek

Dear Sam
I have a problem when running psodemo. I get the following error:
Not enough input arguments.
how could I solve it?

mingzhi chen

Thanks a lot

Mohammad Reza Mirzaei

thanks a million


In the Global Optimization Toolbox, the `ga` algorithm can handle a small set of mixed integer nonlinear optimization problems. If possible, can you add such a feature to your future plan list? Thank you!

Hanis Nasir


Dulara De Zoysa

Le ducdao

Dear Sam
could you tell me how to use your pso to solve the maximize problem

Qingsong Li

Hi, Sam,
Thank you for your program, but there are some problems when I use it. In the calculation process, only the initial particles satisfy the linear constraints; after the position update, the particles no longer satisfy them. The program has no method to handle this.


Why does this require the optimization toolbox? I am looking for 3rd party Matlab code which doesn't require any toolboxes, so that I can solve a problem with a heuristic (like a PSO or GA) without having to buy the optimization toolbox.

yue gu

how do I use this program?
Does it have a typical problem structure? I have read all of the help files and the demo; however, I still don't know how to set up my problem. What is the structure of a problem?
Even the demonstration can't execute.
The error "Subscript indices must either be real positive integers or logicals." always returns in MATLAB.
x=[0 0];
nvars = 2 ;
LB = [-1.5,-2] ;
UB = [2,2] ;
fitnessfcn = 100*(x(2)-x(1)^2)^2+(1-x(1))^2 ;
options.PopInitRange = [[-2;4],[-1;2]] ;

options.PlotFcns = {@psoplotbestf,@psoplotswarmsurf} ;
options.Generations = 200 ;
options.DemoMode = 'on' ;
options.KnownMin = [1 1] ;
options.OutputFcns = {} ;
options.ConstrBoundary = 'penalize' ;
options.UseParallel = 'never' ;
[x,fval] = pso(fitnessfcn,nvars);

mariam elloumi

Dear Sam!
Thank you very much for providing this very nice optimization toolbox.
In my optimization problem I have 4 optimization parameters. I want to plot each of them against generation (generation vs var(1), generation vs var(2), generation vs var(3), etc.). I also want to plot the best fitness value against generation. Can you help me?

reza dd

hi sam
I tried to run this on MATLAB 9.2 but it sent this error:

Error using optimset (line 245)
Simplex has been removed. Use other LINPROG options instead.

Error in psocheckinitialpopulation (line 26)
state.LinprogOptions = optimset('Simplex','off',...

how can i resolve it ?!


Martin Hallmann

Hey Sam, what a great job!!! I've got the same problem with hybridization that Kyriaki reported. In MATLAB R2014a the fmincon solver can be found, but in newer versions an error occurs because the "exist" function can't detect it. For patternsearch everything is fine, and using GA with fmincon also works. Do you have an idea why the PSO toolbox can't call the fmincon solver?

Thank you and best regards from Germany!

ahmed abdulsahib

where is your code ... I did not find it

JiaJun Pan

zhou kangpeng

yang xk

i am learning.


Lots of questions built up over the past year, sorry I've had no time to respond.

Alice: I haven't used the genetic algorithm in a long time, I'm sure it's been changed. Let me know what differences there are.

Rabin and Dilip: I won't be able to implement multiobjective optimization. Training to be a surgeon is now taking up all of my time.

Genevois: I'm sure there's a way to set the initial population by using psooptimset to generate an options structure that can then be passed to the pso function. Is it not working for you?

Pan: I'll look at it again. It is calculating the AVERAGE improvement in the fitness over a given number of generations. It looks like the denominator should just be options.StallGenLimit instead of (k - options.StallGenLimit).

Kyriaki: did you change computers or install new MATLAB version that doesn't have the optimization toolbox? I put that error message in there in case the fmincon function (which is included in the optimization toolbox) couldn't be found in the namespace.

sky miko: I think your error may be related to what Pan found as mentioned above. I'll make a change as soon as I manage to install a new version of MATLAB.
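Under the fix Sam describes above, the stall check from pso (line 486) would read roughly as follows (a sketch with dummy data, not the shipped code):

```matlab
% Corrected stall check (sketch, per the discussion above): the AVERAGE
% improvement in global best fitness over the last StallGenLimit
% generations, with options.StallGenLimit as the denominator.
options.StallGenLimit = 50 ;
options.TolFun = 1e-6 ;
state.fGlobalBest = cumsum(-rand(1, 100)) ;  % dummy best-fitness history
k = 100 ;                                    % current generation
imprvchk = k > options.StallGenLimit && ...
    (state.fGlobalBest(k - options.StallGenLimit) - ...
     state.fGlobalBest(k)) / options.StallGenLimit < ...
    options.TolFun ;
```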


hello, Sam, I think the syntax of handle of the fitnessFun is different from the GA tool box when the fitnessFun is not one.

Kun Wang


hi sam

multiobjective?

genevois pierre

hi, why can't the initial population be set?

Dilip Kumar Roy

Hi Sam, Thanks for this great code. Are you planning to implement this code for multi-objective problem setting?

Thank you once again.


Hi Sam,

Thanks for creating this wonderful PSO toolbox, which is very useful, powerful and easy to implement.

Recently I found a typo in the pso file. In line 486 of this file, the code for checking improvement is written as:
imprvchk = k > options.StallGenLimit && ...
(state.fGlobalBest(k - options.StallGenLimit) - ...
state.fGlobalBest(k)) / (k - options.StallGenLimit) < ...
options.TolFun ;

I suspect that the denominator (k - options.StallGenLimit) should be state.fGlobalBest(k - options.StallGenLimit).

Please let me know whether my understanding is correct or not. Thank you.


Kyriaki Kostoglou

Perfect. Hybrid function used to work, but now it displays the following msg:
"Hybrid function fmincon cannot be found. Check toolboxes"


Hi sam, I want to find the optima solution in integer space. How to use the program?

hurd cheng



My code does not advance from "Finding feasible initial positions..."

lb=[-0.4143 -0.4143 -0.4143];
ub=[0.4143 0.4143 0.4143];


Thanks for your time!

sky miko

Hi Sam
Need your help!!!!!!
How to set the MaxIter?
[x, fval,exitflag,output,population,scores] =pso(@diertimubiao,3,[],[],[],[],lb,ub,[],options);

Swarming...Operands to the || and && operators must be convertible to logical scalar values.

Error in pso (line 486)
imprvchk = k > options.StallGenLimit && ...

Error in dierti (line 10)
[x, fval,exitflag,output,population,scores] =pso(@diertimubiao,3,[],[],[],[],lb,ub,[],options);

sky miko

hi Sam,
when I use your PSO toolbox on a problem, it couldn't find any answer, while ga or fmincon give me back some (unsteady) answers. Could the way you handle the nonlinear problem be wrong?


This program actually is very useful.

But there are also some things which have not been implemented correctly (in my opinion).

E.g. when you have linear inequality constraints, the algorithm goes like this:
1) Create a uniform distribution within the lower and upper bounds.
2) Find out which particles violate the constraints.
3) Shift these particles onto the closest boundary of the valid area.

This does not lead to a uniformly distributed starting population,
because, depending on the inequality constraints, more or fewer particles end up on the boundary.

Another problem is that the reflective mode doesn't work if you have inequality constraints.
It works if you do not have inequality constraints, but not correctly (the point of impact is not calculated correctly).

So I think there are some things concerning the "constraints feature" that can be improved, especially because the word "constrained" is part of the program's name.

jackie zhu


Amani mahdy

Dear Sam;

THANKS SO MUCH BTW. But I have one question: what if the variable x contains some integer or binary values AND continuous ones? Does the code deal with that?

Also, how can we implement the nonlinear constraints? I mean, if I have x(1)*x(2) = 5, is nonlcon = x(1)*x(2) - 5?

thanks so much <3<3<3

genevois pierre

hi, I use pso with nonlinear constraints. On first trials it didn't run; then I replaced the Algorithm in state.LinprogOptions with sqp and it went through this step, up to another error in psocheckpopulationinitrange. Can I send you my objective and constraint functions? They "pass" ga without problems.

mohsen ebi

Hi, I want to increase the number of parameters but I can't. Please help me; this is the error that emerges:


Dear Sam:
I am a student from China, studying at HHU university. I want to say your program is very useful to me! However, I have one question; it would be great if you could help me solve it.
In my case, I need to use linear inequality and linear equality constraints. For the inequality constraints, I used a*x<b and -a*x>-c instead of c<a*x<b. However, when I run the code, it comes back with "Error using psocheckinitialpopulation (line 47)
Problem is infeasible due to constraints
" and "Error in pso (line 338)
[state,options] = psocheckinitialpopulation(state,...". However, if I run the code a second, third, fourth... time, the result may come out randomly.

Besides, the warning "Warning: The 'active-set' algorithm will be removed in a future release. To avoid this warning or a future error, choose a different algorithm:
'interior-point' or 'dual-simplex'.

> In linprog at 366
In sdnchen-psomatlab-b4c4a1e\private\psocheckinitialpopulation at 41
In pso at 338
In runpso at 94" comes out many times. Do you know why?
I am looking forward to your reply!
My email is .

Gabriele Lubin

It's not working.

Orcun Kor

Hi there,
I'm trying to run PSO with a single variable; however, the code finds a different optimum each time. Another problem is that my design variable does not respect the lower and upper bounds.
Help please :)


Hi Monique, I've seen particle swarm optimization done with discrete variables before, but I haven't had time to implement it in my toolbox. It's been a few years since I've looked at the problem; I'll have to dig up the paper where it's described.

Monique Bakker

Hi Sam, is there a way to use pso.m but with integer variables only?



Interesting, they seem to have made some modifications to the code and improved the documentation, but I do recognize substantial parts of it as mine.

The open source license encourages people to do things like this, and while I would have appreciated them not deleting my name from the documentation as a professional courtesy, there's nothing in the license that says that they have to. There's also nothing preventing me from merging their improvements back into my own code.

Thanks for the heads up!


Hi Sam,

You may be interested in this.

Not sure there is any links/differences between yours and the above one.

work wolf

Rejeki Tambun

Good morning sir
I want to use this program to get optimal placement and sizing of Distributed Generation in a Distribution System.
Can anybody help me?

Rejeki Tambun



I am looking for psomultiobjective; please guide me on how to use psomultiobjective in the toolbox.



Hello Sam,

Thank you very much for your "Another Particle Swarm Toolbox" which is exactly what I'm looking for. However, I have several questions when using the toolbox.
1. In MATLAB 2015a, the "matlabpool" command cannot be used anymore. Do you have updates on this?
2. When I set 'Display' to 'diagnose', I cannot see the information for each iteration in the command window. How can I set it correctly so that I get the information for each generation?
3. MATLAB has also published a particle swarm toolbox, though without constraints. I really do appreciate your work on constrained PSO. However, I cannot get the same convergence speed between yours and MATLAB's PSO toolbox, even though I have no constraints and both use default settings. I'm not sure if I have made the settings wrong...

Thank you very much!
Really appreciate your great contributions.

Best regards,

Ahmadullah Khan

what is meant by best score and best score in the particle swarm optimization algorithm.


I have contacted you with a question about my fitness function. Hope for a positive answer. : )

Onno Broekmans

Onno Broekmans


How can I use this toolbox for object detection and tracking in video? Is there any readymade prog. available for the same using PSO? Pl. help

Jian Guo

Thank you for your share.

Maria Esmeral

Good Morning,

I want to know if I need the optimization toolbox in order to use the pso.m function?

Thank you for your help


Dear Sam !
thank you very much for providing very nice optimizing toolbox .
in my optimization problem i have 4 optimizing parameter. i want to plot it with generation (generation vs var(1) ,generation vs var(2),generation vs var(3) etc ...)



I found something in one of your comments here. On 15 May 2013

"I've also made a small change to ensure that only feasible solutions are selected as global optima when the penalty-based constraint enforcement method is used."

What does this mean? Can we obtain a relatively optimal result among all iterations? Is there an example of this kind of application, or is it just set with options.ConstrBoundary = 'penalize'? Thanks.


And another problem about population size and generation. I assigned this kind of value to these two variables.
f.option.PopulationSize = 500000; % Same to GA.
f.options.Generations = 1000 ;

But I always obtain the result like this:
rt2 =

x: [1.8990 0.9206 2.0019 -0.3474 -0.0901]
fval: -1.2477
exitflag: 3
output: [1x1 struct]
population: [40x5 double]
scores: [40x1 double]
data1: [50x5 double]
real_v: [1x50 double]

The population dimension is 40*5.
I called the function this way:

fitnessfcn = str2func('mytest');
options = fitnessfcn('init') ;

issue1 = options;

issue1.fitnessfcn = fitnessfcn;
issue1.nvars = 5;
issue1.options.DemoMode = 'fast' ;

[x,fval,exitflag,output,population,scores] = pso(issue1);


I fixed that problem.
But one more stupid question about the objective function. Maybe I didn't understand it correctly, since I have been using GA. In GA, it always generates the maximum value of the objective function. But here with PSO, how can I obtain a maximum or a minimum value? Do I need to specify some constraints with parameters such as Aineq and bineq? Anyone help? Thanks.



Does the toolbox work for problems where there are more than 2 variables?
For example, I want to implement something like
y = a1*x1 + a2*x2 + ... + an*xn
n >= 5
In my problem, I need to find several coefficients by optimizing the first equation. That means I have 'n' variables, and I need to find 'n' coefficients as well. But I ran into a problem with your toolbox. I have already sent you an email with my code. Thanks.


Molong Duan



psoplotswarm is meant to plot particle positions in a 3-dimensional axes. I use it in the PSODEMO file to make it easier to visualize how the swarm behaves. IIRC the ijk variable is a 3-element array where you specify which dimension of your problem you want to plot (for example, if you have a problem with 12 dimensions and you want to plot the particle positions along the 4th, 7th, and 11th dimensions on a 3D plot).


Stephen Bush

What is the meaning of the ijk parameter of psoplotswarm(options,state,flag,ijk)? What is psoplotswarm() intended to plot?

As further background, I'm using psobinary() to optimize a two-dimensional array (adjacency matrix for a network).


Hi Sam,

Great tool, I have been using the 2010 release with no problems. I just updated to the latest release (20140330) but found that my upper/lower bounds LB/UB are now somehow being ignored. I noticed that this issue was brought up by Erik and others below, and you seemed to indicate that it was fixed. Was it fixed in this release? This bug is serious because my LB is positive yet the PSO is straying into negative values. I hope this can be fixed soon. I am running R2012b.



What about the constraints handling?


Dear Sam,
Thanks for the powerful PSO toolbox. What about the PSO technical information, such as the PSO algorithm itself?
A document about the Particle Swarm Optimization Toolbox for use with MATLAB is needed.


Hi parinya,

Can you email me a copy of your nonlinear constraints function through the Contact Author link? I will have a look at it.



Dear Sam,

Thanks for the powerful PSO toolbox. I have an error after running the code with a nonlinear constraint. The error message is:
"Problem is infeasible due to nonlinear constraints"

I have checked that my nonlinear constraints are satisfied by the initial population that I supplied. Where could I look to fix this problem?




Thanks for pointing that out, Aman.

b should really be a column vector [2;1] so that it will fit the equation

[1 0 ; 0 1]*[x1; x2] ≤ [2; 1]

however it looks like GA is robust enough to check for and correct that error.

I will add a small piece of input-checking code in the next release so that PSO will yield the same behavior as GA.


Sam, no problem :)

Aman Parkash

Sam, if the PSO toolbox syntax is the same as the GA toolbox, then I have found a little bug (which does not occur with GA using the same syntax). For example, if I compare and run both the GA and PSO syntax for a two-variable objective function: pso(@(x)(x(1)^2+x(2)^2+x(1)),2,[1 0;0 1],[2 1]) shows "horzcat" and "psocheckinitialpopulation" errors,
...BUT ga(@(x)(x(1)^2+x(2)^2+x(1)),2,[1 0;0 1],[2 1]) produces a result


Aman, I'm glad your problem is working properly now. Sorry for the inconvenience! Erik, you are very welcome; is it OK if I add your name to the list of acknowledgements for this toolbox?

Aman Parkash

Erik, Sam, thanks for finding and fixing this bug... now all my results are coming out within range.

Aman Parkash

I mean that despite giving bounds within the positive range, some results were coming out of range, i.e. negative ones too.


Sam, thanks for fixing the bug so quickly!


Erik, I have discovered a typo in one of the helper functions for PSO which is causing the bug that you describe. I have submitted an update which should appear over the next few days. This should also improve performance for anyone who is using lower and upper bound constraints for their optimization problems.


kerolos, there is an option to use binary inputs (call PSOBINARY instead of PSO) however I haven't implemented any integer inputs yet.

Erik, that is very curious behavior and I am getting the same results as you. Even stranger is getting a negative result if I set LB to less than 1. I will have to do some debugging to get to the bottom of this.

Cristina, sorry I took so long to get back to you. I haven't actually tried to program Pareto fronts before, so I will not be of much help to your problem. I will look into developing a multiobjective version of this toolbox as time allows, however I cannot make any guarantees as medical school has other duties for me to attend to at this time. Let me know if you find a way to make it work!

Matthias, that behavior is a result of the PSO algorithm not preserving the best point of every generation (unlike the genetic algorithm, which does). The algorithm proposed in Kennedy et al's book that I referred to above does not include such elite-preserving behavior, therefore I left it out of my code to maintain fidelity to their code. Ideally the swarm would stabilize in a region close to the global maximum anyway, so most of the time this is not a problem. If you are getting wildly different values between the historical best point found and the final best point, then the swarm has likely been terminated before it has found a stable equilibrium and I would not rely on those results. If the difference is within a very small margin of error, then you may choose which result to accept.

Kirti Wanjale

Kirti Wanjale

Kirti Wanjale

Kirti Wanjale



this works great;

can I use any options to make integer inputs for the four dimensions?



kerolos, try setting LB to [0, 0, 0, 0] and UB to [1, 1, 1, 1] and avoid using the linear constraints altogether.


thanks for this great toolbox

I want some help using this toolbox for optimization of a 4-dimensional problem.

I want an example of Aeq and beq

to satisfy a constraint that all 4 dimension values will be between 0 and 1.

i tried this
% A=[ 1 ,-1 ;1 ,-1 ;1 ,-1 ;1 ,-1 ] , b=[ 1 ,0 ;1 ,0 ;1 ,0 ;1 ,0 ]
% A=[ 1 , 1 ,1,1 -1,-1,-1,-1] ,b= [ 1 , 1 ,1,1 0,0,0,0]

but no luck

Error using horzcat
Dimensions of matrices being concatenated are not consistent.

Error in psocheckinitialpopulation (line 36)
if (~isempty([Aineq,bineq]) && any(Aineq*state.Population(i,:)' - bineq > options.TolCon)) ...

Error in pso (line 338)
[state,options] = psocheckinitialpopulation(state,...

I read the help

% x = pso(fitnessfcn,nvars,Aineq,bineq)
% Linear constraints, such that Aineq*x <= bineq. Aineq is a matrix of size
% nconstraints x nvars, while b is a column vector of length nvars.

could not understand very well

thanks .



Hi Sam. Thanks for this great function. I have found a small bug I believe. If I try to find the lowest value of a 1/x function between 1 and 10:
it returns x = 1. When I set the bound to 1.01 and 10:
the lowest value is found at x = 10.


Hi Sam, thank you for the toolbox! I have tried to modify the code to get a multiobjective optimizer, by means of an adaptive weighted sum approach for the fitness. How can I write the code so that it writes out the Pareto points? I wrote, for each generation k:

for k=2:itr

if k>=2
% trypareto=(state.obj1(k)<state.obj1(1:(k-1)))&(state.obj2(k)<state.obj2(1:(k-1)));

% if trypareto


But it does not work well, as it writes out equal points. Does anyone have any suggestions? Thank you all.


Hi, Sam. I am currently using PSO to minimize my fitness function. I am using binary coding for the fitness function, which I have already run with GA, and it works fine. However, when I use your binary PSO on the same function, the best value in each generation looks like random values, with no pattern of gradually going down or at least some sign of minimizing. I am wondering why this is happening. The binary coding is fine for my case, since it works well with GA. Would you be so kind as to answer this question? Or I can post the details if you need them to diagnose my trouble. Hope to get your reply soon.



I think I found a crucial bug (version 20130702): in my program I record the test parameters whenever they fit better than in the iterations before. In my test, PSO stopped after fulfilling a break condition and gave me a final parameter set that fitted worse than an intermediate result I had recorded before.

sakthi priya

Hi Sam,
Thank you so much for the code. I want to know whether we can give a circuit netlist as input.


Aman Parkash, what have you defined for Aineq and bineq? Aineq must have the same number of rows as bineq.

agus mujianto, I did not design my code to run in a Simulink environment, but I have heard from other users who have tried it with success. Your question pertains to aspects of the Simulink model that you are working with, which is not part of my toolbox. Unfortunately I will not be able to provide you with technical support for something that I did not create. I hope you find your answer soon!

Aman Parkash

Hi sam
I am trying to run a single-objective problem with linear inequality constraints. If I use ga, the result comes out well within the defined ranges, but not with pso, which shows the following errors:

Error using horzcat
CAT arguments dimensions are not consistent.

Error in psocheckinitialpopulation (line 36)
if (~isempty([Aineq,bineq]) && any(Aineq*state.Population(i,:)' - bineq >
options.TolCon)) ...

Error in pso (line 338)
[state,options] = psocheckinitialpopulation(state,...


CognitiveAttraction: 0.5000
ConstrBoundary: 'soft'
AccelerationFcn: @psoiterate
DemoMode: 'off'
Display: 'final'
FitnessLimit: -Inf
Generations: 200
HybridFcn: []
InitialPopulation: []
InitialVelocities: []
KnownMin: []
OutputFcns: {}
PlotFcns: {}
PlotInterval: 1
PopInitRange: '[0;2]'
PopulationSize: 40
PopulationType: 'doubleVector'
SocialAttraction: 1.2500
StallGenLimit: '100'
StallTimeLimit: Inf
TimeLimit: Inf
TolCon: 1.0000e-06
TolFun: 1.0000e-06
UseParallel: 'never'
Vectorized: 'off'
VelocityLimit: []

So tell me, what should I do to satisfy these constraints and get a result? Thanks.

agus mujianto

hi dear sam:
I have a problem with particle swarm optimization. I have a Simulink file and I want to optimize some part of it with PSO.

The script:
% store names

% define the problem
X0=[5000 45000 5000 65]; % constraint violation (accel)

Where must I place dv_names and resp_names? The model contains a lot of variables, but I just want to optimize 4 of them.
Thank you


Hi, does anyone know where I can get a PSO toolbox that supports multi-objective optimization?

Aman Parkash

Hi dear Sam,
When I optimize the objective function, I run into the problem of how to write the syntax for the range of limits in a MATLAB script, e.g.
Please help me.


Sanjaya, I am glad that you were able to find the answer to your question. After thinking about your problem I have also thought of some improvements to the code that I will implement in the near future.

Natanael, the error looks like it is coming from the objective function which you supplied.

I have recently received some emails from more community members asking for help using this toolbox. As I have mentioned before, I cannot guarantee that I will be able to respond to your particular questions in a timely or sufficient manner. I have provided demonstration functions and comprehensive help documentation with this toolbox, as well as links to further information which is posted in the file description above. Please refer to those sources of information to find the answers to your questions.

If your question is time-sensitive and related to academic homework, please post it to the community newsgroup so that more people will be able to see and respond to your question.


Dear Sam,
First of all, I apologize for my strong words to you. I am very sorry. Yes, your code is very useful to me. Before you wrote, I had already set ConstrBoundary to 'soft' instead of 'penalize' and it worked. Thank you very much.

Natanael Acencio Rijo

When I try to optimize my function, it gives me this error. What could it be?

Swarming...Error using cost (line 3)
Not enough input arguments.

Error in @(p)cost

Error in pso (line 429)
state.Score(i) = fitnessfcn(state.Population(i,:)) ;


Sanjaya, my apologies, my answer to your second question from last week should have been to try setting the 'ConstrBoundary' option to 'soft' not 'penalize'. This will allow the optimizer to skip over evaluating any particle which finds its way out of the feasible design space. For your problem, that would mean that Simulink would not have to be called for particles which venture outside of your lower and upper bound constraints. I hope this new information does not come too late for you.
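For anyone else reading, that option change might look like the following sketch (the objective and bounds below are placeholders, not from the toolbox):

```matlab
% 'soft' skips evaluation of any particle that leaves the feasible region,
% so e.g. a Simulink model is never called with out-of-bounds inputs.
options = psooptimset('ConstrBoundary', 'soft');
fitnessfcn = @(x) sum(x.^2);            % placeholder objective
nvars = 3;
LB = zeros(1, nvars);                   % lower bounds
UB = ones(1, nvars);                    % upper bounds
x = pso(fitnessfcn, nvars, [], [], [], [], LB, UB, [], options);
```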


Sanjaya, regarding your second question, try setting the 'ConstrBoundary' option to 'penalize'. Regarding your first question, my code should be able to handle as many variables as you are willing to throw at it, but unfortunately it is not able to handle mixed-integer problems. The Genetic Algorithm included in MATLAB's Global Optimization Toolbox should be able to help you with that.

If you have a time-sensitive question (e.g. academic assignment due), please direct any questions to your professor, teaching assistant, or a classmate. Like most contributors to the File Exchange community, I am not a Mathworks employee and have other responsibilities that I must attend to. I cannot guarantee that I will have the chance to check this page on a regular basis to answer your questions.

Some resources that can help you with any problems that arise while using the toolbox include:
- Genetic Algorithm Toolbox documentation (
- The books, academic paper, and Wikipedia article listed in the "bibliography" section of the file description


Hello Sam,
It is very sad that you have not replied to my query yet. I am facing another problem with your code, which I will explain.
I am getting the fitness function from a Simulink file. The Simulink file will run only if the variables lie within the lower and upper bounds, but with your code the variables sometimes exceed the lower or upper bound, so the Simulink file is not able to execute and it shows an error. So please help me as soon as possible. Thanks.


Hello Sam,
I have to optimize 16 parameters, of which the first nine are in the range of zero to one, but the remaining seven variables are in the range of one to four (integers only). So please help me to use your code. Is your code useful for mixed-integer constraints? And is it useful for 16 variables?
Anticipating a quick reply. Thank you.


Nara and Natiolol,

I had thought about implementing an integer solving method, but never had the time to do so. After giving it some thought, I came up with one idea to get around this: you could convert your integers into binary form and use the psobinary solver.
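A rough sketch of that workaround (the bit width, variable count, and objective below are all illustrative, not part of the toolbox):

```matlab
% intfitness.m -- decode binary genes into integers, then evaluate.
function f = intfitness(bits)
nbits = 2;                               % bits per integer -> values 0..3
B = reshape(bits, nbits, []).';          % one row of bits per variable
ints = B * (2 .^ (nbits-1:-1:0)).';      % binary-to-decimal conversion
f = sum((ints - 2).^2);                  % illustrative objective
end
```

Then, with 7 integer variables at 2 bits each, the call would be something like [x, fval] = psobinary(@intfitness, 14).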

Mouloud Kachouane, to find the maximum of a function simply add a (-) sign in front of your problem to convert it into a minimum-finding problem.


Can we use it for integers, or can we force it to choose numbers from a predefined set?


Hello Sam,

Quick question. Does this work for integer constraints as well?

Mouloud Kachouane

Thank you so much, I like your toolbox.
I want to ask: what do I have to do to search for the maxima of a 2D function using your PSO toolbox?
Thank you a lot!


Hi Zachary,

I'm glad that you found this toolbox useful in your work! As for citation, the following should do:

Chen, Samuel (2009-13). Another Particle Swarm Toolbox (, MATLAB Central File Exchange. Retrieved (whenever you downloaded the version you have).

I got the format from this link:

This goes for anyone else too: if you are comfortable, I would love to see some examples of how you've used the toolbox. If you have any publications that this toolbox has played a part in, please feel free to send me a link or DOI number via the Contact Author page.

Zachary Taylor

An excellent optimizer; I have used it on several projects now.
I am currently writing a paper in which I use your optimizer; how should I cite it?


Mohammed, this toolbox uses the same command line syntax as the genetic algorithm toolbox. So you can refer to the document for the Genetic Algorithm toolbox here:


Hi Sam, I have tried several times to set the LB, UB, and nvars to match my multi-dimensional problem, but it doesn't work. Would you please, if you have time, clarify that in steps? Also, what is the maximum number of decision variables that can be solved efficiently? I am developing a stochastic reservoir optimization code and have already written the function, but it doesn't yet fit with your pso code.



Hi Omari, unfortunately my code does not support multi-objective optimization. The GAMULTIOBJ function that comes with the Global Optimization Toolbox would be your best bet.


hi Sam

I am just starting out in the optimization field, but I would like to know whether it is possible to use your code for multiobjective optimization? Thanks a lot


Hi Zhaoyi, I think you may have an older version of the toolbox. For one of the releases, I accidentally included a duplicate copy of the psoiterate.m file. If you look in the psopt folder, you should find one copy of psoiterate.m there, and another copy in the /private folder. Simply download the latest version, or delete the older file in the private folder of the version that you have, and the problem should resolve.


Hi Sam,
Thanks a lot for your great job. I'm trying to learn to use it. But I got a problem...maybe very silly...

when I type in "pso", it displays :
Swarming...??? Error using ==> psopt\private\psoiterate
Too many input arguments.

Error in ==> pso at 515
state = options.AccelerationFcn(options,state,flag) ;

Every time I try to use it to solve my own problem, it also shows this message...
Hope you can help me solve this problem...
Could it be due to the MATLAB version? Mine is 2010a...

Thank you a lot!


Version 20130615 should be online shortly. Parallel computing capability has been implemented, as per many community requests.


I have used this for quite a while, and have come back to rate it based on my experience.


Mohammed, make sure that the PopInitRange, LB and UB variables that you set are the correct size. PopInitRange (which is set using psooptimset) should be a 2 x nvars matrix (that's two rows, and nvars columns). LB and UB should both be 1 x nvars, i.e. row vectors.
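Sketching those sizes concretely (the 24-variable count is taken from the question below; the commented-out call assumes a user-supplied fitnessfcn):

```matlab
nvars = 24;                        % e.g. a 24-variable problem
LB = zeros(1, nvars);              % 1 x nvars row vector
UB = ones(1, nvars);               % 1 x nvars row vector
range = repmat([0; 1], 1, nvars);  % 2 x nvars: row 1 lower, row 2 upper
options = psooptimset('PopInitRange', range);
% x = pso(fitnessfcn, nvars, [], [], [], [], LB, UB, [], options);
```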


Hi Sam, this is the error message; I have tried several times to modify the dimensions, but it doesn't work:

Index exceeds matrix dimensions.

Error in psocheckpopulationinitrange (line 9)
index(~lowerInf) = LB(~lowerInf) ~= lowerRange(~lowerInf) ;

Error in pso (line 214)
options.PopInitRange = ...


Hi Mohammed, what does the error message say?


Hi Sam, thanks for the nice job. I just want to ask how I can modify pso.m to handle an optimization problem with 24 unknowns.
I have tested your code with different test functions and different PSO parameter adjustments and it works fine; however, when I try a high-dimensional problem I always fail with matrix dimension and population range problems.
Could you please help me cope with this issue?


Hi Kevin and others,

The latest version (20130515) should be coming online shortly; it re-implements some previous bug fixes which were lost because, when I came back to this project, I started working from an older version of the code. I've also made a small change to ensure that only feasible solutions are selected as global optima when the penalty-based constraint enforcement method is used.

Next I plan to work on implementing parallel computing capabilities, as suggested by many users previously.


hi Sam, thank you for updating the submission. Do you have a plan to convert the functions into "mex" versions?


This is a relatively major update and it has been a long time since I have programmed in MATLAB. Although I ran through the pre-release checklist of tests that I had previously developed and everything seemed to work OK, I will leave the stable 2010 version of the toolbox available on Google Code's Project Hosting service for anyone who has problems with the new release.

Link here:


After a three year hiatus, I've had a bit of time to make some updates that I've always wanted to make to this toolbox. I've started by completing the implementation of an alternate constraint enforcement method that was in the works and almost complete when I started medical school back in 2010. This method should work better for nonlinear constraints, and can be activated by setting options.ConstrBoundary to 'penalize' when calling the PSOOPTIMSET function. This update (version 20130502) should appear on the File Exchange in the next few days.

Over the past three years, I have also received many helpful suggestions from community members such as kevin, Michael Johnston and some others who have emailed me in private. Over the coming weeks I will try to set aside some time to implement their excellent suggestions.


It seems changing all the "if strcmpi(flag,'init')" tests in pso.m and the plot-function .m files into "if strcmpi(flag,'init') || ( state.Generation==options.PlotInterval )" can address the issue; if options.PlotInterval is larger than 1 there will be a problem, with an error message requiring haxes to be created first.


Message to MathWorks: will you please work with Sam to add particle swarm optimization and differential evolution toolboxes in a future release? I look forward to this.


Another suggestion: would you please add adaptive-strategy options for C0, C1, and C2, since these are critical for the PSO algorithm's performance? In psooptions.PlotFcns, it would also be good to have an option to display the trend of C0, C1, and C2 over the PSO generations. Thank you very much for the great job!


hi Sam, one suggestion on options.HybridFcn:

It is very common to hybridize PSO with @ga (not just with @fmincon, as in ga's own options.HybridFcn), since PSO performs well at global search while GA does better in convergence.

The current pso.m handling of HybridFcn is similar to MATLAB's ga, so it is suitable for @fmincon, which requires an initial value, but may not work for @ga, which requires the number of variables:

I now find out the reason why @ga does not work when I set PSO options.HybridFcn as @ga:
In the bottom lines of your pso.m there are codes like below:

% Check for hybrid function, run if necessary
% -------------------------------------------------------------------------
if ~isempty(options.HybridFcn) && exitflag ~= -1
[xOpt,fval] = psorunhybridfcn(fitnessfcn,xOpt,Aineq,bineq,...
Aeq,beq,LB,UB,nonlcon,options) ;
% -------------------------------------------------------------------------
The xOpt should be the initial value for @fmincon, while @ga requires the number of variables.
So in order to use @ga correctly, this xOpt may have to be changed into length(xOpt) or max(size(xOpt)).

Is there any general strategy to make sure both @fmincon and @ga work correctly?
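One possibility, sketched here untested against the toolbox internals, is to branch on which hybrid solver was requested, since @fmincon needs a start point while @ga needs the problem dimension:

```matlab
% Hypothetical replacement for the hybrid-function section of pso.m:
if isequal(options.HybridFcn, @ga)
    % ga takes nvars, not a start point; seed its initial
    % population with the PSO result instead.
    hybridopts = gaoptimset('InitialPopulation', xOpt);
    [xOpt, fval] = ga(fitnessfcn, numel(xOpt), Aineq, bineq, ...
        Aeq, beq, LB, UB, nonlcon, hybridopts);
else
    % fmincon-style solvers take the PSO result as the start point.
    [xOpt, fval] = fmincon(fitnessfcn, xOpt, Aineq, bineq, ...
        Aeq, beq, LB, UB, nonlcon);
end
```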




The best PSO toolbox I've ever seen


Thanks for your efforts.
Your code worked well for me in MATLAB, but I am having problems using it in Simulink, as code generation does not support function handles or struct data types :)


Hi everyone, sorry for the lack of updates, I've been quite busy at medical school over the past months and haven't had time to work on this project. Rilin, the error that you encounter could mean that the set of constraints that you defined are not compatible with each other. A simple example: if I defined a set of constraints x < 5 and x > 8, and both had to be satisfied simultaneously, can you see where I might run into problems?

Troy Lim, the best thing to do would be to look at the product documentation for the MATLAB optimization and global optimization toolboxes. I tried to make my toolbox so that it is fully compatible with the way that they define fitness functions. It is a little tricky to master at first, but with some patience and persistence in going through the examples provided in the MATLAB documentation, you should be able to understand it!

Joe Ajay, I had been meaning to program discrete optimization functionality into this toolbox, but never had the time to do it! I can't promise you anything, but I may have a little more time to make some updates this summer.

Drew Compston


Hi, Sam. Thanks for your good work.
I have a problem when I use a nonlinear constraint: 'Problem is infeasible due to nonlinear constraints' occurred, but I don't know why. Can you help me? Or if anybody knows, please give me a hand. Thanks.

Joe Ajay

Dear Sam, thanks for this toolbox; it was really helpful in my project, and I've done 4 optimization problems with it. I'd like to work on discrete optimization using PSO. Do you have any update of this toolbox for discrete optimization? If not, what are the other options for discrete optimization using PSO?

troy lim

hi Sam,

You have done a very good job with the PSO toolbox.

I am a newbie in MATLAB, dealing with a college project.

I need to optimize the membership function parameters (63 parameters in total) of an FLC in Simulink using PSO, with observed data from simulation of the developed system as the fitness function. I have found that the most difficult part for me is formulating the fitness function; or is it possible to find the best solution without a fitness function?
I wish you could help me out or give me some ideas, as this is the last part of my project... thanks


Or can it just be set in the nonlinear constraint function file?


Hi, Sam. I have a problem using PSO to find C=[c1,c2,c3,c4]. The constraint is A*W=1, where A=[a b] and W=[c1*exp(j*c3), c2*exp(j*c4)]. How can I set Aeq? Thanks for your answer.


Due to several requests, I will be looking at fixing some issues with the 'penalize' constraint method over the coming weeks, so that this algorithm will work properly for problems with nonlinear constraints. Stay tuned for updates.

fenfen Xiong

Or, if someone has code at hand that can deal with optimization with nonlinear constraints, would you please send me a copy?
Many thanks!

my email:

besbesmany besbesmany

My variables are:
LB: a vector of zeros
UB: a vector of ones

Aeq is a matrix containing zeros and ones
beq is a vector of ones
My initial value is a matrix of numbers from 0 to 1.

'penalize' is the only method that gives me output, but not within the range of LB, UB and Aeq, beq;
all other constraint methods return the same initial value.
Is there any way to get a correct result from pso? Is there any updated toolbox you will release soon?

I get a correct result from the GA toolbox, but I want to check other algorithms.
Is there any MATLAB algorithm that solves the same problem other than ga and fmincon?

Any direction will be appreciated.

Thanks so much, Sam


Sorry that it's not working for you! The 'penalize' constraint method is unfinished, and it doesn't quite work yet: I took that feature out of the documentation and put in a warning message in the last revision of this toolbox when I realized that it still had problems.

If the 'soft' constraint method is giving you the same initial value with no change, then it could be because of several reasons:

1. None of your initial particle positions are feasible (so they are all set to infinity, using the 'soft' constraint enforcement method)
2. Your objective function is throwing an error with the given input vector (could be related to reason #1 above)
3. The design space is flat in the region of the initial particle positions

Are you using discrete or binary variables? Right now my PSO toolbox doesn't support design vectors that have a combination of real and discrete components (but the GA toolbox that MATLAB comes with should be able to handle those with no problem).

besbesmany besbesmany

Dear Sam,
Thanks a lot for your effort on this toolbox.
I have a problem with constrained PSO: LB = 0 and UB = 1, but the result of pso is not restricted to these bounds.

Also, Aeq and beq do not restrict the result.

I tried 'penalize' and 'soft'; 'penalize' goes out of the range of LB, UB, Aeq, beq,

and 'soft' gives me the same initial value with no change.

Can you help me with that?


Haydar, I think the RANDI function was introduced in MATLAB r2008b as discussed here:

If you're not able to get a more recent version of MATLAB, I can see if I can release a small update over the weekend that eliminates PSOBINARY's dependency on RANDI. Thanks for the feedback!

Haydar Dag

When using psobinary, the system crashes because it cannot find the function 'randi'.


Dear Sam!
Thank you very much for your help and your efficient toolbox. It works very well for my problems now.


Hi Nguyen, I don't think that this toolbox can be used with optimtool -- that would require modifications to the optimtool code, which is beyond my capabilities at the moment. If you learn how to use the optimization toolbox functions from the command line, then you should be well-equipped to use this PSO toolbox. Hope your studies go well!


Dear Sam!
I'm student and i do not know well pso. Can you tell me, how can i use pso from optimtool?


Thanks Oliver, I'll see if I can implement what you suggested for a parallel processing option. I'll have no way of testing it, so I might add a little note saying that the feature is in beta. Glad you found the toolbox helpful!

Erdal Bizkevelci



Thanks for your submission, it works wonderfully. And thank you for sticking to familiar syntax for those of us who have been using the optimization toolbox, this really helps with the learning curve.

I saw some comments about taking advantage of parallel processing, but it didn't look like anyone had done anything about it thus far. When I profiled pso with my objective function, the place where things obviously got bogged down was the many objective function evaluations. As such, I changed the code a little bit to evaluate all of the objective function calls in parallel. I changed the following lines from this:

for i = setdiff(1:n,find(state.OutOfBounds))
    state.Score(i) = fitnessfcn(state.Population(i,:)) ;
end % for i

to this:

tempstatepop = state.Population;
itinerary = setdiff(1:n,find(state.OutOfBounds));
temp = zeros(length(itinerary),1);
parfor i = 1:length(itinerary)
    temp(i) = fitnessfcn(tempstatepop(itinerary(i),:)) ;
end % for i
state.Score(itinerary) = temp;

This is probably a crude way of doing it, but even so I saw a speed up of just under 3 times. Perhaps something similar will be helpful in a future build. Thanks again.



Dear Sam
I am writing a simple PSO function in MATLAB, and I would like to ask:
if a particle goes out of bounds, should I reset its velocity to zero?

Mark Shore

Sam, I will be trying out your toolbox in the near future as time permits. As for requirements for installed MATLAB toolboxes, as far as I'm concerned, the fewer the better.

As a single commercial-licence user, I have to justify each additional toolbox I purchase (to myself, but still...). The wavelet and signal processing toolboxes were a non-issue. Got the optimization toolbox expressly for John D'Errico's SLM tools, and still sitting on the fence for parallel processing, image processing, mapping toolbox and curve fitting toolboxes, among others. $1000 here, $1000 there, plus maintenance, eventually adds up and can cut into one's desire to test FEX submissions...


Hey Ben, there's a brief note about that in the help provided for the PSOOPTIMSET m-file (I know, the documentation is scattered all over the place -- one of the things I was hoping to do was to create a more comprehensive help file).

Glad you found the demo helpful, Mark. I was thinking about including the Global Optimization toolbox in the list of requirements, since it would really help to be familiar with GA. But as you found, it's not strictly necessary to run the PSO code.

Mark Shore

...or I could have just downloaded APST, ran the included demo, and easily answered my own question. Yes - as listed - the requirement is the Optimization Toolbox.

Mark Shore

A quick question - Another Particle Swarm Toolbox requires MATLAB's Optimization Toolbox, NOT the Genetic Algorithm (now Global Optimization) Toolbox, correct?


Hi Sam,

Could you briefly explain how to set the options related to the different constraints? Frankly, there is not much about this in the .m file.

Thank you,


Looks like it will take until Monday for the aforementioned bug fix to be posted.


Ben, thanks for pointing that out. I was actually experimenting with a new default constraint enforcement method ('penalize'), but it doesn't seem to handle boundary constraints very well. You can fix that problem by setting options.ConstrBoundary to 'soft' or 'absorb'. I'll release a quick patch (should be up by tomorrow) to set the default back to 'soft' so that this doesn't happen to other people.

WANG, are you passing any input arguments to PSOBINARY? PSO will run a default demonstration case without any inputs, but I haven't provided a similar function for PSOBINARY yet. Anyway, the first argument to PSOBINARY should be a pointer to a fitness function that's written by you (it should not actually be 'fitnessfcn'; that's just a placeholder used in the documentation for the aforementioned function pointer). It should be able to accept a 1xnvars vector of 0s and 1s, and return a fitness value. Make sure that the m-file for the fitness function is in your path when you call PSOBINARY. Hope that helps.
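As a minimal sketch of that calling pattern (the target pattern and objective are made up purely for illustration):

```matlab
% A fitness function must accept a 1 x nvars vector of 0s and 1s
% and return a scalar; here we count mismatches against a target.
target = [1 0 1 1 0];
bitmatch = @(bits) sum(bits ~= target);   % 0 when bits == target
[x, fval] = psobinary(bitmatch, numel(target));
```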


Thanks for your recommendation. I found that my release was lower than required, so I updated it. Then I ran pso.m successfully, but when I ran psobinary.m, it reported:
??? Input argument "fitnessfcn" is undefined.

Error in ==> psobinary at 37
[xOpt,fval,exitflag,output,population,scores] = ...
Would you tell me what the problem is and how to solve it?
Thank you. Best wishes!


Hi Sam,

There is another problem: sometimes the result goes far outside the box constraints (lower & upper boundaries). Have you encountered this before?



WANG, the syntax for this PSO toolbox is described in the comments at the top of the file named pso.m. You can type >> help pso from the command line, with the current directory set to where the pso.m file is, to read it, or you could just open the file. As I mentioned before, it should be the same as the syntax for the Genetic Algorithm included with the Global Optimization toolbox, so you can also refer to them:

If by BPSO you mean binary PSO, make sure that you're not trying to impose any constraints. Type >> help psobinary to learn the syntax for binary pso, which is slightly different.

t g and satish, I'd like to be able to help you, but school has been very busy for me, and your questions are bigger than I can properly address at this time. Again, I recommend reading the documentation provided with my PSO toolbox, as well as MATLAB's Global Optimization Toolbox, which I recommended to WANG (it's probably better-written and better presented than mine). You could try some of their simple examples to get an idea of how to use the toolboxes. I hope your research goes well!

satish jain

hi sam
I am using basic PSO for my function minimization problem. It is working, but gbest is going beyond the range. Could you please let me know how and where I can change my code?
sk jain


Firstly, thanks for your hard work and selfless dedication. I am a master's student in electrical engineering and want to use BPSO to optimize a function, but I can't run the PSO toolbox successfully. When I run pso.m, I get:
??? Function name must be a string.

Error in ==> psooptimset at 180
idx = find(cellfun(@(varargin)strcmpi(varargin,requiredfields{i,1}),...

Error in ==> pso at 171
options = psooptimset(options) ;
Could you give me more details on using the syntax?
Thanks a lot!
Best wishes!

t g

hi sam,
I am doing my university project on manufacturing cell design using PSO. The problem is defined as a part/machine incidence matrix which maps parts to machines, and the clustering should be formed block-diagonally in order to make the cells. The objective is to minimize the exceptional element (EE) count. The PSO particle string should contain the cell numbers, and the indices of the string are the machine numbers.
Since I am new to this field, I am facing problems implementing the logic and code. Can anyone help in this regard? MATLAB is the interface for the program. The problem is shown at a link

if you can help me out in any way.


Reposting this on the public thread, in case others have the same issue:

Hey Ben,

The psocreationuniform function in the /private folder will generate an initial population using a uniform random distribution based on options.PopInitRange. If you've got linear or nonlinear (in)equality constraints, this initial population will then get passed on to psocheckinitialpopulation (again in the private folder), and it will ensure that all of the initial points are feasible, moving the ones which aren't. Anyway, if my following answer is unclear, let me know what kinds of constraints are in your problem and I'll see if I can get back to you. I'll be moving over the next couple of days, so it might be a while before I see your next reply:

One problem I did encounter was that the PopInitRange option is set from 0 to 1 in all dimensions by default (i.e. repmat([0;1],1,nvars)). This is obviously not representative of all design spaces, and might be a problem if you haven't set any boundary constraints LB and UB. This behavior was in the original genetic algorithm code from MATLAB upon which I based my code, so I left it as is, in case other people have already written GA code assuming this behavior and want to try it with PSO.

So basically, if you haven't tried this already, you can manually set the PopInitRange option to a reasonable range that fully encompasses your design space. The CognitiveAttraction and SocialAttraction options might also be adjusted, but first see the note about them provided in the psooptimset help. More drastically, you could try editing the psocreationuniform function.

If none of those work, I'd be inclined to say that you've run into an inherent limitation of PSO: if you initialize it in a small enough domain of the design space, and the global optimum lies well outside of it, then the swarm is not guaranteed to find its way out.



Hi Sam,

I am using your code to solve a sub-problem in my algorithm. However, when doing intensive testing, I have found a stability or repeatability problem with your code. In many cases (more than 10% of a large number of tests), I got different results using the same configuration (same dataset, same objective function, same pso options).

From my observation, this is caused by the initial particles. The swarm was trapped in some unreasonable local minima.

Could you tell me how you generate the initial particles?



Ben, thanks for the suggestion. I'll have a look at it next week when things get a little less busy for me. kaz uki, I'm not familiar with discrete PSO myself so I can't be of much help to you. Try searching for papers about it on Google Scholar or Compendex. Hope your final year project goes well.

kaz uki

I'm a newbie with PSO... can somebody help me learn how to minimize assembly sequence time (for a product) using discrete PSO (DPSO) and implement it in MATLAB? I need this for my degree final-year project... I don't know where to start, so please someone guide me step by step... I'll do my best...

kaz uki

You can email me at ... I'll try my best.


Hi Sam,

A minor suggestion for the plotting part. It would be more convenient, at least for me. Hope it is helpful.

if ~isempty(options.PlotFcns)
    %%%% close(findobj('Tag', 'Swarm Plots', 'Type', 'figure'));
    hFig = findobj('Tag', 'PSO_Plots', 'Type', 'figure');
    if isempty(hFig) % create the figure only if it does not already exist
        state.hfigure = figure(...
            'NumberTitle', 'off', ...
            'Name', 'Particle Swarm Optimization', ...
            'NextPlot', 'replacechildren', ...
            'Tag', 'PSO_Plots' );
    else % otherwise reuse the existing figure
        state.hfigure = hFig;
    end
    set(0, 'CurrentFigure', state.hfigure);
end % if ~isempty


Good eye on psoiterate, Samuel. However the inertia weight does scale by default, from 0.9 at the beginning of the optimization to 0.4 as it reaches the maximum number of iterations. I've updated the toolbox so that the velocity update function can be changed to your own custom function by setting the 'AccelerationFcn' option to the appropriate function pointer. I haven't documented the syntax for this yet, so it might be best to use the default psoiterate function as a template for developing your own velocity update function. It's currently located in the /private directory, but it will be moved to the base directory in future releases.

Also you can now set a time limit (in seconds) for the solver using the 'TimeLimit' option. Default is infinity.
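A minimal sketch of those two options (option names taken from the description above; the AccelerationFcn syntax is still undocumented, so the custom-function handle here is an assumption):

```matlab
% Sketch: cap the run at 60 seconds and point the solver at a custom
% velocity update function (myVelocityUpdate is a hypothetical placeholder;
% see the default psoiterate in /private as a template).
options = psooptimset('TimeLimit', 60, ...
                      'AccelerationFcn', @myVelocityUpdate);
```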



Implementation details: this PSO version uses a static inertia weight. You can easily change the velocity update function in the file "psoiterate.m" if you want to implement a dynamic schedule, or even use a different PSO technique, such as a constriction factor instead of inertia weighting.

George Evers


To maximize a function, simply minimize its additive inverse. In other words, maximizing f(x) is mathematically equivalent to minimizing -f(x).

One easy way to do this would be simply to add
"f = -f;" as the last line of your test function.
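For example, a minimal sketch using an anonymous-function wrapper instead (the pso(fitnessfcn, nvars) call signature follows the GA-style syntax described elsewhere on this page):

```matlab
% Maximize f by minimizing its additive inverse.
f = @(x) x(1) * exp(-sum(x.^2));   % example function to maximize
g = @(x) -f(x);                    % PSO minimizes g, i.e. maximizes f
[x, gval] = pso(g, 2);             % the maximum of f is then -gval
```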


Hi Sam,
Your toolbox is perfect.
But now I want to find the maximum of a function using PSO.
Would you mind helping me?


Hello Mr. Sam,

Many thanks for this excellent software. I just have a question about the specifics of your implementation. Are you using an inertia weight in the velocity update, and if so, does that weight decrease? It is recommended by some of the original PSO authors in "Defining a Standard for Particle Swarm Optimization" by Daniel Bratton and James Kennedy. It encourages a broad search of the space at the beginning and fine tuning toward the end.

Also, how do you deal with particles going out of bounds? Do you prevent them from going out entirely, or let them go out without evaluating the cost function (which will eventually pull the particle back into the allowable search space)? I ask because preventing the particles from going out entirely can bias the search toward the center of the space.

Thanks again! I have had great success using your implementation.


Hey everyone, I've been extremely busy with my thesis, so I won't be able to provide any technical support for this file in the foreseeable future. I'm glad that so many people have found it useful. If you have any questions, please refer to the previous comments and the file description, and get familiar with how to use the Genetic Algorithm included with MATLAB's Global Optimization Toolbox.

loo cheng

First of all, thanks for your hard work and selfless dedication.
I have encountered some problems when using psodemo; some errors appear. I tried to figure them out but failed. The error message is below:
"??? Error using ==> strcmp
Inputs must be the same size or either one can be a scalar.

Error in ==> isfield at 12
tf = any(strcmp(fieldnames(s),f));

Error in ==> psodemo at 38
if any(isfield(options,{'options','Aineq','Aeq','LB'}))"


I am new to this. I can't run the PSO toolbox successfully. Could you give me more details on the syntax?
Thanks a lot!


Sam, many thanks for your kind reply. When I used the defaults with a test function of 12 inputs, the 12 outputs never reached the known global min. I then tuned all the parameters available in your toolbox and found the best parameter set to be
SocialAttraction parameters =1.5
Generation = 300
popsize= 50

which made 11 of the 12 outputs meet the theoretical min. So far I have not found any PSO parameter set that makes the output converge to the same point on every run, which is the general expectation for a global min. Making the bounds stricter may help, so I'm trying that at the moment.


Amaraporn, the swarm is only stable when the sum of the CognitiveAttraction and SocialAttraction parameters is less than 4; if they are 2 and 2 as you've got them, then 2+2 = 4 and the swarm will not converge. Try reducing one or both of them such that their sum is less than 4. I'll add a note (and paper reference) regarding this to the documentation.

Also, try using fewer generations; that will make the inertia reduction parameter scale better. 10,000 is a very large number of generations. Try setting "Generations" to a few hundred, at most. Same with "StallGenLimit" -- what happens when you use the default values?
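A sketch of those settings together (option names as used in this thread, values illustrative):

```matlab
% Sketch: keep CognitiveAttraction + SocialAttraction below 4 so the
% swarm converges, and use a modest generation count so the inertia
% weight scales down at a reasonable rate.
options = psooptimset('CognitiveAttraction', 1.5, ...
                      'SocialAttraction',    2.0, ...  % 1.5 + 2.0 < 4
                      'Generations',         300);
```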


Dear Sam

Just one quick question about pso algorithm parameters used in your toolbox. I have tried to tune
problem.options.Generations eg. 10000
problem.options.CognitiveAttraction eg. 2
problem.options.SocialAttraction eg. 2
problem.options.StallGenLimit eg. 8000
problem.options.PopulationSize eg. 40
and initial inertia eg. 1

but the global min could not be found for my objective function of 12 parameters. I'm looking for other important parameters, and wondering whether the following appear somewhere in the algorithm: "inertia reduction parameter", "bound and velocity fraction", "velocity reduction parameter".


Glad it helped. 5000 frames sounds like a lot of data, so it might take a long time depending on your computer and the complexity of the calculations. See this document for tips on improving the performance of MATLAB code:



Many thanks for all of your feedback :) I can now use your toolbox with my actual function, though it spent almost a whole day before a clean termination. I'm not a computing person, so I'm a bit puzzled by how time-consuming it is. Does that sound reasonable for an objective function that fits a 12-parameter dynamic model to the time history (5000 frames) of experimental dynamic motion data? Anyway, this is a good toolbox for everyone, including students from outside the field, I confirm :)


Amaraporn, don't use PSODEMO to run your actual optimization. I included that function to provide an easy way to visualize how the swarm behaves, but it wasn't intended to be used to run actual optimizations.

Instead, you should call PSO directly using the syntax explained when you type >> help pso. It was designed to be very similar to the Genetic Algorithm (now called "Global Optimization") Toolbox, so it would help if you become familiar with its documentation (the link is provided in one of my previous posts). If you read the help provided with PSO, you'll see that there is no "default" dimensionality. The number of dimensions of the problem (nvars) must be provided by the user, i.e. >> pso(@fitnessfcn, nvars, ...).
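A minimal sketch of such a direct call for a 12-variable bound-constrained problem (the argument order is assumed to match the GA toolbox, and myFitnessFcn is a placeholder):

```matlab
% Sketch: call PSO directly with GA-style syntax rather than via PSODEMO.
nvars = 12;                       % the user must supply the dimensionality
LB = -500 * ones(1, nvars);       % lower bounds
UB =  500 * ones(1, nvars);       % upper bounds
[x, fval] = pso(@myFitnessFcn, nvars, [], [], [], [], LB, UB);
```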


Dear Sam

You are right about the persistent variable; I have corrected it and tried the default options. The toolbox terminated at some local minima, as follows:
[420.97 420.97 420.97 -302.53 420.97 420.97 420.97 -500 420.97 420.97 420.97 -302.52] (the theoretical global min is 420.97).

So I presume the toolbox can work well with 12 inputs, given that suitable PSO parameters are defined.
Now I've moved on to my real objective function: a single objective subject to lb and ub for 12 inputs (this objective function already works with fmincon alone). The error when used with your PSO toolbox is as follows.

Swarming...??? Attempted to access Swarm(3); index out of bounds because numel(Swarm)=2.

Error in ==> vMarkSqr_spineCT_pso_mod at 37

Error in ==> overlaysurface at 13
ZZ(i,j) = fitnessfcn([XX(i,j) YY(i,j)]) ;

Error in ==> psoplotswarmsurf at 30
overlaysurface(state.fitnessfcn,options) ;

Error in ==> pso at 334
state = options.PlotFcns{i}(options,state,flag) ;

Error in ==> psodemo at 61
"Swarm" holds my optimized parameters.
The lines "Error in ==> vMarkSqr_spineCT_pso_mod at 37
vThoracicTransl=[Swarm(1);Swarm(2);Swarm(3)];" refer to my objective code; line 37 is the very first line of my main calculation.
I think this is a mismatch between the default swarm dimension of the toolbox and the input dimension of my objective function. My function starts with the simple header line below. Should anything be changed? It is not convenient to send you the whole function, but I can explain how it works.

function LeastSqr = vMarkSqr_spineCT_pso_mod(Swarm)

This objective function computes multiple transformation matrices, similar to the biomechanical application you may know as "ankle joint parameter solving using parallel global optimization with particle swarm". The main objective function calls three other functions (written by myself), all dealing with transformation matrices, each matrix calculated from 3-5 swarm inputs.

Could you please help comment about this?


Amaraporn, can I ask why you've got sumX as a persistent variable in the test function? That means its value is retained between calls to the function, so the fitness value of every particle will be dependent on the fitness of all previously evaluated particles. This is probably not what you want PSO to do. What happens if you instead change that line to sumX = 0?

How does the swarm perform with the default options?
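For reference, the test function below appears to be the Schwefel function (minimizer near 420.97 in each dimension); a stateless, vectorized version of it, offered as a sketch, would avoid the persistent accumulator entirely:

```matlab
function f = easyTestStateless(x)
% Schwefel-style test function, summed over the dimensions of row vector x.
% No persistent state, so every call is independent of previous evaluations.
f = sum(-x .* sin(sqrt(abs(x))));
end
```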


Hi Sam,

Many thanks for your feedback :) I've gone back to starting from a test function whose answer is known. I found your toolbox easy to use with 2-11 inputs; a suitable population size and number of generations are needed to help find the global min. Now I have a problem with 12 inputs, PopulationSize = 50 and Generations from 2000 to 8000. Could you comment on how to define suitable PSO parameters to achieve the global min? The solver reports:
The value of the fitness function did not improve in the last 50 generations and maximum constraint violation is less than 1e-006, after 61 generations.

The code for the test function is below, subject to lb = -500 and ub = 500 (in the same dimension as the number of inputs).

function f = easyTest(x)
[xSize, Dim] = size(x);
persistent sumX
for j = 1:Dim
    fx = (-x(j)) * sin(sqrt(abs(x(j))));
    sumX = sumX + fx;
    f = sumX;
end


Hi Amaraporn, I'll investigate what's going on at line 258. Can I ask what is the exact code you use to set the options structure and then call pso?

I know it's a bit complicated to set all the options for this PSO code -- I wrote it to be very similar to the Genetic Algorithm that's already provided with a MATLAB toolbox, so that people who already know how to use that toolbox can easily transfer their code to this PSO algorithm. If anything's unclear, the online documentation for the Genetic Algorithm Toolbox may help.


Hi Sam,
I'm trying to use your toolbox to solve an objective function subject to lb and ub with 12 inputs, but I found it pretty complicated to define suitable options to get the program to run. I mostly get the error below and have been trying to sort it out, now desperately.

Swarming...??? Subscript indices must either be real positive integers or logicals.

Error in ==> pso at 258
state.Score(setdiff(1:n,find(state.OutOfBounds))) = ...

Your comment would be very helpful


Uduakobong, if you've never used PSO before then it's best to take some time to read through the three references listed under "bibliography" in this file's description. It's an interesting problem to mix continuous and discrete variables, but this toolbox isn't capable of that. As there has been some interest, I will implement the ability to solve problems with binary variables soon.


Hello Sam, I am trying to write a PSO program to solve a multiobjective, nonlinear constrained problem. The problem has 3 variables, 2 of them discrete and one continuous. The thing is, I have never used PSO before, so I find it difficult to understand what to do. Could you please give me some pointers? Thank you.


Sure, you may also be able to use the test functions provided with the two other files I listed under the "Acknowledgments" section.

Albert Lee

Hi Sam

To answer your question: I am doing research on hybridizing PSO with other algorithms, so I need to write the programs myself and test them on the test functions. One of my programs is particle swarm ant colony optimization; another is evolutionary particle swarm optimization. I hope some guidelines can be provided to solve the problem. Thank you.


Albert, Ant Colony algorithms are quite different from Particle Swarms, although there have been papers published proposing a hybrid of the two algorithms. My particle swarm code by itself does not do ant colony optimization, so I'm not sure what your question is -- are you trying to write an ant colony algorithm, or are you trying to learn how to use somebody else's ant colony toolbox?

The method for defining a fitness function for this PSO toolbox is the same as for other MATLAB optimizers such as GA, FMINCON, or FMINUNC. Any fitness function you write which works with those optimizers should also work for PSO. See this document for instructions -- the only thing is, you don't need to provide the gradient/Jacobian or the Hessian to PSO. Note that PSO can only minimize fitness values; if you have a problem where you're trying to maximize f(x), just set g(x) = -f(x) and minimize g(x) with PSO.

Albert Lee

Can I ask how to write the MATLAB code for Particle Swarm Ant Colony Optimization? Is the test function the same as in the PSO toolbox? Thank you.


I've considered adding parallel processing features, but I don't have the toolbox myself so I'd have no way of testing it.

An alternative could be to use a vectorized fitness function, setting options.Vectorized to 'on', and do all the parallel computing tasks in the fitness function, independently of the PSO code.

If you're interested in collaborating to add Parallel Computing capability, I've got an SVN repository with this project going on Google Code Project Hosting, just search for "psomatlab".

I could easily set up PSOOPTIMSET to create the appropriate option, and somebody with the parallel computing toolbox could set up PSO itself to handle it. Let me know if you're interested.
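A sketch of the vectorized approach mentioned above (the pso argument order is assumed to mirror the GA toolbox):

```matlab
% Sketch: a vectorized fitness function evaluates the whole swarm at once,
% one row per particle, returning a column of fitness values. Any parallel
% work would live inside this function, independently of the PSO code.
fitnessfcn = @(X) sum(X.^2, 2);
options = psooptimset('Vectorized', 'on');
[x, fval] = pso(fitnessfcn, 12, [], [], [], [], [], [], [], options);
```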

Michael Johnston

Thanks, Sam. Have you considered adding options for parallel processing (for those who have the Parallel Computing Toolbox)?


Mike -- I've uploaded a new version which should appear tomorrow. You can set options.Display to 'off' to kill all command line output (except for a few warnings and error messages that you'd probably want to know about anyway).


Mike -- that's a good idea, I can implement the "quiet" mode fairly quickly.

Michael Johnston

Sam -- Thanks very much for providing your code. You've clearly put a lot of work into it. The performance and stability on my end have been flawless thus far.

The only feature request I can think of is adding a field to the options structure to control the verbosity of the output to the command window. Sometimes it's nice to be able to kill this completely.




Hi Karim,

Sorry about that, I know I mentioned the possibility that I might implement binary variables for this toolbox. Over the past few weeks my thesis work has taken me in another direction, so I don't think I'll have the time to do it. The first book listed in the bibliography section of my description: "Swarm Intelligence" by J Kennedy, RC Eberhart and YH Shi, describes in detail how to implement PSO with binary variables, if you're interested.



Dear Mr sam

I am Eng. Karim, and I want to thank you for this great toolbox. I'd like to ask whether there is any news on binary support for it, since I am working on my Master's degree and want to use binary PSO, and I didn't find any MATLAB code supporting it. Thank you again.


Dear Mr Saeed,

Does your fitness function work with other MATLAB optimization solvers? Please see this document for how to write a fitness function for the Genetic Algorithm (which should also work for PSO). Note that this PSO code doesn't support binary (where the only possible values are 0 or 1) or discrete variables yet.


Mohammed Ahmed Saeed

Dear sir, how do you do? I am preparing for my Master's in optimal relay coordination. If you please, I need your help providing me with swarm code to determine the optimal relay settings (I cannot write the fitness function for each particle).


Hi Tom,

This webpage gives a large list of available toolboxes, although I don't know whether any of them can handle non-linear constraints, and most of them are not written in MATLAB. Since you're interested, I will work on implementing non-linear constraints for this toolbox in a future release, maybe in about two weeks' time.



Hi, Sam,
I am interested in PSO with both linear and nonlinear constraints for a high-dimensional problem, but I found this package cannot handle it.
Is there any other PSO toolbox that can handle problems with nonlinear and linear constraints? Please let me know; I am waiting for your reply.



Glad you found it useful. FYI I just found a bug where the swarm doesn't actually comply with the imposed linear constraints. I'm working to fix it as soon as possible.

Hanlin Zhang

This PSO toolbox is very useful for solving constrained optimization problems. Thanks.

MATLAB Release Compatibility
Created with R2017a
Compatible with any release
Platform Compatibility
Windows macOS Linux
