# Optimisation algorithm ga and fmincon

41 views (last 30 days)
R Hebi on 31 Mar 2020
Commented: R Hebi on 6 Apr 2020
Hey
I have an optimisation problem and I solve it using both fmincon and ga, but the results are different. Is it possible to get different results depending on the solver used? And if so, why?
##### 2 Comments
R Hebi on 6 Apr 2020
Thank you Athul

John D'Errico on 3 Apr 2020
Athul has already explained it pretty well. Let me try to expand on that, to add some ideas or possibilities.
Many nonlinear problems have multiple local minima. A simple example is to minimize the function cos(x). Does it matter if you choose the solution at -pi or at +pi?
>> cos(-pi)
ans =
-1
>> cos(pi)
ans =
-1
>> cos(2*pi + pi*[-5:2:5])
ans =
-1 -1 -1 -1 -1 -1
In fact, cos(x) has infinitely many local minima, all with the same value: -1. Every one of them is a perfectly valid solution. Which one should an optimizer choose? All are equally good, and you should not care which is returned, but different solvers can easily land on different ones. Now suppose I modify the problem slightly:
fun = @(x) exp(x/10).*cos(x)
fun =
function_handle with value:
@(x)exp(x/10).*cos(x)
fplot(fun,[-5,20])
Here the choice of minimum does matter, yet there is no globally correct minimum, since the minima keep getting deeper toward minus infinity as x grows. Still, any optimizer will eventually pick some solution.
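To see this concretely, here is a minimal sketch (the start points and bounds are my own choices, not from the thread) of running fmincon from two different starting values on this function:

```matlab
% Sketch: fmincon finds different local minima of exp(x/10)*cos(x)
% depending on the starting point (bounds match the fplot range above).
fun = @(x) exp(x/10).*cos(x);
[xa, fa] = fmincon(fun,  2, [], [], [], [], -5, 20);  % likely converges near the shallow minimum at ~pi
[xb, fb] = fmincon(fun, 12, [], [], [], [], -5, 20);  % likely converges near the deeper minimum at ~3*pi
% xa and xb are different local minima; fb should be lower, because the
% envelope exp(x/10) makes minima at larger x deeper.
```

Both answers satisfy fmincon's local optimality tests; neither run is "wrong".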
These two examples are a bit extreme of course, but my point is that most nonlinear functions have local minima. In 2-d, a simple example is the peaks function in MATLAB. Again local minima exist.
fsurf(@peaks)
Here it appears that 3 distinct minima exist. Depending on the starting values, any optimizer (probably even GA, if used poorly) can also end up diverging to infinity. And fmincon will certainly get stuck in a local minimum if given poor starting values.
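The same experiment works on peaks. This sketch (my own start points and bounds, not from the answer) runs fmincon from two starts and will generally return different local minima:

```matlab
% Sketch: two fmincon runs on the peaks surface from different starts.
pfun = @(v) peaks(v(1), v(2));                  % peaks evaluated at a 2-d point
opts = optimoptions('fmincon', 'Display', 'none');
x1 = fmincon(pfun, [ 0 -2], [], [], [], [], [-3 -3], [3 3], [], opts);
x2 = fmincon(pfun, [-2  1], [], [], [], [], [-3 -3], [3 3], [], opts);
% With these starts, x1 and x2 typically land in different basins of
% attraction, so pfun(x1) and pfun(x2) can differ even though both
% satisfy fmincon's local convergence criteria.
```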
Next, it is still possible the two solvers have converged to effectively the same solution, but because of the shape of the surface, the solution is very poorly defined in that vicinity. So it might look as if they are different solutions, yet convergence issues are confusing things.
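A tiny illustration of that last point (the flat objective below is my own example, not from the thread): when a whole set of points is optimal, two runs can stop at visibly different answers that are equally good.

```matlab
% Sketch: every point on the line x + y = 1 minimizes this objective,
% so different starts yield different-looking but equivalent answers.
flat = @(v) (v(1) + v(2) - 1).^2;
a = fminsearch(flat, [ 5  5]);
b = fminsearch(flat, [-3  9]);
% a and b will usually differ, yet flat(a) and flat(b) are both near 0.
```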
Remember that a numerical optimizer does not truly "understand" your function. It just searches an area and, depending on where you start it, converges to some solution. Any optimizer treats your problem as what is called a black box: it passes a set of parameter values into the box and gets back a value of the objective function. By trying different points, it moves around the domain of the function until it thinks it has found a point that meets your convergence criteria.
There is no guarantee that any solver will find the global solution. As I showed, even GA must eventually give up the search when posed with a difficult problem.
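A common defence, whichever solver you use, is to restart from many random points and keep the best result. A minimal sketch (the loop count and bounds are arbitrary choices of mine):

```matlab
% Sketch: crude multistart wrapped around fmincon. MATLAB also ships
% MultiStart and GlobalSearch (Global Optimization Toolbox) for this.
fun = @(x) exp(x/10).*cos(x);
fbest = inf;
for k = 1:10
    x0 = -5 + 25*rand;                          % random start in [-5, 20]
    [x, fval] = fmincon(fun, x0, [], [], [], [], -5, 20);
    if fval < fbest
        fbest = fval;
        xbest = x;
    end
end
% xbest is the best local minimum found -- still not guaranteed global.
```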
It is best to understand how an optimizer works, and how the different tools differ in how they work; then you can better choose one of them to solve your problems. In his comment, Athul gave you good advice.