MATLAB Answers

How to work with Nelder-Mead algorithm with 7 unknown variables?

Julkar_Mustakim on 14 Jan 2020
Commented: Walter Roberson on 15 Jan 2020
Dear Everyone,
I need code for the Nelder-Mead algorithm that can handle 7 unknown variables, which will be extracted from an Excel/CSV file. Thank you, and I look forward to your responses.


Answers (1)

Matt J on 14 Jan 2020
Edited: Matt J on 14 Jan 2020
You can use fminsearch (an implementation of Nelder-Mead) with any number of variables. However, there is no way to guarantee that it will converge quickly, or at all, when there is more than one variable.
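For example, here is a minimal sketch that reads 7 starting values from a CSV file and hands them to fminsearch; the file name and the objective function below are placeholders you would replace with your own:

% Read the 7 starting values from a CSV file (csvread works on R2018a;
% readmatrix is available from R2019a onward).
x0 = csvread('initial_guess.csv');      % hypothetical file containing 7 numbers
x0 = x0(:).';                           % reshape to a 1-by-7 row vector

% Placeholder objective: replace with the function you actually want to minimize.
myObjective = @(x) sum((x - (1:7)).^2); % this example has its minimum at x = 1:7

opts = optimset('Display', 'iter', 'MaxFunEvals', 5000, 'MaxIter', 5000);
[xBest, fBest, exitflag] = fminsearch(myObjective, x0, opts);

Note that fminsearch treats the problem as unconstrained, so any bounds on the 7 variables would have to be handled inside the objective, e.g. by penalizing out-of-range points.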

  4 Comments

Walter Roberson on 15 Jan 2020
I can guarantee that no algorithm based on simplex search will always converge.
If you need guarantees of convergence then you need a completely different class of algorithm.
In particular, you need new mathematics: there is no algorithm currently known that can guarantee convergence for general multidimensional minimization.
Historically, simulated annealing has been described as the only algorithm that can guarantee convergence, given enough aeons to execute, but I suspect that holds only for the case of two parameters; for three or more parameters, I suspect some portions of the search space would still be expected to be missed even after infinite time.
Matt J on 15 Jan 2020
"Can you please help me: how can I develop something similar in MATLAB, so that I can optimize it to my requirements?"
It depends on the properties of your function, but there are many options in the Optimization Toolbox and the Global Optimization Toolbox. If your function is non-differentiable, you might consider ga.
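For example, a minimal sketch of calling ga on a hypothetical non-smooth objective in 7 variables (the objective and the bounds below are only placeholders, and this requires the Global Optimization Toolbox):

nvars = 7;
obj = @(x) sum(abs(x - (1:7)));          % placeholder non-differentiable objective
lb  = -10*ones(1, nvars);                % assumed lower bounds
ub  =  10*ones(1, nvars);                % assumed upper bounds

opts = optimoptions('ga', 'Display', 'iter', 'MaxGenerations', 200);
[xBest, fBest] = ga(obj, nvars, [], [], [], [], lb, ub, [], opts);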
Walter Roberson on 15 Jan 2020
ga guarantees convergence only for a small range of function types.
In every optimization routine that MathWorks provides in the Optimization Toolbox and the Global Optimization Toolbox, if the optimizer accepts a function handle as the objective function, then the routine does not guarantee convergence. Guaranteed convergence is available only for some of the functions whose objective is described by matrices, such as quadratic programming.
Basically if you need guarantees of convergence then you live in the wrong mathematical universe.
Given any particular set of Nelder-Mead tuning parameters, you can always construct a function that the algorithm will not converge for.
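For contrast, here is a minimal sketch of the matrix-described case where convergence theory does exist: a convex quadratic program solved with quadprog (the matrices are illustrative, not taken from this thread):

H = eye(7);                  % positive definite Hessian, so the QP is convex
f = -(1:7).';                % linear term of the quadratic objective
Aeq = ones(1, 7); beq = 7;   % single equality constraint: sum(x) == 7

[x, fval, exitflag] = quadprog(H, f, [], [], Aeq, beq);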



Release

R2018a
