Fmincon warning: undesired gradient requirement
Hi,
I'm having a problem using Fmincon and I cannot find a solution online. I have to minimize a function ('ikSolver' in the code below) whose input is a [10x1] vector of doubles:
%Get lower and upper boundaries
[lb,ub] = getLegJointLimits('A')
% Random starting point initialization
rng('shuffle');
qlegs = rand(10,1);
% Boundaries on the parameters must be satisfied by
% the initial value in the optimization procedure, thus:
qlegs = lb + (ub-lb).*qlegs;
options = optimset('Display','iter','MaxFunEvals',50000,...
'TolFun',1e-06,...
'MaxIter',50000,'LargeScale','on');
[fqs,Fval,EXITFLAG] = fmincon('ikSolver',qlegs,[],[],[],[],lb,ub,'constraint',options);
The values of the boundary vectors are:
lb =
-0.6685
-0.9745
0.1193
-1.5464
-0.6229
-0.2624
-1.5480
0.1282
-0.9782
-0.2811
ub =
0.2712
0.7202
1.8977
0.2598
0.3003
0.6735
0.2582
1.8921
0.7114
0.6523
and the functions involved (simplified for clarity):
function F = ikSolver(theta)
global Rmatr % Relabeling matrix
global gt gteq % nonlinear constraint containers, read back in 'constraint'
global H
Qplus = Rmatr*theta;
F = sum(Qplus)^2;
gt = [ ];
gteq = [ ];
end
with:
Rmatr =
0 0 0 0 0 0 0 0 0 1
0 1 1 1 0 0 -1 -1 0 0
0 0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 1 0 0 0 0
0 0 0 0 1 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0
And 'constraint':
function [gtr,gteqr] = constraint(Jsolcons)
global gt gteq
gtr = gt;     % nonlinear inequality constraints
gteqr = gteq; % nonlinear equality constraints
end
The warning is the following:
Warning: To use the default trust-region-reflective algorithm you must supply the gradient in the objective function and set the GradObj option to 'on'. FMINCON will use the active-set algorithm instead. For information on applicable algorithms, see Choosing the Algorithm in the documentation. > In fmincon at 492
The same warning (and the consequent wrong result) appears if I change the arguments of fmincon, for example:
[fqs,Fval,EXITFLAG] = fmincon('ikSolver',qlegs,[],[],[],[],lb,ub);
[fqs,Fval,EXITFLAG] = fmincon('ikSolver',qlegs,eye(10),ub);
options = optimset('Display','off','MaxFunEvals',50000,...
'TolFun',1e-06,...
'MaxIter',50000);
I don't understand where I am asking for the trust-region-reflective algorithm, which requires the gradient...
Thank you very much!!
1 Comment
Matt J on 17 Apr 2013 (edited 17 Apr 2013)
Are you sure you didn't really mean
F = sum(Qplus.^2);
If you really did mean
F = sum(Qplus)^2
then your objective function is equivalent to
Rsum = sum(Rmatr,1);
F = (dot(Rsum,theta))^2;
Obviously it is then more efficient to pre-compute Rsum once rather than to repeatedly perform the more expensive matrix multiplication Rmatr*theta in every iteration of the algorithm.
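A minimal sketch of that precomputation (assuming the objective really is sum(Qplus)^2 and theta is the 10x1 joint vector):
Rsum = sum(Rmatr,1);   % 1x10 row vector, computed a single time up front
F = (Rsum*theta)^2;    % same value as sum(Rmatr*theta)^2, but much cheaper per call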
Accepted Answer
Kye Taylor on 17 Apr 2013 (edited 17 Apr 2013)
The trust-region-reflective algorithm is the default option for fmincon.
You can specify the algorithm by adding the name-value pair
'Algorithm','active-set' to the input to optimset, as in the command
options = optimset('Display','off','MaxFunEvals',50000,...
'TolFun',1e-06,...
'MaxIter',50000,...
'Algorithm','active-set');
See the online doc at http://www.mathworks.com/help/optim/ug/choosing-a-solver.html#bsbqd7i for more info.
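For example, a sketch that simply combines the original call from the question with this option (same objective, bounds, and constraint function as above):
options = optimset('Display','iter','MaxFunEvals',50000,...
    'TolFun',1e-06,'MaxIter',50000,...
    'Algorithm','active-set');
[fqs,Fval,EXITFLAG] = fmincon('ikSolver',qlegs,[],[],[],[],lb,ub,'constraint',options);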
More Answers (1)
Matt J on 17 Apr 2013 (edited 17 Apr 2013)
Instead of using an alternative algorithm that doesn't require the gradient of the objective function, why not just supply the trust-region-reflective algorithm with the gradient that it requires? The gradient in your case has a very simple form:
gradient = 2*(Rmatr.'*Qplus)
I assume here that you really meant to write
F = sum(Qplus.^2);
as per my comment above.
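A sketch of how that could look, assuming the sum-of-squares objective; the function name ikSolverGrad is only illustrative, the gradient is returned as a second output, and 'GradObj' is set to 'on'. Since the nonlinear constraints are empty anyway, the constraint function is dropped here (trust-region-reflective accepts bound constraints, but not a nonlinear constraint function):
% In its own file, e.g. ikSolverGrad.m (illustrative name)
function [F,gradF] = ikSolverGrad(theta)
global Rmatr                      % relabeling matrix, as in ikSolver
Qplus = Rmatr*theta;
F = sum(Qplus.^2);                % assumed sum-of-squares objective
if nargout > 1
    gradF = 2*(Rmatr.'*Qplus);    % gradient with respect to theta
end
end

% Tell fmincon that the objective supplies its own gradient:
options = optimset('Display','iter','MaxFunEvals',50000,...
    'TolFun',1e-06,'MaxIter',50000,'GradObj','on');
[fqs,Fval,EXITFLAG] = fmincon(@ikSolverGrad,qlegs,[],[],[],[],lb,ub,[],options);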