fmincon: proceeds further when some variables are fixed?
Hey everyone,
I am seeing some behaviour with fmincon in MATLAB that appears a little strange to me. I derived an objective function depending on N variables, including analytic expressions for the gradient and Hessian.
(A couple of months ago I posted a question about this, but I thought it was more or less solved back then: http://www.mathworks.com/matlabcentral/answers/90090-fmincon-less-optimal-results-with-user-supplied-analytic-gradient )
Now I am experiencing problems with the convergence: the algorithm terminates prematurely although there are still significant residuals left in the gradient. In fact, I evaluated the objective function along one variable within its bounds, and the returned solution is far away from the minimum in that direction.
However, when I take the final solution vector and then call fmincon again with all but one variable fixed, fmincon does find the minimum with respect to that variable. That doesn't really make sense to me, and I haven't been able to resolve the issue so far (I tried different solvers and different tolerances, anything that came to my mind).
Thank you very much for your help in advance!
First fmincon call:
TolFun = 1e-8; %Default 1e-6
TolX = 1e-8; %Default 1e-6
TolCon = 1e-8; %Default 1e-6
FminconOptions = optimset( ...
'TolFun', TolFun, ...
'TolCon',TolCon, ...
'TolX',TolX,...
'DerivativeCheck','off',...
'SubproblemAlgorithm','cg',...
'GradObj','on',...
'Hessian','lbfgs',...
'FunValCheck','on',...
'TolProjCG',1e-4,...
'Algorithm','interior-point', ...
'Diagnostics','on', ...
'MaxFunEvals',inf, ...
'MaxIter',1000);
LowerBounds = -x_max * ones(NV,1);
UpperBounds = +x_max * ones(NV,1);
x_opt = fmincon( @(x) EvalObjective2D( x, AuxData ), zeros(NV,1) ,[],[], ...
[],[],LowerBounds,UpperBounds,[],FminconOptions);
Command Output:
Iter  F-count     f(x)        Feasibility   First-order optimality   Norm of step
0 1 1.238193e+00 0.000e+00 1.331e-02
1 2 1.237488e+00 0.000e+00 1.328e-02 2.646e-02
2 3 1.233951e+00 0.000e+00 1.323e-02 1.332e-01
...
12 47 7.989712e-01 0.000e+00 3.533e-03 3.543e-05
13 56 7.989712e-01 0.000e+00 3.533e-03 9.688e-07
14 62 7.989712e-01 0.000e+00 3.533e-03 2.119e-07
Optimization stopped because the relative changes in all elements of x are
less than options.TolX = 1.000000e-08, and the relative maximum constraint
violation, 0.000000e+00, is less than options.TolCon = 1.000000e-08.
Optimization Metric Options
max(abs(delta_x./x)) = 8.55e-09 TolX = 1e-08 (selected)
relative max(constraint violation) = 0.00e+00 TolCon = 1e-08 (selected)
Fixing all but the first variable (which I know is suboptimal):
idx = 1;
x_opt2 = fmincon( @(x) EvalObjective2D( [x_opt(1:idx-1); x; x_opt(idx+1:end)], AuxData ), ...
    x_opt(idx),[],[],[],[],LowerBounds(idx),UpperBounds(idx),[],FminconOptions)
Command Output:
Iter  F-count     f(x)        Feasibility   First-order optimality   Norm of step
0 1 7.989712e-01 0.000e+00 3.631e-03
1 2 7.989582e-01 0.000e+00 3.529e-03 3.535e-03
2 3 7.988922e-01 0.000e+00 3.520e-03 1.792e-02
...
14 15 7.840090e-01 0.000e+00 1.152e-07 1.443e-02
15 16 7.840090e-01 0.000e+00 3.082e-08 7.093e-03
16 17 7.840090e-01 0.000e+00 9.835e-09 3.308e-03
Optimization completed: The relative first-order optimality measure, 9.834665e-09,
is less than options.TolFun = 1.000000e-08, and the relative maximum constraint
violation, 0.000000e+00, is less than options.TolCon = 1.000000e-08.
Optimization Metric Options
relative first-order optimality = 9.83e-09 TolFun = 1e-08 (selected)
relative max(constraint violation) = 0.00e+00 TolCon = 1e-08 (selected)
Answers (1)
Alan Weiss
on 17 Jun 2014
You have not set your options to take advantage of your analytic gradient and Hessian. Set the GradObj option to 'on', the Hessian option to 'user-supplied', and the HessFcn option to a function handle of the appropriate form (your function needs to accept lambda but does not need to use it, because your constraints are simple bound constraints and do not enter the Hessian calculation). Using the analytic Hessian may well enable fmincon to get a better answer.
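A minimal sketch of what that would look like, reusing the variables from the question; EvalHessian2D is a hypothetical name for a function returning your analytic NV-by-NV Hessian:

```matlab
% Sketch (hypothetical helper name EvalHessian2D): supply the analytic
% Hessian to the interior-point algorithm. HessFcn must accept (x, lambda);
% with only bound constraints, lambda can simply be ignored.
HessianFcn = @(x, lambda) EvalHessian2D( x, AuxData );

FminconOptions = optimset( FminconOptions, ...
    'GradObj', 'on', ...
    'Hessian', 'user-supplied', ...
    'HessFcn', HessianFcn);

x_opt = fmincon( @(x) EvalObjective2D( x, AuxData ), zeros(NV,1), [],[], ...
    [],[],LowerBounds,UpperBounds,[],FminconOptions);
```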
With that said, it is entirely possible to get a better answer simply by running fmincon twice, starting the second run from the first run's solution. See this explanation. I think that explains why you get a better result on your second run--I suspect that it is not because you fixed the variable, but just because you ran the problem again.
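A warm restart along those lines is a one-liner with the variables from the question:

```matlab
% Sketch: run fmincon a second time, starting from the first run's
% solution x_opt instead of the zero vector.
x_opt_refined = fmincon( @(x) EvalObjective2D( x, AuxData ), x_opt, [],[], ...
    [],[],LowerBounds,UpperBounds,[],FminconOptions);
```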
Alan Weiss
MATLAB mathematical toolbox documentation
3 comments
Alan Weiss
on 17 Jun 2014
I do not know why you set your subproblem algorithm option to cg. Did you try using the default? The default is usually more accurate.
There is a slight chance that the sqp algorithm would give you a different and more accurate result. However, it does not use the analytic Hessian, so I doubt that it would help. To use it set the Algorithm option to 'sqp'.
I don't think that it makes a difference, but you could also try setting the GradConstr option to 'on'.
Finally, it is entirely possible that your objective function is not convex, i.e. it might have more than one local minimum. Try starting fmincon from a variety of initial points, as explained in searching for a smaller minimum.
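A simple multi-start sketch, assuming the variables from the question; the number of starts (20 here) is arbitrary:

```matlab
% Sketch: probe for other local minima from random feasible start points
% within the bound constraints, keeping the best solution found.
best_f = inf;
x_best = [];
for k = 1:20
    x0 = LowerBounds + (UpperBounds - LowerBounds) .* rand(NV,1);
    [x_try, f_try] = fmincon( @(x) EvalObjective2D( x, AuxData ), x0, ...
        [],[],[],[],LowerBounds,UpperBounds,[],FminconOptions);
    if f_try < best_f
        best_f = f_try;
        x_best = x_try;
    end
end
```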
Good luck,
Alan Weiss
MATLAB mathematical toolbox documentation