FMINCON first order optimality non-zero

26 views (last 30 days)
Jurrien Plijter
Jurrien Plijter on 3 Nov 2020
Edited: Bruno Luong on 5 Nov 2020
Hi,
I run fmincon with the sqp algorithm for a minimization problem. The problem is set up in the IDF architecture with a total of 34 design variables. The code consists of multiple 'black-box' disciplines, and one iteration takes 30 seconds.
I managed to find an optimum that meets 2 out of 3 KKT conditions. The condition it does not meet (I think) is that the gradient of the Lagrangian should be zero at the minimum. The initial point of the iteration is feasible, with a first-order optimality measure of 0.74. The optimum is found after 40 iterations, with a total function count of around 3000 and a first-order optimality measure of 0.15. The objective function decreased by 9%, which is a good result for this problem. MATLAB's output message is shown below.
However, my gut tells me that the first-order optimality measure is too high and nowhere near zero. The optimization stopped because the size of the current step is less than the selected step size tolerance, but a further decrease of this option does not lead to a lower first-order optimality measure or significant changes in the design vector.
How can I further investigate whether this first-order optimality measure is acceptable for my problem?
(The optimization problem has been run with multiple setups and algorithms. The options shown below gave, for the first time, an actually feasible optimized design after weeks of trying. However, is it a local minimum?)
options=optimoptions(@fmincon,...
'Algorithm','sqp',...
'TolX',1e-6,... % smallest allowed step; steps below this stop the optimizer
'TolFun',1e-3,... % lower bound on the change in the objective value during a step
'TolCon',1e-3,... % constraint violation tolerance
'Display','iter',...
'DiffMinChange',1e-2,...
'DiffMaxChange',1e-1,...
'FinDiffType','central',...
'MaxFunEvals',10000,...
'MaxIter',1000,...
'OutputFcn',@outputFcn_global,...
'PlotFcn','optimplotfval');
tic
[x,val,exitflag,output,lambda, grad, hessian]=fmincon(@objectivefcn,x0,A,b,Aeq,beq,lb,ub,@constraints,options)
toc
Local minimum possible. Constraints satisfied. fmincon stopped because the size of the current step is less than the selected value of the step size tolerance and constraints are satisfied to within the selected value of the constraint tolerance.
exitflag = 2
output =
struct with fields:
iterations: 40
funcCount: 2928
algorithm: 'sqp'
message: 'Local minimum possible. Constraints satisfied.↵↵fmincon stopped because the size of the current step is less than↵the selected value of the step size tolerance and constraints are ↵satisfied to within the selected value of the constraint tolerance.↵↵Stopping criteria details:↵↵Optimization stopped because the relative changes in all elements of x are↵less than options.StepTolerance = 1.000000e-06, and the relative maximum constraint↵violation, 1.309327e-04, is less than options.ConstraintTolerance = 1.000000e-03.↵↵Optimization Metric Options↵max(abs(delta_x./x)) = 5.25e-07 StepTolerance = 1e-06 (selected)↵relative max(constraint violation) = 1.31e-04 ConstraintTolerance = 1e-03 (selected)'
constrviolation: 1.309326961893564e-04
stepsize: 1.698262377802769e-06
lssteplength: 1.299348114471227e-06
firstorderopt: 0.145027707751111
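One rough scale check that can be run with the outputs above (a sketch; treating the objective-gradient norm as the problem's natural scale is an assumption, not something fmincon reports):
% compare the absolute first-order optimality measure to the
% infinity norm of the objective gradient at the solution
relOpt = output.firstorderopt / max(norm(grad, inf), eps);
fprintf('first-order optimality: %.3g (relative: %.3g)\n', ...
    output.firstorderopt, relOpt)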
  5 Comments
Jurrien Plijter
Jurrien Plijter on 3 Nov 2020
Even if I set the minimum step size to 0, MATLAB takes such small steps that it does not further decrease my first-order optimality measure. That is what I am trying to say.
That was also the struggle I had in setting up the options correctly in the first place: the first trial runs stopped because the step size became too small, while not even satisfying the constraints, etc.
Bruno Luong
Bruno Luong on 3 Nov 2020
"Even if i set the minimum stepsize to 0, matlab takes such small steps that it does not further decrease my first optimality condition. "
Then it stops because one of the *other* criteria is triggered. Decrease the one that triggers until the first order triggs the stopping.
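For example (a sketch; the tolerance values are illustrative):
% loosen StepTolerance so that, if anything, the first-order
% optimality criterion (OptimalityTolerance) is what ends the run
options = optimoptions(options, ...
    'StepTolerance', 1e-12, ...   % was TolX = 1e-6
    'OptimalityTolerance', 1e-6); % target first-order optimality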


Accepted Answer

Alan Weiss
Alan Weiss on 4 Nov 2020
From your problem description it is very likely that the objective function is not smooth. Your black box functions may respond in nonsmooth ways to very small changes in your optimization parameter x. You may never be able to decrease your first-order optimality measure to near zero. See Optimizing a Simulation or ODE for a discussion.
You seem to have done a good job setting larger-than-default finite differences and you use central finite differences. I would have suggested these steps, but you already figured it out. As for whether you are at a true local minimum, it is hard to know. You can try looking at When the Solver Succeeds for some guidance, but I suspect that you might have already figured out these checks.
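For example, one check along the lines of that page (expensive here, at roughly 30 seconds per function evaluation) is to restart from a few perturbed start points and see whether fmincon returns to roughly the same point. A sketch; the three restarts and the 5% perturbation size are illustrative:
% restart fmincon near the candidate optimum x
rng default
for k = 1:3
    x0k = x .* (1 + 0.05*randn(size(x))); % ~5% random perturbation
    x0k = min(max(x0k, lb), ub);          % clip to the bound constraints
    [xk, fk] = fmincon(@objectivefcn, x0k, A, b, Aeq, beq, lb, ub, ...
        @constraints, options);
    fprintf('restart %d: f = %.6g, ||x - xk||_inf = %.3g\n', ...
        k, fk, norm(x - xk, inf))
end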
Good luck,
Alan Weiss
MATLAB mathematical toolbox documentation
  1 Comment
Jurrien Plijter
Jurrien Plijter on 5 Nov 2020
Hi Alan, yes, I already checked those links. Thank you for your support and understanding; it sure helps in understanding what is going on!


More Answers (1)

Bruno Luong
Bruno Luong on 5 Nov 2020
Edited: Bruno Luong on 5 Nov 2020
If the black box you use does some weird stuff such as thresholding, branching, truncation ... that in turn makes your cost function non-smooth (entirely possible, as Alan suggested), then you might take over computing the gradient yourself, using your own discrete finite-difference estimate with a large-enough step to avoid the artifacts of your black box.
% gradient calculation
fx = f(x);                   % objective f evaluated at the current point x
n  = numel(x);
dfdx = zeros(n, 1);          % preallocate the gradient
h = something_not_too_small; % step size, relative to your decision
                             % variables, and YOU can control it
for i = 1:n
    deltax    = zeros(size(x));
    deltax(i) = h;
    dfdx(i)   = (f(x + deltax) - fx) / h;
end
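To make fmincon actually use such a hand-rolled gradient, return it as a second output of the objective and set 'SpecifyObjectiveGradient'. A sketch; the wrapper name objectivefcnWithGrad and the step h = 1e-2 are illustrative, not the poster's code:
function [fval, dfdx] = objectivefcnWithGrad(x)
% wrapper around the black-box objective that also returns a
% forward-difference gradient with a user-controlled step
fval = objectivefcn(x);   % existing black-box objective
if nargout > 1            % gradient only when fmincon asks for it
    h = 1e-2;             % step large enough to bridge black-box noise
    n = numel(x);
    dfdx = zeros(n, 1);
    for i = 1:n
        dx = zeros(size(x));
        dx(i) = h;
        dfdx(i) = (objectivefcn(x + dx) - fval) / h;
    end
end
end
Then, in the driver script:
options = optimoptions(options, 'SpecifyObjectiveGradient', true);
[x, val] = fmincon(@objectivefcnWithGrad, x0, A, b, Aeq, beq, lb, ub, ...
    @constraints, options);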
Does central finite differencing help? It might, or it might NOT. It is the step size that matters, IMO. But you can also implement a central-difference variant of your gradient estimate.
Alternatively, you might also play with 'FiniteDifferenceStepSize' and use a bigger value than the default.
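For example (a sketch; the value 1e-2 is illustrative):
% enlarge the finite-difference step beyond the default
% (sqrt(eps) for forward, eps^(1/3) for central differences)
options = optimoptions(options, 'FiniteDifferenceStepSize', 1e-2);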
A larger step will provide a more stable and robust gradient estimate, and in turn might help fmincon get farther toward convergence, but eventually it will fail when it refines the solution below the black box's "threshold". You might try tweaking the optimization parameters to reduce the error, but the solution won't converge correctly if the objective function is not smooth.
There is no miracle here: fmincon requires a smooth objective function; if the input is not smooth, then fmincon cannot return a good solution.
What do they call it? GIGO?

Version

R2018b
