I have been running some experiments to see how the number of function evaluations (F-count) changes depending on the type of gradient information passed to fmincon (objective gradient vs. constraint gradient). Here are the results for a test case (decision space: 60 variables), along with the time each run took to converge (measured with tic/toc):
Case 1: No gradient info. is passed
F-count: 6451 Time: 71.03 secs
Case 2: Only objective gradient info. is passed
F-count: 6462 Time: 24.64 secs
Case 3: Only constraint gradient info. is passed
F-count: 6343 Time: 15.54 secs
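For clarity, the only thing that changes between the three cases are the two gradient flags; everything else stays as in the options shown further below. A sketch of the per-case toggles, starting from that shared options object:

opts1 = optimoptions(options, 'SpecifyObjectiveGradient',false, 'SpecifyConstraintGradient',false);  % Case 1
opts2 = optimoptions(options, 'SpecifyObjectiveGradient',true,  'SpecifyConstraintGradient',false);  % Case 2
opts3 = optimoptions(options, 'SpecifyObjectiveGradient',false, 'SpecifyConstraintGradient',true);   % Case 3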
I understand that F-count reports only the total number of objective function evaluations, regardless of the presence of constraint functions (see Iterations and Function Counts). I also understand that, because my constraints are fairly complex, providing the constraint gradient yields a larger speedup than providing the objective gradient (Case 3 vs. Case 2). What I do not understand is why the F-count is so similar in Cases 2 and 3. I was under the impression that Case 2 would have a significantly lower F-count, since I am already providing the objective gradient and fmincon should not need to estimate it by finite differences.
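For context, by "gradient info." I mean returning gradients as extra outputs in the standard fmincon way. A minimal sketch (the quadratic objective and the single constraint here are placeholders, not my actual functions):

function [f, g] = myObj(x)
f = sum(x.^2);              % objective value
if nargout > 1
    g = 2*x;                % gradient, same size as x; computed only when fmincon requests it
end
end

function [c, ceq, gc, gceq] = myCon(x)
c   = sum(x) - 1;           % nonlinear inequality, c(x) <= 0
ceq = [];                   % no equality constraints
if nargout > 2
    gc   = ones(numel(x), 1);   % column j of gc is the gradient of c(j)
    gceq = [];
end
end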
For reference, here is my options file:
options = optimoptions(@fmincon, 'Algorithm','interior-point', 'ScaleProblem',true, ...
    'SpecifyObjectiveGradient',true, ...                         % this flag (with SpecifyConstraintGradient) is what varies between cases
    'OptimalityTolerance',1.0e-6, 'StepTolerance',1.0e-6, ...    % 'TolFun' is the legacy name for OptimalityTolerance, so it is set once
    'MaxIterations',10000, 'MaxFunctionEvaluations',300000, ...  % current names for the legacy 'MaxIter'/'MaxFunEvals'
    'DiffMinChange',1.0e-3, 'Display','iter');
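The timings above come from wrapping the solve itself, along these lines (x0, lb, ub, myObj, and myCon stand in for my actual start point, bounds, and functions):

tic;
[x, fval, exitflag, output] = fmincon(@myObj, x0, [], [], [], [], lb, ub, @myCon, options);
toc
disp(output.funcCount)      % this is the F-count reported above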