
When the Solver Succeeds

What Can Be Wrong If The Solver Succeeds?

A solver can report that a minimization succeeded, and yet the reported solution can be incorrect. For a rather trivial example, consider minimizing the function f(x) = x^3 for x between -2 and 2, starting from the point 1/3:

options = optimoptions('fmincon','Algorithm','active-set');
ffun = @(x)x^3;
xfinal = fmincon(ffun,1/3,[],[],[],[],-2,2,[],options)         

Local minimum found that satisfies the constraints.

Optimization completed because the objective function is
non-decreasing in feasible directions, to within the default  
value of the function tolerance, and constraints were satisfied
to within the default value of the constraint tolerance.

No active inequalities.

xfinal =
 -1.5056e-008

The true minimum occurs at x = -2. fmincon gives this report because the function f(x) is so flat near x = 0.
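
As a quick confirmation (a check added here, not part of the original example), compare the objective value at the reported point with the value at the lower bound, where the true minimum lies:

ffun(xfinal)   % essentially zero, because xfinal sits in the flat region near x = 0
ffun(-2)       % -8, far lower, so the reported point is not the true minimum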

Another common problem is that a solver finds a local minimum, but you might want a global minimum. For more information, see Local vs. Global Optima.

Lesson: check your results, even if the solver reports that it “found” a local minimum, or “solved” an equation.

This section gives techniques for verifying results.

1. Change the Initial Point

The initial point can have a large effect on the solution. If you obtain the same or worse solutions from various initial points, you become more confident in your solution.

For example, minimize f(x) = x^3 + x^4 starting from the point 1/4:

ffun = @(x)x^3 + x^4;
options = optimoptions('fminunc','Algorithm','quasi-newton');
[xfinal fval] = fminunc(ffun,1/4,options)

Local minimum found.

Optimization completed because the size of the gradient 
is less than the default value of the function tolerance.

xfinal =
 -1.6764e-008

fval =
 -4.7111e-024

Change the initial point by a small amount, and the solver finds a better solution:

[xfinal fval] = fminunc(ffun,1/4+.001,options)

Local minimum found.

Optimization completed because the size of the gradient 
is less than the default value of the function tolerance.

xfinal =
   -0.7500

fval =
   -0.1055

x = -0.75 is the global solution; starting from other points cannot improve the solution.

For more information, see Local vs. Global Optima.
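
One way to act on this advice more systematically is a small multistart loop. The following sketch reuses ffun and options from above; the list of starting points is arbitrary:

starts = [-2 -1 -0.25 0.25 1 2];                 % arbitrary set of starting points
bestx = NaN;
bestf = Inf;
for k = 1:numel(starts)
    [xk,fk] = fminunc(ffun,starts(k),options);   % each run prints its own exit message
    if fk < bestf                                % keep the best result found so far
        bestx = xk;
        bestf = fk;
    end
end
bestx, bestf    % for this function, the best run reaches x = -0.75, fval = -0.1055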

2. Check Nearby Points

To see if there are better values than a reported solution, evaluate your objective function and constraints at various nearby points.

For example, with the objective function ffun from What Can Be Wrong If The Solver Succeeds?, and the final point xfinal = -1.5056e-008, calculate ffun(xfinal±Δ) for some Δ:

delta = .1;
[ffun(xfinal),ffun(xfinal+delta),ffun(xfinal-delta)]

ans =
   -0.0000    0.0011   -0.0009

The objective function value is lower at xfinal-Δ than at xfinal, so the solver reported an incorrect solution.
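
To make this check slightly more systematic, you can sweep several perturbation sizes at once; a small sketch (the step sizes are arbitrary):

deltas = [1e-4 1e-3 1e-2 1e-1];                    % arbitrary perturbation sizes
fplus  = arrayfun(@(d)ffun(xfinal + d), deltas)    % values to the right of xfinal
fminus = arrayfun(@(d)ffun(xfinal - d), deltas)    % values to the left, all lower than ffun(xfinal)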

A less trivial example:

options = optimoptions(@fmincon,'Algorithm','active-set');
lb = [0,-1]; ub = [1,1];
ffun = @(x)(x(1)-(x(1)-x(2))^2);
[x fval exitflag] = fmincon(ffun,[1/2 1/3],[],[],[],[],...
                           lb,ub,[],options)

Local minimum found that satisfies the constraints.

Optimization completed because the objective function is
non-decreasing in feasible directions, to within the default  
value of the function tolerance, and constraints were satisfied
to within the default value of the constraint tolerance.

Active inequalities (to within options.ConstraintTolerance = 1e-006):
  lower      upper     ineqlin   ineqnonlin
    1                                 

x =
  1.0e-007 *
         0    0.1614

fval =
 -2.6059e-016

exitflag =
     1

Evaluating ffun at nearby feasible points shows that the solution x is not a true minimum:

[ffun([0,.001]),ffun([0,-.001]),...
    ffun([.001,-.001]),ffun([.001,.001])]

ans =
  1.0e-003 *
   -0.0010   -0.0010    0.9960    1.0000

The first two listed values are smaller than the computed minimum fval.
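
To automate this kind of spot check, one possible sketch (the number of points and the search radius are arbitrary) draws random points near x, clips them to the bounds so they stay feasible, and compares their objective values with fval:

rng default                                       % for reproducibility
n = 50;                                           % number of trial points
P = repmat(x,n,1) + 0.01*(2*rand(n,2) - 1);       % random points near the reported solution
P = max(min(P,repmat(ub,n,1)),repmat(lb,n,1));    % clip to the bounds so the points stay feasible
vals = zeros(n,1);
for k = 1:n
    vals(k) = ffun(P(k,:));                       % ffun expects a 1-by-2 point
end
min(vals) < fval                                  % logical 1 means a better feasible point was found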

If you have a Global Optimization Toolbox license, you can use the patternsearch (Global Optimization Toolbox) function to check nearby points.
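
For this example, a minimal patternsearch check could look like the following sketch (it assumes the toolbox is installed and reuses ffun, x, lb, and ub from above); patternsearch polls points around x and returns a lower objective value if it finds one:

[xps,fvalps] = patternsearch(ffun,x,[],[],[],[],lb,ub)   % poll around the fmincon solution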

3. Check Your Objective and Constraint Functions

Double-check your objective function and constraint functions to ensure that they correspond to the problem you intend to solve. Suggestions:

  • Check the evaluation of your objective function at a few points.

  • Check that each inequality constraint has the correct sign. For example, fmincon expects nonlinear inequality constraints in the form c(x) ≤ 0; a quick sign check appears in the sketch after this list.

  • If you performed a maximization, remember to take the negative of the reported objective value. (This advice assumes that you maximized a function by minimizing the negative of the objective.) For example, to maximize f(x) = x - x^2, minimize g(x) = -x + x^2:

    options = optimoptions('fminunc','Algorithm','quasi-newton');
    [x fval] = fminunc(@(x)-x+x^2,0,options)
    
    Local minimum found.
    
    Optimization completed because the size of the gradient is 
    less than the default value of the function tolerance.
    
    x =
        0.5000
    
    fval =
       -0.2500

    The maximum of f is 0.25, the negative of fval.

  • Check that an infeasible point does not cause an error in your functions; see Iterations Can Violate Constraints and the sketch below.
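
For the constraint-sign and infeasible-point checks, a sketch along the following lines can help. It reuses the two-variable ffun from Check Nearby Points; mycon and xtrial are placeholders introduced here, with mycon standing in for a user-supplied nonlinear constraint function of the usual [c,ceq] form:

mycon  = @(x)deal(x(1)^2 + x(2)^2 - 1,[]);   % hypothetical constraint x(1)^2 + x(2)^2 <= 1, written as c(x) <= 0
xtrial = [10,-10];                           % a point well outside the bounds lb, ub
try
    fobj = ffun(xtrial)                      % the objective should return a value, not throw an error
    [c,ceq] = mycon(xtrial)                  % c > 0 correctly flags the violated inequality
catch err
    disp(['Error at the infeasible point: ' err.message])
end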