# Documentation

## Choosing a Solver

### Optimization Decision Table

The following table is designed to help you choose a solver. It does not address multiobjective optimization or equation solving. There are more details on all the solvers in Problems Handled by Optimization Toolbox Functions.

Use the table as follows:

1. Identify your objective function as one of five types:

• Linear

• Quadratic

• Sum-of-squares (Least squares)

• Smooth nonlinear

• Nonsmooth

2. Identify your constraints as one of five types:

• None (unconstrained)

• Bound

• Linear (including bound)

• General smooth

• Discrete (integer)

3. Use the table to identify a relevant solver.

In this table:

• * means relevant solvers are found in Global Optimization Toolbox functions (licensed separately from Optimization Toolbox™ solvers).

• `fmincon` applies to most smooth objective functions with smooth constraints. It is not listed as a preferred solver for least squares or linear or quadratic programming because the listed solvers are usually more efficient.

• The table has suggested functions, but it is not meant to unduly restrict your choices. For example, `fmincon` can be effective on some nonsmooth problems.

• The Global Optimization Toolbox `ga` function can address mixed-integer programming problems.

**Solvers by Objective and Constraint**

**Note:** This table does not list multiobjective solvers or equation solvers. See Problems Handled by Optimization Toolbox Functions for a complete list of problems addressed by Optimization Toolbox functions.

**Note:** Some solvers have several algorithms. For help choosing, see Choosing the Algorithm.

### Choosing the Algorithm

#### fmincon Algorithms

`fmincon` has four algorithm options:

• `'interior-point'` (default)

• `'trust-region-reflective'`

• `'sqp'`

• `'active-set'`

Use `optimoptions` to set the `Algorithm` option at the command line.

Recommendations
• Use the `'interior-point'` algorithm first.

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.

• To run an optimization again to obtain more speed on small- to medium-sized problems, try `'sqp'` next, and `'active-set'` last.

• Use `'trust-region-reflective'` when applicable. Your problem must have an objective function that includes a gradient, and either only bounds or only linear equality constraints (but not both).

Reasoning Behind the Recommendations.

• `'interior-point'` handles large, sparse problems, as well as small dense problems. The algorithm satisfies bounds at all iterations, and can recover from `NaN` or `Inf` results. It is a large-scale algorithm; see Large-Scale vs. Medium-Scale Algorithms. The algorithm can use special techniques for large-scale problems. For details, see Interior-Point Algorithm.

• `'sqp'` satisfies bounds at all iterations. The algorithm can recover from `NaN` or `Inf` results. It is not a large-scale algorithm; see Large-Scale vs. Medium-Scale Algorithms.

• `'active-set'` can take large steps, which adds speed. The algorithm is effective on some problems with nonsmooth constraints. It is not a large-scale algorithm; see Large-Scale vs. Medium-Scale Algorithms.

• `'trust-region-reflective'` requires you to provide a gradient, and allows only bounds or linear equality constraints, but not both. Within these limitations, the algorithm handles both large sparse problems and small dense problems efficiently. It is a large-scale algorithm; see Large-Scale vs. Medium-Scale Algorithms. The algorithm can use special techniques to save memory usage, such as a Hessian multiply function. For details, see Trust-Region-Reflective Algorithm.
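As a minimal sketch of the recommendations, the following example (an illustrative problem, not from this section) minimizes the Rosenbrock function over the unit disk with the default `'interior-point'` algorithm, then reruns with `'sqp'` for more speed on this small problem:

```matlab
% Illustrative example: Rosenbrock objective over the unit disk.
fun = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
nonlcon = @(x) deal(x(1)^2 + x(2)^2 - 1, []);  % c(x) <= 0, no equalities
x0 = [0 0];

% First try the default 'interior-point' algorithm.
options = optimoptions(@fmincon,'Algorithm','interior-point','Display','off');
x = fmincon(fun,x0,[],[],[],[],[],[],nonlcon,options);

% To rerun for more speed on this small problem, try 'sqp' next.
options = optimoptions(options,'Algorithm','sqp');
x2 = fmincon(fun,x0,[],[],[],[],[],[],nonlcon,options);
```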

#### fsolve Algorithms

`fsolve` has three algorithms:

• `'trust-region-dogleg'` (default)

• `'trust-region-reflective'`

• `'levenberg-marquardt'`

Use `optimoptions` to set the `Algorithm` option at the command line.

Recommendations
• Use the `'trust-region-dogleg'` algorithm first.

For help if `fsolve` fails, see When the Solver Fails or When the Solver Might Have Succeeded.

• To solve equations again if you have a Jacobian multiply function, or want to tune the internal algorithm (see Trust-Region-Reflective Algorithm Only), try `'trust-region-reflective'`.

• Try timing all the algorithms, including `'levenberg-marquardt'`, to find the algorithm that works best on your problem.

Reasoning Behind the Recommendations.

• `'trust-region-dogleg'` is the only algorithm that is specially designed to solve nonlinear equations. The others attempt to minimize the sum of squares of the function.

• The `'trust-region-reflective'` algorithm is effective on sparse problems. It can use special techniques such as a Jacobian multiply function for large-scale problems.
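A minimal sketch of the default recommendation, using an illustrative two-equation system:

```matlab
% Illustrative system: solve F(x) = 0 for two equations in two unknowns.
F = @(x) [2*x(1) - x(2) - exp(-x(1));
         -x(1) + 2*x(2) - exp(-x(2))];
x0 = [-5; -5];
options = optimoptions(@fsolve,'Algorithm','trust-region-dogleg','Display','off');
[x,fval] = fsolve(F,x0,options);  % x is near [0.5671; 0.5671]
```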

#### fminunc Algorithms

`fminunc` has two algorithms:

• `'trust-region'` (formerly `LargeScale = 'on'`), the default

• `'quasi-newton'` (formerly `LargeScale = 'off'`)

Use `optimoptions` to set the `Algorithm` option at the command line.

Recommendations
• If your objective function includes a gradient, set the `Algorithm` option to `'trust-region'`, and set the `GradObj` option to `'on'`.

• Otherwise, set the `Algorithm` option to `'quasi-newton'`.

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.
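For example, supplying the gradient of the Rosenbrock function (an illustrative objective) lets `fminunc` use `'trust-region'`. Save the objective in a file such as `rosenboth.m`:

```matlab
function [f,g] = rosenboth(x)
% Rosenbrock function and its gradient (illustrative example).
f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
if nargout > 1  % gradient requested
    g = [-400*x(1)*(x(2) - x(1)^2) - 2*(1 - x(1));
          200*(x(2) - x(1)^2)];
end
```

Then, at the command line:

```matlab
options = optimoptions(@fminunc,'Algorithm','trust-region','GradObj','on');
x = fminunc(@rosenboth,[-1.2 1],options);  % x approaches [1 1]
```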

#### Least Squares Algorithms

lsqlin.  `lsqlin` has three algorithms:

• `'trust-region-reflective'` (formerly `LargeScale = 'on'`), the default

• `'active-set'` (formerly `LargeScale = 'off'`)

• `'interior-point'`

Use `optimoptions` to set the `Algorithm` option at the command line.

Recommendations
• If you have no constraints or only bound constraints, use `'trust-region-reflective'`.

• If you have linear constraints, try `'interior-point'` first.

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.
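A minimal bound-constrained sketch with illustrative data:

```matlab
% Illustrative data: minimize ||C*x - d||^2 subject to 0 <= x <= 1.
C = [1 2; 3 4; 5 6];
d = [7; 8; 9];
lb = [0; 0];
ub = [1; 1];
options = optimoptions(@lsqlin,'Algorithm','trust-region-reflective');
x = lsqlin(C,d,[],[],[],[],lb,ub,[],options);
```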

lsqcurvefit and lsqnonlin.  `lsqcurvefit` and `lsqnonlin` have two algorithms:

• `'trust-region-reflective'` (default)

• `'levenberg-marquardt'`

Use `optimoptions` to set the `Algorithm` option at the command line.

Recommendations
• If you have no constraints or only bound constraints, use `'trust-region-reflective'`.

• If your problem is underdetermined (fewer equations than dimensions), use `'levenberg-marquardt'`.

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.
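For example, fitting an exponential model to illustrative data with the default algorithm:

```matlab
% Illustrative data: fit y = x(1)*exp(x(2)*t) in the least-squares sense.
t = (0:4)';
y = [2.0; 1.1; 0.64; 0.35; 0.19];
model = @(x,t) x(1)*exp(x(2)*t);
x0 = [1 -1];
options = optimoptions(@lsqcurvefit,'Algorithm','trust-region-reflective','Display','off');
x = lsqcurvefit(model,x0,t,y,[],[],options);
```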

#### Linear Programming Algorithms

`linprog` has four algorithms:

• `'interior-point'`, the default

• `'dual-simplex'`

• `'simplex'`

• `'active-set'`

Use `optimoptions` to set the `Algorithm` option at the command line.

Recommendations

Use the `'interior-point'` algorithm or the `'dual-simplex'` algorithm first.

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.

Reasoning Behind the Recommendations.

• The `'interior-point'` and `'dual-simplex'` algorithms are large-scale algorithms, while the other two are not. See Large-Scale vs. Medium-Scale Algorithms.

• Often, the `'interior-point'` and `'dual-simplex'` algorithms are faster and use less memory than the other two algorithms.

• The `'active-set'` and `'simplex'` algorithms will be removed in a future release.
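A minimal sketch of the recommendation, with an illustrative linear program:

```matlab
% Illustrative LP: minimize f'*x subject to A*x <= b, x >= 0.
f = [-1; -2];
A = [1 1; 1 -1];
b = [2; 1];
lb = [0; 0];
opts = optimoptions(@linprog,'Algorithm','dual-simplex','Display','off');
[x,fval] = linprog(f,A,b,[],[],lb,[],[],opts);
```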

#### Quadratic Programming Algorithms

`quadprog` has three algorithms:

• `'interior-point-convex'` (default)

• `'trust-region-reflective'`

• `'active-set'` (will be removed in a future release).

Use `optimoptions` to set the `Algorithm` option at the command line.

Recommendations
• If you have a convex problem, or if you don't know whether your problem is convex, use `'interior-point-convex'`.

• If you have a nonconvex problem with only bounds, or with only linear equalities, use `'trust-region-reflective'`.

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.
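For example, a small convex quadratic program (illustrative data) with the default algorithm:

```matlab
% Illustrative convex QP: minimize 1/2*x'*H*x + c'*x subject to A*x <= b.
H = [2 0; 0 2];   % positive definite, so the problem is convex
c = [-2; -5];
A = [1 1];
b = 2;
opts = optimoptions(@quadprog,'Algorithm','interior-point-convex','Display','off');
[x,fval] = quadprog(H,c,A,b,[],[],[],[],[],opts);
```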

#### Large-Scale vs. Medium-Scale Algorithms

An optimization algorithm is large scale when it uses linear algebra that does not need to store, nor operate on, full matrices. This may be done internally by storing sparse matrices, and by using sparse linear algebra for computations whenever possible. Furthermore, the internal algorithms either preserve sparsity, such as a sparse Cholesky decomposition, or do not generate matrices, such as a conjugate gradient method.

In contrast, medium-scale methods internally create full matrices and use dense linear algebra. If a problem is sufficiently large, full matrices take up a significant amount of memory, and the dense linear algebra may require a long time to execute.

Don't let the name "large scale" mislead you; you can use a large-scale algorithm on a small problem. Furthermore, you do not need to specify any sparse matrices to use a large-scale algorithm. Choose a medium-scale algorithm to access extra functionality, such as additional constraint types, or possibly for better performance.

#### Potential Inaccuracy with Interior-Point Algorithms

Interior-point algorithms in `fmincon`, `quadprog`, and `linprog` have many good characteristics, such as low memory usage and the ability to solve large problems quickly. However, their solutions can be slightly less accurate than those from other algorithms. The reason for this potential inaccuracy is that the (internally calculated) barrier function keeps iterates away from inequality constraint boundaries.

For most practical purposes, this inaccuracy is usually quite small.

To reduce the inaccuracy, try to:

• Rerun the solver with smaller `TolX`, `TolFun`, and possibly `TolCon` tolerances (but keep the tolerances sensible; see Tolerances and Stopping Criteria).

• Run a different algorithm, starting from the interior-point solution. This can fail, because some algorithms can use excessive memory or time, and some `linprog` and `quadprog` algorithms do not accept an initial point.

For example, try to minimize the function x when bounded below by 0. Using the `fmincon` `interior-point` algorithm:

```matlab
options = optimoptions(@fmincon,'Algorithm','interior-point','Display','off');
x = fmincon(@(x)x,1,[],[],[],[],0,[],[],options)
```
```
x = 2.0000e-08
```

Using the `fmincon` `sqp` algorithm:

```matlab
options.Algorithm = 'sqp';
x2 = fmincon(@(x)x,1,[],[],[],[],0,[],[],options)
```
```
x2 = 0
```

Similarly, solve the same problem using the `linprog` `interior-point` algorithm:

```matlab
opts = optimoptions(@linprog,'Display','off','Algorithm','interior-point');
x = linprog(1,[],[],[],[],0,[],1,opts)
```
```
x = 2.0833e-13
```

Using the `linprog` `simplex` algorithm:

```matlab
opts.Algorithm = 'simplex';
x2 = linprog(1,[],[],[],[],0,[],1,opts)
```
```
x2 = 0
```

In these cases, the `interior-point` algorithms are less accurate, but the answers are quite close to the correct answer.

### Problems Handled by Optimization Toolbox Functions

The following tables show the functions available for minimization, equation solving, multiobjective optimization, and solving least-squares or data-fitting problems.

Minimization Problems

| Type | Formulation | Solver |
| --- | --- | --- |
| Scalar minimization | $\min_x f(x)$ such that lb < x < ub (x is scalar) | `fminbnd` |
| Unconstrained minimization | $\min_x f(x)$ | `fminunc`, `fminsearch` |
| Linear programming | $\min_x f^T x$ such that A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub | `linprog` |
| Mixed-integer linear programming | $\min_x f^T x$ such that A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub, x(intcon) is integer-valued | `intlinprog` |
| Quadratic programming | $\min_x \tfrac{1}{2}x^T H x + c^T x$ such that A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub | `quadprog` |
| Constrained minimization | $\min_x f(x)$ such that c(x) ≤ 0, ceq(x) = 0, A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub | `fmincon` |
| Semi-infinite minimization | $\min_x f(x)$ such that K(x,w) ≤ 0 for all w, c(x) ≤ 0, ceq(x) = 0, A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub | `fseminf` |

Multiobjective Problems

| Type | Formulation | Solver |
| --- | --- | --- |
| Goal attainment | $\min_{x,\gamma} \gamma$ such that F(x) – w·γ ≤ goal, c(x) ≤ 0, ceq(x) = 0, A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub | `fgoalattain` |
| Minimax | $\min_x \max_i F_i(x)$ such that c(x) ≤ 0, ceq(x) = 0, A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub | `fminimax` |

Equation Solving Problems

| Type | Formulation | Solver |
| --- | --- | --- |
| Linear equations | C·x = d, n equations, n variables | `\` (matrix left division) |
| Nonlinear equation of one variable | f(x) = 0 | `fzero` |
| Nonlinear equations | F(x) = 0, n equations, n variables | `fsolve` |

Least-Squares (Model-Fitting) Problems

| Type | Formulation | Solver |
| --- | --- | --- |
| Linear least-squares | $\min_x \tfrac{1}{2}\lVert C\cdot x - d\rVert_2^2$, m equations, n variables | `\` (matrix left division) |
| Nonnegative linear least-squares | $\min_x \tfrac{1}{2}\lVert C\cdot x - d\rVert_2^2$ such that x ≥ 0 | `lsqnonneg` |
| Constrained linear least-squares | $\min_x \tfrac{1}{2}\lVert C\cdot x - d\rVert_2^2$ such that A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub | `lsqlin` |
| Nonlinear least-squares | $\min_x \lVert F(x)\rVert_2^2 = \min_x \sum_i F_i^2(x)$ such that lb ≤ x ≤ ub | `lsqnonlin` |
| Nonlinear curve fitting | $\min_x \lVert F(x,xdata) - ydata\rVert_2^2$ such that lb ≤ x ≤ ub | `lsqcurvefit` |