Problem-Based Optimization Algorithms

Internally, the solve function solves optimization problems by calling a solver. To see the default solver and the supported solvers for a problem, use the solvers function. You can override the default by using the 'solver' name-value pair argument when calling solve.
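
For instance, the following sketch (the two-variable linear program is illustrative, not taken from this documentation) queries the solvers for a problem and then overrides the default:

    x = optimvar('x',2,'LowerBound',0);
    prob = optimproblem('Objective',x(1) + 2*x(2));
    prob.Constraints.total = x(1) + x(2) >= 3;
    [autosolver,validsolvers] = solvers(prob)   % default solver and supported solvers
    sol = solve(prob,'Solver','linprog');       % override the default choice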

Before solve can call a solver, the problem must be converted to solver form, either by solve or by an associated function or object. For example, the conversion represents linear constraints as matrices rather than as optimization variable expressions.

The first step in the algorithm occurs when you place optimization expressions into the problem. An OptimizationProblem object has an internal list of the variables used in its expressions. Each variable has a linear index in the expression and a size. Therefore, the problem variables have an implied matrix form. The prob2struct function performs the conversion from problem form to solver form. For an example, see Convert Problem to Structure.
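
Continuing the illustrative sketch above (assuming default options), prob2struct exposes the matrix form, and varindex shows how the problem variables map to solver-form indices:

    problem = prob2struct(prob);    % convert problem form to solver form
    problem.f                       % objective coefficient vector
    problem.Aineq, problem.bineq    % linear inequalities in matrix form
    idx = varindex(prob)            % linear indices of the problem variables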

For nonlinear optimization problems, solve uses automatic differentiation to compute the gradients of the objective function and nonlinear constraint functions. These derivatives apply when the objective and constraint functions are composed of Supported Operations for Optimization Variables and Expressions. When automatic differentiation does not apply, solvers estimate derivatives using finite differences. For details of automatic differentiation, see Automatic Differentiation Background. You can control how solve uses automatic differentiation with the ObjectiveDerivative and ConstraintDerivative name-value arguments.
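
As a minimal sketch (the Rosenbrock-style objective here is illustrative), the following code turns off automatic differentiation in favor of finite differences:

    x = optimvar('x',2);
    prob = optimproblem('Objective',100*(x(2) - x(1)^2)^2 + (1 - x(1))^2);
    x0.x = [-1; 2];
    % The default value 'auto' uses automatic differentiation, because this
    % objective is composed of supported operations
    sol = solve(prob,x0,'ObjectiveDerivative','finite-differences');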

For the algorithm that intlinprog uses to solve MILP problems, see Legacy intlinprog Algorithm. For the algorithms that linprog uses to solve linear programming problems, see Linear Programming Algorithms. For the algorithms that quadprog uses to solve quadratic programming problems, see Quadratic Programming Algorithms. For linear or nonlinear least-squares solver algorithms, see Least-Squares (Model Fitting) Algorithms. For nonlinear solver algorithms, see Unconstrained Nonlinear Optimization Algorithms and Constrained Nonlinear Optimization Algorithms. For Global Optimization Toolbox solver algorithms, see Global Optimization Toolbox documentation.

For nonlinear equation solving, solve internally represents each equation as the difference between the left and right sides. Then solve attempts to minimize the sum of squares of the equation components. For the algorithms for solving nonlinear systems of equations, see Equation Solving Algorithms. When the problem also has bounds, solve calls lsqnonlin to minimize the sum of squares of equation components. See Least-Squares (Model Fitting) Algorithms.
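
A brief sketch (this small system is illustrative): because the variable has bounds, solve minimizes the sum of squares of the two equation components by calling lsqnonlin:

    x = optimvar('x',2,'LowerBound',0);           % bounds cause solve to call lsqnonlin
    prob = eqnproblem;
    prob.Equations.eq1 = x(1)^2 + x(2)^2 == 10;   % internally x(1)^2 + x(2)^2 - 10
    prob.Equations.eq2 = x(2) == 3*x(1);          % internally x(2) - 3*x(1)
    x0.x = [1; 1];
    [sol,fval] = solve(prob,x0);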

Note

If your objective function is a sum of squares, and you want solve to recognize it as such, write it as either norm(expr)^2 or sum(expr.^2), and not as expr'*expr or any other form. The internal parser recognizes a sum of squares only when it is represented as the square of a norm or as an explicit sum of squares. For details, see Write Objective Function for Problem-Based Least Squares. For an example, see Nonnegative Linear Least Squares, Problem-Based.
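
For instance (the matrix C and vector d below are illustrative), only the first objective form is parsed as a least-squares problem:

    C = [1 0 0; 0 1 0; 0 0 1; 1 1 1];
    d = [1; 2; 3; 4];
    x = optimvar('x',3);
    expr = C*x - d;
    prob = optimproblem('Objective',sum(expr.^2));   % recognized as least squares
    % prob = optimproblem('Objective',expr'*expr);   % same value, but not recognized
    autosolver = solvers(prob)   % default choice reflects the recognition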
