quadprog
Quadratic programming
Syntax
Description
Solver for quadratic objective functions with linear constraints.
quadprog finds a minimum for a problem specified by
$$\underset{x}{\mathrm{min}}\frac{1}{2}{x}^{T}Hx+{f}^{T}x\text{ such that }\{\begin{array}{c}A\cdot x\le b,\\ Aeq\cdot x=beq,\\ lb\le x\le ub.\end{array}$$
H, A, and Aeq are matrices, and f, b, beq, lb, ub, and x are vectors.
You can pass f, lb, and ub as vectors or matrices; see Matrix Arguments.
Note
quadprog applies only to the solver-based approach. For a discussion of the two optimization approaches, see First Choose Problem-Based or Solver-Based Approach.
x = quadprog(H,f,A,b,Aeq,beq,lb,ub) solves the preceding problem subject to the additional restrictions lb ≤ x ≤ ub.
The inputs lb
and ub
are vectors of
doubles, and the restrictions hold for each x
component. If
no equalities exist, set Aeq = []
and
beq = []
.
Note
If the specified input bounds for a problem are inconsistent, the
output x
is x0
and the output
fval
is []
.
quadprog
resets components of
x0
that violate the bounds
lb
≤ x
≤ ub
to the interior of the box defined by the bounds.
quadprog
does not change components that
respect the bounds.
x = quadprog(problem) returns the minimum for problem, a structure described in problem. Create the problem structure using dot notation or the struct function. Alternatively, create a problem structure from an OptimizationProblem object by using prob2struct.
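For instance, a problem structure built with dot notation might look like the following sketch (the field values are illustrative; H, f, solver, and options are the required fields):

```matlab
% Build the problem structure with dot notation.
problem.H = [1 -1; -1 2];
problem.f = [-2; -6];
problem.Aineq = [1 1];    % optional: Aineq*x <= bineq
problem.bineq = 2;
problem.solver = 'quadprog';                               % required
problem.options = optimoptions('quadprog','Display','off'); % required
x = quadprog(problem);
```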
[wsout,fval,exitflag,output,lambda] = quadprog(H,f,A,b,Aeq,beq,lb,ub,ws) starts quadprog from the data in the warm start object ws, using the options in ws. The returned argument wsout contains the solution point in wsout.X. By using wsout as the initial warm start object in a subsequent solver call, quadprog can work faster.
Examples
Quadratic Program with Linear Constraints
Find the minimum of
$$f(x)=\frac{1}{2}{x}_{1}^{2}+{x}_{2}^{2}-{x}_{1}{x}_{2}-2{x}_{1}-6{x}_{2}$$
subject to the constraints
$$\begin{array}{l}{x}_{1}+{x}_{2}\le 2\\ -{x}_{1}+2{x}_{2}\le 2\\ 2{x}_{1}+{x}_{2}\le 3.\end{array}$$
In quadprog
syntax, this problem is to minimize
$$f(x)=\frac{1}{2}{x}^{T}Hx+{f}^{T}x$$,
where
$$\begin{array}{l}\mathit{H}=\left[\begin{array}{cc}1& -1\\ -1& 2\end{array}\right]\\ \mathit{f}=\left[\begin{array}{c}-2\\ -6\end{array}\right],\end{array}$$
subject to the linear constraints.
To solve this problem, first enter the coefficient matrices.
H = [1 -1; -1 2];
f = [-2; -6];
A = [1 1; -1 2; 2 1];
b = [2; 2; 3];
Call quadprog
.
[x,fval,exitflag,output,lambda] = ...
quadprog(H,f,A,b);
Minimum found that satisfies the constraints. Optimization completed because the objective function is nondecreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
Examine the final point, function value, and exit flag.
x,fval,exitflag
x = 2×1
0.6667
1.3333
fval = -8.2222
exitflag = 1
An exit flag of 1
means the result is a local minimum. Because H
is a positive definite matrix, this problem is convex, so the minimum is a global minimum.
Confirm that H
is positive definite by checking its eigenvalues.
eig(H)
ans = 2×1
0.3820
2.6180
Quadratic Program with Linear Equality Constraint
Find the minimum of
$$f(x)=\frac{1}{2}{x}_{1}^{2}+{x}_{2}^{2}-{x}_{1}{x}_{2}-2{x}_{1}-6{x}_{2}$$
subject to the constraint
$${x}_{1}+{x}_{2}=0.$$
In quadprog
syntax, this problem is to minimize
$$f(x)=\frac{1}{2}{x}^{T}Hx+{f}^{T}x$$,
where
$$\begin{array}{l}\mathit{H}=\left[\begin{array}{cc}1& -1\\ -1& 2\end{array}\right]\\ \mathit{f}=\left[\begin{array}{c}-2\\ -6\end{array}\right],\end{array}$$
subject to the linear constraint.
To solve this problem, first enter the coefficient matrices.
H = [1 -1; -1 2];
f = [-2; -6];
Aeq = [1 1];
beq = 0;
Call quadprog
, entering []
for the inputs A
and b
.
[x,fval,exitflag,output,lambda] = ...
quadprog(H,f,[],[],Aeq,beq);
Minimum found that satisfies the constraints. Optimization completed because the objective function is nondecreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
Examine the final point, function value, and exit flag.
x,fval,exitflag
x = 2×1
-0.8000
0.8000
fval = -1.6000
exitflag = 1
An exit flag of 1
means the result is a local minimum. Because H
is a positive definite matrix, this problem is convex, so the minimum is a global minimum.
Confirm that H
is positive definite by checking its eigenvalues.
eig(H)
ans = 2×1
0.3820
2.6180
Quadratic Minimization with Linear Constraints and Bounds
Find the x that minimizes the quadratic expression
$$\frac{1}{2}{x}^{T}Hx+{f}^{T}x$$
where
$\mathit{H}=\left[\begin{array}{ccc}1& -1& 1\\ -1& 2& -2\\ 1& -2& 4\end{array}\right]$, $\mathit{f}=\left[\begin{array}{c}2\\ -3\\ 1\end{array}\right]$,
subject to the constraints
$$0\le x\le 1$$, $$\sum x=1/2$$.
To solve this problem, first enter the coefficients.
H = [1,-1,1; -1,2,-2; 1,-2,4];
f = [2; -3; 1];
lb = zeros(3,1);
ub = ones(size(lb));
Aeq = ones(1,3);
beq = 1/2;
Call quadprog
, entering []
for the inputs A
and b
.
x = quadprog(H,f,[],[],Aeq,beq,lb,ub)
Minimum found that satisfies the constraints. Optimization completed because the objective function is nondecreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
x = 3×1
0.0000
0.5000
0.0000
Quadratic Minimization with Nondefault Options
Set options to monitor the progress of quadprog
.
options = optimoptions('quadprog','Display','iter');
Define a problem with a quadratic objective and linear inequality constraints.
H = [1 -1; -1 2];
f = [-2; -6];
A = [1 1; -1 2; 2 1];
b = [2; 2; 3];
To help write the quadprog
function call, set the unnecessary inputs to []
.
Aeq = []; beq = []; lb = []; ub = []; x0 = [];
Call quadprog
to solve the problem.
x = quadprog(H,f,A,b,Aeq,beq,lb,ub,x0,options)
 Iter            Fval  Primal Infeas    Dual Infeas  Complementarity
    0   -8.884885e+00   3.214286e+00   1.071429e-01     1.000000e+00
    1   -8.331868e+00   1.321041e-01   4.403472e-03     1.910489e-01
    2   -8.212804e+00   1.676295e-03   5.587652e-05     1.009601e-02
    3   -8.222204e+00   8.381476e-07   2.793826e-08     1.809485e-05
    4   -8.222222e+00   3.064216e-14   1.352696e-12     7.525735e-13
Minimum found that satisfies the constraints. Optimization completed because the objective function is nondecreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
x = 2×1
0.6667
1.3333
Quadratic Problem from prob2struct
Create a problem structure using a Problem-Based Optimization Workflow. Create an optimization problem equivalent to Quadratic Program with Linear Constraints.
x = optimvar('x',2);
objec = x(1)^2/2 + x(2)^2 - x(1)*x(2) - 2*x(1) - 6*x(2);
prob = optimproblem('Objective',objec);
prob.Constraints.cons1 = sum(x) <= 2;
prob.Constraints.cons2 = -x(1) + 2*x(2) <= 2;
prob.Constraints.cons3 = 2*x(1) + x(2) <= 3;
Convert prob
to a problem
structure.
problem = prob2struct(prob);
Solve the problem using quadprog
.
[x,fval] = quadprog(problem)
Warning: Your Hessian is not symmetric. Resetting H=(H+H')/2.
Minimum found that satisfies the constraints. Optimization completed because the objective function is nondecreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
x = 2×1
0.6667
1.3333
fval = -8.2222
Return quadprog
Objective Function Value
Solve a quadratic program and return both the solution and the objective function value.
H = [1,-1,1; -1,2,-2; 1,-2,4];
f = [-7; -12; -15];
A = [1,1,1];
b = 3;
[x,fval] = quadprog(H,f,A,b)
Minimum found that satisfies the constraints. Optimization completed because the objective function is nondecreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
x = 3×1
-3.5714
2.9286
3.6429
fval = -47.1786
Check that the returned objective function value matches the value computed from the quadprog
objective function definition.
fval2 = 1/2*x'*H*x + f'*x
fval2 = -47.1786
Examine quadprog
Optimization Process
To see the optimization process for quadprog
, set options to show an iterative display and return four outputs. The problem is to minimize
$$\frac{1}{2}{x}^{T}Hx+{f}^{T}x$$
subject to
$$0\le x\le 1$$,
where
$\mathit{H}=\left[\begin{array}{ccc}2& 1& -1\\ 1& 3& \frac{1}{2}\\ -1& \frac{1}{2}& 5\end{array}\right]$, $\mathit{f}=\left[\begin{array}{c}4\\ -7\\ 12\end{array}\right]$.
Enter the problem coefficients.
H = [2 1 -1; 1 3 1/2; -1 1/2 5];
f = [4; -7; 12];
lb = zeros(3,1);
ub = ones(3,1);
Set the options to display iterative progress of the solver.
options = optimoptions('quadprog','Display','iter');
Call quadprog
with four outputs.
[x,fval,exitflag,output] = quadprog(H,f,[],[],[],[],lb,ub,[],options)
 Iter            Fval  Primal Infeas    Dual Infeas  Complementarity
    0   -2.691769e+01   1.582123e+00   1.712849e+01     1.680447e+00
    1   -3.889430e+00   0.000000e+00   8.564246e-03     9.971731e-01
    2   -5.451769e+00   0.000000e+00   4.282123e-06     2.710131e-02
    3   -5.499997e+00   0.000000e+00   1.221903e-10     6.939689e-07
    4   -5.500000e+00   0.000000e+00   5.842173e-14     3.469847e-10
Minimum found that satisfies the constraints. Optimization completed because the objective function is nondecreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
x = 3×1
0.0000
1.0000
0.0000
fval = -5.5000
exitflag = 1
output = struct with fields:
message: 'Minimum found that satisfies the constraints....'
algorithm: 'interior-point-convex'
firstorderopt: 1.5921e-09
constrviolation: 0
iterations: 4
linearsolver: 'dense'
cgiterations: []
Return quadprog
Lagrange Multipliers
Solve a quadratic programming problem and return the Lagrange multipliers.
H = [1,-1,1; -1,2,-2; 1,-2,4];
f = [-7; -12; -15];
A = [1,1,1];
b = 3;
lb = zeros(3,1);
[x,fval,exitflag,output,lambda] = quadprog(H,f,A,b,[],[],lb);
Minimum found that satisfies the constraints. Optimization completed because the objective function is nondecreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
Examine the Lagrange multiplier structure lambda
.
disp(lambda)
    ineqlin: 12.0000
      eqlin: [0x1 double]
      lower: [3x1 double]
      upper: [3x1 double]
The linear inequality constraint has an associated Lagrange multiplier of 12
.
Display the multipliers associated with the lower bound.
disp(lambda.lower)
    5.0000
    0.0000
    0.0000
Only the first component of lambda.lower
has a nonzero multiplier. This generally means that only the first component of x
is at the lower bound of zero. Confirm by displaying the components of x
.
disp(x)
    0.0000
    1.5000
    1.5000
Return Warm Start Object
To speed subsequent quadprog
calls, create a warm start object.
options = optimoptions('quadprog','Algorithm','active-set');
x0 = [1 2 3];
ws = optimwarmstart(x0,options);
Solve a quadratic program using ws
.
H = [1,-1,1; -1,2,-2; 1,-2,4];
f = [-7; -12; -15];
A = [1,1,1];
b = 3;
lb = zeros(3,1);
tic
[ws,fval,exitflag,output,lambda] = quadprog(H,f,A,b,[],[],lb,[],ws);
Minimum found that satisfies the constraints. Optimization completed because the objective function is nondecreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance. <stopping criteria details>
toc
Elapsed time is 0.060411 seconds.
Change the objective function and solve the problem again.
f = [-10; -15; -20];
tic
[ws,fval,exitflag,output,lambda] = quadprog(H,f,A,b,[],[],lb,[],ws);
Minimum found that satisfies the constraints. Optimization completed because the objective function is nondecreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance. <stopping criteria details>
toc
Elapsed time is 0.010756 seconds.
Input Arguments
H
— Quadratic objective term
symmetric real matrix
Quadratic objective term, specified as a symmetric real matrix.
H
represents the quadratic in the expression
1/2*x'*H*x + f'*x
. If H
is not symmetric, quadprog
issues a warning and uses the
symmetrized version (H + H')/2
instead.
If the quadratic matrix H is sparse, then by default, the 'interior-point-convex' algorithm uses a slightly different algorithm than when H is dense. Generally, the sparse algorithm is faster on large, sparse problems, and the dense algorithm is faster on dense or small problems. For more information, see the LinearSolver option description and interior-point-convex quadprog Algorithm.
Example: [2,1;1,3]
Data Types: single | double
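As a sketch of the sparse case (the size and data below are illustrative), you can pass H as a sparse matrix and select the sparse internal linear solver explicitly:

```matlab
% Large sparse positive definite quadratic term (tridiagonal).
n = 1000;
H = spdiags(ones(n,1)*[-1 4 -1], -1:1, n, n);
f = -ones(n,1);
% Request the sparse internal linear solver of 'interior-point-convex'.
opts = optimoptions('quadprog','LinearSolver','sparse');
x = quadprog(H,f,[],[],[],[],zeros(n,1),ones(n,1),[],opts);
```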
f
— Linear objective term
real vector
Linear objective term, specified as a real vector. f
represents the linear term in the expression
1/2*x'*H*x + f'*x
.
Example: [1;3;2]
Data Types: single | double
A
— Linear inequality constraints
real matrix
Linear inequality constraints, specified as a real matrix. A is an M-by-N matrix, where M is the number of inequalities, and N is the number of variables (number of elements in x0). For large problems, pass A as a sparse matrix.
A
encodes the M
linear
inequalities
A*x <= b
,
where x
is the column vector of N
variables x(:)
,
and b
is a column vector with M
elements.
For example, consider these inequalities:
x_{1} + 2x_{2} ≤ 10
3x_{1} + 4x_{2} ≤ 20
5x_{1} + 6x_{2} ≤ 30.
Specify the inequalities by entering the following constraints.
A = [1,2;3,4;5,6]; b = [10;20;30];
Example: To specify that the x components sum to 1 or less, use A = ones(1,N) and b = 1.
Data Types: single | double
b
— Linear inequality constraints
real vector
Linear inequality constraints, specified as a real vector. b is an M-element vector related to the A matrix. If you pass b as a row vector, solvers internally convert b to the column vector b(:). For large problems, pass b as a sparse vector.
b encodes the M linear inequalities A*x <= b, where x is the column vector of N variables x(:), and A is a matrix of size M-by-N.
For example, consider these inequalities:
x_{1} + 2x_{2} ≤ 10
3x_{1} + 4x_{2} ≤ 20
5x_{1} + 6x_{2} ≤ 30.
Specify the inequalities by entering the following constraints.
A = [1,2;3,4;5,6]; b = [10;20;30];
Example: To specify that the x components sum to 1 or less, use A = ones(1,N) and b = 1.
Data Types: single | double
Aeq
— Linear equality constraints
real matrix
Linear equality constraints, specified as a real matrix. Aeq is an Me-by-N matrix, where Me is the number of equalities, and N is the number of variables (number of elements in x0). For large problems, pass Aeq as a sparse matrix.
Aeq
encodes the Me
linear
equalities
Aeq*x = beq
,
where x
is the column vector of N
variables x(:)
,
and beq
is a column vector with Me
elements.
For example, consider these equalities:
x_{1} + 2x_{2} + 3x_{3} = 10
2x_{1} + 4x_{2} + x_{3} = 20.
Specify the equalities by entering the following constraints.
Aeq = [1,2,3;2,4,1]; beq = [10;20];
Example: To specify that the x components sum to 1, use Aeq = ones(1,N) and beq = 1.
Data Types: single | double
beq
— Linear equality constraints
real vector
Linear equality constraints, specified as a real vector. beq is an Me-element vector related to the Aeq matrix. If you pass beq as a row vector, solvers internally convert beq to the column vector beq(:). For large problems, pass beq as a sparse vector.
beq encodes the Me linear equalities Aeq*x = beq, where x is the column vector of N variables x(:), and Aeq is a matrix of size Me-by-N.
For example, consider these equalities:
x_{1} + 2x_{2} + 3x_{3} = 10
2x_{1} + 4x_{2} + x_{3} = 20.
Specify the equalities by entering the following constraints.
Aeq = [1,2,3;2,4,1]; beq = [10;20];
Example: To specify that the x components sum to 1, use Aeq = ones(1,N) and beq = 1.
Data Types: single | double
lb
— Lower bounds
real vector | real array
Lower bounds, specified as a real vector or real array. If the number of elements in
x0
is equal to the number of elements in lb
,
then lb
specifies that
x(i) >= lb(i)
for all i
.
If numel(lb) < numel(x0), then lb specifies that x(i) >= lb(i) for 1 <= i <= numel(lb).
If lb
has fewer elements than x0
, solvers issue a
warning.
Example: To specify that all x components are nonnegative, use lb = zeros(size(x0)).
Data Types: single | double
ub
— Upper bounds
real vector | real array
Upper bounds, specified as a real vector or real array. If the number of elements in
x0
is equal to the number of elements in ub
,
then ub
specifies that
x(i) <= ub(i)
for all i
.
If numel(ub) < numel(x0), then ub specifies that x(i) <= ub(i) for 1 <= i <= numel(ub).
If ub
has fewer elements than x0
, solvers issue
a warning.
Example: To specify that all x components are less than or equal to 1, use ub = ones(size(x0)).
Data Types: single | double
x0
— Initial point
real vector
Initial point, specified as a real vector. The length of
x0
is the number of rows or columns of
H
.
x0 applies to the 'trust-region-reflective' algorithm when the problem has only bound constraints. x0 also applies to the 'active-set' algorithm.
Note
x0 is a required argument for the 'active-set' algorithm.
If you do not specify x0, quadprog sets all components of x0 to a point in the interior of the box defined by the bounds. quadprog ignores x0 for the 'interior-point-convex' algorithm and for the 'trust-region-reflective' algorithm with equality constraints.
Example: [1;2;1]
Data Types: single | double
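A minimal sketch of supplying x0 for the 'active-set' algorithm (the data are reused from the first example on this page for illustration):

```matlab
H = [1 -1; -1 2];
f = [-2; -6];
A = [1 1; -1 2; 2 1];
b = [2; 2; 3];
x0 = zeros(2,1);   % required initial point for 'active-set'
opts = optimoptions('quadprog','Algorithm','active-set');
x = quadprog(H,f,A,b,[],[],[],[],x0,opts);
```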
options
— Optimization options
output of optimoptions | structure such as optimset returns
returns
Optimization options, specified as the output of
optimoptions
or a structure such as
optimset
returns.
Some options are absent from the optimoptions display. For details, see View Optimization Options.
All Algorithms
Algorithm: Choose the algorithm.
Diagnostics: Display diagnostic information about the function to be minimized or solved.
Display: Level of display (see Iterative Display).
MaxIterations: Maximum number of iterations allowed; a nonnegative integer.
OptimalityTolerance: Termination tolerance on the first-order optimality; a nonnegative scalar. See Tolerances and Stopping Criteria.
StepTolerance: Termination tolerance on x; a nonnegative scalar.
'trust-region-reflective' Algorithm Only
FunctionTolerance: Termination tolerance on the function value; a nonnegative scalar. The default value depends on the problem type.
HessianMultiplyFcn: Hessian multiply function, specified as a function handle. For large-scale structured problems, this function computes the Hessian matrix product W = hmfun(Hinfo,Y). See Quadratic Minimization with Dense, Structured Hessian for an example that uses this option.
MaxPCGIter: Maximum number of PCG (preconditioned conjugate gradient) iterations; a positive scalar.
PrecondBandWidth: Upper bandwidth of the preconditioner for PCG; a nonnegative integer.
SubproblemAlgorithm: Determines how the iteration step is calculated.
TolPCG: Termination tolerance on the PCG iteration; a positive scalar.
TypicalX: Typical x values.
'interior-point-convex' Algorithm Only
ConstraintTolerance: Tolerance on the constraint violation; a nonnegative scalar.
LinearSolver: Type of internal linear solver in the algorithm.
'active-set' Algorithm Only
ConstraintTolerance: Tolerance on the constraint violation; a nonnegative scalar.
ObjectiveLimit: A tolerance (stopping criterion) that is a scalar. The solver stops if the objective function value goes below this limit.
Single-Precision Code Generation
Algorithm: Must be 'active-set'.
ConstraintTolerance: Tolerance on the constraint violation; a positive scalar.
MaxIterations: Maximum number of iterations allowed; a nonnegative integer.
ObjectiveLimit: A tolerance (stopping criterion) that is a scalar. The solver stops if the objective function value goes below this limit.
OptimalityTolerance: Termination tolerance on the first-order optimality; a positive scalar.
StepTolerance: Termination tolerance on x; a positive scalar.
problem
— Problem structure
structure
Problem structure, specified as a structure with these fields:
H: Symmetric matrix in 1/2*x'*H*x
f: Vector in linear term f'*x
Aineq: Matrix in linear inequality constraints Aineq*x ≤ bineq
bineq: Vector in linear inequality constraints Aineq*x ≤ bineq
Aeq: Matrix in linear equality constraints Aeq*x = beq
beq: Vector in linear equality constraints Aeq*x = beq
lb: Vector of lower bounds
ub: Vector of upper bounds
x0: Initial point for x
solver: 'quadprog'
options: Options created using optimoptions or optimset
The required fields are H
, f
,
solver
, and options
. When solving,
quadprog
ignores any fields in
problem
other than those listed.
Note
You cannot use warm start with the problem
argument.
Data Types: struct
ws
— Warm start object
object created using optimwarmstart
Warm start object, specified as an object created using optimwarmstart
. The warm start object contains the start point and
options, and optional data for memory size in code generation. See Warm Start Best Practices.
Example: ws = optimwarmstart(x0,options)
Output Arguments
x
— Solution
real vector
Solution, returned as a real vector. x
is the vector
that minimizes 1/2*x'*H*x + f'*x
subject to all
bounds and linear constraints. x
can be a local minimum
for nonconvex problems. For convex problems, x
is a
global minimum. For more information, see Local vs. Global Optima.
wsout
— Solution warm start object
QuadprogWarmStart
object
Solution warm start object, returned as a
QuadprogWarmStart
object. The solution point is
wsout.X
.
You can use wsout
as the input warm start object in a
subsequent quadprog
call.
fval
— Objective function value at solution
real scalar
Objective function value at the solution, returned as a real scalar.
fval
is the value of
1/2*x'*H*x + f'*x
at the solution
x
.
exitflag
— Reason quadprog
stopped
integer
Reason quadprog
stopped, returned as an integer
described in this table.
All Algorithms
1: Function converged to the solution x.
0: Number of iterations exceeded options.MaxIterations.
-2: Problem is infeasible. Or, for 'interior-point-convex', the step size was smaller than options.StepTolerance, but constraints were not satisfied.
-3: Problem is unbounded.
'interior-point-convex' Algorithm
2: Step size was smaller than options.StepTolerance, and constraints were satisfied.
-6: Nonconvex problem detected.
-8: Unable to compute a step direction.
'trust-region-reflective' Algorithm
4: Local minimum found; minimum is not unique.
3: Change in the objective function value was smaller than options.FunctionTolerance.
-4: Current search direction was not a direction of descent. No further progress could be made.
'active-set' Algorithm
-6: Nonconvex problem detected; projection of H onto the nullspace of Aeq is not positive semidefinite.
Note
Occasionally, the 'active-set' algorithm halts with exit flag 0 when the problem is, in fact, unbounded. Setting a higher iteration limit also results in exit flag 0.
output
— Information about optimization process
structure
Information about the optimization process, returned as a structure with these fields:
iterations: Number of iterations taken
algorithm: Optimization algorithm used
cgiterations: Total number of PCG iterations ('trust-region-reflective' algorithm only)
constrviolation: Maximum of constraint functions
firstorderopt: Measure of first-order optimality
linearsolver: Type of internal linear solver, 'dense' or 'sparse' ('interior-point-convex' algorithm only)
message: Exit message
lambda
— Lagrange multipliers at solution
structure
Lagrange multipliers at the solution, returned as a structure with these fields:
lower: Lower bounds lb
upper: Upper bounds ub
ineqlin: Linear inequalities
eqlin: Linear equalities
For details, see Lagrange Multiplier Structures.
More About
Enhanced Exit Messages
The next few items list the possible enhanced exit messages from
quadprog
. Enhanced exit messages give a link for more
information as the first sentence of the message.
Minimum Found That Satisfies The Constraints
The solver found a minimizing point that satisfies all bounds and linear constraints. Since the problem is convex, the minimizing point is a global minimum. For more information, see Local vs. Global Optima.
Solver Stalled, Constraints Satisfied
The solver stopped because the last step was too small. When the relative step size goes below the StepTolerance tolerance, then the iterations end. Sometimes, this means that the solver located the minimum. However, the firstorder optimality measure was not less than the OptimalityTolerance, so it is possible that the result is inaccurate. All constraints were satisfied.
To proceed, try the following:
Examine the first-order optimality measure in the output structure. If the first-order optimality measure is small, then it is likely that the returned solution is accurate.
Set the StepTolerance option to 0. Sometimes, this setting helps the solver proceed, though sometimes the solver remains stalled because of other issues.
Try a different algorithm. If the solver offers a choice of algorithms, sometimes a different algorithm can succeed.
Try removing dependent constraints. This means ensure that none of the linear constraints are redundant.
Problem Appears Unbounded
quadprog
stopped because it appears to have found a direction
that satisfies all constraints and causes the objective to decrease without
bound.
To proceed,
Ensure that you have a finite bound for each component.
Check the objective function to ensure that it is strictly convex (the quadratic matrix has strictly positive eigenvalues).
See if the associated linear programming problem (the original problem without the quadratic term) has a finite solution.
Unable to Compute a Step Direction
The solver was unable to proceed because it could not compute a direction leading to a minimum. It is likely that this trouble is due to redundant linear constraints or tolerances that are too small.
To proceed,
Check your linear constraint matrices for redundancy. Try to identify and remove redundant linear constraints.
Ensure that your FunctionTolerance, OptimalityTolerance, and ConstraintTolerance options are above 1e-14, and are preferably above 1e-12. See Tolerances and Stopping Criteria.
The Problem Is NonConvex
quadprog
determined that the problem is not Convex. Try a different
algorithm. For more information, see Quadratic Programming Algorithms.
Solution Found During Presolve
The solver found the solution during the presolve phase. This means the bounds,
linear constraints, and f
(linear objective coefficient)
immediately lead to a solution. For more information, see Presolve/Postsolve.
The Problem Is Infeasible
During presolve, the solver found that the problem has an inconsistent formulation. Inconsistent means not all constraints can be satisfied at a single point x. For more information, see Presolve/Postsolve.
The Problem Is Unbounded
During presolve, the solver found a feasible direction where the objective function decreases without bound. For more information, see Presolve/Postsolve.
Converged to an Infeasible Point
quadprog
converged to a point that does not satisfy all
constraints to within the constraint tolerance called ConstraintTolerance. The reason
quadprog
stopped is that the last step was too small. When
the relative step size goes below the StepTolerance tolerance, then
the iterations end.
For suggestions on how to proceed, see quadprog Converges to an Infeasible Point.
No feasible solution found
The solver converged to a point that does not satisfy all constraints to within the constraint tolerance called ConstraintTolerance. The reason the solver stopped is that the last step was too small. When the relative step size goes below the StepTolerance tolerance, then the iterations end.
No feasible solution found
There is no point satisfying all of the bounds and linear constraints. For help examining the inconsistent linear constraints, see Investigate Linear Infeasibilities.
Optimal Solution Found
There is only one feasible point. The number of independent linear equality constraints is the same as the number of variables in the problem.
Optimal Solution Found
The solver stopped because the first-order optimality measure is less than the OptimalityTolerance tolerance. The first-order optimality measure is the infinity norm of the projected gradient. The projection is onto the null space of the linear equality matrix Aeq.
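As an illustrative check (not a toolbox function; the data are chosen only for demonstration), the projected-gradient measure described above can be computed like this:

```matlab
% Equality-constrained QP with illustrative data.
H = [1 -1; -1 2];
f = [-2; -6];
Aeq = [1 1]; beq = 0;
x = quadprog(H,f,[],[],Aeq,beq);
g = H*x + f;                  % gradient of 1/2*x'*H*x + f'*x at x
Z = null(Aeq);                % orthonormal basis for the null space of Aeq
firstOrderOpt = norm(Z*(Z'*g), Inf);  % infinity norm of the projected gradient
```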
Local Minimum Found
The solver stopped at a point of zero curvature that is a local minimum. There are other feasible points that have the same objective function value.
The Problem Is Unbounded
There are directions of zero or negative curvature along which the objective function decreases indefinitely. Therefore, for any target value, there are feasible points with objective value smaller than the target. Check whether you included enough constraints in the problem, such as bounds on all variables.
Optimal Solution Found
The solver stopped because the first-order optimality measure is less than the OptimalityTolerance tolerance.
Local Minimum Possible
The solver stopped because the relative change in function value was below the
FunctionTolerance
tolerance. To check
solution quality, see Local Minimum Possible.
Local Minimum Possible
The solver stopped because the relative change in function value was below the
square root of the FunctionTolerance
tolerance, and the change
of function values in the previous iterations is decreasing by less than a factor of
3.5. This criterion stops the solver when the difference of objective function
values is relatively small, but does not decrease to zero quickly enough. To check
solution quality, see Local Minimum Possible.
Definitions for Exit Messages
The next few items contain definitions for terms in the quadprog
exit messages.
tolerance
Generally, a tolerance is a threshold which, if crossed, stops the iterations of a solver. For more information on tolerances, see Tolerances and Stopping Criteria.
Convex
A quadratic program is convex if, from any feasible point, there is no feasible direction with negative curvature. A convex problem has only one local minimum, which is also the global minimum.
Feasible Directions
The feasible directions from a feasible point x are those vectors v such that for small enough positive a, x + av is feasible.
A feasible point is one satisfying all the constraints.
StepTolerance
StepTolerance is a tolerance for the size of the last step, meaning the size of the change in location where quadprog was evaluated.
OptimalityTolerance
The tolerance called OptimalityTolerance relates to the first-order optimality measure. Iterations end when the first-order optimality measure is less than OptimalityTolerance.
For constrained problems, the first-order optimality measure is the maximum of the following two quantities:
$$\Vert {\nabla}_{x}L(x,\lambda)\Vert =\Vert \nabla f(x)+{A}^{T}{\lambda}_{ineqlin}+{Aeq}^{T}{\lambda}_{eqlin}+\sum {\lambda}_{ineqnonlin,i}\nabla {c}_{i}(x)+\sum {\lambda}_{eqnonlin,i}\nabla {ceq}_{i}(x)\Vert ,$$
$$\Vert \left|{l}_{i}-{x}_{i}\right|{\lambda}_{lower,i},\ \left|{x}_{i}-{u}_{i}\right|{\lambda}_{upper,i},\ \left|{(Ax-b)}_{i}\right|{\lambda}_{ineqlin,i},\ \left|{c}_{i}(x)\right|{\lambda}_{ineqnonlin,i}\Vert .$$
For unconstrained problems, the first-order optimality measure is the maximum of the absolute value of the components of the gradient vector (also known as the infinity norm).
The first-order optimality measure should be zero at a minimizing point.
For more information, including definitions of all the variables in these equations, see First-Order Optimality Measure.
First-order optimality measure for problems with bounds
For unconstrained problems, the first-order optimality measure is the maximum of the absolute value of the components of the gradient vector (also known as the infinity norm of the gradient). This should be zero at a minimizing point.
For problems with bounds, the first-order optimality measure is the maximum over i of |v_{i}*g_{i}|. Here g_{i} is the ith component of the gradient, x is the current point, and
$${v}_{i}=\{\begin{array}{ll}\left|{x}_{i}-{b}_{i}\right|\hfill & \text{if the negative gradient points toward bound }{b}_{i}\hfill \\ 1\hfill & \text{otherwise}\text{.}\hfill \end{array}$$
If x_{i} is at a bound, v_{i} is zero. If x_{i} is not at a bound, then at a minimizing point the gradient g_{i} should be zero. Therefore the first-order optimality measure should be zero at a minimizing point.
For more information, see First-Order Optimality Measure.
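The bound-constrained measure can be computed directly from a gradient and bounds. A sketch, where the point, gradient, and bound values are all hypothetical:

```matlab
% First-order optimality measure for a bound-constrained problem (sketch).
x  = [0; 0.5; 1];               % current point (hypothetical)
g  = [0.3; -0.2; 0.1];          % gradient at x (hypothetical)
lb = [0; -1; -1];
ub = [2;  2;  1];
v = ones(size(x));              % v_i = 1 unless the descent direction points at a bound
towardLower = g > 0;            % negative gradient points toward the lower bound
towardUpper = g < 0;            % negative gradient points toward the upper bound
v(towardLower) = abs(x(towardLower) - lb(towardLower));
v(towardUpper) = abs(x(towardUpper) - ub(towardUpper));
firstOrderOpt = max(abs(v .* g))   % maximum over i of |v_i * g_i|
```

Note that the first component contributes zero because x_1 sits on its lower bound and the negative gradient points toward that bound.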
ConstraintTolerance
The tolerance called ConstraintTolerance bounds the maximum of the values of all constraint functions at the current point; a point is considered feasible when this maximum does not exceed ConstraintTolerance.
ConstraintTolerance operates differently from other tolerances. If ConstraintTolerance is not satisfied (that is, if the magnitude of some constraint function exceeds ConstraintTolerance), the solver attempts to continue, unless it is halted for another reason. A solver does not halt simply because ConstraintTolerance is satisfied.
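You can check the feasibility of a returned point against ConstraintTolerance by hand. A minimal sketch for an inequality-only problem:

```matlab
% Check the largest inequality-constraint violation of a solution (sketch).
H = [1 -1; -1 2];  f = [-2; -6];
A = [1 1; -1 2; 2 1];  b = [2; 2; 3];
x = quadprog(H,f,A,b);
maxViolation = max(A*x - b);      % a value <= 0 means no constraint is violated
defaults = optimoptions('quadprog');
isFeasible = maxViolation <= defaults.ConstraintTolerance
```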
Relative Dual Feasibility
The dual feasibility r_{d} is defined in terms of the KKT conditions for the problem. The relative dual feasibility stopping condition is
r_{d} ≤ ρ·OptimalityTolerance,  (1)
where ρ is a scale factor.
For more information, see Predictor-Corrector.
Dual Feasibility
The KKT conditions state that at an optimum x, there are Lagrange multipliers $${\overline{\lambda}}_{\text{ineq}}$$ and λ_{eq} such that
$$\begin{array}{c}Hx+c+{A}_{\text{eq}}^{T}{\lambda}_{\text{eq}}+{\overline{A}}^{T}{\overline{\lambda}}_{\text{ineq}}=0\\ \overline{A}x-\overline{b}+s=0\\ {A}_{\text{eq}}x-{b}_{\text{eq}}=0\\ {s}_{i}{\overline{\lambda}}_{\text{ineq},i}=0\\ {s}_{i}\ge 0\\ {\overline{\lambda}}_{\text{ineq},i}\ge 0.\end{array}$$
The variables $$\overline{A}$$, $${\overline{\lambda}}_{\text{ineq}}$$, and $$\overline{b}$$ include bounds as part of the linear inequalities.
The dual feasibility r_{d} is the maximum absolute value of the components of $${r}_{\text{d}}=Hx+c+{A}_{\text{eq}}^{T}{\lambda}_{\text{eq}}+{\overline{A}}^{T}{\overline{\lambda}}_{\text{ineq}}$$.
Scale Factor
The scale factor ρ is
$$\rho =\mathrm{max}\left(1,\Vert H\Vert ,\Vert \overline{A}\Vert ,\Vert {A}_{eq}\Vert ,\Vert c\Vert ,\Vert \overline{b}\Vert ,\Vert {b}_{eq}\Vert \right).$$
The norm $$\Vert \cdot \Vert $$ is the maximum absolute value of the elements in the expression.
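Under these definitions, the dual feasibility residual and scale factor can be computed from the problem data. A sketch for an inequality-only problem (so the equality terms drop out), where the point and multipliers are hypothetical:

```matlab
% Dual feasibility residual r_d and scale factor rho (sketch).
% Abar and bbar would also fold any bounds into the linear inequalities.
H = [1 -1; -1 2];  c = [-2; -6];
Abar = [1 1; -1 2; 2 1];  bbar = [2; 2; 3];
x = [2/3; 4/3];                       % candidate point (hypothetical)
lambdaIneq = [0.5; 0; 1.5];           % hypothetical multipliers
rd = H*x + c + Abar'*lambdaIneq;      % no equality term: Aeq is empty here
% The norm is the maximum absolute value of the elements:
rho = max([1, norm(H(:),Inf), norm(Abar(:),Inf), norm(c,Inf), norm(bbar,Inf)]);
stop = norm(rd,Inf) <= rho * 1e-8;    % relative dual feasibility test, Eq. (1)
```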
Complementarity Measure
The complementarity measure is defined in terms of the KKT conditions for the problem. At an optimum x, there are Lagrange multipliers $${\overline{\lambda}}_{\text{ineq}}$$ and λ_{eq} such that
$$\begin{array}{c}Hx+c+{A}_{\text{eq}}^{T}{\lambda}_{\text{eq}}+{\overline{A}}^{T}{\overline{\lambda}}_{\text{ineq}}=0\\ \overline{A}x-\overline{b}+s=0\\ {A}_{\text{eq}}x-{b}_{\text{eq}}=0\\ {s}_{i}{\overline{\lambda}}_{\text{ineq},i}=0\\ {s}_{i}\ge 0\\ {\overline{\lambda}}_{\text{ineq},i}\ge 0.\end{array}$$
The variables $$\overline{A}$$, $${\overline{\lambda}}_{\text{ineq}}$$, and $$\overline{b}$$ include bounds as part of the linear inequalities.
The complementarity measure is
$$\sum _{i}{s}_{i}{\overline{\lambda}}_{\text{ineq},i}.$$
For more information, see Predictor-Corrector.
Total Relative Error
The total relative error is defined in terms of the KKT conditions for the problem. The total relative error stopping condition holds when the Merit Function φ satisfies
φ ≥ max(OptimalityTolerance, 10^{5}·φ_{min}).  (2)
When this stopping condition holds, the solver determines that the quadratic program is infeasible.
Merit Function
The KKT conditions state that at an optimum x, there are Lagrange multipliers $${\overline{\lambda}}_{\text{ineq}}$$ and λ_{eq} such that
$$\begin{array}{c}Hx+c+{A}_{\text{eq}}^{T}{\lambda}_{\text{eq}}+{\overline{A}}^{T}{\overline{\lambda}}_{\text{ineq}}=0\\ \overline{A}x-\overline{b}+s=0\\ {A}_{\text{eq}}x-{b}_{\text{eq}}=0\\ {s}_{i}{\overline{\lambda}}_{\text{ineq},i}=0\\ {s}_{i}\ge 0\\ {\overline{\lambda}}_{\text{ineq},i}\ge 0.\end{array}$$
The variables $$\overline{A}$$, $${\overline{\lambda}}_{\text{ineq}}$$, and $$\overline{b}$$ include bounds as part of the linear inequalities.
The merit function φ is
$$\frac{1}{\rho}\left(\mathrm{max}\left({\Vert {r}_{\text{eq}}\Vert}_{\infty},{\Vert {r}_{\text{ineq}}\Vert}_{\infty},{\Vert {r}_{\text{d}}\Vert}_{\infty}\right)+g\right).$$
The terms in the definition of φ are:
$$\begin{array}{c}\rho =\mathrm{max}\left(1,\Vert H\Vert ,\Vert \overline{A}\Vert ,\Vert {A}_{eq}\Vert ,\Vert c\Vert ,\Vert \overline{b}\Vert ,\Vert {b}_{eq}\Vert \right)\\ {r}_{\text{eq}}={A}_{\text{eq}}x-{b}_{\text{eq}}\\ {r}_{\text{ineq}}=\overline{A}x-\overline{b}+s\\ {r}_{\text{d}}=Hx+c+{A}_{\text{eq}}^{T}{\lambda}_{\text{eq}}+{\overline{A}}^{T}{\overline{\lambda}}_{\text{ineq}}\\ g={x}^{T}Hx+{f}^{T}x-{\overline{b}}^{T}{\overline{\lambda}}_{\text{ineq}}-{b}_{\text{eq}}^{T}{\lambda}_{\text{eq}}.\end{array}$$
The expression φ_{min} means the minimum of φ seen in all iterations.
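Collecting the pieces, the merit function can be sketched for an inequality-only problem (so r_eq and the equality terms drop out). The point and multipliers below are hypothetical:

```matlab
% Merit function phi (sketch, inequality-only problem).
H = [1 -1; -1 2];  f = [-2; -6];
Abar = [1 1; -1 2; 2 1];  bbar = [2; 2; 3];
x = [2/3; 4/3];  lambdaIneq = [0.5; 0; 1.5];   % hypothetical values
s = max(bbar - Abar*x, 0);                      % nonnegative slacks
rineq = Abar*x - bbar + s;                      % inequality residual
rd = H*x + f + Abar'*lambdaIneq;                % dual residual
g = x'*H*x + f'*x - bbar'*lambdaIneq;           % duality-gap term
rho = max([1, norm(H(:),Inf), norm(Abar(:),Inf), norm(f,Inf), norm(bbar,Inf)]);
phi = (max([norm(rineq,Inf), norm(rd,Inf)]) + g) / rho
```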
Presolve
Presolve is a set of algorithms that simplify a linear or quadratic programming problem. The algorithms look for simple inconsistencies such as inconsistent bounds and linear constraints. They also look for redundant bounds and linear inequalities. For more information, see Presolve/Postsolve.
The Problem Appears to Be Ill-Conditioned
The internally calculated search direction does not decrease the objective function value. Perhaps the problem is poorly scaled or has an ill-conditioned matrix (H for quadprog, C for lsqlin). For suggestions on how to proceed, see When the Solver Fails or Local Minimum Possible.
Algorithms
'interior-point-convex'
The 'interior-point-convex' algorithm attempts to follow a path that is strictly inside the constraints. It uses a presolve module to remove redundancies and to simplify the problem by solving for components that are straightforward.
The algorithm has different implementations for a sparse Hessian matrix H and for a dense matrix. Generally, the sparse implementation is faster on large, sparse problems, and the dense implementation is faster on dense or small problems. For more information, see interior-point-convex quadprog Algorithm.
'trust-region-reflective'
The 'trust-region-reflective' algorithm is a subspace trust-region method based on the interior-reflective Newton method described in [1]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). For more information, see trust-region-reflective quadprog Algorithm.
'active-set'
The 'active-set' algorithm is a projection method, similar to the one described in [2]. The algorithm is not large-scale; see Large-Scale vs. Medium-Scale Algorithms. For more information, see active-set quadprog Algorithm.
Warm Start
A warm start object maintains a list of active constraints from the previously solved problem. The solver carries over as much active constraint information as possible to solve the current problem. If the previous problem is too different from the current one, no active-set information is reused, and the solver effectively executes a cold start to rebuild the list of active constraints.
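A typical warm start workflow solves a problem, perturbs it, and re-solves from the warm start object. A sketch (warm start requires the 'active-set' algorithm; the perturbation here is arbitrary):

```matlab
% Solve, then re-solve a slightly changed problem from a warm start (sketch).
H = [1 -1; -1 2];  f = [-2; -6];
A = [1 1; -1 2; 2 1];  b = [2; 2; 3];
opts = optimoptions('quadprog','Algorithm','active-set');
ws = optimwarmstart([0;0], opts);          % initial warm start object from x0
[ws,fval1] = quadprog(H,f,A,b,[],[],[],[],ws);
f2 = f + 0.01;                             % small change to the problem data
[ws,fval2] = quadprog(H,f2,A,b,[],[],[],[],ws);   % reuses the active set
x = ws.X;                                  % solution point of the second solve
```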
Alternative Functionality
App
The Optimize Live Editor task provides a visual interface for quadprog
.
References
[1] Coleman, T. F., and Y. Li. “A Reflective Newton Method for Minimizing a Quadratic Function Subject to Bounds on Some of the Variables.” SIAM Journal on Optimization. Vol. 6, Number 4, 1996, pp. 1040–1058.
[2] Gill, P. E., W. Murray, and M. H. Wright. Practical Optimization. London: Academic Press, 1981.
[3] Gould, N., and P. L. Toint. “Preprocessing for quadratic programming.” Mathematical Programming. Series B, Vol. 100, 2004, pp. 95–132.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Usage notes and limitations:
quadprog supports code generation using either the codegen (MATLAB Coder) function or the MATLAB Coder app. You must have a MATLAB Coder license to generate code.
The target hardware must support standard double-precision floating-point computations or standard single-precision floating-point computations.
Code generation targets do not use the same math kernel libraries as MATLAB solvers. Therefore, code generation solutions can vary from solver solutions, especially for poorly conditioned problems.
quadprog does not support the problem argument for code generation.
[x,fval] = quadprog(problem) % Not supported
All quadprog input matrices such as A, Aeq, lb, and ub must be full, not sparse. You can convert sparse matrices to full by using the full function.
The lb and ub arguments must have the same number of entries as the number of columns in H or must be empty [].
If your target hardware does not support infinite bounds, use optim.coder.infbound.
For advanced code optimization involving embedded processors, you also need an Embedded Coder license.
You must include options for quadprog and specify them using optimoptions. The options must include the Algorithm option, set to 'active-set'.
options = optimoptions('quadprog','Algorithm','active-set');
[x,fval,exitflag] = quadprog(H,f,A,b,Aeq,beq,lb,ub,x0,options);
Code generation supports these options:
Algorithm — Must be 'active-set'
ConstraintTolerance
MaxIterations
ObjectiveLimit
OptimalityTolerance
StepTolerance
Generated code has limited error checking for options. The recommended way to update an option is to use optimoptions, not dot notation.
opts = optimoptions('quadprog','Algorithm','active-set');
opts = optimoptions(opts,'MaxIterations',1e4); % Recommended
opts.MaxIterations = 1e4; % Not recommended
Do not load options from a file. Doing so can cause code generation to fail. Instead, create options in your code.
If you specify an option that is not supported, the option is typically ignored during code generation. For reliable results, specify only supported options.
For an example, see Generate Code for quadprog.
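Putting these rules together, a minimal code-generation-compatible entry-point function might look like the following sketch (the function name and argument sizes are hypothetical):

```matlab
% solveqp.m -- code-generation-compatible quadprog call (sketch).
function [x,fval,exitflag] = solveqp(H,f,A,b) %#codegen
% Options must be created in code, not loaded from a file,
% and must set the active-set algorithm.
opts = optimoptions('quadprog','Algorithm','active-set', ...
    'MaxIterations',1e4);
x0 = zeros(size(f));                       % start point for active-set
[x,fval,exitflag] = quadprog(H,f,A,b,[],[],[],[],x0,opts);
end
% Generate C code at the MATLAB prompt, specifying example argument sizes:
% codegen solveqp -args {zeros(2),zeros(2,1),zeros(3,2),zeros(3,1)}
```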
Version History
Introduced before R2006a
R2024b: Generate single-precision code
You can generate code using quadprog for single-precision floating-point hardware. For instructions, see Single-Precision Code Generation.