Generate Single-Precision quadprog Code

This example shows how to generate code for a single-precision target using the quadprog solver. The quadratic programming problem is to generate a relatively sparse solution to a quadratic problem by using an L1 penalty. This technique, called Lasso regression, is available with the lasso (Statistics and Machine Learning Toolbox) function.

Specifically, given a matrix A, vector b, and penalty parameter λ, Lasso regression solves the problem

$$\min_x \; \tfrac{1}{2}\|Ax-b\|_2^2 + \lambda\|x\|_1.$$

Because of the L1 penalty, the result of Lasso regression typically has several zero entries.

Recast this problem into the form of a quadratic program suitable for solution by quadprog. Let e represent the column vector of ones of length n, where n is the number of columns in A. Let s represent a column vector of length n, so eTs is the dot product of e and s. The following quadratic program is equivalent to the original Lasso problem:

$$\min_{x,z,s} \; \tfrac{1}{2}z^Tz + \lambda e^Ts \quad \text{subject to} \quad Ax-b=z,\;\; -s \le x \le s,\;\; s \ge 0.$$
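To see why this quadratic program reproduces the Lasso objective, consider what the constraints force at any optimum:

```latex
% The equality constraint fixes z as the residual, so the quadratic
% term of the QP equals the least-squares term of the Lasso problem:
z = Ax - b
  \;\Rightarrow\; \tfrac{1}{2}z^T z = \tfrac{1}{2}\|Ax - b\|_2^2 .
% The bounds -s \le x \le s with s \ge 0 force s_i \ge |x_i|.
% Because \lambda > 0, the minimum drives each s_i down to |x_i|, so:
\lambda e^T s = \lambda \sum_{i=1}^n |x_i| = \lambda \|x\|_1 .
```

Substituting both identities into the QP objective recovers the original Lasso objective exactly.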

Create single-precision data for the A, b, and λ coefficients. Set A as a 30-by-8 pseudorandom matrix. Create b as the product A*x_exp, where x_exp is a pseudorandom 8-by-1 vector with roughly half its entries equal to zero. Set λ to the single-precision scalar 1.

rng default;
m = 30;
n = 8;

datatype = "single";
x_exp = randi([0 1],n,1,datatype) .* randn(n,1,datatype);
A = rand(m,n,datatype);
b = A*x_exp;
lambda = cast(1,datatype);

Create Function to Solve Problem in Single Precision

Save the solveLassoProb function code, given below, as a file on your MATLAB® path.

function [x,reserror,flag] = solveLassoProb(A,b,lambda)

[m,n] = size(A);

% Build the objective.
H = blkdiag(zeros(n,"like",b), ...  % x
            eye(m,"like",b), ...    % z
            zeros(n,"like",b));     % s

f = [zeros(n,1,"like",b);           % x
     zeros(m,1,"like",b);           % z
     lambda*ones(n,1,"like",b)];    % s

% Define z as the residual (Ax - b).
Aeq = [A  -eye(m,m,"like",b) zeros(m,n,"like",b)];
beq = b;

% Add inequalities between x and s. Ensure datatype consistency.
Aineq = [eye(n,"like",b) zeros(n,m,"like",b) -eye(n,"like",b)
         -eye(n,"like",b) zeros(n,m,"like",b) -eye(n,"like",b)];
bineq = zeros(2*n,1,"like",b);

% s must be nonnegative.
lb = [-optim.coder.infbound(n+m,1,"like",b); zeros(n,1,"like",b)];
ub = optim.coder.infbound(2*n+m,1,"like",b);

x0 = zeros(2*n+m,1,"like",b);

% Set options to use the "active-set" algorithm (required for code
% generation) and UseCodegenSolver=true to have the resulting code be the
% same whether run in MATLAB or by using code generation.
options = optimoptions("quadprog",Algorithm="active-set",UseCodegenSolver=true);

[xall,~,flag] = quadprog(H,f,Aineq,bineq,Aeq,beq,lb,ub,x0,options);
x = xall(1:n);
reserror = norm(A*x - b,2);
end

Solve Problem in Double Precision

Before solving the problem using single-precision code generation, ensure that the problem formulation is correct by solving the problem using double-precision coefficients.

Ad = double(A);
bd = double(b);
lambdad = double(lambda);
[x,reserror,flag] = solveLassoProb(Ad,bd,lambdad);
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.

Compare the solution variables x to the original variables x_exp.

array2table([x,x_exp],"VariableNames",["LASSO Solution","Original Variables"])
ans=8×2 table
         3.5587     3.5784
         2.7068     2.7694
     5.0182e-16          0
         2.9592     3.0349
         0.5613     0.7254
    -1.1020e-17          0
     1.7786e-16          0
    -2.3311e-16    -0.2050

Lasso regression affects variables 5 and 8 to a large extent. The residual error overall is fairly large.

disp(reserror)
    0.4694

To achieve a more accurate solution, lower the penalty term λ to 0.025.

lambdad = double(0.025);
[x,reserror,flag] = solveLassoProb(Ad,bd,lambdad);
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
array2table([x,x_exp],"VariableNames",["LASSO Solution","Original Variables"])
ans=8×2 table
        3.5765     3.5784
        2.7621     2.7694
    7.6581e-16          0
        3.0298     3.0349
        0.7141     0.7254
    1.7349e-16          0
    3.7139e-16          0
       -0.1797    -0.2050

Variables 5 and 8 are closer to their target (original) values. Check the value of the residual.

disp(reserror)
    0.0357

The residual is smaller than before by more than a factor of 10. Use the new value of λ to generate single-precision code.

Generate Single-Precision Code and Solve Problem

Call codegen to create a MEX file target.

argList = {A,b,lambda};
codegen -config:mex -args argList solveLassoProb
Code generation successful.

MATLAB creates the file solveLassoProb_mex in the current folder.

Solve the problem using the single-precision generated code.

lambda = single(lambdad);
[xs,reserrors,flag] = solveLassoProb_mex(A,b,lambda);

Compare the single-precision results to both the double-precision results and the original data.

array2table([xs,x,x_exp],"VariableNames",["Codegen Single Solution","Double Solution","Original Variables"])
ans=8×3 table
         3.5765        3.5765     3.5784
         2.7621        2.7621     2.7694
    -4.4278e-08    7.6581e-16          0
         3.0298        3.0298     3.0349
         0.7141        0.7141     0.7254
     1.3336e-07    1.7349e-16          0
     7.4389e-08    3.7139e-16          0
        -0.1797       -0.1797    -0.2050

To display precision, the single-precision solution matches the double-precision solution, except for the values near zero.
