Why is the lsqlin interior-point algorithm slower in R2017a compared to R2016b?
MathWorks Support Team
on 9 Jan 2017
Edited: MathWorks Support Team
on 19 Apr 2021
For certain optimization problems, running the lsqlin interior-point algorithm is much slower in R2017a than it was in R2016b.
Example:
n = 2000;
A = [(1:n)', ones(n,1)];   % 2000-by-2 full (dense) matrix
b = rand(n,1);
tic; x1 = lsqlin(A,b,A,b,[],[],[],[],[],optimoptions(@lsqlin,'Algorithm','active-set')); toc
tic; x2 = lsqlin(A,b,A,b,[],[],[],[],[],optimoptions(@lsqlin,'Algorithm','interior-point')); toc
returns
Elapsed time is 0.021589 seconds.
Elapsed time is 3.926372 seconds. (!)
Accepted Answer
MathWorks Support Team
on 9 Jan 2017
This is due to changes made in quadprog for R2017a.
In the R2017a release we introduced a specialized algorithm for problems where the H matrix passed to quadprog is a full (dense) matrix.
Under the hood, lsqlin reduces its least-squares problem to a quadratic program and calls quadprog.
For lsqlin, if the C matrix (using the documentation's notation) is full, the new algorithm is used; if the C matrix is sparse, the old algorithm is used.
In this case the problem is poorly suited to interior-point methods because it has far too many inequality constraints (2000 constraints for only 2 unknowns).
That is why the new algorithm is so much slower here: for N linear constraints, roughly N^2 zero elements are added to the algorithm's internal formulation.
The sparse algorithm does not store these zeros; the full algorithm does.
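To make the reduction concrete, here is a rough sketch of the quadratic program that lsqlin hands to quadprog for the example above. The exact internal formulation is not documented here, so treat the variable names and the call below as an illustration of the idea rather than the actual implementation:
% Least-squares problem  min 0.5*||C*x - d||^2  s.t.  A*x <= b
% rewritten as the quadratic program  min 0.5*x'*H*x + f'*x  s.t.  A*x <= b
n = 2000;
C = [(1:n)', ones(n,1)];      % same C matrix as in the example above
d = rand(n,1);
H = C' * C;                   % Hessian handed to quadprog (full, because C is full)
f = -C' * d;                  % linear term
xqp = quadprog(H, f, C, d);   % same inequality constraints as in the example
% Because H is stored as a full matrix, R2017a dispatches this call to the
% new full-matrix interior-point-convex algorithm.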
The easy workaround is to pass the first input (the C matrix, called A in the example above) in sparse format:
tic; x2 = lsqlin(sparse(A),b,A,b,[],[],[],[],[],optimoptions(@lsqlin,'Algorithm','interior-point')); toc
This forces lsqlin to use the old interior-point algorithm from R2016b.
By far the best fix, however, is to reduce the number of inequality constraints.
2 Comments
Royi Avital
on 27 Feb 2017
Well, though the answer states there is a new algorithm, it misses the point. If we paid with a slower algorithm, what did we gain? What is better about the new algorithm that made you decide to use it?
Adam Hug
on 23 May 2017
Edited: MathWorks Support Team
on 19 Apr 2021
Hello Royi and Ben,
I understand your frustration with the new quadprog algorithm in R2017a. Hopefully I can shed some light on what is going on in the new release and how everyone using quadprog can take advantage of it.
In previous versions (R2016b and earlier), the default quadprog algorithm implicitly converted the Hessian and constraint matrices to sparse matrix format regardless of their structure. The underlying sparse data structures were then used throughout quadprog to solve the quadratic programming problem. However, this approach is slow for problems with medium to large dense matrices. Such problems frequently occur in areas such as data acquisition and finance. Converting to sparse and solving in these cases was slow and memory-inefficient, especially for applications that solve tens to hundreds of individual dense quadratic programs.
In R2017a, we introduced a full-matrix algorithm to address both the performance and the memory issues associated with dense problems. The main ingredients are smarter memory management and data structures and heavy use of multithreading; because of their structure and internal storage, full matrices can use CPU resources more efficiently than sparse matrices. To decide between the full and sparse algorithms, we switch depending on whether the Hessian is stored as a full or a sparse matrix. This design choice was motivated by the understanding that our users know their individual applications better than any automated heuristic would. Relying on a heuristic would also prevent users from switching algorithms if the slower one were chosen.
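To illustrate that dispatch rule, here is a small hedged sketch (the problem data and variable names below are made up purely for demonstration): the same quadprog call takes either the full-matrix path or the sparse path depending only on how the matrices are stored.
% A small, dense, convex QP: min 0.5*x'*H*x + f'*x  s.t.  x >= 0
n = 500;
M = randn(n);
H = M'*M + n*eye(n);            % dense symmetric positive definite Hessian
f = randn(n,1);
A = -eye(n);  b = zeros(n,1);   % encode x >= 0 as -x <= 0
opts = optimoptions(@quadprog, 'Algorithm', 'interior-point-convex');
% Full storage: uses the new full-matrix algorithm introduced in R2017a
xFull = quadprog(H, f, A, b, [], [], [], [], [], opts);
% Sparse storage of the same data: falls back to the pre-R2017a sparse
% algorithm, even though the underlying problem is dense
xSparse = quadprog(sparse(H), f, sparse(A), b, [], [], [], [], [], opts);
In practice this means you choose the algorithm simply by choosing the storage class of your matrices, exactly as in the sparse(A) workaround shown in the accepted answer.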
Hopefully this helps explain why quadprog can be slower in some applications compared to R2016b. In many cases, switching matrix types can save memory in addition to selecting the better quadprog algorithm. If there are any technical questions about the differences between the two algorithms, that information can be found in the documentation (under interior-point-convex).
Finally, please feel free to contact our technical support team concerning any improvements we could make. We read and catalog all enhancement requests.
More Answers (0)