GPU backslash performance much slower than CPU
I am doing numerical power flow calculation by modifying the functions of MATPOWER, an open source toolbox. By modifying its function newtonpf.m, GPU computation can be implemented. However, I found that GPU performance is much, much slower than CPU. When calculating the built-in case3012wp of MATPOWER, the matrices in newtonpf.m are:
A: 5725 × 5725 sparse double, b: 5725 × 1 double.
Solving A \ b in the 1st iteration of newtonpf() generally takes around 0.01 s on my i7-10750H (MSI GL65 with an RTX 2070 Super).
But if A and b are converted to gpuArrays, A \ b takes the following times depending on the type of A:
full double: 0.8 s
sparse double: 4 s
full single: 0.1 s
(sparse single is not supported)
So why the difference in performance? I thought the GPU could do things much faster than the CPU.
Files are attached as follows: Atest is sparse and Agpu is a sparse gpuArray. All are doubles.
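For reference, here is a minimal sketch of how the timings above could be measured fairly. It assumes the attached variables Atest (sparse double on the CPU) and Agpu (sparse gpuArray) are in the workspace; the right-hand side b is a random placeholder of matching size, since only the matrices are attached. gputimeit synchronises the device so the GPU time is not underestimated.

% Sketch only: Atest and Agpu from the attachment; b is an illustrative RHS.
b    = rand(size(Atest, 1), 1);
bgpu = gpuArray(b);

tCPU = timeit(@() Atest \ b);        % CPU sparse backslash
tGPU = gputimeit(@() Agpu \ bgpu);   % GPU sparse backslash (synchronised timing)

fprintf('CPU: %.4f s, GPU: %.4f s\n', tCPU, tGPU);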
Answers (1)
Matt J
on 27 Dec 2020
This thread looks relevant. It appears that sparse mldivide on the GPU is not expected to be faster.
13 Comments
Joss Knight
on 10 Jan 2021
Edited: 10 Jan 2021
Yes, PCG, GMRES, CGS, LSQR, QMR, TFQMR, BICG, BICGSTAB. Try them all, play with tolerance, iterations and preconditioning - something is likely to work. I'm not an expert in this field but this is what the sparse community tend to do.
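As a starting point along those lines, here is a minimal sketch of one of the suggested iterative solvers applied to the attached data. It assumes the sparse Jacobian Atest (CPU) and Agpu (gpuArray) plus a gpuArray right-hand side bgpu; the tolerance and iteration limit are illustrative, not tuned. Since ilu is not available for gpuArray inputs, a simple Jacobi (diagonal) preconditioner is built on the CPU and applied on the GPU through a function handle; power-flow Jacobians are nonsymmetric, so bicgstab (or gmres) is used rather than pcg.

% Sketch only: assumes Atest, Agpu, and bgpu exist; values are illustrative.
tol   = 1e-8;
maxit = 500;

% Jacobi (diagonal) preconditioner: the handle must return M \ x.
d = gpuArray(full(diag(Atest)));
M = @(x) x ./ d;

[x, flag, relres, iter] = bicgstab(Agpu, bgpu, tol, maxit, M);
if flag ~= 0
    warning('bicgstab did not converge: flag %d, relres %g', flag, relres);
end

Whether this beats the CPU backslash depends heavily on the conditioning of the Jacobian and the quality of the preconditioner, so it is worth comparing several of the solvers listed above on the actual case.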