# GPU speed up for pcg() is disappointing

62 views (last 30 days)
Dan R on 7 Sep 2022
Commented: Nils Betancourt on 4 Sep 2024 at 18:43
I am using pcg() to solve x = A\b, where b is about 2e6 elements long. I am using R2021b on Ubuntu 20.04 with a Ryzen 9 5950X CPU and an NVIDIA A4000 GPU.
Running this code...
tol = 1e-4;
%solve on the CPU
tic
L = ichol(A, struct('michol','on'));
x = pcg(A, b, tol, 5000, L, L');
fprintf('solve time: %0.4g s\n', toc);
%solve on the GPU
tic
x = pcg(gpuArray(A), b, tol, 50000);
fprintf('solve time: %0.4g s\n', toc);
...gives
pcg converged at iteration 347 to a solution with relative residual 9.6e-05.
solve time: 16.24 s
pcg converged at iteration 7281 to a solution with relative residual 9.9e-05.
solve time: 13.91 s
So, I am seeing a small GPU speed-up (nice, but not very exciting). The fact that the GPU takes 20x as many iterations as the CPU makes me think a >20x speed-up could be possible, which would be much more exciting.
The L arguments make a big difference to the CPU speed, but I can't get the GPU version to accept them. Doing
x = pcg(gpuArray(A), b, tol, 5000, L, L');
throws "Error using gpuArray/pcg (line 58) When the first input argument is a sparse matrix, the second preconditioner cannot be a matrix. Use functions for both preconditioners, or multiply the precondition matrices". Doing
x = pcg(gpuArray(A), b, tol, 50000, L*L');
seems to hang MATLAB (after waiting 5 minutes I gave up and had to terminate by restarting MATLAB; Ctrl-C did nothing).
Can anyone tell me what is going on here? Is it simply that the GPU version uses a different algorithm, so comparing iteration counts makes no sense (and the 15% speed-up I see is all I should hope for)? Or do I need a different preconditioning approach?
I see there is a possible bug related to this: https://uk.mathworks.com/support/bugreports/details/2534618.
##### 0 Comments

### Accepted Answer

Joss Knight on 11 Sep 2022
I'm guessing L*L' is extremely dense, which would explain why the solver stalls. On the GPU the preconditioning is (currently) performed using ILU, which, like most sparse operations, should be passed a sufficiently sparse matrix. Try just passing A as the preconditioner and you may get a better result. Also, try the other solvers (cgs, gmres, lsqr, bicg, etc.).
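Trying the alternative solvers Joss mentions could look like the sketch below (illustrative only; the restart value and iteration limits are assumptions, not values from the thread):

```matlab
% Sketch: compare alternative Krylov solvers on the GPU.
Ag  = gpuArray(A);
bg  = gpuArray(b);
tol = 1e-4;
[x1, fl1] = cgs(Ag, bg, tol, 5000);       % conjugate gradients squared
[x2, fl2] = bicgstab(Ag, bg, tol, 5000);  % stabilized biconjugate gradients
[x3, fl3] = gmres(Ag, bg, 20, tol, 250);  % restarted GMRES, restart = 20 (assumed)
% A flag of 0 indicates convergence; wrap each call in tic/toc to compare timings.
```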
The solvers on the GPU work differently because a sparse direct solve does not parallelize well, which is why sparse backslash (\) is generally slow on the GPU - mostly because of the amount of memory needed. That doesn't explain why the solvers do not accept two triangular sparse matrices as preconditioner input - that is something that should be rectified. But the ILU should have much the same effect as ichol does.
I thought I'd be telling you that the GPU is slow because your card isn't very fast for double precision (only 599 GFLOPS). But actually you're doing 20 times the iterations in less time, so it seems you're right, if you hit the right combination of solver and preconditioner there's a good chance you'll get to the result much faster.
##### 19 Comments
Joss Knight on 1 Sep 2024 at 13:00
The bug report shows that this was fixed in R2024a. It took so long because it involved waiting for a fix from NVIDIA, who were also changing their APIs at the time. Anyway, I reran the original reproduction steps and can confirm that it is fixed in R2024a, and the GPU performance is considerably better than the CPU even on a device designed mainly for single precision.
>> tic; pcg(A,b,tol,5000,L,L'); toc
pcg converged at iteration 347 to a solution with relative residual 9.8e-05.
Elapsed time is 30.916549 seconds.
>> tic; pcg(gpuArray(A),b,tol,5000,L*L'); toc
pcg converged at iteration 343 to a solution with relative residual 0.0001.
Elapsed time is 4.651888 seconds.
In R2024b, coming out in a few weeks, we have added support for two triangular preconditioners in the GPU sparse solvers. Using this you get even better performance, as you would expect.
>> tic; pcg(gpuArray(A),b,tol,50000,L,L'); toc
pcg converged at iteration 343 to a solution with relative residual 9.7e-05.
Elapsed time is 3.093898 seconds.
Nils Betancourt on 4 Sep 2024 at 18:43
Hello Joss.
Great! Thank you. I appreciate the update.

### More Answers (2)

Yair Altman on 8 Sep 2022
The gpuArray version of pcg has not been updated since 2018, so it lags somewhat behind the CPU version. Preconditioner input for sparse matrices is only supported as a single preconditioner matrix, not two as in the CPU case. Refer to the help of the GPU version (type help('gpuArray/pcg') or edit(...) for details).
Perhaps you could try the GPU version with a dense (non-sparse) input - wasteful in memory, but possibly faster than the sparse CPU version:
x = pcg(gpuArray(full(A)), b, tol, 5000, L, L');
##### 7 Comments
Bruno Luong on 9 Sep 2022
Edited: Bruno Luong on 9 Sep 2022
You shouldn't expect 20x acceleration. Preconditioning is efficient only if it improves the condition number and solving with M1*M2 is cheap compared to applying A*x.
Incomplete Cholesky (ichol) and incomplete LU (ilu) are relatively expensive, since applying them is almost like solving with A itself.
That's why people look for cheap permutations that make the matrix diagonally dominant, and approximate the matrix by a narrow-band matrix for M (cheap to solve).
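Bruno's banded-preconditioner idea could be sketched as follows (an illustration with assumptions not stated in the thread: A is symmetric positive definite, and a tridiagonal slice of it is a useful approximation):

```matlab
% Sketch of a cheap banded preconditioner (assumes A is symmetric positive definite).
p  = symrcm(A);           % reverse Cuthill-McKee permutation to shrink the bandwidth
Ap = A(p, p);             % reordered system
bp = b(p);
n  = size(Ap, 1);
M  = spdiags(spdiags(Ap, -1:1), -1:1, n, n);  % keep only the tridiagonal band as M
xp = pcg(Ap, bp, 1e-4, 5000, M);              % M\r is cheap: a banded solve
x(p) = xp;                % undo the permutation
```

Whether the band captures enough of A to reduce the iteration count is problem-dependent; the point is only that applying M is far cheaper than an incomplete factorization.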
Dan R on 9 Sep 2022
Edited: Dan R on 12 Sep 2022
But ichol() seems fast for me:
tic; L = ichol(A, struct('michol','on')); toc; %this runs on the CPU and takes ~0.07s
tic; x = pcg(A, b, tol, 5000, L, L'); toc; %this runs on the CPU and takes ~16s
In summary:
1. CPU version of pcg() with L: 347 iterations, 16 sec.
2. CPU version of pcg() without L: 7285 iterations, 106 sec.
3. GPU version of pcg() with L: does not work.
4. GPU version of pcg() without L: 7281 iterations, 14 sec.
2 and 4 look very similar in terms of behaviour. I am hoping that 1 and 3 would also have similar behaviour with a corresponding speed up of 106/16 = 6.6 times (so not 20x as I said above), i.e. I expect/hope 3 would take 14/6.6 = 2.1 s if it could be made to work.
I confess I am not an expert on the numerical methods being used here, so I may be missing something...

Christine Tobler on 12 Sep 2022
It looks like you can simply replace your current call to pcg with
x = pcg(gpuArray(A), b, tol, 5000, @(y) L\y, @(y) L'\y);
as the error message just says that it only supports function-handle input here (though it's not clear to me why that restriction is there).
Perhaps it makes sense to also cast the matrix L to a gpuArray?
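Putting both suggestions together might look like this sketch (untested; as Dan notes below, the function-handle route did not help him on R2021b):

```matlab
% Sketch: function-handle preconditioners with the factor kept on the GPU.
L  = ichol(A, struct('michol', 'on'));  % incomplete Cholesky on the CPU
Lg = gpuArray(L);                        % move the triangular factor to the GPU
Ag = gpuArray(A);
bg = gpuArray(b);
M1 = @(y) Lg  \ y;                       % forward substitution with L
M2 = @(y) Lg' \ y;                       % back substitution with L'
x  = pcg(Ag, bg, 1e-4, 5000, M1, M2);
```

Keeping Lg on the GPU avoids a host-device transfer on every iteration, though the triangular solves themselves may still be the bottleneck.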
##### 1 Comment
Dan R on 12 Sep 2022
I tried this after advice from Bruno Luong above; no luck, I'm afraid.
