Matrix multiplication optimization using GPU parallel computation

Nick
Nick on 18 Aug 2022
Commented: Nick on 23 Jan 2023
Dear all,
I have two questions.
(1) How do I monitor GPU core usage when I am running a simulation? Is there any visual tool to dynamically check GPU core usage?
(2) Mathematically the new and old approaches are the same, so why is the new approach 5-10 times faster?
%%% Code for new approach %%%
M = gpuArray(M);
for nt = 1:STEPs
    if periodicBC    % placeholder flag for "there is a periodic boundary condition"
        M = A1 * M + A2 * f * M;
    else
        % diffusion
        M = A1 * M;
    end
end
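One thing worth keeping in mind when comparing timings like this: gpuArray operations execute asynchronously, so a bare tic/toc can mis-report the GPU work. Below is a minimal timing sketch (sizes and random data are illustrative assumptions, not the poster's matrices) showing two fair ways to time a step.
N  = 2000;
A1 = gpuArray.rand(N);        % stand-in for the operator
M  = gpuArray.rand(N, 1);

% gputimeit synchronizes internally and averages several runs
t = gputimeit(@() A1 * M);
fprintf('one step: %.4g s\n', t);

% Manual timing must synchronize before reading the clock
tic;
for nt = 1:100
    M = A1 * M;
end
wait(gpuDevice);              % ensure all queued GPU work has finished
toc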
  6 Comments
Jan
Jan on 19 Aug 2022
Okay. As far as I understand, you do not want to tell me the speed difference between
M = A1 * M + A2 * f * M;
and
M = (A1 + A2 * f) * M
and you do not want to show the complete code for the "old" implementation. Then I cannot estimate whether storing the data in "B(t_n)" is a cause of the problem.
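For what it's worth, here is a minimal sketch of the comparison being asked about (random square matrices and a scalar f, purely illustrative): two matrix products plus an addition per step versus one product by a combined operator formed once.
N  = 2000;  f = 0.5;
A1 = gpuArray.rand(N);  A2 = gpuArray.rand(N);
M  = gpuArray.rand(N, 1);

% Original formulation: two matrix-vector products per step
t1 = gputimeit(@() A1 * M + A2 * f * M);

% Combined operator formed once, reused every step
C  = A1 + A2 * f;
t2 = gputimeit(@() C * M);

fprintf('separate: %.4g s, combined: %.4g s\n', t1, t2);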
Nick
Nick on 20 Aug 2022
Hi Jan,
The following table summarizes the computation-time comparison across the different approaches with the GPU enabled/disabled.
The new one-step approach 1 doesn't show any improvement.


Accepted Answer

Matt J
Matt J on 18 Aug 2022
Edited: Matt J on 18 Aug 2022
Because in your second formulation, there is no need to build a table of non-zero entries for the sparse matrix B. The table-building step requires sorting operations, which your second version avoids.
Also, if B has many columns, it consumes memory in proportion to the number of columns (independent of the sparsity). The second implementation avoids that as well.
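As an illustration of the table-building cost, here is a sketch with made-up sizes (not the poster's B): rebuilding a sparse matrix from its triplets forces the entries to be sorted into compressed-column order, which is separate from, and can dominate, the cost of actually using the matrix.
N = 5000;
R = sprand(N, N, 1/100);        % an existing sparse matrix
[i, j, v] = find(R);            % its non-zeros as (row, col, value) triplets
x = rand(N, 1);

tic; B = sparse(i, j, v, N, N); toc   % rebuild: triplets must be sorted into CSC order
tic; y = R * x; toc                   % reuse: just the multiplication, no table-building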
  10 Comments
Matt J
Matt J on 19 Jan 2023
Edited: Matt J on 19 Jan 2023
"Do you know how MATLAB manages sparse array elements?"
Here is some detail on how sparse matrices are stored.
"If so, will any operation on those non-zero elements cause the sorting operations you mentioned above?"
If a new sparsity pattern is generated, then it will. Here is another example of how this can make sparse operations slower than full operations:
N = 5000;
A = sprand(N, N, 1/5);
B = sprand(N, N, 1/5);

tic; A + B; toc    % sparse matrix addition
% Elapsed time is 0.085529 seconds.

A = full(A);  B = full(B);
tic; A + B; toc    % full matrix addition
% Elapsed time is 0.049478 seconds.
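And a small sketch (illustrative sizes only) of the memory point from the accepted answer: MATLAB's compressed sparse column storage keeps one column pointer per column, so memory grows with the column count even when there are no non-zeros at all.
S1 = sparse(10, 1e7);   % 10 x 10,000,000, no non-zeros
S2 = sparse(1e7, 10);   % 10,000,000 x 10, no non-zeros
whos S1 S2              % S1 needs on the order of 80 MB for column pointers; S2 only a few hundred bytes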


More Answers (1)

Joss Knight
Joss Knight on 19 Aug 2022
The Windows Task Manager lets you track GPU utilization and memory graphically, and the utility nvidia-smi lets you do it in a terminal window.
Neither the CUDA driver nor the runtime provides access to which core is running what, although you might be able to hand-code something using NVML.
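If it helps, here is a minimal sketch (assuming nvidia-smi is on the system path) of checking GPU status from within MATLAB rather than a separate terminal; this reports device-level utilization and memory, not per-core activity.
d = gpuDevice;                                    % current GPU (Parallel Computing Toolbox)
fprintf('free memory: %.2f GB of %.2f GB\n', ...
    d.AvailableMemory/2^30, d.TotalMemory/2^30);

% One-shot query of overall utilization and used memory via nvidia-smi
[status, out] = system(['nvidia-smi --query-gpu=utilization.gpu,memory.used ' ...
                        '--format=csv,noheader']);
if status == 0
    disp(out)
end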
  3 Comments
Joss Knight
Joss Knight on 20 Aug 2022
Ah, I forgot that you cannot see utilization information for GeForce cards, sorry. Those charts are for graphics and so are not relevant for compute (except the memory one).
You'll have to use nvidia-smi.
Nick
Nick on 29 Aug 2022
Hi Joss, thanks for your info!

