Distributed and spmd not running faster

7 views (last 30 days)
James
James on 25 Jan 2025
Commented: siyu guo on 26 Mar 2025
I think I'm missing something fundamental about using distributed arrays with spmd. If I run the following, the non-distributed version takes ~0.04 s while the distributed version takes ~0.2 s (with a process pool matching the number of cores on my machine).
x = ones(10000, 10000);
tic
x = x * 2.3;
toc
Elapsed time is 0.035079 seconds.
x
x = 10000×10000
2.3000 2.3000 2.3000 2.3000 2.3000 ... (display truncated; every element is 2.3000)
x = distributed(ones(10000, 10000));
tic
spmd
x = x * 2.3;
end
toc
Elapsed time is 0.226569 seconds.
gather(x)
ans = 10000×10000
2.3000 2.3000 2.3000 2.3000 2.3000 ... (display truncated; every element is 2.3000)
What am I missing?
Edit: I moved the tic and toc to after the array initialization and before displaying x, so the timing doesn't include those steps - I realized that the distributed call takes a while to spread the array across the processes, and gather also takes time.
2 Comments
Walter Roberson
Walter Roberson on 25 Jan 2025
x = distributed(ones(10000, 10000));
Better would be
x = ones(10000, 10000, 'distributed');
That should reduce the overall execution time, but should not change the parts you are timing.
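A hedged sketch of why the second form can be cheaper (assuming an open parallel pool; the variable names are illustrative): the first call builds the full array in client memory and then ships it to the workers, while the second constructs each worker's piece directly on the workers.

```matlab
tic; a = distributed(ones(10000, 10000));   toc  % client builds the array, then transfers it
tic; b = ones(10000, 10000, 'distributed'); toc  % workers build their partitions locally
```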
James
James on 25 Jan 2025
True, thanks!


Accepted Answer

Edric Ellis
Edric Ellis on 27 Jan 2025
You're not missing anything. If you're only using the cores on your local machine, distributed is unlikely to be of much use to you. The primary goal of distributed is to use the memory of multiple machines, enabling computations that would otherwise not be possible. A simple breakdown would be:
  • Desktop MATLAB is generally good for large array operations that fit in memory
  • gpuArray can be even better, if you have a suitable GPU (better still if you can run in lower precision such as single)
  • distributed is best for array operations that fit in the memory of multiple machines
  • tall works well for operations on data backed by some form of storage (e.g. disk), where the whole array can never fit in memory even across a cluster
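As a rough sketch of how each of those options is constructed (assuming Parallel Computing Toolbox, a supported GPU for the gpuArray line, and a hypothetical 'data/*.csv' path for the tall example):

```matlab
m = ones(10000, 'single');                     % desktop array; fits in local RAM
g = gpuArray.ones(10000, 'single');            % lives in GPU memory; lower precision helps
d = ones(10000, 10000, 'distributed');         % partitioned across the pool workers' memory
t = tall(tabularTextDatastore('data/*.csv'));  % disk-backed; evaluated lazily in chunks
```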
Desktop MATLAB already runs many suitable operations in a multi-threaded manner - for those, there is no way, even in principle, for distributed on a single machine to perform better. In fact, for basic operations, if desktop MATLAB cannot multi-thread an operation, that may well mean a distributed implementation is either not possible or not efficient.
In your case, one of the main things you're timing is the overhead of going into and out of an spmd context.
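One way to see that overhead directly (a minimal sketch, assuming a parallel pool is already running) is to time an empty spmd block:

```matlab
tic
spmd
end   % no work at all inside the block
toc   % elapsed time is pure spmd entry/exit overhead
```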
2 Comments
James
James on 27 Jan 2025
Perfect, thanks for breaking it all down.
siyu guo
siyu guo on 26 Mar 2025
Hello, it was not easy to find an expert who also uses MATLAB for spmd. Recently I have been trying to use spmd in MATLAB to implement a parallel conjugate gradient method, to accelerate iterative computations on large sparse matrices. I just posted a question about spmdCat: my question about spmdCat
I sincerely hope you can answer it after reading this message, and I would also like to ask you about the speedups achievable in MATLAB. The main body of my program is written in MATLAB, but as the mesh gets denser, the computation time grows geometrically.
Should I continue writing parallel programs in MATLAB, or switch to another language (or software) as soon as possible?


More Answers (0)

Version

R2024b
