Multiple GPU setup slower than single GPU
For my research I have to perform many repetitions of the same optimization (for statistics). I already found out that my fitness function is much faster on the GPU, so I perform those calculations on the available GPUs. Fortunately, I have 3 GPUs at my disposal, and I worked out a scheme where I open a parallel pool and, using parfeval, assign each GPU to a different optimization.
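The scheme described above could look roughly like this (a minimal sketch; `runOneOptimization` is a hypothetical stand-in for the actual optimization, and the repetition count is illustrative):

```matlab
% Open one worker per GPU and bind each worker to a distinct device.
nGPUs = gpuDeviceCount;
pool  = parpool(nGPUs);
spmd
    gpuDevice(labindex);   % worker k selects GPU k
end

% Launch the repeated optimizations asynchronously with parfeval.
nReps = 30;                % number of repetitions (illustrative)
for k = nReps:-1:1
    f(k) = parfeval(pool, @runOneOptimization, 1);  % hypothetical function
end
results = fetchOutputs(f); % collect all results when they finish
```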
When I checked the performance of this setup, I noticed that the speed of a single GPU drops considerably (by about half) when it is used in the multiple-GPU setup (3 workers) compared to a single-GPU setup (1 worker).
I rechecked the implementation and saw no sign that data has to be sent from one GPU to another, so they never have to be synchronized.
Solutions I have tried:
- Make a separate fitness-function m-file for each GPU (did not work)
- Open a separate MATLAB instance for each GPU (did not work)

Any suggestions on this problem are appreciated.
9 Comments
Joss Knight
on 28 Apr 2018
On the face of it, you are accidentally using the same GPU on all the workers. What Index is reported by gpuDevice on each of your workers?
You should be able to get this working by running three MATLAB sessions. You just have to manually select a different gpuDevice on each one.
Beyond that I think we'd have to see some example code that reproduces your problem.
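A quick way to answer that question is to query gpuDevice on every worker, for example:

```matlab
% Diagnostic: report which GPU each pool worker has selected.
spmd
    d = gpuDevice;
    fprintf('Worker %d is using GPU %d (%s)\n', labindex, d.Index, d.Name);
end
```

If all workers report the same Index, that is the problem; calling `gpuDevice(labindex)` inside an spmd block assigns each worker its own device.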
arvid Martens
on 9 May 2018
Edited: arvid Martens
on 9 May 2018
Joss Knight
on 10 May 2018
Can you explain exactly what you mean by that? The memory on GPUs 2 and 3 changes when you select GPU 1?
arvid Martens
on 11 May 2018
arvid Martens
on 11 May 2018
Edited: arvid Martens
on 11 May 2018
arvid Martens
on 11 May 2018
Joss Knight
on 14 May 2018
Edited: Joss Knight
on 14 May 2018
It doesn't surprise me that your code runs slower on 3 GPUs than on one, because the Quadro K6000 will be hundreds of times slower at double-precision computation than the other two cards; your whole computation ends up waiting for the Quadro card to finish.
As for what is going on with memory usage, can you explain more? When you run the above code as you wrote it, do you see memory being used on the unselected cards? If so, how much? How are you measuring that, and what is your operating system?
I do see impact on a Quadro card from loading and running MATLAB but of course that card is doing graphics so it's not particularly surprising.
I ran this on a machine with 4 Titan XP GPUs in TCC mode. I found that there was a very small impact on unselected devices for each worker (8 MiB of memory). Loading the CUDA driver into a process incurs some memory cost for each device; then, when you create a CUDA context by selecting a device, a large chunk of memory is reserved on that device, per process.
I also ran this on a machine with three different GPUs, like yours, and saw much the same behaviour, with or without pools.
arvid Martens
on 14 May 2018
Joss Knight
on 17 May 2018
You're right, sorry (about the double precision performance).
I wouldn't put too much stock in the Utilization measure; it is only weakly linked to performance. It would be much better to look at how long your code actually takes to run.
The only thing I can think of is that you are being limited by shared system resources. All three processes share the PCI bus and system memory, so perhaps there is a lot of data transfer. Or perhaps you are doing some large computations on the CPU that use all your cores? Even some GPU functions do that, because they are hybrid algorithms (e.g. mldivide, eig, chol, etc.). Waiting for the CPU would slow the rate at which kernels are launched on the GPU.
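One way to test the CPU-contention hypothesis is to cap the compute threads each worker may use, so three worker processes don't oversubscribe the cores during hybrid CPU/GPU operations (a sketch; the thread count here is a heuristic, and `feature('numcores')` is an undocumented but commonly used query):

```matlab
% Split the physical cores evenly across one worker per GPU.
nThreads = max(1, floor(feature('numcores') / gpuDeviceCount));

% Apply the cap on every worker in the current pool.
parfevalOnAll(gcp, @maxNumCompThreads, 0, nThreads);
```

If run times improve with the cap in place, CPU contention between the workers is at least part of the slowdown.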
If you are running on Linux, it would be interesting to see whether you can get any benefit from using the Multi-Process Service.
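For reference, enabling MPS on Linux typically looks something like the following (a sketch only; it requires NVIDIA GPUs, appropriate permissions, and the exact environment variables and daemon behaviour can vary by driver version):

```
# Start the MPS control daemon, then launch the MATLAB sessions;
# their CUDA work is funneled through the shared MPS server.
export CUDA_MPS_PIPE_DIRECTORY=/tmp/nvidia-mps
export CUDA_MPS_LOG_DIRECTORY=/tmp/nvidia-log
nvidia-cuda-mps-control -d

# To shut the daemon down afterwards:
echo quit | nvidia-cuda-mps-control
```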
Answers (0)