I have 4 cores + a CUDA-supported graphics card. Is this equivalent to 5 cores?

Hello
I want to maximize my computer's resources for parallel computing, probably using spmd. I have 4 cores and a CUDA-supported graphics card, which I can access through gpuArray. Does this mean that I can use 5 cores, or does the GPU also require a core from the start?
If this is equivalent to 5 cores, how can I use them?
Thank you

Accepted Answer

Matt J on 1 Jan 2013
Edited: Matt J on 2 Jan 2013
No. The kinds of computations a GPU can do are different from those of a CPU, so it cannot function as an additional CPU core. The GPU actually contains many hundreds of cores of its own, but these cores are specialized and capable of only very simple operations. You can only use the graphics card in conjunction with gpuArray.
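As a rough sketch of the gpuArray route (this assumes the Parallel Computing Toolbox and a supported CUDA card; the array sizes are just placeholders):
A = gpuArray(rand(4000));   % copy the data onto the graphics card
B = fft(A) .* 2;            % overloaded operations such as fft run on the GPU
C = gather(B);              % copy the result back into host memory
The GPU only ever enters the picture through such gpuArray operations (or a custom CUDA kernel); it never shows up as a fifth worker that spmd could use.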
3 Comments
Walter Roberson on 3 Jan 2013
As far as I understand, if you were to start a GPU calculation and then start spmd, the GPU and the spmd labs could potentially run in parallel, with you gathering the GPU results after the spmd block finished. But if you are not using a .cu file to supply a kernel that can run for a fair while by itself, then the GPU would run out of things to do, as there is no "master session" running beside the spmd labs and keeping the GPU fed.
If I recall, it is possible for the individual spmd labs to connect to the GPU, at least in the more recent versions. I do not recall the restrictions now; what I recall is that it used to be described as requiring one GPU per spmd lab, but that now there is a way to share, roughly along the lines sketched below.
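Untested, since I do not have the toolbox to check, but I believe the shared arrangement would look roughly like this, with each lab selecting the one CUDA device for itself. Whether several labs can really share one card this way depends on the release and the device:
matlabpool open 4                        % era-appropriate syntax; newer releases use parpool
spmd
    gpuDevice(1);                        % each lab attaches to the single CUDA device
    xg = gpuArray(rand(1000,'single'));  % per-lab data placed on the GPU
    y  = gather(xg .^ 2);                % per-lab GPU work, result brought back to the lab
end
matlabpool close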
What I have no idea about is whether, if you start gpuArray work going and then start spmd sessions, the task of managing the GPU would get any CPU time. I also do not know whether you have a Tesla-based card, and without Tesla the GPU remains limited to short (roughly 30 ms) kernels, because the graphics subsystem needs to use the card too.
It would not surprise me in the least if I got some of the details wrong here; I do not have the toolbox to play with, so I have just been following along as people say interesting things. But perhaps something in what I wrote might prompt you to ask your question a different way.
Matt J on 3 Jan 2013
"As far as I understand, if you were to start a GPU calculation and then start spmd, the GPU and the spmd labs could potentially run in parallel, with you gathering the GPU results after the spmd block finished. But if you are not using a .cu file to supply a kernel that can run for a fair while by itself, then the GPU would run out of things to do, as there is no 'master session' running beside the spmd labs and keeping the GPU fed."
That seems very strange to me, if you can do that. Wouldn't you need some kind of M-code version of __syncthreads() that you could call from your M-file, to make sure that both the spmd and GPU operations have finished before proceeding?
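For instance, in the pattern you describe I would expect something along these lines, where everything hinges on gather at the end actually blocking until the GPU result exists (a sketch only, assuming a matlabpool is already open):
yg = fft(gpuArray(rand(4000)));   % GPU work queued from the client session
spmd
    z = labindex^2;               % CPU-side work on the pool labs in the meantime
end
y = gather(yg);                   % presumably blocks here until the GPU has finished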


More Answers (0)
