MATLAB Answers

GPU time slower than CPU time in Mandelbrot set example?

Dang Manh Truong on 28 Jan 2017
Commented: Walter Roberson on 30 Jan 2017
Hi, I'm following the Mandelbrot set example featured on the MathWorks blog. I'm using Windows 10 with 16 GB of RAM; here is my GPU information:
>> gpuDevice
ans =
CUDADevice with properties:
Name: 'Quadro M1000M'
Index: 1
ComputeCapability: '5.0'
SupportsDouble: 1
DriverVersion: 8
ToolkitVersion: 7.5000
MaxThreadsPerBlock: 1024
MaxShmemPerBlock: 49152
MaxThreadBlockSize: [1024 1024 64]
MaxGridSize: [2.1475e+09 65535 65535]
SIMDWidth: 32
TotalMemory: 2.1475e+09
AvailableMemory: 1.5948e+09
MultiprocessorCount: 4
ClockRateKHz: 1071500
ComputeMode: 'Default'
GPUOverlapsTransfers: 1
KernelExecutionTimeout: 1
CanMapHostMemory: 1
DeviceSupported: 1
DeviceSelected: 1
Here are the results:
The thing is, the GPU version takes much longer than simply using the CPU (the arrayfun version is fine). Why is that? Please help me, thank you very much :)



Answers (2)

Joss Knight on 29 Jan 2017
Your Quadro GPU is not intended for intensive double precision computation (I can't find published figures, but it's going to be something like 50 gigaflops as opposed to 5 teraflops for a proper compute GPU). Try converting the example to single precision. It will probably be about 30 times faster.
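The blog post's exact code is not reproduced in this thread, so the following is only a minimal sketch of the suggested change: build the Mandelbrot grid directly on the GPU in single precision so the escape-time iteration avoids the Quadro's slow double-precision path. The iteration counts and axis limits here are illustrative.

```matlab
% Sketch: single-precision Mandelbrot escape-time iteration on the GPU.
maxIterations = 500;                 % illustrative values, not from the thread
gridSize      = 1000;
xlim = [-0.75, -0.74];
ylim = [ 0.12,  0.13];

% Create the grid on the GPU, then cast to single precision.
x = single(gpuArray.linspace(xlim(1), xlim(2), gridSize));
y = single(gpuArray.linspace(ylim(1), ylim(2), gridSize));
[xGrid, yGrid] = meshgrid(x, y);
z0    = complex(xGrid, yGrid);
count = ones(size(z0), 'single', 'gpuArray');

z = z0;
for n = 1:maxIterations
    z     = z.*z + z0;               % all arithmetic stays in single on the GPU
    count = count + single(abs(z) <= 2);
end
count = log(count);                  % smooth the counts for display
```

Since every array in the loop is a single-precision gpuArray, no double-precision units are exercised and no host transfers occur until you gather or display the result.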


Walter Roberson on 30 Jan 2017
Single precision is usually fine for k-nearest-neighbour searches: distances that fall within single-precision rounding error of each other are typically intended to be equal distances anyway. You might end up with a different ordering between neighbours that are the same distance away.
Joss Knight on 30 Jan 2017
You can't put mobile GPU chips into TCC mode as far as I'm aware. The basic issue is that you're trying to do high performance computing on a laptop.
Walter Roberson on 30 Jan 2017
Okay, further research says that the M1000M is Maxwell architecture GM107 series, and that the double precision performance is 1/32 of the single precision performance.
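One way to see the single/double gap on a given device is to time the same operation in both precisions with gputimeit. This is a rough, illustrative measurement (matrix multiply is throughput-bound, so the observed ratio only approximates the 1/32 FP64:FP32 hardware ratio); the size here is arbitrary.

```matlab
% Sketch: compare double vs single throughput on the current GPU.
N  = 2000;
Ad = gpuArray.rand(N);    % double-precision array created on the GPU
As = single(Ad);          % single-precision copy, still on the GPU

td = gputimeit(@() Ad*Ad);   % gputimeit synchronizes the device for accurate timing
ts = gputimeit(@() As*As);
fprintf('double: %.4f s, single: %.4f s, ratio: %.1fx\n', td, ts, td/ts);
```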


Walter Roberson on 28 Jan 2017
This is not uncommon. There is communication overhead with the GPU, so it is most effective when you do extensive GPU computation with little data transfer (which does not necessarily mean computing with small matrices). If you do only a little computing on large matrices that must be transferred, then even though the computations themselves might be very fast, you still have to wait for the data to move in both directions. If you are going to do further computation on the data, leave a copy of it on the GPU even if you also want a CPU copy, so that you do not need to transfer it to the GPU again.
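As a sketch of that pattern (variable names here are illustrative): create the data on the device, chain the GPU operations without intermediate transfers, and call gather only when a CPU copy is actually needed.

```matlab
% Sketch: keep the working copy on the GPU across operations.
A = gpuArray.rand(4000);      % created on the device: no host-to-device copy

B = fft(A);                   % runs on the GPU, result stays on the GPU
C = real(B .* conj(B));       % still on the GPU: no transfers in between

Ahost = gather(C);            % explicit device-to-host copy, done once
% A, B, C remain on the GPU, so later work can reuse them
% without re-uploading anything.
```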

  1 Comment

Dang Manh Truong
Dang Manh Truong on 28 Jan 2017
But there was no data transfer from the CPU to the GPU, because the array was created directly on the GPU :( Can you explain this phenomenon? :(


