I am experimenting with GPU computing, and the runtimes of the following two test functions surprised me.
Input in both cases:
d = rand(1,100000000,'single','gpuArray');
b = rand(1,1,'single','gpuArray');
First function:
function d = gputest1(d,b)
tic
for i = 1:10000
    d = d./(d - b);   % two elementwise operations: subtract, then divide
end
wait(gpuDevice)       % block until all queued GPU work has finished
toc
end
Second function:
function d = gputest2(d,b)
tic
for i = 1:10000
    d = d./b;         % one elementwise operation: divide by a scalar
end
wait(gpuDevice)       % block until all queued GPU work has finished
toc
end
I expected gputest1 to take longer, because it performs two operations per iteration instead of one, yet the measured runtimes are 12 s for gputest1 and 27 s for gputest2. Does anyone have an explanation for this?
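One thing that may be worth checking (a guess on my part, not a confirmed explanation) is whether the repeated divisions drive the data to special floating-point values. With b drawn from (0,1), each pass of d./b scales the values by 1/b > 1, so depending on b they can overflow to Inf within a few hundred single-precision iterations, after which the loop is mostly dividing Inf values rather than ordinary ones. A minimal diagnostic sketch:

d = rand(1,100000000,'single','gpuArray');
b = rand(1,1,'single','gpuArray');
for i = 1:1000
    d = d./b;                          % same update as in gputest2
end
nInf  = gather(sum(isinf(d)));         % count special values after the loop
nNaN  = gather(sum(isnan(d)));
nZero = gather(sum(d == 0));
fprintf('Inf: %d, NaN: %d, zero: %d\n', nInf, nNaN, nZero);

Running the same check with the gputest1 update, d = d./(d-b), would show whether that recurrence drives the data somewhere different.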
Tests were performed on a GTX 1060 6GB (CPU: i7-7700, RAM: 32 GB).
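As a side note on methodology: tic/toc around the whole loop also measures MATLAB's loop overhead, and the data change from one iteration to the next. gputimeit from Parallel Computing Toolbox synchronizes with the device for you and times a single pass on fixed inputs, which separates the per-operation cost from the effect of the evolving data. A sketch:

d = rand(1,100000000,'single','gpuArray');
b = rand(1,1,'single','gpuArray');
t1 = gputimeit(@() d./(d - b));   % one subtract-and-divide pass
t2 = gputimeit(@() d./b);         % one divide-only pass
fprintf('t1 = %g s, t2 = %g s\n', t1, t2);

If a single d./(d-b) pass is still faster than a single d./b pass on fresh random data, the difference lies in the kernels themselves; if not, it comes from the values the loops produce.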