Converting parfor operations to gpuArray

nah on 5 Aug 2013
I have a working parallel version of some code that does likelihood calculations on a reasonably large matrix in parallel (using parfor). The problem is trivially parallel: the calculation is performed column-wise, and the parfor loop operates on the columns of the data (one worker per column).
How could I achieve the same thing using a GPU, since the matrix is quite big and I have a limited number of workers? All the operations are GPU-supported functions (matrix algebra such as eig, diag, and matrix multiplications only).
i.e.,
% data is a 1000-by-200 matrix (1000 rows, 200 columns)
[nrows, ncols] = size(data);
likelihood = zeros(1, ncols);   % preallocate so parfor can slice the output
parfor ix = 1:ncols
    workerData = data(:, ix);   % each worker gets one column
    likelihood(ix) = funcCalcLikelihood(workerData, params);
end
This is fast enough, but I need to repeat such calculations many times to do a parameter sweep, so any speed increase would be welcome. My dataset is also getting bigger (ncols will be 1500, and I have at most 144 workers).
I have two Tesla C2050 GPUs and was wondering if I could convert this into a gpuArray operation.
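Something along these lines is what I had in mind (just a sketch; I'm assuming funcCalcLikelihood runs unchanged when handed gpuArray inputs, and that zeros accepts the 'gpuArray' option in my release):

dataGPU = gpuArray(data);                    % one host-to-device transfer up front
likelihoodGPU = zeros(1, ncols, 'gpuArray'); % preallocate the result on the device
for ix = 1:ncols
    workerData = dataGPU(:, ix);             % indexing keeps the column on the GPU
    likelihoodGPU(ix) = funcCalcLikelihood(workerData, params);
end
likelihood = gather(likelihoodGPU);          % copy the results back to the host

The loop itself stays serial, but every eig, diag, and multiply inside funcCalcLikelihood would then execute on the card.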
Could an spmd block also split the columns across both cards? A rough attempt is below. Thanks for your inputs.
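(Again just a sketch. This assumes a pool of two workers, e.g. matlabpool open 2, with one C2050 visible to each; the columns here are interleaved across the workers, but any split would do.)

spmd
    gpuDevice(labindex);                       % worker 1 -> GPU 1, worker 2 -> GPU 2
    myCols = labindex:numlabs:ncols;           % interleave the columns across workers
    localData = gpuArray(data(:, myCols));     % each worker ships only its share
    localLik = zeros(1, numel(myCols), 'gpuArray');
    for k = 1:numel(myCols)
        localLik(k) = funcCalcLikelihood(localData(:, k), params);
    end
    localLik = gather(localLik);               % back to host memory on each worker
end
% localLik is a Composite on the client; reassemble in column order
likelihood = zeros(1, ncols);
for w = 1:2
    likelihood(w:2:end) = localLik{w};
end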
3 Comments
nah on 19 Aug 2013
Thanks, Edric Ellis, for your comment. I didn't quite get what you mean by converting the data, though. Do you mean that calling gpuArray automatically slices the big matrix by columns?
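From a quick test of my own, indexing a gpuArray just returns another gpuArray, so the whole matrix sits in device memory and a column never leaves the card unless I gather it:

G = gpuArray(rand(1000, 200));   % the whole matrix goes into device memory
col = G(:, 5);                   % col is still a gpuArray - the data stays on the card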
nah on 6 Sep 2013
Any updates on this, anyone?


Answers (0)
