How can I accelerate deep learning training using GPU?

158 views (last 30 days)
Kyungsik Shin
Kyungsik Shin on 20 May 2019
Edited: KSSV on 27 May 2021
I've made a simple neural network. It classifies MNIST handwritten digits using fully connected layers:
lgraph_2 = [ ...
    imageInputLayer([28 28 1])
    fullyConnectedLayer(512)
    reluLayer
    fullyConnectedLayer(256)
    reluLayer
    fullyConnectedLayer(128)
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];
And the training options are:
miniBatchSize = 10;
valFrequency = 5;
options = trainingOptions('sgdm', ...
    'MiniBatchSize',miniBatchSize, ...
    'MaxEpochs',5, ...
    'InitialLearnRate',3e-4, ...
    'Shuffle','every-epoch', ...
    'ValidationData',augimdsValidation, ...
    'ValidationFrequency',valFrequency, ...
    'Verbose',true, ...
    'Plots','training-progress', ...
    'ExecutionEnvironment', 'parallel');
I expected that training would be faster when I use a GPU.
But when I train this network on my MacBook (single CPU), it takes about 1 hour for around 2500 iterations.
And when I use my desktop with an RTX 2080 Ti, it takes much longer to train.
MATLAB detects my GPU properly (I checked the GPU information using gpuDevice).
I don't know how I can accelerate the training process.
Thank you in advance.

Answers (2)

Joss Knight
Joss Knight on 2 Jun 2019
Your mini-batch size is far too small. You're not going to get any benefit from the GPU over the CPU with such low GPU utilisation. Increase it to 512 or 1024, or higher (MNIST is a toy network; you could probably train the whole thing in a single mini-batch).
Also, the ExecutionEnvironment option you're looking for is 'gpu' or 'auto', not 'parallel'. 'parallel' may be slowing things down in your case if you have a second supported graphics card.
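For example, a minimal sketch of the adjusted options could look like this (the mini-batch size here is only illustrative; tune it to your GPU memory):
miniBatchSize = 1024;                      % much larger batch to keep the GPU busy
options = trainingOptions('sgdm', ...
    'MiniBatchSize',miniBatchSize, ...
    'MaxEpochs',5, ...
    'InitialLearnRate',3e-4, ...
    'Shuffle','every-epoch', ...
    'ValidationData',augimdsValidation, ...
    'ValidationFrequency',valFrequency, ...
    'Verbose',true, ...
    'Plots','training-progress', ...
    'ExecutionEnvironment','gpu');         % single-GPU training instead of 'parallel'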
1 Comment
Ali Al-Saegh
Ali Al-Saegh on 22 Oct 2020
Hello Knight,
I'd like to know what happens when a GPU is used for deep learning. Is the CPU also involved in the training process, or in any other work? It would be great if I could get some explanation of that!
Also, is it possible to measure the overhead time required for transferring data between host memory and the GPU?
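A rough, illustrative way to measure that transfer overhead yourself (a small sketch, not from the original thread) is to time gpuArray and gather with gputimeit:
A = rand(4096, 'single');                  % example data on the host
sendTime  = gputimeit(@() gpuArray(A));    % time host -> GPU transfer
G = gpuArray(A);
fetchTime = gputimeit(@() gather(G));      % time GPU -> host transfer
fprintf('Host->GPU: %.4f s, GPU->Host: %.4f s\n', sendTime, fetchTime);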



Shivam Sardana
Shivam Sardana on 29 May 2019
Edited: KSSV on 27 May 2021
Assuming a CUDA® enabled NVIDIA® GPU with compute capability 3.0 or higher and Parallel Computing Toolbox™ are installed, consider changing 'ExecutionEnvironment' to 'gpu'. You can refer to the documentation to see if this helps.
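As a quick sanity check before training (a small sketch, not part of the original answer), you can confirm which GPU MATLAB sees and its compute capability:
g = gpuDevice;                             % select and query the current GPU
fprintf('GPU: %s, compute capability %s\n', g.Name, g.ComputeCapability);
% then set 'ExecutionEnvironment','gpu' in trainingOptions and retrain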
1 Comment
NOSHEEN SOHAIL
NOSHEEN SOHAIL on 23 Oct 2019
I'm facing almost the same issue. My GeForce GTX 1080 GPU does not show any training progress, not even a single iteration or epoch, even after waiting and watching for 3 days, or it seems to be much slower than CPU training.
How is this happening? Instead of computing faster, it shows nothing; only a loud noise from my GPU card can be heard, but the training plot never progresses.
All requirements are met: a CUDA® enabled NVIDIA® GPU with compute capability 3.0 or higher and Parallel Computing Toolbox™ are installed, and I have already changed 'ExecutionEnvironment' to 'gpu'.
Please help.

