MATLAB Answers


How can I fix the CUDNN errors when I'm running train with RTX 2080?

Asked by Aydin Sümer on 5 Dec 2018
Latest activity: commented on by Joss Knight on 5 Dec 2018
Hello,
Recently, I bought two RTX 2080 GPUs. We currently have two RTX 2080s and one TITAN Xp. I want these GPUs to work in parallel, but I constantly get the CUDNN_STATUS_EXECUTION_FAILED error. I've set the CUDA cache size to 512 MB in the system environment variables, but I still get the same error.
Training across multiple GPUs.
Initializing image normalization.
|=======================================================================================================================|
| Epoch | Iteration | Time Elapsed | Mini-batch | Validation | Mini-batch | Validation | Base Learning|
| | | (seconds) | Loss | Loss | RMSE | RMSE | Rate |
|=======================================================================================================================|
Error using trainNetwork (line 150)
Unexpected error calling cuDNN: CUDNN_STATUS_EXECUTION_FAILED.
Error in segnet_deadpixel_train (line 86)
[net info] = trainNetwork(pximds,lgraph,options);
Caused by:
Error using nnet.internal.cnn.ParallelTrainer/train (line 68)
Error detected on worker 1.
Error using nnet.internal.cnn.layer.util.Convolution2DGPUStrategy/forward (line 16)
Unexpected error calling cuDNN: CUDNN_STATUS_EXECUTION_FAILED.
Apart from this error, when working with a single GPU the TITAN Xp works well, but the RTX 2080 runs slowly and gives the following warning.
Warning: GPU is low on memory, which can slow performance due to additional data transfers with main memory. Try reducing
the 'MiniBatchSize' training option. This warning will not appear again unless you run the command :
warning('on','nnet_cnn:warning:GPULowMemory').
I've tried MATLAB R2018a and R2018b on Windows 10 64-bit. Which version of MATLAB should I use to resolve these issues? Which versions of CUDA and cuDNN support the RTX 2080? How can I fix these errors?


2 Answers

Answer by Joss Knight on 5 Dec 2018
Edited by Joss Knight on 5 Dec 2018
Accepted Answer

This is a known issue. Before you do anything else, run

try
    % Warm up the cuDNN libraries; the call itself may error harmlessly.
    nnet.internal.cnngpu.reluForward(1);
catch ME
    % Ignore the error: the point is to trigger library initialization.
end

That should clear the issue.
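As a side note on the CUDA/cuDNN version question: you can see which CUDA toolkit and driver MATLAB is actually using on each card with gpuDevice. A minimal sketch (the device index 1 is an assumption; use the index of your RTX 2080):

```matlab
% Query the first GPU; adjust the index to select the RTX 2080.
d = gpuDevice(1);

% These properties show what MATLAB sees on this card.
fprintf('Name:               %s\n', d.Name);
fprintf('Compute capability: %s\n', d.ComputeCapability);
fprintf('Driver version:     %g\n', d.DriverVersion);
fprintf('Toolkit version:    %g\n', d.ToolkitVersion);
fprintf('Available memory:   %.1f GB\n', d.AvailableMemory/1e9);
```

If the toolkit version reported here is older than what the RTX 2080 (compute capability 7.5) needs, that would explain the recompilation warnings below.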

  5 Comments

Thank you so much. That code fixed my problem.
I want to ask a different question, because I don't understand why it happens like this. When I use a single GPU with the TITAN Xp, it iterates more quickly.
Training on single GPU.
Initializing image normalization.
|========================================================================================|
| Epoch | Iteration | Time Elapsed | Mini-batch | Mini-batch | Base Learning |
| | | (hh:mm:ss) | Accuracy | Loss | Rate |
|========================================================================================|
| 1 | 1 | 00:00:02 | 63.61% | 0.7633 | 0.0010 |
| 1 | 2 | 00:00:04 | 63.68% | 0.7207 | 0.0010 |
| 1 | 4 | 00:00:07 | 63.86% | 0.7527 | 0.0010 |
| 1 | 6 | 00:00:09 | 64.10% | 0.7661 | 0.0010 |
| 1 | 8 | 00:00:12 | 64.05% | 0.7409 | 0.0010 |
| 1 | 10 | 00:00:15 | 64.64% | 0.7467 | 0.0010 |
If I use multiple GPUs, each iteration takes longer.
Starting parallel pool (parpool) using the 'local' profile ...
connected to 3 workers.
Lab 1:
Warning: The CUDA driver must recompile the GPU libraries because your device is more recent than the libraries. Recompiling can take several minutes.
In spmdlang.remoteBlockExecution (line 50)
Lab 2:
Warning: The CUDA driver must recompile the GPU libraries because your device is more recent than the libraries. Recompiling can take several minutes.
In spmdlang.remoteBlockExecution (line 50)
Initializing image normalization.
|========================================================================================|
| Epoch | Iteration | Time Elapsed | Mini-batch | Mini-batch | Base Learning |
| | | (hh:mm:ss) | Accuracy | Loss | Rate |
|========================================================================================|
| 1 | 1 | 00:00:08 | 60.96% | 0.9061 | 0.0010 |
| 1 | 2 | 00:00:16 | 60.31% | 0.8097 | 0.0010 |
| 1 | 4 | 00:00:31 | 60.24% | 0.8561 | 0.0010 |
| 1 | 6 | 00:00:46 | 60.14% | 0.7761 | 0.0010 |
| 1 | 8 | 00:01:01 | 60.57% | 0.7926 | 0.0010 |
| 1 | 10 | 00:01:16 | 60.69% | 0.7847 | 0.0010 |
Lab 1:
Warning: GPU is low on memory, which can slow performance due to additional data transfers with main memory.
Try reducing the 'MiniBatchSize' training option. This warning will not appear again unless you run the command: warning('on','nnet_cnn:warning:GPULowOnMemory').
What is the reason for this?
When your device has to start paging, it goes A LOT slower. This is just a backup to help your training finish, but it will slow things down.
Secondly, if you're on Windows, multi-GPU training is slower.
Thirdly, if the MiniBatchSize is 1, multi-GPU training is pointless because there is no way to divide the mini-batch between workers. Set the MiniBatchSize to at least 2, but the 2080 will still run out of memory.
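To make the mini-batch division concrete, here is a minimal sketch of the relevant training options (the value 3 is an assumption matching the three workers in the pool above; other option names come from the question's own script):

```matlab
% With 3 workers (2x RTX 2080 + 1 TITAN Xp), MiniBatchSize must be at
% least 3 so every GPU receives at least one observation per iteration;
% a multiple of 3 keeps the split across workers even.
options = trainingOptions('sgdm', ...
    'MiniBatchSize', 3, ...               % >= number of GPUs/workers
    'ExecutionEnvironment', 'multi-gpu');
```

With MiniBatchSize of 1, two of the three workers sit idle every iteration while still paying the communication overhead, which is why multi-GPU runs can be slower than a single GPU.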
Thanks for your reply. I understand it better now.



Answer by Joss Knight on 5 Dec 2018

Regarding issues with memory, the Titan XP has 12GB of memory while the RTX 2080 has only 8GB. You'll need to reduce your MiniBatchSize further to train SegNet on the 2080.

  4 Comments

Ah. No. You need to use a smaller SegNet seed network. Which one are you using?
I'm using this configuration. I think it's the original version.
lgraph = segnetLayers(imageSize,numClasses,'vgg16');
These are my training options.
options = trainingOptions('sgdm', ...
    'Momentum',0.9, ...
    'InitialLearnRate',1e-3, ...
    'L2Regularization',0.0005, ...
    'MaxEpochs',100, ...
    'MiniBatchSize',1, ...
    'Shuffle','every-epoch', ...
    'CheckpointPath', ConvnetFolder, ...
    'VerboseFrequency',2, ...
    'ExecutionEnvironment','multi-gpu');
I'm looking into this. It may be that the options to segnetLayers do not allow a small enough network for training on an 8GB GPU. You may have to edit the network manually to create a smaller one.
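As a sketch of the kind of smaller network being suggested: instead of seeding SegNet from VGG-16, segnetLayers can also build the network from scratch with a chosen encoder depth, which gives a much smaller model. The encoderDepth value here is an assumption to tune for your memory budget:

```matlab
% Build SegNet from scratch with a shallow encoder instead of the
% VGG-16 seed network; smaller encoderDepth means fewer layers and
% far less GPU memory than the VGG-16-based network.
encoderDepth = 3;   % assumption: try 2-4 and watch GPU memory usage
lgraph = segnetLayers(imageSize, numClasses, encoderDepth);
```

Combined with a small MiniBatchSize, this may fit within the RTX 2080's 8 GB.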
