Recently I bought two RTX 2080 GPUs, so we now have two RTX 2080s and one TITAN Xp. I want these GPUs to work in parallel, but I constantly get a CUDNN_STATUS_EXECUTION_FAILED error. I have already set the CUDA cache size to 512 MB in the system environment variables, but the error persists.
Training across multiple GPUs.
Initializing image normalization.
| Epoch | Iteration | Time Elapsed | Mini-batch | Validation | Mini-batch | Validation | Base Learning |
|       |           | (seconds)    | Loss       | Loss       | RMSE       | RMSE       | Rate          |
Error using trainNetwork (line 150)
Unexpected error calling cuDNN: CUDNN_STATUS_EXECUTION_FAILED.
Error in segnet_deadpixel_train (line 86)
[net,info] = trainNetwork(pximds,lgraph,options);
Error using nnet.internal.cnn.ParallelTrainer/train (line 68)
Error detected on worker 1.
Error using nnet.internal.cnn.layer.util.Convolution2DGPUStrategy/forward (line 16)
Unexpected error calling cuDNN: CUDNN_STATUS_EXECUTION_FAILED.
I get this error when training across multiple GPUs. When I train on a single GPU instead, the TITAN Xp works well, but the RTX 2080 runs slowly and gives the following warning:
Warning: GPU is low on memory, which can slow performance due to additional data transfers with main memory. Try reducing
the 'MiniBatchSize' training option. This warning will not appear again unless you run the command :
I have tried MATLAB R2018a and R2018b on Windows 10 64-bit. Which MATLAB version should I use to resolve these issues? Which CUDA and cuDNN versions support the RTX 2080? How can I fix these errors?
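For clarity, this is roughly the cache setting I applied, shown here in shell form for illustration (on Windows I set it as a system environment variable rather than with `export`). `CUDA_CACHE_MAXSIZE` is NVIDIA's JIT compilation cache size in bytes; the idea is that kernels recompiled for the RTX 2080 get cached instead of rebuilt on every run:

```shell
# Enlarge the NVIDIA JIT compilation cache. The value is in bytes:
# 536870912 bytes = 512 MB (the size mentioned above).
export CUDA_CACHE_MAXSIZE=536870912
echo "CUDA_CACHE_MAXSIZE=$CUDA_CACHE_MAXSIZE"
```

On Windows the equivalent is adding the variable under System Properties → Environment Variables and restarting MATLAB so it picks the value up.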