MATLAB Answers

GPU out of memory

23 views (last 30 days)
Reddyprasad Vankireddy on 6 Feb 2020
Answered: Joss Knight on 9 Feb 2020
Hi.
I am applying one of the MATLAB neural network examples to my own data set. My operating system is Windows 10. When I run the program on the CPU there are no errors, but when I run it on the GPU I get this error:
"Out of memory on device. To view more detail about available memory on the GPU, use 'gpuDevice()'. If the problem persists, reset the GPU by calling 'gpuDevice(1)'."
Error using trainNetwork (line 170)
GPU out of memory. Try reducing 'MiniBatchSize' using the trainingOptions function.
Error in Untitled (line 36)
convnet = trainNetwork(imds,layers,options);
Caused by:
Error using nnet.internal.cnngpu.batchNormalizationForwardTrain
Out of memory on device. To view more detail about available memory on the GPU, use 'gpuDevice()'. If
the problem persists, reset the GPU by calling 'gpuDevice(1)'.
My code is as follows:
clc
clear
close all
close all hidden;
[file1,path1]=uigetfile('*.*');
rgb=imread([path1,file1]);
figure,imshow(rgb);title('Input image');
rgb=imresize(rgb,[512 734]);
matlabroot = 'E:\project files';
digitDatasetPath = fullfile(matlabroot,'Dataset');
imds = imageDatastore(digitDatasetPath,'IncludeSubfolders',true,'LabelSource','foldernames');
layers = [
    imageInputLayer([512 734 3])
    convolution2dLayer(3,32,'Stride',1,'Padding','same','Name','conv_1')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2,'Name','maxpool_1')
    convolution2dLayer(3,64,'Stride',1,'Padding','same','Name','conv_2')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2,'Name','maxpool_2')
    convolution2dLayer(3,128,'Stride',1,'Padding','same','Name','conv_3')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2,'Name','maxpool_3')
    fullyConnectedLayer(2)
    softmaxLayer
    classificationLayer];
options = trainingOptions('sgdm','Plots','training-progress','MaxEpochs',10,'InitialLearnRate',0.001);
convnet = trainNetwork(imds,layers,options);
YPred = classify(convnet,rgb);
output=char(YPred);
if strcmp(output,'1')  % strcmp is safer than == when the label has more than one character
msgbox('No tumor is detected')
else
msgbox('Tumor is detected')
end
My GPU device details are as follows:
gpuDevice
ans =
CUDADevice with properties:
Name: 'GeForce GTX 1650'
Index: 1
ComputeCapability: '7.5'
SupportsDouble: 1
DriverVersion: 10.2000
ToolkitVersion: 10.1000
MaxThreadsPerBlock: 1024
MaxShmemPerBlock: 49152
MaxThreadBlockSize: [1024 1024 64]
MaxGridSize: [2.1475e+09 65535 65535]
SIMDWidth: 32
TotalMemory: 4.2950e+09
AvailableMemory: 3.0421e+09
MultiprocessorCount: 16
ClockRateKHz: 1560000
ComputeMode: 'Default'
GPUOverlapsTransfers: 1
KernelExecutionTimeout: 1
CanMapHostMemory: 1
DeviceSupported: 1
DeviceSelected: 1

  2 Comments

Joss Knight on 9 Feb 2020
What is your question?
Reddyprasad Vankireddy on 9 Feb 2020
How do I solve this GPU out-of-memory issue? I tried previous answers in the community, but they didn't work.


Answers (1)

Joss Knight
Joss Knight on 9 Feb 2020
In your example code you are using the default mini-batch size of 128. Reduce the MiniBatchSize training option until you stop getting the out of memory error.
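Applied to the code in the question, that might look like the following sketch. The value 16 is just an illustrative starting point, not a recommendation from this answer; keep halving it (8, 4, ...) until training fits in the GPU's available memory.

```matlab
% Same options as the original code, with an explicit MiniBatchSize
% (the default of 128 is too large for ~3 GB of available GPU memory
% at a 512x734x3 input size). The value 16 is an assumed starting point.
options = trainingOptions('sgdm', ...
    'Plots','training-progress', ...
    'MaxEpochs',10, ...
    'InitialLearnRate',0.001, ...
    'MiniBatchSize',16);   % halve again if the out-of-memory error recurs
convnet = trainNetwork(imds,layers,options);
```

Smaller mini-batches reduce the size of the activations held on the GPU during each training iteration, at the cost of noisier gradient estimates and slower epochs.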

  0 Comments


Release

R2019b
