Out of memory problem, can you help me please?

Hind Haboubi on 26 Apr 2021
Edited: Image Analyst on 5 Jun 2022
Hello guys, this is my code (listed below), and I get this error:
Error using nnet.internal.cnn.DistributedDispatcher/computeInParallel (line 193)
Error detected on worker 2.
Error in nnet.internal.cnn.ParallelTrainer/computeInputStatisticsInEnvironment (line 142)
[results,workerIdx] = data.computeInParallel(func,1,data,stats,net);
Error in nnet.internal.cnn.Trainer/initializeNetwork (line 73)
stats = this.computeInputStatisticsInEnvironment(data,stats,net);
Error in vision.internal.cnn.trainNetwork (line 106)
trainedNet = trainer.initializeNetwork(trainedNet, inputStatsDispatcher);
Error in trainYOLOv2ObjectDetector>iTrainYOLOv2 (line 434)
[yolov2Net, info] = vision.internal.cnn.trainNetwork(...
Error in trainYOLOv2ObjectDetector (line 198)
[net, info] = iTrainYOLOv2(ds, lgraph, params, mapping, options, checkpointSaver);
Error in h (line 21)
[detector,info] = trainYOLOv2ObjectDetector(damageDataset,lgraph,options);
Caused by:
Error using hsv2rgb (line 100)
Out of memory.
Can you help me, please? Here is the code:
imageSize = [224 224 3];
load Damagevehicle                        % presumably provides the ddd label table used below
numClasses = 1;
anchorBoxes = [
    43  59
    18  22
    23  29
    84 109];
base = resnet50;
inputlayer = base.Layers(1);
middle     = base.Layers(2:174);
finallayer = base.Layers(175:end);        % start after 174 so no layer is duplicated
baseNetwork = [inputlayer
    middle
    finallayer];                          % assembled but not used below; yolov2Layers takes base directly
featureLayer = 'activation_40_relu';
lgraph = yolov2Layers(imageSize,numClasses,anchorBoxes,base,featureLayer);
options = trainingOptions('sgdm', ...
    'MiniBatchSize',128, ...
    'InitialLearnRate',1e-3, ...
    'MaxEpochs',10, ...
    'CheckpointPath',tempdir, ...
    'Shuffle','every-epoch', ...
    'ExecutionEnvironment','parallel');
damageDataset = ddd;                      % ground-truth table exported from the Image Labeler
[detector,info] = trainYOLOv2ObjectDetector(damageDataset,lgraph,options);
2 Comments
DGM on 26 Apr 2021
Any information about the contents of ddd?
Hind Haboubi on 26 Apr 2021
Yes, it's a table from the Image Labeler.
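For context, a ground-truth table exported from the Image Labeler has one column of image file names plus one column of [x y w h] boxes per label. A minimal sketch of what ddd presumably looks like (the file names and the 'damage' column name are only illustrative):
imageFilename = {'img001.jpg'; 'img002.jpg'};            % full paths to the labelled images
damage = {[30 40 50 60]; [10 20 80 90; 15 25 40 40]};    % one [x y w h] matrix of boxes per image
ddd = table(imageFilename, damage);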


Answers (2)

Yukta Maurya on 5 Jun 2022
Assuming that you might be using Faster R-CNN, the issue is most likely caused by one of the following:
1) Large input dimensions of the images (Big Images)
2) Limited memory availability on the GPU card.
3) Size and complexity of the network architecture (Big Network like VGG-16)
4) Combination of (1) and (3).
Further, the Faster R-CNN method processes the entire input image without resizing. This is in contrast to the 'RCNNObjectDetector' method, which performs inherent cropping and resizing of regions so that they match the input dimensions of the network.
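For what it's worth, here is a minimal sketch of addressing point (1) for the code in the question: resizing the labelled images (and their boxes) down to the 224x224 network input before training. It assumes the ddd table, lgraph, and options from the question, that the first table column is called imageFilename, and uses a preprocessData helper name chosen just for illustration:
imds = imageDatastore(ddd.imageFilename);                  % images referenced by the label table
blds = boxLabelDatastore(ddd(:,2:end));                    % the box columns from the same table
trainingData = transform(combine(imds,blds), @(data) preprocessData(data,[224 224]));
[detector,info] = trainYOLOv2ObjectDetector(trainingData,lgraph,options);

function data = preprocessData(data,targetSize)
% Shrink the image and rescale its boxes so both match the network input size.
scale   = targetSize ./ size(data{1},[1 2]);
data{1} = imresize(data{1},targetSize);
data{2} = bboxresize(data{2},scale);
end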

Image Analyst on 5 Jun 2022
Edited: Image Analyst on 5 Jun 2022
I had the same problem, and a MathWorks deep learning expert engineer helped us figure it out. I had a GPU (two, actually) in my computer, and "ExecutionEnvironment" (I think that's what it's called) was set to 'auto', the default, so training tried to use the GPU. The problem was that my GPU had only 16 GB, and we got the "Out of memory" error even for mini-batches of size 2. So we changed the ExecutionEnvironment option to 'cpu' and it worked. Even though I had only 32 GB of RAM on my computer, I effectively had hundreds of GB of memory, because when the CPU runs out of RAM it uses disk space as "virtual memory".
By the way, I see you're using sgdm. He said not to use that old solver; the newer and much better one to use is adam.
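In case it helps, a minimal sketch of the training options with both of those changes (adam solver, CPU execution), keeping the rest of the question's setup; the smaller mini-batch size is just an illustrative value:
options = trainingOptions('adam', ...
    'MiniBatchSize',8, ...                     % smaller batches keep the memory footprint down
    'InitialLearnRate',1e-3, ...
    'MaxEpochs',10, ...
    'CheckpointPath',tempdir, ...
    'Shuffle','every-epoch', ...
    'ExecutionEnvironment','cpu');             % fall back to system RAM (plus virtual memory)
[detector,info] = trainYOLOv2ObjectDetector(damageDataset,lgraph,options);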
