DNN training 3D Parameters
I am training a DNN on a small dataset of 3D MRI images with a network I built from scratch: 4 blocks of convolutional layer + batch normalization + ReLU + max pooling, followed by global average pooling and 2 fully connected layers with a dropout layer between them. Both my training and validation accuracy are low, and my loss curve does not decay; it stays roughly flat around 1. I have tried L2 regularization, changing the momentum, and adding a learn rate drop factor, but the accuracy does not improve. The same model worked well with 2D images, yet I cannot get above 60% accuracy with the 3D network. I would appreciate suggestions on which parameters I could try to change.
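For context, here is a minimal sketch of the kind of 3D architecture described above (not my exact code); the input size, filter counts, fully connected layer width, and number of classes are placeholder assumptions.

inputSize  = [64 64 64 1];   % assumed volume size (height x width x depth x channels)
numClasses = 2;              % assumed number of classes

layers = [
    image3dInputLayer(inputSize)

    % Block 1: conv + batch norm + ReLU + max pooling
    convolution3dLayer(3,16,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling3dLayer(2,'Stride',2)

    % Block 2
    convolution3dLayer(3,32,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling3dLayer(2,'Stride',2)

    % Block 3
    convolution3dLayer(3,64,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling3dLayer(2,'Stride',2)

    % Block 4
    convolution3dLayer(3,128,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling3dLayer(2,'Stride',2)

    % Classifier head: global average pooling, two fully connected layers with dropout
    globalAveragePooling3dLayer
    fullyConnectedLayer(64)
    dropoutLayer(0.5)
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];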
Answers (1)
Matt J on 11 Apr 2024
Edited: Matt J on 11 Apr 2024
The parameters you mention experimenting with do not cover all of the available training options (see below for a more complete list). You could also try a different training algorithm, e.g., adam. Because the 3D input/output dimensions are larger, you may also need to change the network architecture so that it has more weights to work with.
% ilr is a placeholder for your chosen initial learning rate
options = trainingOptions('adam', ...
    'MiniBatchSize',5, ...                % small batches to fit 3D volumes in memory
    'MaxEpochs',100, ...
    'InitialLearnRate',ilr, ...
    'L2Regularization',1e-4, ...
    'LearnRateSchedule','piecewise', ...  % valid values are 'none' or 'piecewise'
    'LearnRateDropFactor',0.8, ...
    'LearnRateDropPeriod',5);
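For completeness, a hedged usage sketch of how these options would be passed to trainNetwork; imdsTrain and imdsValidation are placeholder datastores assumed to return the 3D volumes and their labels, and layers is the network architecture.

% imdsTrain / imdsValidation are assumed placeholder datastores that
% return 3D volumes with class labels; layers is your layer array.
net = trainNetwork(imdsTrain,layers,options);

% Check validation accuracy with the trained network
YPred    = classify(net,imdsValidation);
accuracy = mean(YPred == imdsValidation.Labels);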