Validation accuracy for CNN differs from the training plot

In the plot, the validation accuracy curve reaches above 75%, but the reported final validation accuracy is only 66%! Is something wrong?

Accepted Answer

Srivardhan Gadila on 29 Dec 2021

When training finishes, the Results section shows the final validation accuracy and the reason training stopped. If the 'OutputNetwork' training option is set to 'last-iteration' (the default), the final metrics correspond to the last training iteration. If 'OutputNetwork' is set to 'best-validation-loss', the final metrics correspond to the iteration with the lowest validation loss. The iteration from which the final validation metrics are calculated is labeled Final in the plots. From your plot, it is clear that the validation accuracy dropped by the final training iteration.
Refer to the following pages for more information: Monitor Deep Learning Training Progress, trainingOptions, and trainNetwork.

4 Comments

new_user on 29 Dec 2021
Edited: new_user on 29 Dec 2021
Minibatch_Size = 32;
Validation_Frequency = floor(numel(Resized_Training_image.Files) / Minibatch_Size);
Training_Options = trainingOptions('sgdm', ...
'MiniBatchSize', Minibatch_Size, ...
'MaxEpochs', 2, ...
'InitialLearnRate', 1e-2, ...
'Shuffle', 'every-epoch', ...
'ValidationData', Resized_Validation_image, ...
'ValidationFrequency', Validation_Frequency, ...
'Verbose', false, ...
'Plots', 'training-progress');
net = trainNetwork(Resized_Training_image, New_Network, Training_Options)
What exactly should I modify to get the best validation accuracy? If I use 'best-validation-loss', which result will be reported as the final validation accuracy (will the accuracy at the lowest-validation-loss point become the final validation accuracy)?
This functionality was introduced in R2021b (check the release notes for more information), so you have to use R2021b or later. Add the name-value argument 'OutputNetwork' with the value 'best-validation-loss' to trainingOptions (refer to https://www.mathworks.com/help/deeplearning/ref/trainingoptions.html#d123e136007):
Training_Options = trainingOptions('sgdm', ...
'MiniBatchSize', Minibatch_Size, ...
'MaxEpochs', 2, ...
'InitialLearnRate', 1e-2, ...
'Shuffle', 'every-epoch', ...
'ValidationData', Resized_Validation_image, ...
'ValidationFrequency', Validation_Frequency, ...
'Verbose', false, ...
'Plots', 'training-progress', ...
'OutputNetwork', 'best-validation-loss');
trainNetwork will then return the network corresponding to the training iteration with the lowest validation loss.
'OutputNetwork', 'best-validation-loss'
When I add these two options, I get the error message "GPU out of memory."
But when I do not use them, training runs smoothly.
In that case, you can either reduce the value of "MiniBatchSize" and try again, or train the network on the CPU by setting "ExecutionEnvironment" to "cpu". Both are input arguments of trainingOptions.
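As a sketch, either workaround could look like the following (variable names such as Resized_Validation_image and Validation_Frequency are assumed from the earlier snippet in this thread):

```matlab
% Option 1: reduce the mini-batch size to lower GPU memory use.
Minibatch_Size = 16;  % halved from 32; reduce further if the GPU still runs out of memory

% Option 2: fall back to CPU training (slower, but avoids the GPU memory limit)
% by adding 'ExecutionEnvironment', 'cpu' to the same trainingOptions call.
Training_Options = trainingOptions('sgdm', ...
    'MiniBatchSize', Minibatch_Size, ...
    'MaxEpochs', 2, ...
    'InitialLearnRate', 1e-2, ...
    'Shuffle', 'every-epoch', ...
    'ValidationData', Resized_Validation_image, ...
    'ValidationFrequency', Validation_Frequency, ...
    'Verbose', false, ...
    'Plots', 'training-progress', ...
    'OutputNetwork', 'best-validation-loss', ...
    'ExecutionEnvironment', 'cpu');
```

Note that with 'OutputNetwork' set to 'best-validation-loss', the software keeps a copy of the best network seen so far during training, which is likely why the memory footprint grows compared to the default.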


More Answers (0)

