Validation accuracy that appears different from the graph in DeepNetworkDesigner

Hi all!
I've been using DeepNetworkDesigner and have noticed that the final reported validation accuracy and validation loss do not match what I would read off the training-progress plot. For instance, in the image below I would have expected the validation accuracy to be about 90% and the loss to be ~2. Instead, it reported 48% and a loss of nearly 8. Can anyone explain why this is happening, and why the loss shoots up at the end?

Answers (1)

Jack Xiao on 21 Feb 2021
It is overfitting. Add more training data, add a dropout layer, or reduce the number of layers in the network.
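As a minimal sketch of the dropout suggestion: this is one way to insert a `dropoutLayer` into a feature-input classification network in R2020b, together with validation-based early stopping so training halts before the validation loss climbs. The variable names (`XTrain`, `YTrain`, `XVal`, `YVal`, `numClasses`) and layer sizes are placeholders, not taken from the original question.

```matlab
% Hypothetical layer sizes and data; adapt to your own network.
numClasses = 5;
layers = [
    featureInputLayer(10)
    fullyConnectedLayer(64)
    reluLayer
    dropoutLayer(0.5)          % randomly zeroes 50% of activations during training
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];

options = trainingOptions('adam', ...
    'ValidationData', {XVal, YVal}, ...   % monitor held-out data for overfitting
    'ValidationPatience', 5, ...          % stop when validation loss stops improving
    'Plots', 'training-progress');

net = trainNetwork(XTrain, YTrain, layers, options);
```

`ValidationPatience` stops training after the validation loss has failed to improve for the given number of validation evaluations, which typically avoids the late spike in validation loss visible in the plot.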

Version

R2020b
