Would this be considered underfitting?
Lucas Ferreira-Correia
on 31 Aug 2020
Commented: Torsten K on 21 Oct 2020
I am training an LSTM (on 410 datasets) to simulate the response of a system.
The network is defined as follows:
layer = [
    sequenceInputLayer(3,"Name","Sequential Input Layer")
    lstmLayer(240,"Name","LSTM Layer")
    fullyConnectedLayer(50,"Name","Fully Connected Layer")
    dropoutLayer(0.5,"Name","Dropout Layer")
    fullyConnectedLayer(1,"Name","Fully Connected Layer2")
    regressionLayer("Name","Regression Output Layer")];
When training, the following learning curve is shown. The training and validation RMSE never converge and remain offset.
Does this indicate underfitting? If not, what am I looking at, and is it acceptable?
Thank you in advance!
0 comments
Accepted Answer
Anshika Chaurasia
on 3 Sep 2020
Hi Lucas,
It is my understanding that you want to know whether your model is underfit and, if it is not, why the training and validation losses are not converging.
“Underfitting occurs when the model is not able to obtain a sufficiently low error value on the training set.” – Deep Learning, by Ian Goodfellow
Looking at the graph, both the training and validation loss curves reach low values, so we can say the model is not underfit.
In the graph, the validation loss is lower than the training loss, which can happen for the following reasons:
- The validation dataset may be easier to learn than the training dataset. So, check whether the validation dataset follows the same distribution as the training dataset.
- Regularization: dropout is applied during training only, so the training loss is computed on the dropped-out network while the validation loss is computed on the full network. Dropout helps achieve better generalization on unseen datasets.
The reason the training and validation curves never converge and remain offset could be that the model stops learning after a certain number of epochs. You could experiment with hyperparameters such as the learning rate, the number of layers, and the dropout probability.
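As a sketch of that hyperparameter experiment (the data variables XTrain, YTrain, XVal, YVal are placeholders for your own data, and the specific values are starting points to vary, not recommendations), one might lower the dropout probability, sweep the learning rate, and pass the validation set explicitly:

```matlab
% Hypothetical training setup -- data variables are placeholders.
layers = [
    sequenceInputLayer(3)
    lstmLayer(240)
    fullyConnectedLayer(50)
    dropoutLayer(0.3)                     % try lowering the dropout probability
    fullyConnectedLayer(1)
    regressionLayer];

options = trainingOptions("adam", ...
    "InitialLearnRate",1e-3, ...          % sweep e.g. 1e-2, 1e-3, 1e-4
    "LearnRateSchedule","piecewise", ...  % decay the rate if learning stalls
    "LearnRateDropFactor",0.5, ...
    "LearnRateDropPeriod",50, ...
    "MaxEpochs",200, ...
    "ValidationData",{XVal,YVal}, ...
    "Plots","training-progress");

net = trainNetwork(XTrain,YTrain,layers,options);
```

Watching the training-progress plot across a few such runs should show whether the offset between the two curves shrinks as the hyperparameters change.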
0 comments
More Answers (1)
Greg Heath
on 10 Sep 2020
Edited: Greg Heath on 10 Sep 2020
A model is UNDERFIT
if and only if
No. of independent training equations < No. of unknowns
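For the network in the question, the "No. of unknowns" is the count of learnable parameters. A hand count using the standard LSTM parameter formula 4*h*(d+h+1) (weights for input and recurrent connections plus biases, for all four gates), assuming the layer sizes from the original post:

```matlab
% Learnable-parameter count for the posted network (hand arithmetic).
d = 3;    % input features  (sequenceInputLayer(3))
h = 240;  % hidden units    (lstmLayer(240))

lstmParams = 4*h*(d + h + 1);  % input + recurrent weights + biases = 234240
fc1Params  = h*50 + 50;        % fullyConnectedLayer(50)            = 12050
fc2Params  = 50*1 + 1;         % fullyConnectedLayer(1)             = 51

totalUnknowns = lstmParams + fc1Params + fc2Params   % = 246341
```

analyzeNetwork(layer) reports the same per-layer learnable counts, which is an easy way to get the right-hand side of the inequality above.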
Hope this helps
Thank you for formally accepting my answer.
Greg
1 comment
Torsten K
on 21 Oct 2020
Dear Greg,
how do I calculate the number of training equations, Ntrneq = prod(size(ttrn)) = Ntrn*O, if I have 1 output and 105 timeseries with 600 timesteps each? The targets are organized as a 1x600 cell array, where each cell contains a 1x105 double array of target values (so T{1,1}(1,1) contains the target for the 1st timestep of the 1st timeseries, T{1,2}(1,1) the 2nd timestep of the 1st timeseries, and so on).
I guess the equation above applies only to a single timeseries, i.e. one sample. So how can I calculate Ntrneq in my case?
Best regards
Torsten
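If Greg's formula extends across multiple series by counting every scalar target (an assumption on my part, not something stated above), then for the cell array T described in the comment, Ntrneq is simply the total number of target elements:

```matlab
% Total scalar targets in the described layout: 600 cells, each 1x105.
% (T here is a stand-in built to match the description in the comment.)
T = repmat({zeros(1,105)}, 1, 600);   % 1x600 cell array of 1x105 doubles

Ntrneq = sum(cellfun(@numel, T))      % 600 * 105 = 63000
```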