Why is my test accuracy higher than my validation accuracy?
I am using the Classification Learner app, and I get a test accuracy higher than the validation accuracy: for example, 94.61% accuracy (validation) versus 94.81% accuracy (test). I'm sure I've split the train and test sets correctly. Why is the test accuracy higher? How can I solve this? I would be grateful for your help.
4 comments
John D'Errico
on 29 Apr 2023
Consider that the two numbers are within 0.2% of each other. What are the odds, if you had split the sets differently (and also randomly), that you might have gotten a subtly different result?
Anyway, you build a model using the test set. It is optimized to fit that data as well as possible. Then you give it another set of data (the validation set), that was not used to build the model. I would expect this second set to fit at least a little more poorly. And that is what you seem to have observed. I'm not at all surprised. But again, the difference is a small one.
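The point about random splits can be made concrete with a back-of-the-envelope calculation. This is a hypothetical Python/NumPy sketch (not from the thread, and not MATLAB): the accuracy figures and the held-out set size of 1000 are assumptions chosen to match the numbers in the question.

```python
import numpy as np

# If a model's "true" accuracy is ~94.7%, how much does the measured
# accuracy vary across random held-out sets?  The standard error of a
# proportion is sqrt(p * (1 - p) / n).
p = 0.947   # assumed true accuracy (matches the question's figures)
n = 1000    # assumed held-out set size (illustrative)
se = np.sqrt(p * (1 - p) / n)
print(f"standard error of the accuracy estimate: {se:.4f}")  # ~0.0071

# Simulate many random evaluation sets of size n and look at the spread.
rng = np.random.default_rng(0)
accs = rng.binomial(n, p, size=10_000) / n
print(f"typical gap between two evaluation sets: {np.abs(np.diff(accs)).mean():.4f}")
```

Under these assumptions the sampling noise alone is roughly 0.7 percentage points, so an observed 0.2% gap between validation and test accuracy is well inside the noise band.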
the cyclist
on 29 Apr 2023
- training -- to fit the model
- validation -- to tune hyperparameters
- test -- to evaluate the final model choice
(This is oversimplified, for brevity.)
Typically, training performance > validation performance > test performance.
(Again, oversimplified for brevity.)
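The three roles above can be sketched as a simple index partition. This is an illustrative Python/NumPy version (Classification Learner handles the splitting internally in MATLAB); the function name and the 60/20/20 ratio are assumptions, not anything from the thread.

```python
import numpy as np

def three_way_split(n_samples, frac_val=0.2, frac_test=0.2, seed=0):
    """Partition sample indices into disjoint train/validation/test sets."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    n_test = int(n_samples * frac_test)
    n_val = int(n_samples * frac_val)
    test_idx = idx[:n_test]                # evaluate the final model here, once
    val_idx = idx[n_test:n_test + n_val]   # tune hyperparameters here
    train_idx = idx[n_test + n_val:]       # fit the model here
    return train_idx, val_idx, test_idx

train_idx, val_idx, test_idx = three_way_split(1000)
print(len(train_idx), len(val_idx), len(test_idx))  # 600 200 200
```

Because the three sets are disjoint, the test score is the only one that is never used to make a modeling decision, which is why it is usually the lowest of the three.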
So, his result is slightly more surprising than it would be under the two-stage setup you describe. (I expect he did not train on the test set, as you are describing.)
See my answer for my take on the whole thing, which is effectively the same as your broader point, which is that the difference is small and not surprising.
John D'Errico
on 29 Apr 2023
Ok. That makes sense. Regardless, the difference is tiny, and could easily have been the other way.