Classification Learner app vs. training and testing a model programmatically: is there a hidden magical step in the Classification Learner app?
I am trying to find a good model for my dataset. The problem is that I want to do leave-one-person-out cross-validation, which is not available in the app. So I trained different models (e.g. tree, SVM, KNN, LDA) using functions like fitctree, fitcsvm, fitcknn, and fitcdiscr. Following the leave-one-person-out procedure, the best model reaches an average classification accuracy of about 70%. However, when I use the app to model the data with 10-fold cross-validation, it reports much better results, with accuracy, TPR, and TNR around 98%. It is really confusing why this is happening. I was wondering if there are some steps I am missing when I do the modeling programmatically, or if there is a way to do what the app does by writing scripts and customizing the cross-validation scheme to leave-one-person-out.
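For reference, here is a minimal sketch of the leave-one-person-out loop I mean, assuming a predictor matrix X, a categorical label vector y, and a subject ID vector subj (all placeholder names, one entry per observation):

% Leave-one-person-out cross-validation sketch.
% Assumes X is an nObs-by-nFeatures matrix, y is a categorical
% label vector, and subj holds one subject ID per observation.
subjects = unique(subj);
acc = zeros(numel(subjects), 1);
for i = 1:numel(subjects)
    testIdx  = (subj == subjects(i));  % hold out every sample of one person
    trainIdx = ~testIdx;
    % SVM shown here; fitctree/fitcknn/fitcdiscr drop in the same way
    mdl = fitcsvm(X(trainIdx, :), y(trainIdx), 'Standardize', true);
    pred = predict(mdl, X(testIdx, :));
    acc(i) = mean(pred == y(testIdx));
end
fprintf('Leave-one-person-out accuracy: %.1f%%\n', 100 * mean(acc));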
0 comments
Answers (1)
Stephan
on 16 Jul 2018
Edited: Stephan on 16 Jul 2018
Hi,
A possible way to do this is to work with the app and then, once you have a good result, export the generated code to MATLAB. This lets you see the "magic" steps the app performs and modify the code as needed.
I imagine this procedure will solve your problem.
Best regards
Stephan
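One likely explanation for the gap, offered here as an assumption rather than something confirmed in this thread: the app's 10-fold cross-validation splits individual observations at random, so samples from the same person can land in both the training and test folds, which inflates the reported accuracy compared with leave-one-person-out. The following sketch reproduces that observation-level 10-fold estimate programmatically for comparison, using the same placeholder variables X and y as in the question:

% Observation-level 10-fold CV, comparable to what the app reports.
cv = cvpartition(size(X, 1), 'KFold', 10);   % random split, ignores subjects
cvMdl = fitcsvm(X, y, 'Standardize', true, 'CVPartition', cv);
kfoldAcc = 1 - kfoldLoss(cvMdl);             % mean accuracy across folds
fprintf('10-fold (observation-level) accuracy: %.1f%%\n', 100 * kfoldAcc);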
6 comments