K-Fold validation with Hyperparameter Optimization doesn't yield a ClassificationPartitionedModel

Hello people, I have a problem doing cross-validation with the fitctree function when I enable hyperparameter optimization. Normally I use
CTModel = fitctree(trainData, trainLabels, 'KFold', 50, 'ClassNames', [0 1]);
which yields a ClassificationPartitionedModel. If I try to add the 'OptimizeHyperparameters','all' option, I can no longer use the 'KFold',50 pair; I have to use 'HyperparameterOptimizationOptions',struct('KFold',50) instead, otherwise I get the error "When optimizing parameters, validation arguments may only appear in the 'HyperparameterOptimizationOptions' argument". The problem is that with this option fitctree returns an ordinary ClassificationTree, not a ClassificationPartitionedModel. What should I do to get the same output as in the first call while also enabling hyperparameter optimization?
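For reference, a minimal sketch of the two calls described above, using the same trainData/trainLabels as in the original example:

% Plain 50-fold cross-validation: returns a ClassificationPartitionedModel
CTModel = fitctree(trainData, trainLabels, 'KFold', 50, 'ClassNames', [0 1]);

% With hyperparameter optimization, the folds must be passed inside
% 'HyperparameterOptimizationOptions'; this call returns a single ClassificationTree
CTModel = fitctree(trainData, trainLabels, 'ClassNames', [0 1], ...
    'OptimizeHyperparameters', 'all', ...
    'HyperparameterOptimizationOptions', struct('KFold', 50));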

Accepted Answer

Don Mathis on 12 Jul 2018
You will need to take the model you got and run crossval on it:
M = crossval(CTModel, 'KFold',50)
When you passed
'HyperparameterOptimizationOptions',struct('KFold',50),
you were telling fitctree to use the 50-fold cross-validation loss as the objective function of the optimization. After the optimization, fitctree fits the entire dataset using the best hyperparameters found and returns that single model.
To get a partitioned model using those hyperparameters (which are now saved inside the model) you need to do the 50-fold crossval again.
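A minimal sketch of the full workflow, assuming the same trainData/trainLabels from the question:

% 1) Optimize hyperparameters; fitctree scores each candidate by its 50-fold
%    CV loss, then refits the entire training set with the best values and
%    returns a single ClassificationTree.
CTModel = fitctree(trainData, trainLabels, 'ClassNames', [0 1], ...
    'OptimizeHyperparameters', 'all', ...
    'HyperparameterOptimizationOptions', struct('KFold', 50));

% 2) Cross-validate that tree (its tuned hyperparameters are stored inside it)
%    to obtain a ClassificationPartitionedModel, as in the non-optimized case.
M = crossval(CTModel, 'KFold', 50);
kfoldLoss(M)   % cross-validated misclassification rate over the 50 folds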
  5 Comments
Don Mathis on 12 Jul 2018
I see. That's perfect. Sounds like crossval will give you what you need then.


More Answers (1)

sanjeev kumar T M on 4 Sep 2018
Hello,
I am using SVR to develop a model for motor positioning, and I need help with using cross-validation to build a good model. I am comparing the following approaches:
1) I trained the model without partitioning the data and later used the same data set for validation; during optimization I also minimized the 5-fold and 10-fold cross-validation loss over the complete data set.
2) I first partitioned the data into training and test sets, then used the training set to build a model both with and without hyperparameter optimization, and finally used the test set to validate the model. In this case, 5-fold and 10-fold cross-validation without optimization gives a high error. When I use Bayesian optimization for hyperparameter tuning, retrain the model on the full training set with the tuned parameters, and test on the test set, the cross-validation loss for that data set is again large.
I repeated both procedures with 2 to 10 folds and each tuned model still shows high losses, although with 7-, 9- and 10-fold cross-validation the tuned hyperparameters do reduce the loss compared with the two cases above. Is the procedure I am following correct, and is it a suitable way to select the best model by comparing the two conditions above? Please, can anyone help me with this? Thank you.
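A minimal sketch of the second workflow described above, using fitrsvm with placeholder names X and y (an illustration of the described steps, not the poster's actual code):

% Hold out a test set, e.g. 20% of the observations
cv = cvpartition(size(X,1), 'HoldOut', 0.2);
Xtrain = X(training(cv),:);  ytrain = y(training(cv));
Xtest  = X(test(cv),:);      ytest  = y(test(cv));

% Tune the SVR hyperparameters with Bayesian optimization, scoring each
% candidate by its 10-fold cross-validation loss on the training set only
mdl = fitrsvm(Xtrain, ytrain, 'OptimizeHyperparameters', 'auto', ...
    'HyperparameterOptimizationOptions', struct('KFold', 10));

% Compare the cross-validated loss of the tuned model with its held-out test error
cvLoss   = kfoldLoss(crossval(mdl, 'KFold', 10));
testLoss = loss(mdl, Xtest, ytest);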
