When you build a high-quality, predictive classification model, it is important to select the right features (or predictors) and to tune hyperparameters (model parameters whose values are set before training rather than estimated from the data).
Feature selection and hyperparameter tuning can yield multiple candidate models. You can compare the k-fold misclassification rates, receiver operating characteristic (ROC) curves, or confusion matrices among the models. Alternatively, conduct a statistical test to detect whether one classification model significantly outperforms another.
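For example, the following sketch compares the 10-fold misclassification rates of a decision tree and a k-nearest neighbor model. The fisheriris example data set and the two model types are arbitrary choices for illustration.

```matlab
% Compare the k-fold misclassification rates of two classifiers.
load fisheriris
cvTree = fitctree(meas,species,'CrossVal','on');   % cross-validated decision tree
cvKNN  = fitcknn(meas,species,'CrossVal','on');    % cross-validated k-NN model
lossTree = kfoldLoss(cvTree)                       % 10-fold misclassification rate
lossKNN  = kfoldLoss(cvKNN)
% To test whether one model significantly outperforms the other on a
% held-out test set, you can use testcholdout or compareHoldout.
```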
To engineer new features before training a classification model, use gencfeatures.
To build and assess classification models interactively, use the Classification Learner app.
To automatically select a model with tuned hyperparameters, use fitcauto. This function tries a selection of classification model types with different hyperparameter values and returns a final model that is expected to perform well on new data. Use fitcauto when you are uncertain which classifier types best suit your data.
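A minimal sketch, using the ionosphere example data set as an arbitrary choice:

```matlab
% Automatically select and tune a classification model with fitcauto.
load ionosphere              % X: predictors, Y: class labels
rng('default')               % for reproducibility
Mdl = fitcauto(X,Y);         % tries several model types with different
                             % hyperparameter values and returns the best one;
                             % progress is displayed during the optimization
```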
To tune hyperparameters of a specific model, select the hyperparameter values and cross-validate the model using those values. For example, to tune an SVM model, choose a set of box constraints and kernel scales, and then cross-validate a model for each pair of values.
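A sketch of this grid approach follows; the grid values and the ionosphere example data set are arbitrary choices.

```matlab
% Cross-validate an SVM over a grid of box constraints and kernel
% scales, and keep the pair with the lowest misclassification rate.
load ionosphere
boxVals   = logspace(-2,2,5);
scaleVals = logspace(-2,2,5);
cvLoss = zeros(numel(boxVals),numel(scaleVals));
for i = 1:numel(boxVals)
    for j = 1:numel(scaleVals)
        cvMdl = fitcsvm(X,Y,'KernelFunction','rbf', ...
            'BoxConstraint',boxVals(i),'KernelScale',scaleVals(j), ...
            'CrossVal','on');               % 10-fold by default
        cvLoss(i,j) = kfoldLoss(cvMdl);     % misclassification rate
    end
end
[minLoss,idx]  = min(cvLoss(:));
[iBest,jBest]  = ind2sub(size(cvLoss),idx);
bestBox   = boxVals(iBest);
bestScale = scaleVals(jBest);
```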
Certain Statistics and Machine Learning Toolbox™ classification functions offer automatic hyperparameter tuning through Bayesian optimization, grid search, or random search. bayesopt, the main function for implementing Bayesian optimization, is flexible enough for many other applications as well. See Bayesian Optimization Workflow.
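A minimal bayesopt sketch, tuning an SVM's box constraint and kernel scale by minimizing 5-fold cross-validation loss. The variable ranges, the evaluation budget, and the ionosphere example data set are illustrative choices.

```matlab
% Tune SVM hyperparameters by calling bayesopt directly.
load ionosphere
vars = [optimizableVariable('box',[1e-2,1e2],'Transform','log')
        optimizableVariable('scale',[1e-2,1e2],'Transform','log')];
% Objective: 5-fold cross-validation loss at the given hyperparameters.
minfn = @(v) kfoldLoss(fitcsvm(X,Y,'KernelFunction','rbf', ...
    'BoxConstraint',v.box,'KernelScale',v.scale,'KFold',5));
results = bayesopt(minfn,vars,'MaxObjectiveEvaluations',30);
xbest = bestPoint(results);   % best hyperparameter values found
```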
|Classification Learner||Train models to classify data using supervised machine learning|
|fscchi2||Univariate feature ranking for classification using chi-square tests|
|fscmrmr||Rank features for classification using minimum redundancy maximum relevance (MRMR) algorithm|
|fscnca||Feature selection using neighborhood component analysis for classification|
|oobPermutedPredictorImportance||Predictor importance estimates by permutation of out-of-bag predictor observations for random forest of classification trees|
|predictorImportance||Estimates of predictor importance for classification tree|
|predictorImportance||Estimates of predictor importance for classification ensemble of decision trees|
|sequentialfs||Sequential feature selection using custom criterion|
|relieff||Rank importance of predictors using ReliefF or RReliefF algorithm|
|lime||Local interpretable model-agnostic explanations (LIME)|
|fit||Fit simple model of local interpretable model-agnostic explanations (LIME)|
|plot||Plot results of local interpretable model-agnostic explanations (LIME)|
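As a brief illustration of the filter-type ranking functions in the table above, here is a sketch using fscmrmr; the ionosphere example data set is an arbitrary choice.

```matlab
% Rank predictors for classification with the MRMR algorithm.
load ionosphere
[idx,scores] = fscmrmr(X,Y);   % idx: predictors in ranked order
top5 = idx(1:5)                % indices of the five highest-ranked predictors
bar(scores(idx))               % visualize importance scores in ranked order
xlabel('Predictor rank')
ylabel('MRMR score')
```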
Workflow for training, comparing, and improving classification models, including automated, manual, and parallel training.
Compare model accuracy scores, visualize results by plotting class predictions, and check performance per class in the Confusion Matrix.
Identify useful predictors using plots, manually select features to include, and transform features using PCA in Classification Learner.
Learn about feature selection algorithms and explore the functions available for feature selection.
This topic introduces sequential feature selection and provides an example that selects features sequentially using a custom criterion and the sequentialfs function.
Neighborhood component analysis (NCA) is a non-parametric method for selecting features with the goal of maximizing prediction accuracy of regression and classification algorithms.
This example shows how to tune the regularization parameter in fscnca using cross-validation.
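A sketch of one way to do this by hand; the grid of lambda values, the fold count, the solver, and the ionosphere example data set are illustrative choices.

```matlab
% Tune the fscnca regularization parameter lambda with cross-validation.
load ionosphere
n = size(X,1);
lambdas = linspace(0,2,10)/n;              % candidate values, scaled by n
cvp = cvpartition(Y,'KFold',5);
cvLoss = zeros(numel(lambdas),1);
for k = 1:numel(lambdas)
    foldLoss = zeros(cvp.NumTestSets,1);
    for f = 1:cvp.NumTestSets
        nca = fscnca(X(cvp.training(f),:),Y(cvp.training(f)), ...
            'Lambda',lambdas(k),'Solver','sgd');
        foldLoss(f) = loss(nca,X(cvp.test(f),:),Y(cvp.test(f)));
    end
    cvLoss(k) = mean(foldLoss);            % average loss over the folds
end
[~,kBest] = min(cvLoss);
bestLambda = lambdas(kBest);
```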
Make a more robust and simpler model by removing predictors without compromising the predictive power of the model.
This example shows how to select features for classifying high-dimensional data.
Use gencfeatures to engineer new features before training a classification model. Before making predictions on new data, apply the same feature transformations to the new data set.
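A minimal sketch of this workflow; the number of requested features, the use of fitclinear, and the ionosphere example data set are illustrative choices.

```matlab
% Engineer features with gencfeatures, then apply the same
% transformations to new data before predicting.
load ionosphere
Tbl = array2table(X);
Tbl.Y = Y;
[T,newTbl] = gencfeatures(Tbl,'Y',30);   % engineer 30 features for predicting Y
Mdl = fitclinear(newTbl,'Y');            % train on the engineered features
newData = Tbl(1:5,1:end-1);              % pretend these rows are new observations
newFeatures = transform(T,newData);      % reapply the stored transformations
labels = predict(Mdl,newFeatures);
```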
Use fitcauto to automatically try a selection of classification model types with different hyperparameter values, given training predictor and response data.
Perform Bayesian optimization using a fit function or by calling bayesopt directly.
Create variables for Bayesian optimization.
Create the objective function for Bayesian optimization.
Set different types of constraints for Bayesian optimization.
Minimize cross-validation loss using Bayesian optimization.
Minimize cross-validation loss using the OptimizeHyperparameters name-value argument in a fitting function.
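For example, a minimal sketch; the evaluation budget and the ionosphere example data set are arbitrary choices.

```matlab
% Run Bayesian optimization inside the fitting function itself.
load ionosphere
Mdl = fitcsvm(X,Y,'OptimizeHyperparameters','auto', ...
    'HyperparameterOptimizationOptions', ...
    struct('MaxObjectiveEvaluations',20,'ShowPlots',false));
```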
Visually monitor a Bayesian optimization.
Monitor a Bayesian optimization.
Understand the underlying algorithms for Bayesian optimization.
How Bayesian optimization works in parallel.
Speed up cross-validation using parallel computing.
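A minimal sketch, assuming a parallel pool is available (Parallel Computing Toolbox required); the ensemble model is an illustrative choice.

```matlab
% Run cross-validation folds in parallel.
load ionosphere
opts = statset('UseParallel',true);
cvMdl = fitcensemble(X,Y,'CrossVal','on','Options',opts);
cvLoss = kfoldLoss(cvMdl)   % k-fold misclassification rate
```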
Examine the performance of a classification algorithm on a specific test data set using a receiver operating characteristic curve.
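For example, a sketch using perfcurve with a classifier's predicted scores; the linear model and the ionosphere example data set are illustrative choices.

```matlab
% Plot an ROC curve for a binary classifier.
load ionosphere
mdl = fitclinear(X,Y);
[~,scores] = predict(mdl,X);     % score columns follow mdl.ClassNames order
[fpr,tpr,~,AUC] = perfcurve(Y,scores(:,2),mdl.ClassNames{2});
plot(fpr,tpr)
xlabel('False positive rate')
ylabel('True positive rate')
title(sprintf('ROC curve (AUC = %.3f)',AUC))
```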