fitctree

Fit binary decision tree for multiclass classification

Description

Mdl = fitctree(Tbl,ResponseVarName) returns a fitted binary classification decision tree based on the input variables (also known as predictors, features, or attributes) contained in the table Tbl and output (response or labels) contained in Tbl.ResponseVarName. The returned binary tree splits branching nodes based on the values of a column of Tbl.

Mdl = fitctree(Tbl,formula) returns a fitted binary classification decision tree based on the input variables contained in the table Tbl. formula is an explanatory model of the response and a subset of predictor variables in Tbl used to fit Mdl.

Mdl = fitctree(Tbl,Y) returns a fitted binary classification decision tree based on the input variables contained in the table Tbl and output in vector Y.

Mdl = fitctree(X,Y) returns a fitted binary classification decision tree based on the input variables contained in matrix X and output Y. The returned binary tree splits branching nodes based on the values of a column of X.

Mdl = fitctree(___,Name,Value) fits a tree with additional options specified by one or more name-value pair arguments, using any of the previous syntaxes. For example, you can specify the algorithm used to find the best split on a categorical predictor, grow a cross-validated tree, or hold out a fraction of the input data for validation.

[Mdl,AggregateOptimizationResults] = fitctree(___) also returns AggregateOptimizationResults, which contains hyperparameter optimization results when you specify the OptimizeHyperparameters and HyperparameterOptimizationOptions name-value arguments. You must also specify the ConstraintType and ConstraintBounds options of HyperparameterOptimizationOptions. You can use this syntax to optimize on compact model size instead of cross-validation loss, and to perform a set of multiple optimization problems that have the same options but different constraint bounds.

Note

For a list of supported syntaxes when the input variables are tall arrays, see Tall Arrays.

Examples

Grow a classification tree using the ionosphere data set.

load ionosphere
tc = fitctree(X,Y)
tc = 
  ClassificationTree
             ResponseName: 'Y'
    CategoricalPredictors: []
               ClassNames: {'b'  'g'}
           ScoreTransform: 'none'
          NumObservations: 351


  Properties, Methods

You can control the depth of the trees using the MaxNumSplits, MinLeafSize, or MinParentSize name-value pair arguments. fitctree grows deep decision trees by default. You can grow shallower trees to reduce model complexity or computation time.

Load the ionosphere data set.

load ionosphere

The default values of the tree depth controllers for growing classification trees are:

  • n - 1 for MaxNumSplits. n is the training sample size.

  • 1 for MinLeafSize.

  • 10 for MinParentSize.

These default values tend to grow deep trees for large training sample sizes.

Train a classification tree using the default values for tree depth control. Cross-validate the model by using 10-fold cross-validation.

rng(1); % For reproducibility
MdlDefault = fitctree(X,Y,'CrossVal','on');

Draw a histogram of the number of imposed splits on the trees. Also, view one of the trees.

numBranches = @(x)sum(x.IsBranch);
mdlDefaultNumSplits = cellfun(numBranches, MdlDefault.Trained);

figure;
histogram(mdlDefaultNumSplits)

Figure: histogram of the number of splits imposed on the cross-validated trees.

view(MdlDefault.Trained{1},'Mode','graph')

Figure: graph view of the first trained classification tree.

The average number of splits is around 15.

Suppose that you want a classification tree that is not as complex (deep) as the ones trained using the default number of splits. Train another classification tree, but set the maximum number of splits at 7, which is about half the mean number of splits from the default classification tree. Cross-validate the model by using 10-fold cross-validation.

Mdl7 = fitctree(X,Y,'MaxNumSplits',7,'CrossVal','on');
view(Mdl7.Trained{1},'Mode','graph')

Figure: graph view of the first cross-validated tree, grown with at most seven splits.

Compare the cross-validation classification errors of the models.

classErrorDefault = kfoldLoss(MdlDefault)
classErrorDefault = 
0.1168
classError7 = kfoldLoss(Mdl7)
classError7 = 
0.1311

Mdl7 is much less complex and performs only slightly worse than MdlDefault.

Automatically optimize hyperparameters of a classification tree model by using fitctree.

Load the fisheriris data.

load fisheriris
X = meas;
Y = species;

Find hyperparameters that minimize the 5-fold cross-validation loss by using automatic hyperparameter optimization. For reproducibility, set the random seed and use the "expected-improvement-plus" acquisition function.

rng(0,"twister")
hpoOptions = hyperparameterOptimizationOptions(AcquisitionFunctionName="expected-improvement-plus");
Mdl = fitctree(X,Y,OptimizeHyperparameters="auto", ...
    HyperparameterOptimizationOptions=hpoOptions)
|======================================================================================|
| Iter | Eval   | Objective   | Objective   | BestSoFar   | BestSoFar   |  MinLeafSize |
|      | result |             | runtime     | (observed)  | (estim.)    |              |
|======================================================================================|
|    1 | Best   |    0.066667 |     0.98175 |    0.066667 |    0.066667 |           31 |
|    2 | Accept |    0.066667 |     0.44437 |    0.066667 |    0.066667 |           12 |
|    3 | Best   |        0.06 |     0.12311 |        0.06 |    0.060001 |            2 |
|    4 | Accept |     0.66667 |     0.11846 |        0.06 |     0.18681 |           73 |
|    5 | Accept |    0.066667 |     0.13978 |        0.06 |    0.060018 |           28 |
|    6 | Accept |        0.06 |    0.069515 |        0.06 |     0.06001 |            4 |
|    7 | Accept |    0.066667 |    0.061311 |        0.06 |    0.055628 |            7 |
|    8 | Accept |        0.06 |    0.055036 |        0.06 |    0.054969 |            1 |
|    9 | Accept |    0.066667 |    0.046816 |        0.06 |    0.060608 |           20 |
|   10 | Accept |        0.06 |    0.046229 |        0.06 |    0.061024 |            3 |
|   11 | Accept |    0.066667 |    0.045281 |        0.06 |        0.06 |           25 |
|   12 | Accept |    0.066667 |    0.045756 |        0.06 |    0.059937 |            5 |
|   13 | Accept |    0.066667 |    0.044168 |        0.06 |    0.059918 |            9 |
|   14 | Accept |        0.06 |    0.048586 |        0.06 |    0.059961 |            3 |
|   15 | Accept |        0.06 |    0.062679 |        0.06 |    0.059963 |            1 |
|   16 | Accept |        0.06 |    0.069308 |        0.06 |    0.059965 |            2 |
|   17 | Accept |    0.066667 |    0.075559 |        0.06 |    0.059959 |           15 |
|   18 | Accept |        0.06 |    0.050333 |        0.06 |     0.05996 |            4 |
|   19 | Accept |        0.06 |    0.058298 |        0.06 |    0.059974 |            3 |
|   20 | Accept |        0.06 |    0.053002 |        0.06 |    0.059976 |            1 |
|======================================================================================|
| Iter | Eval   | Objective   | Objective   | BestSoFar   | BestSoFar   |  MinLeafSize |
|      | result |             | runtime     | (observed)  | (estim.)    |              |
|======================================================================================|
|   21 | Accept |        0.06 |    0.050273 |        0.06 |    0.059977 |            2 |
|   22 | Accept |        0.06 |    0.056171 |        0.06 |    0.059977 |            4 |
|   23 | Accept |        0.06 |    0.077502 |        0.06 |    0.059984 |            3 |
|   24 | Accept |        0.06 |    0.056772 |        0.06 |    0.059984 |            1 |
|   25 | Accept |        0.06 |    0.072056 |        0.06 |    0.059985 |            2 |
|   26 | Accept |        0.06 |    0.082843 |        0.06 |    0.059985 |            4 |
|   27 | Accept |     0.33333 |    0.052657 |        0.06 |    0.059995 |           49 |
|   28 | Accept |    0.066667 |      0.0569 |        0.06 |        0.06 |           38 |
|   29 | Accept |    0.066667 |    0.051762 |        0.06 |        0.06 |           35 |
|   30 | Accept |    0.066667 |    0.042793 |        0.06 |        0.06 |            6 |

__________________________________________________________
Optimization completed.
MaxObjectiveEvaluations of 30 reached.
Total function evaluations: 30
Total elapsed time: 24.0722 seconds
Total objective function evaluation time: 3.2391

Best observed feasible point:
    MinLeafSize
    ___________

         2     

Observed objective function value = 0.06
Estimated objective function value = 0.06
Function evaluation time = 0.12311

Best estimated feasible point (according to models):
    MinLeafSize
    ___________

         3     

Estimated objective function value = 0.06
Estimated function evaluation time = 0.071904

Figure: minimum objective versus number of function evaluations (observed and estimated).

Figure: objective function model, showing the estimated objective function value over MinLeafSize.

Mdl = 
  ClassificationTree
                         ResponseName: 'Y'
                CategoricalPredictors: []
                           ClassNames: {'setosa'  'versicolor'  'virginica'}
                       ScoreTransform: 'none'
                      NumObservations: 150
    HyperparameterOptimizationResults: [1×1 classreg.learning.paramoptim.SupervisedLearningBayesianOptimization]


  Properties, Methods

The trained classifier Mdl corresponds to the best estimated feasible point and uses the same MinLeafSize hyperparameter value.

Find the hyperparameter value used to train Mdl by using the bestPoint function. By default, bestPoint uses the same best point criterion used by fitctree during the hyperparameter optimization ("min-visited-upper-confidence-interval"). In general, fit functions determine the best hyperparameter values based on the "min-visited-upper-confidence-interval" criterion (instead of the "min-observed" criterion) to avoid overfitting to noise in the data set.

bestEstimatedPoint = bestPoint(Mdl.HyperparameterOptimizationResults)
bestEstimatedPoint=table
    MinLeafSize
    ___________

         3     

Verify that the result matches the property of Mdl.

Mdl.ModelParameters.MinLeaf
ans = 
3

Load the census1994 data set. Consider a model that predicts a person's salary category given their age, working class, education level, marital status, race, sex, capital gain and loss, and number of working hours per week.

load census1994
X = adultdata(:,{'age','workClass','education_num','marital_status','race',...
    'sex','capital_gain','capital_loss','hours_per_week','salary'});

Display the number of categories represented in the categorical variables using summary.

summary(X)
X: 32561×10 table

Variables:

    age: double
    workClass: categorical (8 categories)
    education_num: double
    marital_status: categorical (7 categories)
    race: categorical (5 categories)
    sex: categorical (2 categories)
    capital_gain: double
    capital_loss: double
    hours_per_week: double
    salary: categorical (2 categories)

Statistics for applicable variables:

                      NumMissing     Min      Median       Max            Mean              Std      

    age                     0         17        37           90           38.5816           13.6404  
    workClass            1836                                                                        
    education_num           0          1        10           16           10.0807            2.5727  
    marital_status          0                                                                        
    race                    0                                                                        
    sex                     0                                                                        
    capital_gain            0          0         0        99999        1.0776e+03        7.3853e+03  
    capital_loss            0          0         0         4356           87.3038          402.9602  
    hours_per_week          0          1        40           99           40.4375           12.3474  
    salary                  0                                                                        

Because the categorical variables have few categories compared to the number of levels in the continuous variables, the standard CART predictor-splitting algorithm prefers splitting on a continuous predictor over the categorical variables.

Train a classification tree using the entire data set. To grow unbiased trees, specify the curvature test for selecting split predictors. Because the data contain missing observations, specify surrogate splits.

Mdl = fitctree(X,'salary','PredictorSelection','curvature',...
    'Surrogate','on');

Estimate predictor importance values by summing changes in the risk due to splits on every predictor and dividing the sum by the number of branch nodes. Compare the estimates using a bar graph.

imp = predictorImportance(Mdl);

figure;
bar(imp);
title('Predictor Importance Estimates');
ylabel('Estimates');
xlabel('Predictors');
h = gca;
h.XTickLabel = Mdl.PredictorNames;
h.XTickLabelRotation = 45;
h.TickLabelInterpreter = 'none';

Figure: bar graph of the predictor importance estimates.

In this case, capital_gain is the most important predictor, followed by education_num.

Input Arguments

Sample data used to train the model, specified as a table. Each row of Tbl corresponds to one observation, and each column corresponds to one predictor variable. Optionally, Tbl can contain one additional column for the response variable. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

  • If Tbl contains the response variable, and you want to use all remaining variables in Tbl as predictors, then specify the response variable by using ResponseVarName.

  • If Tbl contains the response variable, and you want to use only a subset of the remaining variables in Tbl as predictors, then specify a formula by using formula.

  • If Tbl does not contain the response variable, then specify a response variable by using Y. The length of the response variable and the number of rows in Tbl must be equal.

Response variable name, specified as the name of a variable in Tbl.

You must specify ResponseVarName as a character vector or string scalar. For example, if the response variable Y is stored as Tbl.Y, then specify it as "Y". Otherwise, the software treats all columns of Tbl, including Y, as predictors when training the model.

The response variable must be a categorical, character, or string array; a logical or numeric vector; or a cell array of character vectors. If Y is a character array, then each element of the response variable must correspond to one row of the array.

A good practice is to specify the order of the classes by using the ClassNames name-value argument.

Data Types: char | string

Explanatory model of the response variable and a subset of the predictor variables, specified as a character vector or string scalar in the form "Y~x1+x2+x3". In this form, Y represents the response variable, and x1, x2, and x3 represent the predictor variables.

To specify a subset of variables in Tbl as predictors for training the model, use a formula. If you specify a formula, then the software does not use any variables in Tbl that do not appear in formula.

The variable names in the formula must be both variable names in Tbl (Tbl.Properties.VariableNames) and valid MATLAB® identifiers. You can verify the variable names in Tbl by using the isvarname function. If the variable names are not valid, then you can convert them by using the matlab.lang.makeValidName function.

Data Types: char | string

Class labels, specified as a numeric vector, categorical vector, logical vector, character array, string array, or cell array of character vectors. Each row of Y represents the classification of the corresponding row of X.

When fitting the tree, fitctree considers NaN, '' (empty character vector), "" (empty string), <missing>, and <undefined> values in Y to be missing values. fitctree does not use observations with missing values for Y in the fit.

For numeric Y, consider fitting a regression tree using fitrtree instead.

Data Types: single | double | categorical | logical | char | string | cell

Predictor data, specified as a numeric matrix. Each row of X corresponds to one observation, and each column corresponds to one predictor variable.

fitctree considers NaN values in X as missing values. fitctree does not use observations with all missing values for X in the fit. fitctree uses observations with some missing values for X to find splits on variables for which these observations have valid values.

Data Types: single | double

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: 'CrossVal','on','MinLeafSize',40 specifies a cross-validated classification tree with a minimum of 40 observations per leaf.

Note

You cannot use any cross-validation name-value argument together with the OptimizeHyperparameters name-value argument. You can modify the cross-validation for OptimizeHyperparameters only by using the HyperparameterOptimizationOptions name-value argument.

Model Parameters

Algorithm to find the best split on a categorical predictor with C categories for data with K ≥ 3 classes, specified as the comma-separated pair consisting of 'AlgorithmForCategorical' and one of the following values.

  • 'Exact' — Consider all 2^(C–1) – 1 combinations.

  • 'PullLeft' — Start with all C categories on the right branch. Consider moving each category to the left branch as it achieves the minimum impurity for the K classes among the remaining categories. From this sequence, choose the split that has the lowest impurity.

  • 'PCA' — Compute a score for each category using the inner product between the first principal component of a weighted covariance matrix (of the centered class probability matrix) and the vector of class probabilities for that category. Sort the scores in ascending order, and consider all C – 1 splits.

  • 'OVAbyClass' — Start with all C categories on the right branch. For each class, order the categories based on their probability for that class. For the first class, consider moving each category to the left branch in order, recording the impurity criterion at each move. Repeat for the remaining classes. From this sequence, choose the split that has the minimum impurity.

fitctree automatically selects the optimal subset of algorithms for each split using the known number of classes and levels of a categorical predictor. For K = 2 classes, fitctree always performs the exact search. To specify a particular algorithm, use the 'AlgorithmForCategorical' name-value pair argument.

For more details, see Splitting Categorical Predictors in Classification Trees.

Example: 'AlgorithmForCategorical','PCA'

Categorical predictors list, specified as one of the values in this table.

  • Vector of positive integers — Each entry in the vector is an index value indicating that the corresponding predictor is categorical. The index values are between 1 and p, where p is the number of predictors used to train the model. If fitctree uses a subset of input variables as predictors, then the function indexes the predictors using only the subset. The CategoricalPredictors values do not count any response variable, observation weights variable, or other variable that the function does not use.

  • Logical vector — A true entry means that the corresponding predictor is categorical. The length of the vector is p.

  • Character matrix — Each row of the matrix is the name of a predictor variable. The names must match the entries in PredictorNames. Pad the names with extra blanks so each row of the character matrix has the same length.

  • String array or cell array of character vectors — Each element in the array is the name of a predictor variable. The names must match the entries in PredictorNames.

  • "all" — All predictors are categorical.

By default, if the predictor data is a table (Tbl), fitctree assumes that a variable is categorical if it is a logical vector, unordered categorical vector, character array, string array, or cell array of character vectors. If the predictor data is a matrix (X), fitctree assumes that all predictors are continuous. To identify any other predictors as categorical predictors, specify them by using the CategoricalPredictors name-value argument.

Example: CategoricalPredictors="all"

Data Types: single | double | logical | char | string | cell

Names of classes to use for training, specified as a categorical, character, or string array; a logical or numeric vector; or a cell array of character vectors. ClassNames must have the same data type as the response variable in Tbl or Y.

If ClassNames is a character array, then each element must correspond to one row of the array.

Use ClassNames to:

  • Specify the order of the classes during training.

  • Specify the order of any input or output argument dimension that corresponds to the class order. For example, use ClassNames to specify the order of the dimensions of Cost or the column order of classification scores returned by predict.

  • Select a subset of classes for training. For example, suppose that the set of all distinct class names in Y is ["a","b","c"]. To train the model using observations from classes "a" and "c" only, specify ClassNames=["a","c"].

The default value for ClassNames is the set of all distinct class names in the response variable in Tbl or Y.

Example: ClassNames=["b","g"]

Data Types: categorical | char | string | logical | single | double | cell
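For illustration, here is a minimal sketch that trains on only two of the three classes in the fisheriris data set; fitctree ignores observations from the omitted class.

load fisheriris
Mdl = fitctree(meas,species,'ClassNames',{'setosa','virginica'});
Mdl.ClassNames   % lists only the two specified classes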

Cost of misclassification of a point, specified as the comma-separated pair consisting of 'Cost' and one of the following:

  • Square matrix, where Cost(i,j) is the cost of classifying a point into class j if its true class is i (i.e., the rows correspond to the true class and the columns correspond to the predicted class). To specify the class order for the corresponding rows and columns of Cost, also specify the ClassNames name-value pair argument.

  • Structure S having two fields: S.ClassNames containing the group names as a variable of the same data type as Y, and S.ClassificationCosts containing the cost matrix.

The default is Cost(i,j)=1 if i~=j, and Cost(i,j)=0 if i=j.

Data Types: single | double | struct
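For illustration, here is a minimal sketch on the ionosphere data set that makes classifying a true 'b' observation as 'g' five times as costly as the reverse; specifying ClassNames fixes the row and column order of the cost matrix.

load ionosphere
C = [0 5; 1 0];   % rows: true class, columns: predicted class
Mdl = fitctree(X,Y,'Cost',C,'ClassNames',{'b','g'});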

Maximum tree depth, specified as the comma-separated pair consisting of 'MaxDepth' and a positive integer. Specify a value for this argument to return a tree that has fewer levels and requires fewer passes through the tall array to compute. Generally, the algorithm of fitctree takes one pass through the data and an additional pass for each tree level. By default, the function does not set a maximum tree depth.

Note

This option applies only when you use fitctree on tall arrays. See Tall Arrays for more information.

Maximum category levels, specified as the comma-separated pair consisting of 'MaxNumCategories' and a nonnegative scalar value. fitctree splits a categorical predictor using the exact search algorithm if the predictor has at most MaxNumCategories levels in the split node. Otherwise, fitctree finds the best categorical split using one of the inexact algorithms.

Passing a small value can lead to a loss of accuracy, and passing a large value can increase computation time and memory usage.

Example: 'MaxNumCategories',8

Maximum number of decision splits (or branch nodes), specified as the comma-separated pair consisting of 'MaxNumSplits' and a nonnegative scalar. fitctree splits MaxNumSplits or fewer branch nodes. For more details on splitting behavior, see Algorithms.

Example: 'MaxNumSplits',5

Data Types: single | double

Leaf merge flag, specified as the comma-separated pair consisting of 'MergeLeaves' and 'on' or 'off'.

If MergeLeaves is 'on', then fitctree:

  • Merges leaves that originate from the same parent node, and that yield a sum of risk values greater than or equal to the risk associated with the parent node

  • Estimates the optimal sequence of pruned subtrees, but does not prune the classification tree

Otherwise, fitctree does not merge leaves.

Example: 'MergeLeaves','off'

Minimum number of leaf node observations, specified as the comma-separated pair consisting of 'MinLeafSize' and a positive integer value. Each leaf has at least MinLeafSize observations. If you supply both MinParentSize and MinLeafSize, fitctree uses the setting that gives larger leaves: MinParentSize = max(MinParentSize,2*MinLeafSize).

Example: 'MinLeafSize',3

Data Types: single | double

Minimum number of branch node observations, specified as the comma-separated pair consisting of 'MinParentSize' and a positive integer value. Each branch node in the tree has at least MinParentSize observations. If you supply both MinParentSize and MinLeafSize, fitctree uses the setting that gives larger leaves: MinParentSize = max(MinParentSize,2*MinLeafSize).

Example: 'MinParentSize',8

Data Types: single | double

Number of bins for numeric predictors, specified as a positive integer scalar or [] (empty).

  • If the NumBins value is empty (default), then fitctree does not bin any predictors.

  • If you specify the NumBins value as a positive integer scalar (numBins), then fitctree bins every numeric predictor into at most numBins equiprobable bins, and then grows trees on the bin indices instead of the original data.

    • The number of bins can be less than numBins if a predictor has fewer than numBins unique values.

    • fitctree does not bin categorical predictors.

When you use a large training data set, this binning option speeds up training but might decrease accuracy. You can try NumBins=50 first, and then change the value depending on the accuracy and training speed.

A trained model stores the bin edges in the BinEdges property.

Example: NumBins=50

Data Types: single | double
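For illustration, here is a minimal sketch that bins the numeric predictors during training and then inspects the stored bin edges.

load ionosphere
Mdl = fitctree(X,Y,'NumBins',50);
edges = Mdl.BinEdges;   % one cell per predictor; empty for unbinned predictors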

Number of predictors to select at random for each split, specified as a positive integer value. Alternatively, you can specify "all" to use all available predictors.

If the training data includes many predictors and you want to analyze predictor importance, then specify NumVariablesToSample as "all". Otherwise, the software might not select some predictors, underestimating their importance.

To reproduce the random selections, you must set the seed of the random number generator by using rng and specify Reproducible=true.

Example: 'NumVariablesToSample',3

Data Types: char | string | single | double
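For illustration, here is a minimal sketch that samples three predictors at random for each split and makes the selection reproducible across runs.

load fisheriris
rng(1)   % seed the random number generator
Mdl = fitctree(meas,species,'NumVariablesToSample',3, ...
    'Reproducible',true);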

Predictor variable names, specified as a string array of unique names or cell array of unique character vectors. The functionality of PredictorNames depends on the way you supply the training data.

  • If you supply X and Y, then you can use PredictorNames to assign names to the predictor variables in X.

    • The order of the names in PredictorNames must correspond to the column order of X. That is, PredictorNames{1} is the name of X(:,1), PredictorNames{2} is the name of X(:,2), and so on. Also, size(X,2) and numel(PredictorNames) must be equal.

    • By default, PredictorNames is {'x1','x2',...}.

  • If you supply Tbl, then you can use PredictorNames to choose which predictor variables to use in training. That is, fitctree uses only the predictor variables in PredictorNames and the response variable during training.

    • PredictorNames must be a subset of Tbl.Properties.VariableNames and cannot include the name of the response variable.

    • By default, PredictorNames contains the names of all predictor variables.

    • A good practice is to specify the predictors for training using either PredictorNames or formula, but not both.

Example: PredictorNames=["SepalLength","SepalWidth","PetalLength","PetalWidth"]

Data Types: string | cell
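For illustration, here is a minimal sketch that assigns names, chosen here for the sketch, to the columns of a numeric predictor matrix.

load fisheriris
Mdl = fitctree(meas,species,'PredictorNames', ...
    {'SepalLength','SepalWidth','PetalLength','PetalWidth'});
Mdl.PredictorNames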

Algorithm used to select the best split predictor at each node, specified as the comma-separated pair consisting of 'PredictorSelection' and a value in this table.

  • 'allsplits' — Standard CART: Selects the split predictor that maximizes the split-criterion gain over all possible splits of all predictors [1].

  • 'curvature' — Curvature test: Selects the split predictor that minimizes the p-value of chi-square tests of independence between each predictor and the response [4]. Training speed is similar to standard CART.

  • 'interaction-curvature' — Interaction test: Chooses the split predictor that minimizes the p-value of chi-square tests of independence between each predictor and the response, and that minimizes the p-value of a chi-square test of independence between each pair of predictors and response [3]. Training speed can be slower than standard CART.

For 'curvature' and 'interaction-curvature', if all tests yield p-values greater than 0.05, then fitctree stops splitting nodes.

Tip

  • Standard CART tends to select split predictors containing many distinct values, e.g., continuous variables, over those containing few distinct values, e.g., categorical variables [4]. Consider specifying the curvature or interaction test if any of the following are true:

    • If there are predictors that have relatively fewer distinct values than other predictors, for example, if the predictor data set is heterogeneous.

    • If an analysis of predictor importance is your goal. For more on predictor importance estimation, see predictorImportance and Introduction to Feature Selection.

  • Trees grown using standard CART are not sensitive to predictor variable interactions. Also, such trees are less likely than trees grown with the interaction test to identify important variables in the presence of many irrelevant predictors. Therefore, to account for predictor interactions and identify important variables in the presence of many irrelevant variables, specify the interaction test [3].

  • Prediction speed is unaffected by the value of 'PredictorSelection'.

For details on how fitctree selects split predictors, see Node Splitting Rules and Choose Split Predictor Selection Technique.

Example: 'PredictorSelection','curvature'

Prior probabilities for each class, specified as one of the following:

  • Character vector or string scalar.

    • 'empirical' determines class probabilities from class frequencies in the response variable in Y or Tbl. If you pass observation weights, fitctree uses the weights to compute the class probabilities.

    • 'uniform' sets all class probabilities to be equal.

  • Vector (one scalar value for each class). To specify the class order for the corresponding elements of 'Prior', set the 'ClassNames' name-value argument.

  • Structure S with two fields.

    • S.ClassNames contains the class names as a variable of the same type as the response variable in Y or Tbl.

    • S.ClassProbs contains a vector of corresponding probabilities.

fitctree normalizes the weights in each class ('Weights') to add up to the value of the prior probability of the respective class.

Example: 'Prior','uniform'

Data Types: char | string | single | double | struct
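For illustration, here is a minimal sketch that supplies prior probabilities through a structure; the class names use the same data type as the response (a cell array of character vectors here), and the probabilities are placeholders.

load fisheriris
S.ClassNames = {'setosa','versicolor','virginica'};
S.ClassProbs = [0.5 0.25 0.25];   % placeholder priors for this sketch
Mdl = fitctree(meas,species,'Prior',S);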

Flag to estimate the optimal sequence of pruned subtrees, specified as the comma-separated pair consisting of 'Prune' and 'on' or 'off'.

If Prune is 'on', then fitctree grows the classification tree without pruning it, but estimates the optimal sequence of pruned subtrees. If Prune is 'off' and MergeLeaves is also 'off', then fitctree grows the classification tree without estimating the optimal sequence of pruned subtrees.

To prune a trained ClassificationTree model, pass it to prune.

Example: 'Prune','off'

Pruning criterion, specified as the comma-separated pair consisting of 'PruneCriterion' and 'error' or 'impurity'.

If you specify 'impurity', then fitctree uses the impurity measure specified by the 'SplitCriterion' name-value pair argument.

For details, see Impurity and Node Error.

Example: 'PruneCriterion','impurity'

Flag to enforce reproducibility over repeated runs of training a model, specified as either false or true.

If NumVariablesToSample is not "all", then the software selects predictors at random for each split. To reproduce the random selections, you must specify Reproducible=true and set the seed of the random number generator by using rng. Note that setting Reproducible to true can slow down training.

Example: Reproducible=true

Data Types: logical

Response variable name, specified as the comma-separated pair consisting of 'ResponseName' and a character vector or string scalar representing the name of the response variable.

This name-value pair is not valid when using the ResponseVarName or formula input arguments.

Example: 'ResponseName','IrisType'

Data Types: char | string

Score transformation, specified as a character vector, string scalar, or function handle.

This table summarizes the available character vectors and string scalars.

  • "doublelogit" — 1/(1 + e^(–2x))

  • "invlogit" — log(x / (1 – x))

  • "ismax" — Sets the score for the class with the largest score to 1, and sets the scores for all other classes to 0

  • "logit" — 1/(1 + e^(–x))

  • "none" or "identity" — x (no transformation)

  • "sign" — –1 for x < 0, 0 for x = 0, 1 for x > 0

  • "symmetric" — 2x – 1

  • "symmetricismax" — Sets the score for the class with the largest score to 1, and sets the scores for all other classes to –1

  • "symmetriclogit" — 2/(1 + e^(–x)) – 1

For a MATLAB function or a function you define, use its function handle for the score transform. The function handle must accept a matrix (the original scores) and return a matrix of the same size (the transformed scores).

Example: ScoreTransform="logit"

Data Types: char | string | function_handle
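For illustration, here is a minimal sketch that supplies a custom transform as a function handle; the row-wise softmax is an arbitrary choice for this sketch, and the handle maps the score matrix to a matrix of the same size.

load fisheriris
softmaxFcn = @(s) exp(s)./sum(exp(s),2);   % row-wise softmax
Mdl = fitctree(meas,species,'ScoreTransform',softmaxFcn);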

Split criterion, specified as the comma-separated pair consisting of 'SplitCriterion' and 'gdi' (Gini's diversity index), 'twoing' for the twoing rule, or 'deviance' for maximum deviance reduction (also known as cross entropy).

For details, see Impurity and Node Error.

Example: 'SplitCriterion','deviance'

Surrogate decision splits flag, specified as the comma-separated pair consisting of 'Surrogate' and 'on', 'off', 'all', or a positive integer value.

  • When set to 'on', fitctree finds at most 10 surrogate splits at each branch node.

  • When set to 'all', fitctree finds all surrogate splits at each branch node. The 'all' setting can use considerable time and memory.

  • When set to a positive integer value, fitctree finds at most the specified number of surrogate splits at each branch node.

Use surrogate splits to improve the accuracy of predictions for data with missing values. The setting also lets you compute measures of predictive association between predictors. For more details, see Node Splitting Rules.

Example: 'Surrogate','on'

Data Types: single | double | char | string

Observation weights, specified as the comma-separated pair consisting of 'Weights' and a vector of scalar values or the name of a variable in Tbl. The software weights the observations in each row of X or Tbl with the corresponding value in Weights. The size of Weights must equal the number of rows in X or Tbl.

If you specify the input data as a table Tbl, then Weights can be the name of a variable in Tbl that contains a numeric vector. In this case, you must specify Weights as a character vector or string scalar. For example, if the weights vector W is stored as Tbl.W, then specify it as 'W'. Otherwise, the software treats all columns of Tbl, including W, as predictors when training the model.

fitctree normalizes the weights in each class to add up to the value of the prior probability of the respective class. Inf weights are not supported.

Data Types: single | double | char | string
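For illustration, here is a minimal sketch that stores observation weights as a table variable and passes the variable name, so the weights column is not treated as a predictor; the uniform weights and short variable names are placeholders.

load fisheriris
Tbl = array2table(meas,'VariableNames',{'SL','SW','PL','PW'});
Tbl.Species = species;
Tbl.W = ones(height(Tbl),1);   % placeholder weights
Mdl = fitctree(Tbl,'Species','Weights','W');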

Cross-Validation Options

Flag to grow a cross-validated decision tree, specified as the comma-separated pair consisting of 'CrossVal' and 'on' or 'off'.

If 'on', fitctree grows a cross-validated decision tree with 10 folds. You can override this cross-validation setting using one of the 'KFold', 'Holdout', 'Leaveout', or 'CVPartition' name-value pair arguments. You can only use one of these four arguments at a time when creating a cross-validated tree.

Alternatively, cross-validate Mdl later using the crossval method.

Example: 'CrossVal','on'

Partition to use in a cross-validated tree, specified as the comma-separated pair consisting of 'CVPartition' and an object created using cvpartition.

If you use 'CVPartition', you cannot use any of the 'KFold', 'Holdout', or 'Leaveout' name-value pair arguments.
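For illustration, here is a minimal sketch that creates a stratified 5-fold partition and reuses it when growing the cross-validated tree.

load ionosphere
cvp = cvpartition(Y,'KFold',5);   % stratified by the class labels in Y
CVMdl = fitctree(X,Y,'CVPartition',cvp);
err = kfoldLoss(CVMdl);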

Fraction of data used for holdout validation, specified as the comma-separated pair consisting of 'Holdout' and a scalar value in the range (0,1). Holdout validation tests the specified fraction of the data, and uses the rest of the data for training.

If you use 'Holdout', you cannot use any of the 'CVPartition', 'KFold', or 'Leaveout' name-value pair arguments.

Example: 'Holdout',0.1

Data Types: single | double

Number of folds to use in a cross-validated classifier, specified as the comma-separated pair consisting of 'KFold' and a positive integer value greater than 1. If you specify, e.g., 'KFold',k, then the software:

  1. Randomly partitions the data into k sets

  2. For each set, reserves the set as validation data, and trains the model using the other k – 1 sets

  3. Stores the k compact, trained models in the cells of a k-by-1 cell vector in the Trained property of the cross-validated model.

To create a cross-validated model, you can use one of these four options only: CVPartition, Holdout, KFold, or Leaveout.

Example: 'KFold',8

Data Types: single | double

Leave-one-out cross-validation flag, specified as the comma-separated pair consisting of 'Leaveout' and 'on' or 'off'. Specify 'on' to use leave-one-out cross-validation.

If you use 'Leaveout', you cannot use any of the 'CVPartition', 'Holdout', or 'KFold' name-value pair arguments.

Example: 'Leaveout','on'

Hyperparameter Optimization Options

Parameters to optimize, specified as the comma-separated pair consisting of 'OptimizeHyperparameters' and one of the following:

  • 'none' — Do not optimize.

  • 'auto' — Use {'MinLeafSize'}

  • 'all' — Optimize all eligible parameters.

  • String array or cell array of eligible parameter names

  • Vector of optimizableVariable objects, typically the output of hyperparameters

The optimization attempts to minimize the cross-validation loss (error) for fitctree by varying the parameters. To control the cross-validation type and other aspects of the optimization, use the HyperparameterOptimizationOptions name-value argument. When you use HyperparameterOptimizationOptions, you can use the (compact) model size instead of the cross-validation loss as the optimization objective by setting the ConstraintType and ConstraintBounds options.

Note

The values of OptimizeHyperparameters override any values you specify using other name-value arguments. For example, setting OptimizeHyperparameters to "auto" causes fitctree to optimize hyperparameters corresponding to the "auto" option and to ignore any specified values for the hyperparameters.

The eligible parameters for fitctree are:

  • MaxNumSplits — fitctree searches among integers, by default log-scaled in the range [1,max(2,NumObservations-1)].

  • MinLeafSize — fitctree searches among integers, by default log-scaled in the range [1,max(2,floor(NumObservations/2))].

  • SplitCriterion — For two classes, fitctree searches among 'gdi' and 'deviance'. For three or more classes, fitctree also searches among 'twoing'.

  • NumVariablesToSample — fitctree does not optimize over this hyperparameter. If you pass NumVariablesToSample as a parameter name, fitctree simply uses the full number of predictors. However, fitcensemble does optimize over this hyperparameter.

Set nondefault parameters by passing a vector of optimizableVariable objects that have nondefault values. For example,

load fisheriris
params = hyperparameters('fitctree',meas,species);
params(1).Range = [1,30];

Pass params as the value of OptimizeHyperparameters.
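For example, assuming the customized params vector above:

Mdl = fitctree(meas,species,'OptimizeHyperparameters',params);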

By default, the iterative display appears at the command line, and plots appear according to the number of hyperparameters in the optimization. For the optimization and plots, the objective function is the misclassification rate. To control the iterative display, set the Verbose option of the HyperparameterOptimizationOptions name-value argument. To control the plots, set the ShowPlots field of the HyperparameterOptimizationOptions name-value argument.

For an example, see Optimize Classification Tree.

Example: 'auto'

Options for optimization, specified as a HyperparameterOptimizationOptions object or a structure. This argument modifies the effect of the OptimizeHyperparameters name-value argument. If you specify HyperparameterOptimizationOptions, you must also specify OptimizeHyperparameters. All the options are optional. However, you must set ConstraintBounds and ConstraintType to return AggregateOptimizationResults. The options that you can set in a structure are the same as those in the HyperparameterOptimizationOptions object.

Optimizer — Optimization algorithm:

  • "bayesopt" — Use Bayesian optimization. Internally, this setting calls bayesopt.

  • "gridsearch" — Use grid search with NumGridDivisions values per dimension. "gridsearch" searches in a random order, using uniform sampling without replacement from the grid. After optimization, you can get a table in grid order by using the sortrows function.

  • "randomsearch" — Search at random among MaxObjectiveEvaluations points.

Default: "bayesopt"

ConstraintBounds — Constraint bounds for N optimization problems, specified as an N-by-2 numeric matrix or []. The columns of ConstraintBounds contain the lower and upper bound values of the optimization problems. If you specify ConstraintBounds as a numeric vector, the software assigns the values to the second column of ConstraintBounds, and zeros to the first column. If you specify ConstraintBounds, you must also specify ConstraintType. Default: []

ConstraintTarget — Constraint target for the optimization problems, specified as "matlab" or "coder". If ConstraintBounds and ConstraintType are [] and you set ConstraintTarget, then the software sets ConstraintTarget to []. The values of ConstraintTarget and ConstraintType determine the objective and constraint functions. For more information, see HyperparameterOptimizationOptions. If you specify ConstraintBounds and ConstraintType, then the default value is "matlab". Otherwise, the default value is [].

ConstraintType — Constraint type for the optimization problems, specified as "size" or "loss". If you specify ConstraintType, you must also specify ConstraintBounds. The values of ConstraintTarget and ConstraintType determine the objective and constraint functions. For more information, see HyperparameterOptimizationOptions. Default: []

AcquisitionFunctionName — Type of acquisition function:

  • "expected-improvement-per-second-plus"

  • "expected-improvement"

  • "expected-improvement-plus"

  • "expected-improvement-per-second"

  • "lower-confidence-bound"

  • "probability-of-improvement"

Acquisition functions whose names include per-second do not yield reproducible results, because the optimization depends on the run time of the objective function. Acquisition functions whose names include plus modify their behavior when they overexploit an area. For more details, see Acquisition Function Types. Default: "expected-improvement-per-second-plus"

LossFun — Type of validation loss to optimize, specified as "auto", "classifcost", "classiferror", or "mincost". In the case of fitctree, the "auto" and "mincost" options are equivalent, and the software uses the minimal expected misclassification cost. "classifcost" indicates to use the observed misclassification cost, and "classiferror" indicates to use the misclassification rate in decimal. Default: "auto"

MaxObjectiveEvaluations — Maximum number of objective function evaluations. If you specify multiple optimization problems using ConstraintBounds, the value of MaxObjectiveEvaluations applies to each optimization problem individually. Default: 30 for "bayesopt" and "randomsearch", and the entire grid for "gridsearch"

MaxTime — Time limit for the optimization, specified as a nonnegative real scalar. The time limit is in seconds, as measured by tic and toc. The software performs at least one optimization iteration, regardless of the value of MaxTime. The run time can exceed MaxTime because MaxTime does not interrupt function evaluations. If you specify multiple optimization problems using ConstraintBounds, the time limit applies to each optimization problem individually. Default: Inf

NumGridDivisions — For Optimizer="gridsearch", the number of values in each dimension. The value can be a vector of positive integers giving the number of values for each dimension, or a scalar that applies to all dimensions. The software ignores this option for categorical variables. Default: 10

ShowPlots — Logical value indicating whether to show plots of the optimization progress. If this option is true, the software plots the best observed objective function value against the iteration number. If you use Bayesian optimization (Optimizer="bayesopt"), the software also plots the best estimated objective function value. The best observed objective function values and best estimated objective function values correspond to the values in the BestSoFar (observed) and BestSoFar (estim.) columns of the iterative display, respectively. You can find these values in the properties ObjectiveMinimumTrace and EstimatedObjectiveMinimumTrace of the SupervisedLearningBayesianOptimization object. If the problem includes one or two optimization parameters for Bayesian optimization, then ShowPlots also plots a model of the objective function against the parameters. Default: true

SaveIntermediateResults — Logical value indicating whether to save the optimization results. If this option is true, the software overwrites a workspace variable named SupervisedLearningBayesoptResults at each iteration. The variable is a SupervisedLearningBayesianOptimization object. If you specify multiple optimization problems using ConstraintBounds, the workspace variable is an AggregateBayesianOptimization object named AggregateBayesoptResults. Default: false

Verbose — Display level at the command line:

  • 0 — No iterative display

  • 1 — Iterative display

  • 2 — Iterative display with additional information

For details, see the bayesopt Verbose name-value argument and the example Optimize Classifier Fit Using Bayesian Optimization. Default: 1

UseParallel — Logical value indicating whether to run the Bayesian optimization in parallel, which requires Parallel Computing Toolbox™. Due to the nonreproducibility of parallel timing, parallel Bayesian optimization does not necessarily yield reproducible results. For details, see Parallel Bayesian Optimization. Default: false

Repartition — Logical value indicating whether to repartition the cross-validation at every iteration. If this option is false, the optimizer uses a single partition for the optimization. A value of true usually gives the most robust results because this setting takes partitioning noise into account. However, for optimal results, true requires at least twice as many function evaluations. Default: false

Specify only one of the following three cross-validation options. Default: KFold=5 if you do not specify a cross-validation option.

  • CVPartition — cvpartition object created by cvpartition

  • Holdout — Scalar in the range (0,1) representing the holdout fraction

  • KFold — Integer greater than 1

Example: HyperparameterOptimizationOptions=struct(UseParallel=true)

Output Arguments

Trained classification tree model, returned as a ClassificationTree object, a ClassificationPartitionedModel object, or a cell array of model objects.

  • If you set any of the name-value arguments CrossVal, CVPartition, Holdout, KFold, or Leaveout, then Mdl is a ClassificationPartitionedModel object.

  • If you specify OptimizeHyperparameters and set the ConstraintType and ConstraintBounds options of HyperparameterOptimizationOptions, then Mdl is an N-by-1 cell array of model objects, where N is equal to the number of rows in ConstraintBounds. If none of the optimization problems yields a feasible model, then each cell array value is [].

  • Otherwise, Mdl is a ClassificationTree model object.

To reference properties of a model object, use dot notation.

Aggregate optimization results for multiple optimization problems, returned as an AggregateBayesianOptimization object. To return AggregateOptimizationResults, you must specify OptimizeHyperparameters and HyperparameterOptimizationOptions. You must also specify the ConstraintType and ConstraintBounds options of HyperparameterOptimizationOptions. For an example that shows how to produce this output, see Hyperparameter Optimization with Multiple Constraint Bounds.

More About

Tips

Algorithms

References

[1] Breiman, L., J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Boca Raton, FL: CRC Press, 1984.

[2] Coppersmith, D., S. J. Hong, and J. R. M. Hosking. “Partitioning Nominal Attributes in Decision Trees.” Data Mining and Knowledge Discovery, Vol. 3, 1999, pp. 197–217.

[3] Loh, W.Y. “Regression Trees with Unbiased Variable Selection and Interaction Detection.” Statistica Sinica, Vol. 12, 2002, pp. 361–386.

[4] Loh, W.Y. and Y.S. Shih. “Split Selection Methods for Classification Trees.” Statistica Sinica, Vol. 7, 1997, pp. 815–840.

Extended Capabilities

Version History

Introduced in R2014a
