Interpretability

Train interpretable classification models and interpret complex classification models

Use inherently interpretable classification models, such as linear models, decision trees, and generalized additive models, or use interpretability features to interpret complex classification models that are not inherently interpretable. To learn how to interpret classification models, see Interpret Machine Learning Models.
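For example, a decision tree is inherently interpretable because its splits can be inspected directly. The following minimal sketch, which assumes the fisheriris sample data set shipped with Statistics and Machine Learning Toolbox, fits a tree and displays it:

```matlab
load fisheriris                 % sample data: meas (predictors), species (labels)
mdl = fitctree(meas, species);  % fit an inherently interpretable decision tree
view(mdl, 'Mode', 'graph')      % display the tree structure for inspection
```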

Gain insight into binary classifier decisions by generating counterfactual examples using the counterfactuals function. Counterfactual examples identify the minimal modifications needed to change the predicted label of a given observation.

Functions

Local Interpretable Model-Agnostic Explanations (LIME)

lime - Local interpretable model-agnostic explanations (LIME)
fit - Fit simple model of local interpretable model-agnostic explanations (LIME)
plot - Plot results of local interpretable model-agnostic explanations (LIME)
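The LIME workflow above follows a create-fit-plot pattern. A minimal sketch, assuming the fisheriris sample data set and an ensemble as the complex model to explain:

```matlab
load fisheriris
mdl = fitcensemble(meas, species);        % a complex model that is not inherently interpretable
explainer = lime(mdl);                    % create a LIME explainer from the trained model
explainer = fit(explainer, meas(1,:), 2); % fit a simple local model at a query point,
                                          % using the 2 most important predictors
plot(explainer)                           % plot predictor importance of the simple model
```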

Shapley Values

shapley - Shapley values
fit - Compute Shapley values for query points
plot - Plot Shapley values using bar graphs
boxchart - Visualize Shapley values using box charts (box plots) (Since R2024a)
plotDependence - Plot dependence of Shapley values on predictor values (Since R2024b)
swarmchart - Visualize Shapley values using swarm scatter charts (Since R2024a)
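The Shapley functions above follow the same create-fit-plot pattern as LIME. A minimal sketch, assuming the fisheriris sample data set:

```matlab
load fisheriris
mdl = fitctree(meas, species);          % train a classifier
explainer = shapley(mdl, meas);         % create a shapley object with background data
explainer = fit(explainer, meas(1,:));  % compute Shapley values for one query point
plot(explainer)                         % bar graph of per-predictor Shapley values
```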

Partial Dependence

partialDependence - Compute partial dependence
plotPartialDependence - Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots

Interpretable Models

fitcgam - Fit generalized additive model (GAM) for binary classification
fitclinear - Fit binary linear classifier to high-dimensional data
fitctree - Fit binary decision tree for multiclass classification
counterfactuals - Generate counterfactual examples for observation (Since R2026a)
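The model-fitting and partial dependence functions above can be combined: train a model, then plot how its predicted scores for a class depend on one predictor. A minimal sketch, assuming the fisheriris sample data set:

```matlab
load fisheriris
mdl = fitctree(meas, species);          % fit an interpretable tree classifier
plotPartialDependence(mdl, 1, 'setosa') % PDP of the first predictor for class 'setosa'
```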

Objects

ClassificationGAM - Generalized additive model (GAM) for binary classification
ClassificationLinear - Linear model for binary classification of high-dimensional data
ClassificationTree - Binary decision tree for multiclass classification

Topics

Model Interpretation

Interpretable Models