CompactClassificationECOC
Compact multiclass model for support vector machines (SVMs) and other classifiers
Description
CompactClassificationECOC is a compact version of the multiclass error-correcting output codes (ECOC) model. The compact classifier does not include the data used for training the multiclass ECOC model. Therefore, you cannot perform certain tasks, such as cross-validation, using the compact classifier. Use a compact multiclass ECOC model for tasks such as classifying new data (predict).
Creation
You can create a CompactClassificationECOC model in two ways:

- Create a compact ECOC model from a trained ClassificationECOC model by using the compact object function.
- Create a compact ECOC model by using the fitcecoc function and specifying the 'Learners' name-value pair argument as 'linear', 'kernel', a templateLinear or templateKernel object, or a cell array of such objects.
Properties
After you create a CompactClassificationECOC model object, you can use dot notation to access its properties. For an example, see Train and Cross-Validate ECOC Classifier.
ECOC Properties
BinaryLearners
— Trained binary learners
cell vector of model objects
Trained binary learners, specified as a cell vector of model objects. The number of binary learners depends on the number of classes in Y and the coding design.

The software trains BinaryLearners{j} according to the binary problem specified by CodingMatrix(:,j). For example, for multiclass learning using SVM learners, each element of BinaryLearners is a CompactClassificationSVM classifier.

Data Types: cell
Data Types: cell
BinaryLoss
— Binary learner loss function
'binodeviance' | 'exponential' | 'hamming' | 'hinge' | 'linear' | 'logit' | 'quadratic'

Binary learner loss function, specified as a character vector representing the loss function name.

This table identifies the default BinaryLoss value, which depends on the score ranges returned by the binary learners.

| Assumption | Default Value |
| --- | --- |
| All binary learners are any of the following: classification decision trees, discriminant analysis models, k-nearest neighbor models, or naive Bayes models. | "quadratic" |
| All binary learners are SVMs or linear or kernel classification models of SVM learners. | "hinge" |
| All binary learners are ensembles trained by AdaBoostM1 or GentleBoost. | "exponential" |
| All binary learners are ensembles trained by LogitBoost. | "binodeviance" |
| You specify to predict class posterior probabilities by setting FitPosterior=true in fitcecoc. | "quadratic" |
| Binary learners are heterogeneous and use different loss functions. | "hamming" |

To check the default value, use dot notation to display the BinaryLoss property of the trained model at the command line.

To potentially increase accuracy, specify a binary loss function other than the default during a prediction or loss computation by using the BinaryLoss name-value argument of predict or loss. For more information, see Binary Loss.

Data Types: char
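The default losses in the table above correspond to standard formulas for the binary loss g(y, s). The following sketch is illustrative only (Python, not MATLAB); the formulas are the commonly documented ECOC binary losses, so verify the exact scaling against the Binary Loss section before relying on it:

```python
import math

def _sign(x):
    """MATLAB-style sign: +1, 0, or -1."""
    return (x > 0) - (x < 0)

# g(y, s): y is the coding-matrix element (+1 or -1) for the class being
# scored, and s is the binary learner's positive-class score.
BINARY_LOSSES = {
    "binodeviance": lambda y, s: math.log1p(math.exp(-2 * y * s)) / (2 * math.log(2)),
    "exponential":  lambda y, s: math.exp(-y * s) / 2,
    "hamming":      lambda y, s: (1 - _sign(y * s)) / 2,
    "hinge":        lambda y, s: max(0.0, 1 - y * s) / 2,
    "linear":       lambda y, s: (1 - y * s) / 2,
    "logit":        lambda y, s: math.log1p(math.exp(-y * s)) / (2 * math.log(2)),
    "quadratic":    lambda y, s: (1 - y * (2 * s - 1)) ** 2 / 2,
}
```

Note that "quadratic" assumes scores in the posterior range [0,1], whereas "hinge" assumes margin-style scores, which is why the default depends on the learner type.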
CodingMatrix
— Class assignment codes
numeric matrix
Class assignment codes for the binary learners, specified as a numeric matrix. CodingMatrix is a K-by-L matrix, where K is the number of classes and L is the number of binary learners.

The elements of CodingMatrix are –1, 0, and 1, and the values correspond to dichotomous class assignments. This table describes how learner j assigns observations in class i to a dichotomous class corresponding to the value of CodingMatrix(i,j).

| Value | Dichotomous Class Assignment |
| --- | --- |
| –1 | Learner j assigns observations in class i to a negative class. |
| 0 | Before training, learner j removes observations in class i from the data set. |
| 1 | Learner j assigns observations in class i to a positive class. |

Data Types: double | single | int8 | int16 | int32 | int64
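To make the table concrete, here is a sketch (Python, illustrative only; MATLAB is the actual interface, and the one-versus-one matrix below is an example choice) of how one coding-matrix column defines a learner's binary training set:

```python
# One-versus-one design for three classes (rows = classes, columns = learners)
CODING_MATRIX = [
    [ 1,  1,  0],   # class 0
    [-1,  0,  1],   # class 1
    [ 0, -1, -1],   # class 2
]

def binary_problem(class_labels, learner_j):
    """Return (kept observation indices, binary labels) for learner learner_j.
    +1 -> positive class, -1 -> negative class, 0 -> removed before training."""
    keep, labels = [], []
    for i, cls in enumerate(class_labels):
        code = CODING_MATRIX[cls][learner_j]
        if code != 0:            # code 0: observation excluded from this learner
            keep.append(i)
            labels.append(code)
    return keep, labels
```

For example, learner 0 here trains only on observations from classes 0 and 1, relabeled as +1 and –1 respectively.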
LearnerWeights
— Binary learner weights
numeric row vector
Binary learner weights, specified as a numeric row vector. The length of LearnerWeights is equal to the number of binary learners (length(Mdl.BinaryLearners)).

LearnerWeights(j) is the sum of the observation weights that binary learner j uses to train its classifier.

The software uses LearnerWeights to fit posterior probabilities by minimizing the Kullback-Leibler divergence. The software ignores LearnerWeights when it uses the quadratic programming method of estimating posterior probabilities.

Data Types: double | single
Other Classification Properties
CategoricalPredictors
— Categorical predictor indices
vector of positive integers | []
Categorical predictor indices, specified as a vector of positive integers. CategoricalPredictors contains index values indicating that the corresponding predictors are categorical. The index values are between 1 and p, where p is the number of predictors used to train the model. If none of the predictors are categorical, then this property is empty ([]).

Data Types: single | double
ClassNames
— Unique class labels
categorical array | character array | logical vector | numeric vector | cell array of character vectors
This property is read-only.
Unique class labels used in training, specified as a categorical or character array, logical or numeric vector, or cell array of character vectors. ClassNames has the same data type as the class labels Y. (The software treats string arrays as cell arrays of character vectors.) ClassNames also determines the class order.

Data Types: categorical | char | logical | single | double | cell
Cost
— Misclassification costs
square numeric matrix
This property is read-only.
Misclassification costs, specified as a square numeric matrix. Cost has K rows and columns, where K is the number of classes.

Cost(i,j) is the cost of classifying a point into class j if its true class is i. The order of the rows and columns of Cost corresponds to the order of the classes in ClassNames.

Data Types: double
PredictorNames
— Predictor names
cell array of character vectors
Predictor names in order of their appearance in the predictor data, specified as a cell array of character vectors. The length of PredictorNames is equal to the number of variables in the training data X or Tbl used as predictor variables.

Data Types: cell
ExpandedPredictorNames
— Expanded predictor names
cell array of character vectors
Expanded predictor names, specified as a cell array of character vectors.

If the model uses encoding for categorical variables, then ExpandedPredictorNames includes the names that describe the expanded variables. Otherwise, ExpandedPredictorNames is the same as PredictorNames.

Data Types: cell
Prior
— Prior class probabilities
numeric vector
This property is read-only.
Prior class probabilities, specified as a numeric vector. Prior has as many elements as the number of classes in ClassNames, and the order of the elements corresponds to the order of the classes in ClassNames.

fitcecoc incorporates misclassification costs differently among different types of binary learners.

Data Types: double
ResponseName
— Response variable name
character vector
Response variable name, specified as a character vector.
Data Types: char
ScoreTransform
— Score transformation function to apply to predicted scores
'none'
This property is read-only.
Score transformation function to apply to the predicted scores, specified as 'none'. An ECOC model does not support score transformation.
Object Functions
compareHoldout | Compare accuracies of two classification models using new data |
discardSupportVectors | Discard support vectors of linear SVM binary learners in ECOC model |
edge | Classification edge for multiclass error-correcting output codes (ECOC) model |
gather | Gather properties of Statistics and Machine Learning Toolbox object from GPU |
incrementalLearner | Convert multiclass error-correcting output codes (ECOC) model to incremental learner |
lime | Local interpretable model-agnostic explanations (LIME) |
loss | Classification loss for multiclass error-correcting output codes (ECOC) model |
margin | Classification margins for multiclass error-correcting output codes (ECOC) model |
partialDependence | Compute partial dependence |
plotPartialDependence | Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots |
predict | Classify observations using multiclass error-correcting output codes (ECOC) model |
shapley | Shapley values |
selectModels | Choose subset of multiclass ECOC models composed of binary ClassificationLinear learners |
update | Update model parameters for code generation |
Examples
Reduce Size of Full ECOC Model
Reduce the size of a full ECOC model by removing the training data. Full ECOC models (ClassificationECOC models) hold the training data. To improve efficiency, use a smaller classifier.

Load Fisher's iris data set. Specify the predictor data X, the response data Y, and the order of the classes in Y.
load fisheriris
X = meas;
Y = categorical(species);
classOrder = unique(Y);
Train an ECOC model using SVM binary classifiers. Standardize the predictor data using an SVM template t, and specify the order of the classes. During training, the software uses default values for empty options in t.

t = templateSVM('Standardize',true);
Mdl = fitcecoc(X,Y,'Learners',t,'ClassNames',classOrder);

Mdl is a ClassificationECOC model.
Reduce the size of the ECOC model.
CompactMdl = compact(Mdl)
CompactMdl = 
  CompactClassificationECOC
             ResponseName: 'Y'
    CategoricalPredictors: []
               ClassNames: [setosa    versicolor    virginica]
           ScoreTransform: 'none'
           BinaryLearners: {3x1 cell}
             CodingMatrix: [3x3 double]

CompactMdl is a CompactClassificationECOC model. CompactMdl does not store all of the properties that Mdl stores. In particular, it does not store the training data.

Display the amount of memory each classifier uses.

whos('CompactMdl','Mdl')

  Name            Size   Bytes  Class                                                 Attributes
  CompactMdl      1x1    13917  classreg.learning.classif.CompactClassificationECOC
  Mdl             1x1    27174  ClassificationECOC

The full ECOC model (Mdl) is approximately double the size of the compact ECOC model (CompactMdl).

To label new observations efficiently, you can remove Mdl from the MATLAB® Workspace, and then pass CompactMdl and new predictor values to predict.
Train and Cross-Validate ECOC Classifier
Train and cross-validate an ECOC classifier using different binary learners and the one-versus-all coding design.
Load Fisher's iris data set. Specify the predictor data X and the response data Y. Determine the class names and the number of classes.

load fisheriris
X = meas;
Y = species;
classNames = unique(species(~strcmp(species,''))) % Remove empty classes
classNames = 3x1 cell
{'setosa' }
{'versicolor'}
{'virginica' }
K = numel(classNames) % Number of classes
K = 3
You can use classNames to specify the order of the classes during training.

For a one-versus-all coding design, this example has K = 3 binary learners. Specify templates for the binary learners such that:

- Binary learners 1 and 2 are naive Bayes classifiers. By default, each predictor is conditionally normally distributed given its label.
- Binary learner 3 is an SVM classifier. Specify to use the Gaussian kernel.

rng(1); % For reproducibility
tNB = templateNaiveBayes();
tSVM = templateSVM('KernelFunction','gaussian');
tLearners = {tNB tNB tSVM};

tNB and tSVM are template objects for naive Bayes and SVM learning, respectively. The objects indicate which options to use during training. Most of their properties are empty, except those specified by name-value pair arguments. During training, the software fills in the empty properties with their default values.
Train and cross-validate an ECOC classifier using the binary learner templates and the one-versus-all coding design. Specify the order of the classes. By default, naive Bayes classifiers use posterior probabilities as scores, whereas SVM classifiers use distances from the decision boundary. Therefore, to aggregate the binary learners, you must specify to fit posterior probabilities.
CVMdl = fitcecoc(X,Y,'ClassNames',classNames,'CrossVal','on',...
    'Learners',tLearners,'FitPosterior',true);

CVMdl is a ClassificationPartitionedECOC cross-validated model. By default, the software implements 10-fold cross-validation. The scores across the binary learners have the same form (that is, they are posterior probabilities), so the software can aggregate the results of the binary classifications properly.
Inspect one of the trained folds using dot notation.
CVMdl.Trained{1}
ans = 
  CompactClassificationECOC
             ResponseName: 'Y'
    CategoricalPredictors: []
               ClassNames: {'setosa'  'versicolor'  'virginica'}
           ScoreTransform: 'none'
           BinaryLearners: {3x1 cell}
             CodingMatrix: [3x3 double]

Each fold is a CompactClassificationECOC model trained on 90% of the data.
You can access the results of the binary learners using dot notation and cell indexing. Display the trained SVM classifier (the third binary learner) in the first fold.
CVMdl.Trained{1}.BinaryLearners{3}
ans = 
  CompactClassificationSVM
             ResponseName: 'Y'
    CategoricalPredictors: []
               ClassNames: [-1 1]
           ScoreTransform: '@(S)sigmoid(S,-4.016619e+00,-3.243499e-01)'
                    Alpha: [33x1 double]
                     Bias: -0.1345
         KernelParameters: [1x1 struct]
           SupportVectors: [33x4 double]
      SupportVectorLabels: [33x1 double]
Estimate the generalization error.
genError = kfoldLoss(CVMdl)
genError = 0.0333
On average, the generalization error is approximately 3%.
More About
Error-Correcting Output Codes Model
An error-correcting output codes (ECOC) model reduces the problem of classification with three or more classes to a set of binary classification problems.
ECOC classification requires a coding design, which determines the classes that the binary learners train on, and a decoding scheme, which determines how the results (predictions) of the binary classifiers are aggregated.
Assume the following:
The classification problem has three classes.
The coding design is one-versus-one. For three classes, this coding design is

                Learner 1   Learner 2   Learner 3
    Class 1         1           1           0
    Class 2        -1           0           1
    Class 3         0          -1          -1
You can specify a different coding design by using the Coding name-value argument when you create a classification model.

The model determines the predicted class by using the loss-weighted decoding scheme with the binary loss function g. The software also supports the loss-based decoding scheme. You can specify the decoding scheme and binary loss function by using the Decoding and BinaryLoss name-value arguments, respectively, when you call object functions, such as predict, loss, margin, edge, and so on.
The ECOC algorithm follows these steps.
Learner 1 trains on observations in Class 1 or Class 2, and treats Class 1 as the positive class and Class 2 as the negative class. The other learners are trained similarly.
Let M be the coding design matrix with elements m_kl, and s_l be the predicted classification score for the positive class of learner l. The algorithm assigns a new observation to the class k̂ that minimizes the aggregation of the losses for the B binary learners:

    k̂ = argmin_k [ Σ_{l=1}^{B} |m_kl| g(m_kl, s_l) ] / [ Σ_{l=1}^{B} |m_kl| ]
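The loss-weighted decoding rule above can be sketched as follows (Python, illustrative only; the hinge loss and the one-versus-one matrix here are example choices, not the model's defaults):

```python
def predict_class(M, scores, g):
    """Loss-weighted ECOC decoding: choose the class (row) k that minimizes
    sum_l |m_kl| * g(m_kl, s_l) / sum_l |m_kl| over the binary learners l."""
    best_k, best_loss = None, float("inf")
    for k, row in enumerate(M):
        num = sum(abs(m) * g(m, s) for m, s in zip(row, scores) if m != 0)
        den = sum(abs(m) for m in row)
        loss = num / den
        if loss < best_loss:
            best_k, best_loss = k, loss
    return best_k

def hinge(y, s):
    return max(0.0, 1 - y * s) / 2

# One-versus-one design for three classes (rows = classes, columns = learners)
M = [[1, 1, 0], [-1, 0, 1], [0, -1, -1]]
```

For instance, with scores [2.0, 1.5, -0.8], learners 1 and 2 strongly favor their positive class (class 1), so decoding selects class 1 (index 0 here).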
ECOC models can improve classification accuracy, compared to other multiclass models [1].
Coding Design
The coding design is a matrix whose elements direct which classes are trained by each binary learner, that is, how the multiclass problem is reduced to a series of binary problems.
Each row of the coding design corresponds to a distinct class, and each column corresponds to a binary learner. In a ternary coding design, for a particular column (or binary learner):
A row containing 1 directs the binary learner to group all observations in the corresponding class into a positive class.
A row containing –1 directs the binary learner to group all observations in the corresponding class into a negative class.
A row containing 0 directs the binary learner to ignore all observations in the corresponding class.
Coding design matrices with large minimal pairwise row distances based on the Hamming measure are optimal. For details on the pairwise row distance, see Random Coding Design Matrices and [2].
This table describes popular coding designs.
| Coding Design | Description | Number of Learners | Minimal Pairwise Row Distance |
| --- | --- | --- | --- |
| one-versus-all (OVA) | For each binary learner, one class is positive and the rest are negative. This design exhausts all combinations of positive class assignments. | K | 2 |
| one-versus-one (OVO) | For each binary learner, one class is positive, one class is negative, and the rest are ignored. This design exhausts all combinations of class pair assignments. | K(K – 1)/2 | 1 |
| binary complete | This design partitions the classes into all binary combinations, and does not ignore any classes. That is, all class assignments are –1 or 1 with at least one positive class and one negative class in the assignment for each binary learner. | 2^(K – 1) – 1 | 2^(K – 2) |
| ternary complete | This design partitions the classes into all ternary combinations. That is, all class assignments are –1, 0, or 1 with at least one positive class and one negative class in the assignment for each binary learner. | (3^K – 2^(K + 1) + 1)/2 | 3^(K – 2) |
| ordinal | For the first binary learner, the first class is negative and the rest are positive. For the second binary learner, the first two classes are negative and the rest are positive, and so on. | K – 1 | 1 |
| dense random | For each binary learner, the software randomly assigns classes into positive or negative classes, with at least one of each type. For more details, see Random Coding Design Matrices. | Random, but approximately 10 log2(K) | Variable |
| sparse random | For each binary learner, the software randomly assigns classes as positive or negative with probability 0.25 for each, and ignores classes with probability 0.5. For more details, see Random Coding Design Matrices. | Random, but approximately 15 log2(K) | Variable |
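The "Number of Learners" column can be reproduced with a short sketch (Python, illustrative only; the design names are ad-hoc keys, and the random-design counts are just the approximate values quoted in the table):

```python
import math

def num_learners(design, K):
    """Number of binary learners for each coding design with K classes."""
    counts = {
        "onevsall":        K,
        "onevsone":        K * (K - 1) // 2,
        "binarycomplete":  2 ** (K - 1) - 1,
        "ternarycomplete": (3 ** K - 2 ** (K + 1) + 1) // 2,
        "ordinal":         K - 1,
        "denserandom":     math.ceil(10 * math.log2(K)),   # approximate
        "sparserandom":    math.ceil(15 * math.log2(K)),   # approximate
    }
    return counts[design]
```

For K = 4, for example, one-versus-one needs 6 learners while ternary complete already needs 25, which is why the complete designs are practical only for small K.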
This plot compares the number of binary learners for the coding designs with an increasing number of classes (K).
Algorithms
Random Coding Design Matrices
For a given number of classes K, the software generates random coding design matrices as follows.
The software generates one of these matrices:

- Dense random — The software assigns 1 or –1 with equal probability to each element of the K-by-Ld coding design matrix, where Ld = ⌈10 log2(K)⌉.
- Sparse random — The software assigns 1 to each element of the K-by-Ls coding design matrix with probability 0.25, –1 with probability 0.25, and 0 with probability 0.5, where Ls = ⌈15 log2(K)⌉.

If a column does not contain at least one 1 and one –1, then the software removes that column.

For distinct columns u and v, if u = v or u = –v, then the software removes v from the coding design matrix.
The software randomly generates 10,000 matrices by default, and retains the matrix with the largest minimal pairwise row distance based on the Hamming measure ([2]), given by

    Δ(k1,k2) = 0.5 · Σ_{l=1}^{L} |m_{k1,l}| · |m_{k2,l}| · |m_{k1,l} – m_{k2,l}|,

where m_{k,l} is an element of the candidate coding design matrix.
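The generate-prune-select procedure above can be sketched as follows (Python, illustrative only; the function names and the way one candidate is drawn are hypothetical, not the software's internals):

```python
import random

def min_pairwise_row_distance(M):
    """Minimal pairwise row distance under the Hamming-based measure above:
    Delta(k1,k2) = 0.5 * sum_l |m_k1,l| * |m_k2,l| * |m_k1,l - m_k2,l|."""
    K = len(M)
    return min(
        0.5 * sum(abs(a) * abs(b) * abs(a - b) for a, b in zip(M[k1], M[k2]))
        for k1 in range(K) for k2 in range(k1 + 1, K)
    )

def sparse_random_candidate(K, L, rng):
    """One candidate sparse random design: entries are 1 (p=0.25), -1 (p=0.25),
    or 0 (p=0.5). Columns without both a +1 and a -1, and columns equal to (or
    the negation of) an earlier column, are removed."""
    cols = []
    for _ in range(L):
        col = tuple(rng.choice((1, -1, 0, 0)) for _ in range(K))
        neg = tuple(-x for x in col)
        if 1 in col and -1 in col and col not in cols and neg not in cols:
            cols.append(col)
    return [list(row) for row in zip(*cols)]   # rows = classes, cols = learners
```

The selection step would then draw many candidates (10,000 by default, per the text) and keep the one maximizing min_pairwise_row_distance. For reference, the one-versus-all design for three classes attains the table's minimal row distance of 2, and one-versus-one attains 1.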
Support Vector Storage
By default and for efficiency, fitcecoc empties the Alpha, SupportVectorLabels, and SupportVectors properties for all linear SVM binary learners. fitcecoc lists Beta, rather than Alpha, in the model display.

To store Alpha, SupportVectorLabels, and SupportVectors, pass a linear SVM template that specifies storing support vectors to fitcecoc. For example, enter:

t = templateSVM('SaveSupportVectors',true)
Mdl = fitcecoc(X,Y,'Learners',t);

You can remove the support vectors and related values by passing the resulting ClassificationECOC model to discardSupportVectors.
References
[1] Fürnkranz, Johannes. “Round Robin Classification.” J. Mach. Learn. Res., Vol. 2, 2002, pp. 721–747.
[2] Escalera, S., O. Pujol, and P. Radeva. “Separability of ternary codes for sparse designs of error-correcting output codes.” Pattern Recog. Lett. Vol. 30, Issue 3, 2009, pp. 285–297.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Usage notes and limitations:
When you train an ECOC model by using fitcecoc, the following restrictions apply.

- All binary learners must be SVM classifiers or linear classification models. For the Learners name-value argument, you can specify:
  - 'svm' or 'linear'
  - An SVM template object or a cell array of such objects (see templateSVM)
  - A linear classification model template object or a cell array of such objects (see templateLinear)
- Code generation limitations for the binary learners used in the ECOC classifier also apply to the ECOC classifier. For linear classification models, you can specify only one regularization strength: 'auto' or a nonnegative scalar for the Lambda name-value argument.
- For code generation with a coder configurer, the following additional restrictions apply.
  - If you use a cell array of SVM template objects, the value of Standardize for SVM learners must be consistent. For example, if you specify 'Standardize',true for one SVM learner, you must specify the same value for all SVM learners.
  - If you use a cell array of SVM template objects, and you use one SVM learner with a linear kernel ('KernelFunction','linear') and another with a different type of kernel function, then you must specify 'SaveSupportVectors',true for the learner with a linear kernel.
  - Categorical predictors (logical, categorical, char, string, or cell) are not supported. You cannot use the CategoricalPredictors name-value argument. To include categorical predictors in a model, preprocess them by using dummyvar before fitting the model.
  - Class labels with the categorical data type are not supported. Both the class label value in the training data (Tbl or Y) and the value of the ClassNames name-value argument cannot be an array with the categorical data type.

For more details, see ClassificationECOCCoderConfigurer. For information on name-value arguments that you cannot modify when you retrain a model, see Tips.
For more information, see Introduction to Code Generation.
GPU Arrays
Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™.
Usage notes and limitations:
The following object functions fully support GPU arrays:
The following object functions offer limited support for GPU arrays:
The object functions execute on a GPU if at least one of the following applies:
The model was fitted with GPU arrays.
The predictor data that you pass to the object function is a GPU array.
For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
Version History
Introduced in R2014b

R2024b: Specify linear and ensemble learners for gpuArray sample data

You can specify linear and ensemble learners when you create a CompactClassificationECOC object by passing gpuArray sample data to fitcecoc.