average
Compute performance metrics for average receiver operating characteristic (ROC) curve in multiclass problem
Since R2022a
Syntax
[FPR,TPR,Thresholds,AUC] = average(rocObj,type)
[avg1,avg2,Thresholds,AUC] = average(rocObj,type,metric1,metric2)
Description
[FPR,TPR,Thresholds,AUC] = average(rocObj,type) computes the averages of the performance metrics stored in the rocmetrics object rocObj for a multiclass classification problem, using the averaging method specified in type. The function returns the average false positive rate (FPR) and the average true positive rate (TPR) for each threshold value in Thresholds. The function also returns AUC, the area under the ROC curve composed of FPR and TPR.
[avg1,avg2,Thresholds,AUC] = average(rocObj,type,metric1,metric2) computes the performance metrics and returns avg1 (the average of metric1) and avg2 (the average of metric2), in addition to Thresholds, the corresponding threshold for each of the average values, and AUC, the area under the curve generated by metric1 and metric2. (since R2024b)
average supports the AUC output only when metric1 and metric2 are TPR and FPR, or precision and recall, as shown in the sketch after this list:
- TPR and FPR — Specify TPR using "TruePositiveRate", "tpr", or "recall", and specify FPR using "FalsePositiveRate" or "fpr". With these choices, AUC is the area under a ROC curve.
- Precision and recall — Specify precision using "PositivePredictiveValue", "ppv", "prec", or "precision", and specify recall using "TruePositiveRate", "tpr", or "recall". With these choices, AUC is the area under a precision-recall curve.
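A minimal sketch of the two supported calls, assuming rocObj is an existing rocmetrics object (the output variable names are illustrative):
[avgTPR,avgFPR,t1,aucROC] = average(rocObj,"macro","tpr","fpr");           % AUC is the area under a ROC curve
[avgRec,avgPrec,t2,aucPR] = average(rocObj,"macro","recall","precision");  % AUC is the area under a precision-recall curve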
Examples
Compute the performance metrics for a multiclass classification problem by creating a rocmetrics object, and then compute the average values for the metrics by using the average function. Plot the average ROC curve using the outputs of average.
Load the fisheriris data set. The matrix meas contains flower measurements for 150 different flowers. The vector species lists the species for each flower. species contains three distinct flower names.
load fisheriris
Train a classification tree that classifies observations into one of the three labels. Cross-validate the model using 10-fold cross-validation.
rng("default") % For reproducibility
Mdl = fitctree(meas,species,Crossval="on");
Compute the classification scores for validation-fold observations.
[~,Scores] = kfoldPredict(Mdl); size(Scores)
ans = 1×2
   150     3
The output Scores is a matrix of size 150-by-3. The column order of Scores follows the class order in Mdl, stored in Mdl.ClassNames.
Create a rocmetrics object by using the true labels in species and the classification scores in Scores. Specify the column order of Scores using Mdl.ClassNames.
rocObj = rocmetrics(species,Scores,Mdl.ClassNames);
rocmetrics computes the FPR and TPR at different thresholds and finds the AUC value for each class.
Compute the average performance metric values, including the FPR and TPR at different thresholds using the macro-averaging method.
[FPR,TPR,Thresholds,AUC] = average(rocObj,"macro");
Plot the average ROC curve and display the average AUC value.
plot(rocObj,AverageCurveType="macro",ClassNames=[])
To display all the ROC curves and AUC values, do not set the ClassNames argument to [].
plot(rocObj,AverageCurveType="macro")
Load the fisheriris data set. The matrix meas contains flower measurements for 150 different flowers. The vector species lists the species for each flower. species contains three distinct flower names.
Train a classification tree that classifies observations into one of the three labels.
load fisheriris
mdl = fitctree(meas,species);
Create a rocmetrics object from the classification tree model.
roc = rocmetrics(mdl,meas,species); % Input data meas and response species required
Obtain the macro-averaged recall and precision statistics, in addition to the threshold and AUC statistics.
[avgRecall,avgPrec,thresh,AUC] = average(roc,"macro","recall","precision")
avgRecall = 9×1
         0
    0.6533
    0.9533
    0.9800
    0.9933
    0.9933
    1.0000
    1.0000
    1.0000
avgPrec = 9×1
       NaN
    1.0000
    0.9929
    0.9811
    0.9560
    0.9203
    0.7804
    0.6462
    0.3333
thresh = 9×1
    1.0000
    1.0000
    0.9565
    0.3333
   -0.3333
   -0.6667
   -0.9565
   -0.9783
   -1.0000
AUC = 0.9972
Plot the ROC curve for the recall and precision metrics.
plot(roc,AverageCurveType="macro",XAxisMetric="recall",YAxisMetric="precision")

Input Arguments
Object evaluating classification performance, specified as a rocmetrics
            object.
Averaging method, specified as "micro", "macro", or "weighted".
- "micro"(micro-averaging) —- averagefinds the average performance metrics by treating all one-versus-all binary classification problems as one binary classification problem. The function computes the confusion matrix components for the combined binary classification problem, and then computes the average metrics (as specified by the- XAxisMetricand- YAxisMetricname-value arguments) using the values of the confusion matrix.
- "macro"(macro-averaging) —- averagecomputes the average values for the metrics by averaging the values of all one-versus-all binary classification problems.
- "weighted"(weighted macro-averaging) —- averagecomputes the weighted average values for the metrics using the macro-averaging method and using the prior class probabilities (the- Priorproperty of- rocObj) as weights.
The averaging method (type) determines the length of the vectors for the output arguments (FPR, TPR, and Thresholds). For more details, see Average of Performance Metrics.
Data Types: char | string
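A minimal sketch of calling average with each averaging method, assuming rocObj is an existing rocmetrics object:
[fprMicro,tprMicro] = average(rocObj,"micro");     % one combined binary problem
[fprMacro,tprMacro] = average(rocObj,"macro");     % unweighted mean over the classes
[fprWtd,tprWtd]     = average(rocObj,"weighted");  % weighted by rocObj.Prior
The returned vectors can have different lengths for different methods; see Average of Performance Metrics.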
Since R2024b
First metric to average, specified as a name in rocObj.Metrics or as the name of a built-in metric listed in the following table.
| Name | Description |
|---|---|
| "TruePositives" or "tp" | Number of true positives (TP) |
| "FalseNegatives" or "fn" | Number of false negatives (FN) |
| "FalsePositives" or "fp" | Number of false positives (FP) |
| "TrueNegatives" or "tn" | Number of true negatives (TN) |
| "SumOfTrueAndFalsePositives" or "tp+fp" | Sum of TP and FP |
| "RateOfPositivePredictions" or "rpp" | Rate of positive predictions (RPP), (TP+FP)/(TP+FN+FP+TN) |
| "RateOfNegativePredictions" or "rnp" | Rate of negative predictions (RNP), (TN+FN)/(TP+FN+FP+TN) |
| "Accuracy" or "accu" | Accuracy, (TP+TN)/(TP+FN+FP+TN) |
| "TruePositiveRate", "tpr", or "recall" | True positive rate (TPR), also known as recall or sensitivity, TP/(TP+FN) |
| "FalseNegativeRate", "fnr", or "miss" | False negative rate (FNR), or miss rate, FN/(TP+FN) |
| "FalsePositiveRate" or "fpr" | False positive rate (FPR), also known as fallout or 1-specificity, FP/(TN+FP) |
| "TrueNegativeRate", "tnr", or "spec" | True negative rate (TNR), or specificity, TN/(TN+FP) |
| "PositivePredictiveValue", "ppv", "prec", or "precision" | Positive predictive value (PPV), or precision, TP/(TP+FP) |
| "NegativePredictiveValue" or "npv" | Negative predictive value (NPV), TN/(TN+FN) |
| "f1score" | F1 score, 2*TP/(2*TP+FP+FN) |
| "ExpectedCost" or "ecost" | Expected cost, (TP*cost(P\|P)+FN*cost(N\|P)+FP*cost(P\|N)+TN*cost(N\|N))/(TP+FN+FP+TN), where cost(i\|j) is the cost of classifying an observation into class i when its true class is j. The software converts the cost matrix specified by the Cost name-value argument of rocmetrics into the cost values for each one-versus-all binary classification problem. |
Data Types: char | string
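For metric pairs other than TPR/FPR and precision/recall, the AUC output is not available, so request only the first three outputs. A minimal sketch, assuming rocObj is an existing rocmetrics object:
[avgFNR,avgTNR,thresh] = average(rocObj,"macro","fnr","tnr");   % no AUC for this pair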
Since R2024b
Second metric to average, specified as a name in rocObj.Metrics or as the name of a built-in metric listed in the following table.
| Name | Description |
|---|---|
| "TruePositives" or "tp" | Number of true positives (TP) |
| "FalseNegatives" or "fn" | Number of false negatives (FN) |
| "FalsePositives" or "fp" | Number of false positives (FP) |
| "TrueNegatives" or "tn" | Number of true negatives (TN) |
| "SumOfTrueAndFalsePositives" or "tp+fp" | Sum of TP and FP |
| "RateOfPositivePredictions" or "rpp" | Rate of positive predictions (RPP), (TP+FP)/(TP+FN+FP+TN) |
| "RateOfNegativePredictions" or "rnp" | Rate of negative predictions (RNP), (TN+FN)/(TP+FN+FP+TN) |
| "Accuracy" or "accu" | Accuracy, (TP+TN)/(TP+FN+FP+TN) |
| "TruePositiveRate", "tpr", or "recall" | True positive rate (TPR), also known as recall or sensitivity, TP/(TP+FN) |
| "FalseNegativeRate", "fnr", or "miss" | False negative rate (FNR), or miss rate, FN/(TP+FN) |
| "FalsePositiveRate" or "fpr" | False positive rate (FPR), also known as fallout or 1-specificity, FP/(TN+FP) |
| "TrueNegativeRate", "tnr", or "spec" | True negative rate (TNR), or specificity, TN/(TN+FP) |
| "PositivePredictiveValue", "ppv", "prec", or "precision" | Positive predictive value (PPV), or precision, TP/(TP+FP) |
| "NegativePredictiveValue" or "npv" | Negative predictive value (NPV), TN/(TN+FN) |
| "f1score" | F1 score, 2*TP/(2*TP+FP+FN) |
| "ExpectedCost" or "ecost" | Expected cost, (TP*cost(P\|P)+FN*cost(N\|P)+FP*cost(P\|N)+TN*cost(N\|N))/(TP+FN+FP+TN), where cost(i\|j) is the cost of classifying an observation into class i when its true class is j. The software converts the cost matrix specified by the Cost name-value argument of rocmetrics into the cost values for each one-versus-all binary classification problem. |
Data Types: char | string
Output Arguments
Average false positive rates, returned as a numeric vector.
Average true positive rates, returned as a numeric vector.
Thresholds on classification scores at which the function finds each of the average performance metric values, returned as a numeric vector.
Area under the average ROC curve composed of FPR and TPR, returned as a numeric scalar.
Since R2024b
Average of metric1, returned as a double or single vector,
            depending on the data.
Since R2024b
Average of metric2, returned as a double or single vector,
            depending on the data.
More About
A ROC curve shows the true positive rate versus the false positive rate for different thresholds of classification scores.
The true positive rate and the false positive rate are defined as follows:
- True positive rate (TPR), also known as recall or sensitivity — - TP/(TP+FN), where TP is the number of true positives and FN is the number of false negatives
- False positive rate (FPR), also known as fallout or 1-specificity — - FP/(TN+FP), where FP is the number of false positives and TN is the number of true negatives
Each point on a ROC curve corresponds to a pair of TPR and FPR values for a specific
                threshold value. You can find different pairs of TPR and FPR values by varying the
                threshold value, and then create a ROC curve using the pairs. For each class,
                        rocmetrics uses all distinct adjusted score values
                as threshold values to create a ROC curve.
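The following sketch illustrates the idea for a single binary problem (toy data; this is not how rocmetrics is implemented internally):
trueLabel = logical([1 1 0 1 0 0]);          % assumed toy labels, true = positive class
s = [0.9 0.8 0.7 0.4 0.3 0.1];               % assumed classification scores
thresholds = sort(unique(s),"descend");
tprCurve = zeros(size(thresholds));
fprCurve = zeros(size(thresholds));
for i = 1:numel(thresholds)
    predPos = s >= thresholds(i);                              % predict positive at this threshold
    tprCurve(i) = sum(predPos & trueLabel)/sum(trueLabel);     % TP/(TP+FN)
    fprCurve(i) = sum(predPos & ~trueLabel)/sum(~trueLabel);   % FP/(TN+FP)
end
Each (fprCurve(i),tprCurve(i)) pair is one point on the ROC curve.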
For a multiclass classification problem, rocmetrics formulates a set
                of one-versus-all binary
                classification problems to have one binary problem for each class, and finds a ROC
                curve for each class using the corresponding binary problem. Each binary problem
                assumes one class as positive and the rest as negative.
For a binary classification problem, if you specify the classification scores as a
                matrix, rocmetrics formulates two one-versus-all binary
                classification problems. Each of these problems treats one class as a positive class
                and the other class as a negative class, and rocmetrics finds two
                ROC curves. Use one of the curves to evaluate the binary classification
                problem.
For more details, see ROC Curve and Performance Metrics.
The area under a ROC curve (AUC) corresponds to the integral of a ROC curve
        (TPR values) with respect to FPR from FPR = 0 to FPR = 1.
The AUC provides an aggregate performance measure across all possible thresholds. The AUC
        values are in the range 0 to 1, and larger AUC values
        indicate better classifier performance.
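As a rough check, you can approximate the AUC of an average ROC curve returned by average using trapezoidal integration. A minimal sketch, assuming rocObj is an existing rocmetrics object:
[FPR,TPR] = average(rocObj,"macro");
aucApprox = trapz(FPR,TPR);   % take abs() if the FPR values are in descending order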
The one-versus-all (OVA) coding design reduces a multiclass classification
        problem to a set of binary classification problems. In this coding design, each binary
        classification treats one class as positive and the rest of the classes as negative.
            rocmetrics uses the OVA coding design for multiclass classification and
        evaluates the performance on each class by using the binary classification that the class is
        positive.
For example, the OVA coding design for three classes formulates three binary classifications:

$$\begin{bmatrix} 1 & -1 & -1 \\ -1 & 1 & -1 \\ -1 & -1 & 1 \end{bmatrix}$$
Each row corresponds to a class, and each column corresponds to a binary
        classification problem. The first binary classification assumes that class 1 is a positive
        class and the rest of the classes are negative. rocmetrics evaluates the
        performance on the first class by using the first binary classification problem.
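A minimal sketch of forming the one-versus-all binary labels, assuming Y is a categorical vector of class labels (an illustrative variable name):
classes = categories(Y);
for k = 1:numel(classes)
    isPositive = (Y == classes{k});   % true for the positive class, false for the rest
    % rocmetrics evaluates class k using this binary labeling of the observations
end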
Algorithms
For each class, rocmetrics adjusts the classification scores (input argument
            Scores of rocmetrics) relative to the scores for the rest
        of the classes if you specify Scores as a matrix. Specifically, the
        adjusted score for a class given an observation is the difference between the score for the
        class and the maximum value of the scores for the rest of the classes.
For example, if you have [s1,s2,s3] in a row of Scores for a classification problem with
        three classes, the adjusted score values are [s1-max(s2,s3),s2-max(s1,s3),s3-max(s1,s2)].
rocmetrics computes the performance metrics using the adjusted score values
        for each class.
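A minimal sketch of this adjustment, assuming Scores is an n-by-K matrix of classification scores (one column per class):
[n,K] = size(Scores);
adjScores = zeros(n,K);
for k = 1:K
    rest = Scores(:,[1:k-1, k+1:K]);               % scores for the rest of the classes
    adjScores(:,k) = Scores(:,k) - max(rest,[],2); % s_k minus the maximum of the rest
end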
For a binary classification problem, you can specify Scores as a
        two-column matrix or a column vector. Using a two-column matrix is a simpler option because
        the predict function of a classification object returns classification
        scores as a matrix, which you can pass to rocmetrics. If you pass scores in
        a two-column matrix, rocmetrics adjusts scores in the same way that it
        adjusts scores for multiclass classification, and it computes performance metrics for both
        classes. You can use the metric values for one of the two classes to evaluate the binary
        classification problem. The metric values for a class returned by
            rocmetrics when you pass a two-column matrix are equivalent to the
        metric values returned by rocmetrics when you specify classification scores
        for the class as a column vector.
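A minimal sketch of the two equivalent ways to specify scores for a binary problem, assuming trueLabels contains labels "A" and "B" and Scores is an n-by-2 matrix with columns ordered ["A" "B"] (illustrative names):
rocMatrix = rocmetrics(trueLabels,Scores,["A" "B"]);   % metrics for both classes
rocVector = rocmetrics(trueLabels,Scores(:,2),"B");    % metrics for class "B" only
The metric values for class "B" are the same in both objects.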
Alternative Functionality
- You can use the plot function to create the average ROC curve. The function returns a ROCCurve object containing the XData, YData, Thresholds, and AUC properties, which correspond to the output arguments FPR, TPR, Thresholds, and AUC of the average function, respectively. For an example, see Plot Average ROC Curve for Multiclass Classifier.
Version History
Introduced in R2022a

R2024b: You can compute and plot the average results of any two metrics of a rocmetrics object simultaneously. For an example, see Obtain Macro Averages for Two Metrics.