Fairness in Binary Classification

To detect and mitigate societal bias in binary classification, you can use the fairnessMetrics, fairnessWeights, disparateImpactRemover, and fairnessThresholder functions in Statistics and Machine Learning Toolbox™. First, use fairnessMetrics to evaluate the fairness of a data set or classification model using bias and group metrics. Then, use fairnessWeights to reweight observations, disparateImpactRemover to remove the disparate impact of a sensitive attribute, or fairnessThresholder to optimize the classification threshold.
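The detection step above can be sketched as follows. This is a minimal illustration, assuming a table `tbl` that contains a binary response variable (here called `Approved`) and a sensitive attribute (here called `Gender`); these variable names are placeholders, and the exact syntaxes are documented on the `fairnessMetrics` reference page.

```matlab
% Evaluate data-level fairness of a data set (variable names illustrative).
evaluator = fairnessMetrics(tbl,"Approved", ...
    SensitiveAttributeNames="Gender");

% Summarize the computed bias and group metrics in a table.
report(evaluator)

% Visualize one bias metric as a bar graph.
plot(evaluator,"StatisticalParityDifference")
```

To evaluate model-level rather than data-level fairness, you can also pass model predictions to `fairnessMetrics` and compare the metrics before and after mitigation.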

The fairnessWeights and disparateImpactRemover functions provide preprocessing techniques that allow you to adjust your predictor data before training (or retraining) a classifier. The fairnessThresholder function provides a postprocessing technique that adjusts labels near prediction boundaries for a trained classifier. To assess the final model behavior, you can use the fairnessMetrics function as well as various interpretability functions. For more information, see Interpret Machine Learning Models.
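The mitigation workflow can be sketched as below. This is a hedged outline, not a complete example: the table `tbl`, the test set `testTbl`, the variable names `Gender` and `Approved`, and the choice of `fitctree` as the classifier are all illustrative assumptions; consult each function's reference page for the exact syntaxes and options.

```matlab
% Preprocessing option 1: reweight observations, then train with the weights.
w = fairnessWeights(tbl,"Gender","Approved");
mdl = fitctree(tbl,"Approved",Weights=w);

% Preprocessing option 2: remove the disparate impact of the sensitive
% attribute from the predictor data before training.
[remover,newTbl] = disparateImpactRemover(tbl,"Gender");
mdl2 = fitctree(newTbl,"Approved");

% Apply the same transformation to new predictor data before predicting.
newTestTbl = transform(remover,testTbl);

% Postprocessing: optimize the classification threshold of a trained
% model for fairness, then predict with the adjusted thresholder.
thresholder = fairnessThresholder(mdl2,newTbl,"Gender","Approved");
labels = predict(thresholder,newTestTbl);
```

After mitigation, rerun `fairnessMetrics` on the adjusted predictions to confirm that the bias metrics have improved without an unacceptable loss in accuracy.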


Functions

fairnessMetrics - Bias and group metrics for a data set or classification model (Since R2022b)
    report - Generate fairness metrics report (Since R2022b)
    plot - Plot bar graph of fairness metric (Since R2022b)
fairnessWeights - Reweight observations for fairness in binary classification (Since R2022b)
disparateImpactRemover - Remove disparate impact of sensitive attribute (Since R2022b)
    transform - Transform new predictor data to remove disparate impact (Since R2022b)
fairnessThresholder - Optimize classification threshold to include fairness (Since R2023a)
    loss - Classification loss adjusted by fairness threshold (Since R2023a)
    predict - Predicted labels adjusted by fairness threshold (Since R2023a)