Interpretable and Fair Mechanisms for Abstaining Classifiers

📅 2025-03-24
🏛️ ECML/PKDD
📈 Citations: 1
Influential: 0
🤖 AI Summary
This paper addresses the lack of fairness and interpretability in reject-option (abstaining) classification. Methodologically, it introduces an interpretable fair rejection framework that, for the first time, integrates rule-based fairness auditing and situation testing into the rejection mechanism, coupled with uncertainty quantification and constrained optimization, to dynamically identify and reject predictions exhibiting high uncertainty or potential discrimination. Contributions include: (1) a 37% reduction in error-rate disparity (ΔERR) and a 42% reduction in positive-predictive-value disparity (ΔPPV) across demographic subgroups among accepted predictions; (2) full traceability and auditable justification of every rejection decision; and (3) alignment of algorithmic fairness with regulatory-compliance requirements. Extensive experiments on multiple benchmark datasets demonstrate robust performance with controllable rejection rates.
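The two-pronged rejection rule the summary describes can be sketched minimally as follows. This is an illustrative sketch, not the authors' implementation: it assumes a binary classifier's predicted probabilities and a precomputed per-instance unfairness flag (e.g. from a situation-testing check); the threshold `tau` is a placeholder.

```python
def abstaining_predict(probs, unfair_flags, tau=0.75):
    """Abstain when a prediction is uncertain (confidence below tau)
    or flagged as potentially discriminatory; otherwise accept.

    probs: positive-class probabilities of a binary classifier.
    unfair_flags: True where a fairness check flagged the instance.
    Returns a list of 0/1 labels, with None marking abstentions.
    """
    decisions = []
    for p, unfair in zip(probs, unfair_flags):
        confidence = max(p, 1 - p)    # confidence of the binary prediction
        if confidence < tau or unfair:
            decisions.append(None)    # abstain: defer to a human reviewer
        else:
            decisions.append(int(p >= 0.5))
    return decisions

# A confident, fair prediction is accepted; an uncertain one and a
# flagged one are both rejected.
print(abstaining_predict([0.95, 0.55, 0.90], [False, False, True]))
```

Unfairness-based rejections are kept separate from uncertainty-based ones so that each abstention carries an auditable reason, which is what makes the mechanism reviewable by a human decision-maker.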

📝 Abstract
Abstaining classifiers have the option to refrain from providing a prediction for instances that are difficult to classify. The abstention mechanism is designed to trade off the classifier's performance on the accepted data while ensuring a minimum number of predictions. In this setting, fairness concerns often arise when the abstention mechanism solely reduces errors for the majority groups of the data, resulting in increased performance differences across demographic groups. While several methods exist that aim to reduce discrimination when abstaining, there is no mechanism that can do so in an explainable way. In this paper, we fill this gap by introducing the Interpretable and Fair Abstaining Classifier (IFAC), an algorithm that can reject predictions based on both their uncertainty and their unfairness. By rejecting possibly unfair predictions, our method reduces error and positive decision rate differences across demographic groups of the non-rejected data. Since the unfairness-based rejections rely on an interpretable-by-design method, i.e., rule-based fairness checks and situation testing, we create a transparent process that can empower human decision-makers to review the unfair predictions and make more just decisions for them. This explainable aspect is especially important in light of recent AI regulations, which mandate that any high-risk decision task be overseen by human experts to reduce discrimination risks.
Problem

Research questions and friction points this paper is trying to address.

Develop interpretable abstaining classifiers for fair predictions
Reduce performance disparities across demographic groups
Ensure transparency in unfairness-based rejection mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interpretable and fair abstaining classifier algorithm
Rule-based fairness checks for transparent rejections
Reduces error and decision rate differences
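Situation testing, the interpretable check the paper builds on, compares a decision subject against similar individuals from different demographic groups. A minimal sketch of the idea follows; the distance metric, neighborhood size `k`, and gap threshold `t` are placeholder choices for illustration, not the paper's configuration.

```python
def situation_test(x, nbrs_protected, nbrs_unprotected,
                   labels_protected, labels_unprotected, k=4, t=0.3):
    """Flag potential discrimination against instance x: compare the
    positive-decision rate among x's k nearest neighbors in the
    protected group with the rate among its k nearest neighbors in
    the unprotected group; a gap above t suggests the prediction for
    x may be unfair and should be rejected for human review.
    """
    def knn_positive_rate(neighbors, labels):
        # Rank neighbors by squared Euclidean distance to x and
        # average the decision labels of the k closest ones.
        order = sorted(range(len(neighbors)),
                       key=lambda i: sum((a - b) ** 2
                                         for a, b in zip(x, neighbors[i])))
        top = order[:k]
        return sum(labels[i] for i in top) / len(top)

    gap = (knn_positive_rate(nbrs_unprotected, labels_unprotected)
           - knn_positive_rate(nbrs_protected, labels_protected))
    return gap > t
```

Because the check is a direct comparison of similar individuals, a flagged rejection comes with a concrete, inspectable justification rather than an opaque score.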
Daphne Lenders
PostDoc in Responsible AI, Scuola Normale Superiore
Fair Machine Learning
Andrea Pugnana
University of Trento
selective classification, causality, learning to defer
Roberto Pellungrini
Research Fellow at Scuola Normale Superiore, Classe di Scienze, Pisa
Privacy, Data Science, Data Visualization, Transactional Data
T. Calders
Adrem Data Lab, University of Antwerp, Antwerp, Belgium; DigiTax, University of Antwerp, Antwerp, Belgium
D. Pedreschi
KDD Lab, University of Pisa, Pisa, Italy
F. Giannotti
KDD Lab, Scuola Normale Superiore, Pisa, Italy