A Burden Shared is a Burden Halved: A Fairness-Adjusted Approach to Classification

📅 2021-10-12
📈 Citations: 12
Influential: 0
🤖 AI Summary
To address fairness violations in classification arising from imbalanced error rates across protected groups, this paper proposes the Fairness-Adjusted Selective Inference (FASI) framework. FASI introduces false selection rate (FSR) control—a concept previously unexplored in fair classification—by transforming black-box model outputs into R-values, yielding provable finite-sample upper bounds on group-wise FSR and achieving statistical parity. The method combines selective inference, R-value construction, and post-hoc selection without modifying the underlying classifier. Experiments on synthetic and real-world datasets show that FASI substantially reduces inter-group error-rate disparities, with the empirical FSR remaining consistently below the prespecified threshold. FASI thus delivers rigorous statistical guarantees, computational efficiency, and strong fairness assurance.
📝 Abstract
We investigate the fairness issue in classification, where automated decisions are made for individuals from different protected groups. In high-consequence scenarios, decision errors can disproportionately affect certain protected groups, leading to unfair outcomes. To address this issue, we propose a fairness-adjusted selective inference (FASI) framework and develop data-driven algorithms that achieve statistical parity by controlling the false selection rate (FSR) among protected groups. Our FASI algorithm operates by converting the outputs of black-box classifiers into R-values, which are both intuitive and computationally efficient. These R-values serve as the basis for selection rules that are provably valid for FSR control in finite samples for protected groups, effectively mitigating the unfairness in group-wise error rates. We demonstrate the numerical performance of our approach using both simulated and real data.
Problem

Research questions and friction points this paper is trying to address.

Addressing fairness in classification across protected groups
Mitigating disproportionate decision errors in automated systems
Achieving statistical parity with controlled false selection rates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fairness-adjusted selective inference framework
Converts classifier outputs to R-values
Controls false selection rate among groups
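The R-value idea above can be sketched in code. This is a minimal, conformal-style illustration, not the paper's exact construction: for a candidate with classifier score s, the R-value estimates the fraction of wrongly selected cases among calibration points scoring at least s, and a candidate is selected when its R-value falls below a target level alpha. Applying the procedure separately within each protected group is what yields group-wise control. The function names `r_values` and `fasi_select` are hypothetical, and the `+1` correction is a common finite-sample-conservative device assumed here for illustration.

```python
import numpy as np

def r_values(cal_scores, cal_labels, test_scores, target_class=1):
    """Conformal-style sketch of R-values (not the paper's exact formula).

    For each test score s, estimate the proportion of false selections
    (calibration label != target_class) among calibration points with
    score >= s. The +1 terms make the estimate conservative in finite
    samples.
    """
    cal_scores = np.asarray(cal_scores, dtype=float)
    wrong = np.asarray(cal_labels) != target_class
    r = np.empty(len(test_scores))
    for i, s in enumerate(test_scores):
        selected = cal_scores >= s
        r[i] = (1 + wrong[selected].sum()) / (1 + selected.sum())
    return r

def fasi_select(r, alpha=0.1):
    """Select candidates whose R-value is at most alpha."""
    return np.flatnonzero(np.asarray(r) <= alpha)

# Toy usage: a well-separated calibration set for one protected group.
cal_scores = [0.9] * 10 + [0.1] * 10       # classifier scores
cal_labels = [1] * 10 + [0] * 10           # true classes
r = r_values(cal_scores, cal_labels, test_scores=[0.85, 0.05])
picked = fasi_select(r, alpha=0.1)          # only the confident case passes
```

To mirror the group-wise guarantee, one would compute R-values per protected group (each group using its own calibration subset) and apply the same alpha threshold to all groups.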
Bradley Rava
University of Sydney Business School
Wenguang Sun
Professor of Data Sciences and Operations, University of Southern California
Large-scale Multiple Testing · Decision Theory · High Dimensional Statistical Inference
Gareth M. James
Goizueta Business School, Emory University
Xin Tong
Faculty of Business and Economics, University of Hong Kong