🤖 AI Summary
To address fairness violations in classification arising from imbalanced error rates across protected groups, this paper proposes the Fairness-Adjusted Selective Inference (FASI) framework. FASI brings false selection rate (FSR) control to fair classification by converting black-box classifier outputs into R-values, yielding provable finite-sample upper bounds on group-wise FSR and thereby achieving statistical parity. The method combines selective inference, R-value construction, and post-hoc thresholding, and requires no modification to the underlying classifier. Experiments on synthetic and real-world datasets show that FASI substantially reduces inter-group error-rate disparities while keeping the realized FSR below the prespecified level. FASI thus delivers rigorous statistical guarantees, computational efficiency, and strong fairness assurance.
📝 Abstract
We investigate the fairness issue in classification, where automated decisions are made for individuals from different protected groups. In high-consequence scenarios, decision errors can disproportionately affect certain protected groups, leading to unfair outcomes. To address this issue, we propose a fairness-adjusted selective inference (FASI) framework and develop data-driven algorithms that achieve statistical parity by controlling the false selection rate (FSR) among protected groups. Our FASI algorithm operates by converting the outputs of black-box classifiers into R-values, which are both intuitive and computationally efficient. These R-values serve as the basis for selection rules that are provably valid for FSR control in finite samples for protected groups, effectively mitigating the unfairness in group-wise error rates. We demonstrate the numerical performance of our approach using both simulated and real data.
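To make the R-value idea concrete, here is a minimal sketch of a conformal-style selection rule. It is an illustration in the spirit of the abstract, not the paper's exact construction: for each protected group, a test point's R-value is a conservative calibration-based estimate of the false selection proportion that would result from selecting every point scoring at least as high, and points with R-value at or below the target level α are selected. The function names and the `+1` smoothing are assumptions for this sketch.

```python
import numpy as np

def r_values(cal_scores, cal_labels, test_scores):
    """Illustrative R-values for one protected group.

    cal_scores : classifier confidence scores for calibration samples
    cal_labels : 1 if the confident decision was correct, 0 if it was an error
    test_scores: classifier confidence scores for test samples
    Returns one R-value per test sample: a conservative estimate of the
    false selection proportion if everyone scoring at least as high
    were selected.
    """
    cal_scores = np.asarray(cal_scores, dtype=float)
    cal_labels = np.asarray(cal_labels, dtype=int)
    test_scores = np.asarray(test_scores, dtype=float)
    r = np.empty_like(test_scores)
    for i, s in enumerate(test_scores):
        at_or_above = cal_scores >= s
        # The +1 terms make the estimate conservative in finite samples.
        false_sel = 1 + np.sum(at_or_above & (cal_labels == 0))
        total_sel = 1 + np.sum(at_or_above)
        r[i] = false_sel / total_sel
    return r

def fasi_select(test_scores_by_group, cal_by_group, alpha=0.1):
    """Within each group, select test indices whose R-value <= alpha."""
    selections = {}
    for g, test_scores in test_scores_by_group.items():
        cal_scores, cal_labels = cal_by_group[g]
        r = r_values(cal_scores, cal_labels, test_scores)
        selections[g] = np.where(r <= alpha)[0]
    return selections

# Toy usage: one group whose high-scoring calibration points are all correct.
cal_scores = np.concatenate([np.linspace(0.8, 0.99, 20), np.linspace(0.0, 0.3, 5)])
cal_labels = np.array([1] * 20 + [0] * 5)
sel = fasi_select({"A": [0.845, 0.1]}, {"A": (cal_scores, cal_labels)}, alpha=0.1)
```

Because each group is thresholded against its own calibration data, a score that clears the bar in one group may fail it in another, which is exactly how group-wise error rates are equalized.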