Fair Decisions from Calibrated Scores: Achieving Optimal Classification While Satisfying Sufficiency

📅 2026-02-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of achieving optimal binary classification under the sufficiency fairness constraint—requiring equal positive predictive value (PPV) and false omission rate (FOR) across groups—while preserving prediction consistency and calibrated scores. The authors propose a randomized post-processing classification method that uses only group-calibrated scores and group identity to provide, for the first time, a precise geometric characterization and exact construction of an optimal randomized classifier satisfying sufficiency. By analyzing the geometry of the feasible PPV-FOR region, the method enforces sufficiency exactly while attaining the best classification performance achievable under the constraint. Moreover, when sufficiency conflicts with separation, the same construction yields a Pareto-optimal trade-off with minimal deviation from separation. Experiments on real calibrated scores demonstrate that the algorithm maintains predictive fairness while achieving near-optimal performance when balancing both fairness criteria.

📝 Abstract
Binary classification based on predicted probabilities (scores) is a fundamental task in supervised machine learning. While thresholding scores is Bayes-optimal in the unconstrained setting, using a single threshold generally violates statistical group fairness constraints. Under independence (statistical parity) and separation (equalized odds), such thresholding suffices when the scores already satisfy the corresponding criterion. However, this does not extend to sufficiency: even perfectly group-calibrated scores -- including true class probabilities -- violate predictive parity after thresholding. In this work, we present an exact solution for optimal binary (randomized) classification under sufficiency, assuming finite sets of group-calibrated scores. We provide a geometric characterization of the feasible pairs of positive predictive value (PPV) and false omission rate (FOR) achievable by such classifiers, and use it to derive a simple post-processing algorithm that attains the optimal classifier using only group-calibrated scores and group membership. Finally, since sufficiency and separation are generally incompatible, we identify the classifier that minimizes deviation from separation subject to sufficiency, and show that it can also be obtained by our algorithm, often achieving performance comparable to the optimum.
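The abstract's key observation—that thresholding even perfectly group-calibrated scores violates predictive parity—follows from the fact that, for calibrated scores, the PPV of a threshold classifier equals the average score above the threshold, E[S | S ≥ t, A = a], which generally differs across groups. A minimal simulation sketch of this effect (all score sets and probabilities below are hypothetical illustrations, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two groups, each with a finite set of group-calibrated
# scores, i.e. P(Y = 1 | score = s, group) = s for every score value s.
groups = {
    "A": {"scores": [0.2, 0.9], "probs": [0.5, 0.5]},
    "B": {"scores": [0.2, 0.6], "probs": [0.5, 0.5]},
}

threshold, n = 0.5, 200_000
ppv, fomr = {}, {}
for g, spec in groups.items():
    s = rng.choice(spec["scores"], size=n, p=spec["probs"])
    y = rng.random(n) < s        # draw labels so the scores are calibrated
    pred = s >= threshold        # single-threshold classifier
    ppv[g] = y[pred].mean()      # P(Y = 1 | prediction = 1)
    fomr[g] = y[~pred].mean()    # P(Y = 1 | prediction = 0), the FOR

for g in groups:
    print(f"group {g}: PPV = {ppv[g]:.3f}, FOR = {fomr[g]:.3f}")
```

Both groups' scores are calibrated and the FORs coincide, yet the PPVs come out near 0.9 and 0.6 respectively—predictive parity fails under a single shared threshold, which is why the paper turns to randomized post-processing over the feasible PPV-FOR region.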
Problem

Research questions and friction points this paper is trying to address.

fairness
sufficiency
calibrated scores
binary classification
predictive parity
Innovation

Methods, ideas, or system contributions that make the work stand out.

sufficiency
group calibration
fair classification
post-processing algorithm
predictive parity
Etam Benger
School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel
Katrina Ligett
Hebrew University