🤖 AI Summary
Binary classification models for resource allocation often suffer degraded predictive performance when global fairness constraints are enforced. This paper proposes "decision-centric fairness," a paradigm that localizes fairness enforcement to the decision-making region -- the range of score thresholds at which predictions directly influence decisions -- rather than the entire score distribution, thereby preserving discriminative power where fairness is not decision-relevant. Grounded in demographic parity, the method induces fairness only near the relevant thresholds and is evaluated through a semi-synthetic experimental framework with decision sensitivity analysis. Experiments comparing the approach against a globally constrained fairness baseline on multiple (semi-synthetic) datasets identify scenarios in which restricting fairness to the decision-critical region balances selection rates across demographic groups while better preserving the quality of the predicted scores, offering a more practical fairness-utility trade-off in real-world allocation settings.
📝 Abstract
Data-driven decision support tools play an increasingly central role in decision-making across various domains. In this work, we focus on binary classification models that predict positive-outcome scores used to decide on resource allocation, e.g., credit scores for granting loans or churn propensity scores for targeting customers with a retention campaign. Such models may exhibit discriminatory behavior toward specific demographic groups through their predicted scores, potentially leading to unfair resource allocation. We adopt demographic parity as the fairness metric, comparing the proportions of instances selected on the basis of their positive-outcome scores across groups. We propose a decision-centric fairness methodology that induces fairness only within the decision-making region -- the range of relevant decision thresholds on the score that may be used to decide on resource allocation -- as an alternative to a global fairness approach that seeks to enforce parity across the entire score distribution. By restricting the induction of fairness to the decision-making region, the proposed decision-centric approach avoids imposing overly restrictive constraints on the model, which may unnecessarily degrade the quality of the predicted scores. We empirically compare our approach to a global fairness approach on multiple (semi-synthetic) datasets to identify scenarios in which focusing on fairness where it truly matters, i.e., decision-centric fairness, proves beneficial.
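To make the demographic-parity notion concrete, the following is a minimal sketch (not the authors' implementation) of how one might measure the parity gap restricted to a decision-making region: for each candidate threshold in the region, compute per-group selection rates and take the largest between-group difference, then average over the region. The function names, the toy scores, and the threshold range are all illustrative assumptions.

```python
import numpy as np

def selection_rates(scores, groups, threshold):
    """Fraction of each group selected (score >= threshold)."""
    return {g: float(np.mean(scores[groups == g] >= threshold))
            for g in np.unique(groups)}

def parity_gap(scores, groups, thresholds):
    """Demographic-parity gap (max minus min group selection rate),
    averaged over a set of thresholds (the decision-making region)."""
    gaps = []
    for t in thresholds:
        rates = selection_rates(scores, groups, t)
        gaps.append(max(rates.values()) - min(rates.values()))
    return float(np.mean(gaps))

# Toy example with hypothetical scores and an assumed decision region:
scores = np.array([0.92, 0.75, 0.40, 0.88, 0.65, 0.30])
groups = np.array(["a", "a", "a", "b", "b", "b"])
region = [0.5, 0.6, 0.7]  # assumed range of relevant thresholds
print(parity_gap(scores, groups, region))  # ≈ 0.111
```

A global fairness approach would instead constrain this gap over all thresholds (equivalently, the entire score distribution); the decision-centric idea is to evaluate and enforce it only over `region`.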