🤖 AI Summary
Global fairness constraints in high-stakes decision-making often degrade predictive performance and diverge from practical concerns by imposing uniform fairness requirements across the entire score range.
Method: We propose a *partial fairness modeling framework* that enforces fairness constraints exclusively within the most contentious intermediate score interval, where decisions are most ambiguous and socially sensitive, while preserving model flexibility elsewhere. We introduce a novel *local fairness definition* based on difference-of-convex (DC) functions and develop an *inexact DC algorithm (IDCA)* with provable convergence guarantees and an iteration-complexity analysis for finding a nearly KKT point. Additionally, we design two statistically rigorous metrics to quantify local fairness.
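The summary does not spell out the IDCA, but the classic DC-algorithm idea it builds on is standard: to minimize f(x) = g(x) - h(x) with g, h convex, linearize h at the current iterate and solve the resulting convex subproblem. A minimal illustrative sketch (the function name `dca` and the toy choice g(x) = x², h(x) = |x| are ours, not the paper's; the paper additionally handles DC *constraints* and inexact subproblem solves):

```python
def dca(x0, iters=50):
    """Toy DC algorithm for f(x) = g(x) - h(x) with g(x) = x**2, h(x) = |x|.

    Each step: take a subgradient s of h at x, then solve the convex
    subproblem argmin_x g(x) - s*x, which here is x = s/2 in closed form.
    Minimizers of f are x = +/- 0.5 (f(+/-0.5) = -0.25).
    """
    x = x0
    for _ in range(iters):
        s = 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)  # subgradient of |x|
        x = s / 2.0  # closed-form solution of the linearized subproblem
    # Note: starting exactly at x = 0 stalls at the critical point 0,
    # a known behavior of DC algorithms (they find critical/KKT points).
    return x

print(dca(0.3))   # -> 0.5
print(dca(-2.0))  # -> -0.5
```

Each iteration decreases f, and the method converges to a critical point, which motivates the nearly-KKT-point complexity analysis mentioned above.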
Results: Experiments on multiple real-world datasets demonstrate that our approach significantly improves predictive performance (e.g., a +3.2% average AUC gain) over global fairness methods while strictly satisfying fairness constraints within the target interval, achieving a Pareto improvement in both accuracy and fairness.
📝 Abstract
Fairness in machine learning has become a critical concern, particularly in high-stakes applications. Existing approaches often focus on achieving full fairness across all score ranges generated by predictive models, ensuring fairness in both high- and low-scoring populations. However, this stringent requirement can compromise predictive performance and may not align with the practical fairness concerns of stakeholders. In this work, we propose a novel framework for building partially fair machine learning models, which enforce fairness within a specific score range of interest, such as the middle range where decisions are most contested, while maintaining flexibility in other regions. We introduce two statistical metrics to rigorously evaluate partial fairness within a given score range, such as the top 20%-40% of scores. To achieve partial fairness, we propose an in-processing method by formulating the model training problem as constrained optimization with difference-of-convex constraints, which can be solved by an inexact difference-of-convex algorithm (IDCA). We provide the complexity analysis of IDCA for finding a nearly KKT point. Through numerical experiments on real-world datasets, we demonstrate that our framework achieves high predictive performance while enforcing partial fairness where it matters most.
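The abstract does not define its two partial-fairness metrics, but one plausible instance of "fairness within a given score range" is a demographic-parity-style gap restricted to a quantile band: compare, across groups, the rate at which samples land in the contested band (e.g., the top 20%-40% of scores). A sketch under that assumption (the function name `local_rate_gap` and the specific gap definition are illustrative, not the paper's metrics):

```python
import numpy as np

def local_rate_gap(scores, groups, q_lo, q_hi):
    """Gap across groups in the rate of falling into the [q_lo, q_hi)
    quantile band of the score distribution.

    E.g., q_lo=0.6, q_hi=0.8 targets the top 20%-40% of scores. A gap of 0
    means every group lands in the contested band at the same rate; larger
    values indicate the band is demographically skewed.
    """
    lo, hi = np.quantile(scores, [q_lo, q_hi])
    in_band = (scores >= lo) & (scores < hi)
    rates = [in_band[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)
```

Note this evaluates fairness only inside the chosen band, leaving behavior outside it unconstrained, which mirrors the partial-fairness goal described above.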