🤖 AI Summary
This paper addresses the challenge of fairness evaluation in regression tasks under intersectional settings, where multiple protected attributes jointly define subpopulations. The authors propose Intersectional Divergence (ID), the first regression-specific intersectional fairness metric, and design IDLoss, a differentiable loss function suitable for optimization. Methodologically, ID integrates target-range-weighted error modeling, fine-grained intersectional subgroup partitioning, and end-to-end joint optimization to balance overall mean error against prediction bias within critical output intervals. Unlike conventional fairness paradigms that focus on single attributes and mean-level disparities, ID explicitly captures latent intersectional biases. Empirical results across multiple real-world regression datasets demonstrate that models trained with IDLoss maintain high predictive accuracy while significantly improving fairness, not only across individual protected attributes but also across all intersectional subgroups, thereby enhancing both fairness robustness and generalizability.
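To make the two core ingredients concrete, the sketch below illustrates (1) partitioning samples by every combination of protected-attribute values and (2) weighting errors by a target-range relevance function. This is a hypothetical illustration under simple assumptions (weighted MAE, a toy step-function relevance, max subgroup-vs-overall gap), not the paper's actual definition of ID; the function names `relevance` and `intersectional_gap` are invented for this example.

```python
from itertools import product

import numpy as np

def relevance(y, threshold=0.8):
    """Toy relevance weighting that emphasizes high target values.
    (Illustrative only: the paper's weighting is domain-specific.)"""
    return np.where(np.asarray(y) >= threshold, 1.0, 0.2)

def intersectional_gap(y_true, y_pred, attrs):
    """Sketch of an intersectional, relevance-weighted error gap.

    attrs: dict mapping protected-attribute name -> per-sample values.
    Partitions samples by every combination of protected-attribute
    values and returns the largest absolute gap between a subgroup's
    relevance-weighted MAE and the overall relevance-weighted MAE.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    w = relevance(y_true)
    err = np.abs(y_true - y_pred)
    overall = np.average(err, weights=w)

    cols = [np.asarray(a) for a in attrs.values()]
    gaps = []
    # Enumerate all intersectional subgroups (cartesian product of values).
    for combo in product(*[np.unique(c) for c in cols]):
        mask = np.all([c == v for c, v in zip(cols, combo)], axis=0)
        if not mask.any():
            continue  # skip empty intersections
        sub = np.average(err[mask], weights=w[mask])
        gaps.append(abs(sub - overall))
    return max(gaps)
```

A differentiable loss in the spirit of IDLoss would replace the hard subgroup masks and the non-smooth `max` with smooth surrogates so the gap term can be minimized jointly with the main regression loss.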
📝 Abstract
Research on fairness in machine learning has mainly been framed in the context of classification tasks, leaving critical gaps in regression. In this paper, we propose a novel approach to measuring intersectional fairness in regression tasks, going beyond existing work's focus on single protected attributes to consider combinations of all protected attributes. Furthermore, we contend that it is insufficient to measure the average error of groups without regard for imbalanced domain preferences. To this end, we propose Intersectional Divergence (ID) as the first fairness measure for regression tasks that 1) describes fair model behavior across multiple protected attributes and 2) differentiates the impact of predictions in the target ranges most relevant to users. We extend our proposal by demonstrating how ID can be adapted into a loss function, IDLoss, and used in optimization problems. Through an extensive experimental evaluation, we show how ID offers unique insights into model behavior and fairness, and how incorporating IDLoss into optimization can considerably improve single-attribute and intersectional model fairness while maintaining a competitive balance in predictive performance.