Intersectional Divergence: Measuring Fairness in Regression

📅 2025-05-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenge of fairness evaluation in regression tasks under intersectional settings—where multiple protected attributes jointly define subpopulations. We propose Intersectional Divergence (ID), the first regression-specific intersectional fairness metric, and design IDLoss, a differentiable, optimizable loss function. Methodologically, ID integrates target-range-weighted error modeling, fine-grained intersectional subgroup partitioning, and end-to-end joint optimization to jointly balance overall mean error and prediction bias within critical output intervals. Unlike conventional fairness paradigms that focus on single attributes and mean-level disparities, ID explicitly captures latent intersectional biases. Empirical results across multiple real-world regression datasets demonstrate that models trained with IDLoss maintain high predictive accuracy while significantly improving fairness—not only across individual protected attributes but also across all intersectional subgroups—thereby enhancing both fairness robustness and generalizability.
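The summary above mentions fine-grained intersectional subgroup partitioning and target-range-weighted error modeling. The listing does not include the paper's code or formulas, so the sketch below is only a hypothetical illustration of those two ingredients: it enumerates subgroups from the joint values of several protected attributes and reports the largest gap between a subgroup's relevance-weighted mean error and the overall mean error. The function names, the weighting scheme, and the max-gap aggregation are assumptions for illustration, not the paper's actual ID definition.

```python
import numpy as np

def intersectional_groups(attrs):
    """Map each row's combination of protected attributes to a group id.

    attrs: (n, k) array, one column per protected attribute.
    Returns the distinct combinations and a per-row group index.
    """
    combos, ids = np.unique(attrs, axis=0, return_inverse=True)
    return combos, ids

def divergence_score(y_true, y_pred, group_ids, relevance=None):
    """Illustrative divergence-style fairness score (NOT the paper's ID):
    the largest absolute gap between any intersectional subgroup's
    relevance-weighted mean error and the overall mean error.

    relevance: optional per-instance weights emphasizing the target
    ranges most relevant to users; defaults to uniform weights.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    if relevance is None:
        relevance = np.ones_like(y_true)
    err = relevance * np.abs(y_true - y_pred)
    overall = err.mean()
    gaps = [abs(err[group_ids == g].mean() - overall)
            for g in np.unique(group_ids)]
    return max(gaps)
```

A score of zero means every intersectional subgroup incurs the same weighted error as the population overall; larger values flag subgroups whose errors diverge from the norm, which is the kind of latent intersectional bias the summary says ID is designed to expose.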

📝 Abstract
Research on fairness in machine learning has been mainly framed in the context of classification tasks, leaving critical gaps in regression. In this paper, we propose a seminal approach to measure intersectional fairness in regression tasks, going beyond the focus on single protected attributes from existing work to consider combinations of all protected attributes. Furthermore, we contend that it is insufficient to measure the average error of groups without regard for imbalanced domain preferences. To this end, we propose Intersectional Divergence (ID) as the first fairness measure for regression tasks that 1) describes fair model behavior across multiple protected attributes and 2) differentiates the impact of predictions in target ranges most relevant to users. We extend our proposal demonstrating how ID can be adapted into a loss function, IDLoss, and used in optimization problems. Through an extensive experimental evaluation, we demonstrate how ID allows unique insights into model behavior and fairness, and how incorporating IDLoss into optimization can considerably improve single-attribute and intersectional model fairness while maintaining a competitive balance in predictive performance.
Problem

Research questions and friction points this paper is trying to address.

Measure intersectional fairness in regression tasks
Consider combinations of all protected attributes
Differentiate prediction impact in user-relevant target ranges
Innovation

Methods, ideas, or system contributions that make the work stand out.

Measures intersectional fairness in regression tasks
Introduces Intersectional Divergence (ID) as fairness metric
Adapts ID into loss function for optimization
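To make the last bullet concrete: a fairness measure of this kind can be folded into a training objective as a penalty term alongside the usual regression loss. The paper's IDLoss is differentiable and optimized end-to-end; the numpy sketch below only mirrors the idea with an assumed form (MSE plus the variance of per-subgroup mean squared errors, weighted by a hypothetical `lam` trade-off parameter) and should not be read as the authors' formulation.

```python
import numpy as np

def id_loss(y_true, y_pred, group_ids, lam=1.0):
    """Illustrative IDLoss-style objective (assumed form, not the paper's):
    mean squared error plus lam times the variance of per-subgroup
    mean squared errors. The penalty is zero when every intersectional
    subgroup is predicted equally well, so minimizing it trades off
    overall accuracy against subgroup parity.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    sq = (y_true - y_pred) ** 2
    mse = sq.mean()
    group_means = np.array([sq[group_ids == g].mean()
                            for g in np.unique(group_ids)])
    return mse + lam * group_means.var()
```

Because every term is a smooth function of `y_pred`, the same construction carries over directly to an autodiff framework, which is what makes a penalty like this usable inside gradient-based training.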
Joe Germino
Lucy Family Institute for Data & Society, University of Notre Dame, Notre Dame, IN, USA
Nuno Moniz
Associate Research Professor at Lucy Family Institute for Data & Society, University of Notre Dame
Imbalanced Learning · Responsible AI · Data Privacy
Nitesh V. Chawla
Lucy Family Institute for Data & Society, University of Notre Dame, Notre Dame, IN, USA