🤖 AI Summary
This paper addresses the underrepresentation of intersectional fairness in machine learning, i.e., biases arising from combinations of sensitive attributes such as race, gender, and age. The authors propose the first framework that explicitly models joint bias across multiple sensitive attributes as an adaptive multi-objective optimization problem. Their approach introduces a differentiable intersectional fairness metric and combines Pareto cone projection, gradient-weighted objective balancing, and an exploration–exploitation switching mechanism to ensure convergence to Pareto-optimal solutions while dynamically trading off fairness and accuracy. The method is model-agnostic by design and achieves substantial reductions in intersectional fairness violations, averaging 32.7% across four real-world datasets, without compromising predictive performance. Empirical results demonstrate its effectiveness, robustness to distributional shifts, and scalability across diverse model architectures and fairness definitions.
📝 Abstract
Ensuring fairness in machine learning models is critical, especially when biases compound across intersecting protected attributes like race, gender, and age. While existing methods address fairness for single attributes, they fail to capture the nuanced, multiplicative biases faced by intersectional subgroups. We introduce Adaptive Pareto Front Explorer (APFEx), the first framework to explicitly model intersectional fairness as a joint optimization problem over the Cartesian product of sensitive attributes. APFEx combines three key innovations: (1) an adaptive multi-objective optimizer that dynamically switches between Pareto cone projection, gradient weighting, and exploration strategies to navigate fairness-accuracy trade-offs, (2) differentiable intersectional fairness metrics enabling gradient-based optimization of non-smooth subgroup disparities, and (3) theoretical guarantees of convergence to Pareto-optimal solutions. Experiments on four real-world datasets demonstrate APFEx's superiority, reducing fairness violations while maintaining competitive accuracy. Our work bridges a critical gap in fair ML, providing a scalable, model-agnostic solution for intersectional fairness.
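To make the core idea concrete, here is a minimal sketch of what a differentiable intersectional fairness surrogate over the Cartesian product of sensitive attributes could look like. This is an illustrative assumption, not the paper's actual metric: it softens the non-smooth worst-case demographic-parity gap across intersectional subgroups with a log-sum-exp (soft maximum), which an autograd framework could then optimize jointly with accuracy. The function name, the temperature parameter `tau`, and the toy data are all hypothetical.

```python
import numpy as np

def soft_intersectional_gap(scores, groups, tau=10.0):
    """Smooth surrogate for the worst-case demographic-parity gap
    across intersectional subgroups (hypothetical sketch, not APFEx's
    exact metric).

    scores : model scores in [0, 1], one per sample
    groups : subgroup id per sample, formed from the Cartesian
             product of sensitive attributes
    tau    : temperature; larger tau approaches the hard max
    """
    overall = scores.mean()
    # Per-subgroup disparity: |mean score in subgroup - overall mean|
    gaps = np.array([abs(scores[groups == g].mean() - overall)
                     for g in np.unique(groups)])
    # Log-sum-exp soft maximum: a differentiable stand-in for max(gaps)
    return np.log(np.exp(tau * gaps).sum()) / tau

# Intersectional subgroup ids from two binary attributes
# (e.g., race x gender -> 4 subgroups)
race = np.array([0, 0, 1, 1, 0, 1, 0, 1])
gender = np.array([0, 1, 0, 1, 1, 0, 0, 1])
groups = race * 2 + gender
scores = np.array([0.9, 0.2, 0.4, 0.7, 0.3, 0.5, 0.8, 0.6])
penalty = soft_intersectional_gap(scores, groups)
```

The soft maximum upper-bounds the true worst-case gap and converges to it as `tau` grows, so minimizing this penalty pushes down the largest subgroup disparity while remaining smooth enough for gradient-based training.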