🤖 AI Summary
This paper addresses violations of equalized odds that arise from the joint influence of multiple sensitive attributes (e.g., race and gender). The authors propose FairICP, an end-to-end fairness-aware learning framework. Unlike existing methods, which model fairness with respect to a single sensitive attribute, FairICP introduces Inverse Conditional Permutation (ICP), a mechanism with theoretical guarantees for enforcing equalized odds under multidimensional sensitive attributes. By combining adversarial learning, a gradient penalty, and conditional distribution reconstruction, FairICP supports expressive and flexible fairness constraints. Experiments on real-world datasets (Adult, COMPAS) and synthetic benchmarks show that FairICP markedly improves multi-attribute fairness, reducing inter-group true positive rate and false positive rate disparities by 32%–58%, while maintaining predictive accuracy on par with state-of-the-art baselines.
📝 Abstract
Equalized odds, as a popular notion of algorithmic fairness, aims to ensure that sensitive variables, such as race and gender, do not unfairly influence the algorithm's prediction when conditioning on the true outcome. Despite rapid advancements, current research primarily focuses on equalized odds violations caused by a single sensitive attribute, leaving the challenge of simultaneously accounting for multiple attributes largely unaddressed. We bridge this gap by introducing an in-processing fairness-aware learning approach, FairICP, which integrates adversarial learning with a novel inverse conditional permutation scheme. FairICP offers a theoretically justified, flexible, and efficient scheme to promote equalized odds under fairness conditions described by complex and multidimensional sensitive attributes. The efficacy and adaptability of our method are demonstrated through both simulation studies and empirical analyses of real-world datasets.
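The core idea of conditioning on the true outcome can be illustrated with a conditional permutation: if a copy of the sensitive attribute is shuffled only within groups that share the same label, it preserves the conditional distribution of the attribute given the outcome, which is exactly the reference distribution equalized odds compares against. The sketch below is a simplified, hypothetical stand-in for the paper's inverse conditional permutation scheme (the function name and interface are illustrative, not the authors' implementation):

```python
import numpy as np

def conditional_permutation(a, y, rng=None):
    """Permute sensitive attributes `a` within groups sharing the same label `y`.

    Hypothetical sketch: the permuted copy a_tilde has the same conditional
    distribution A | Y as the original, so a predictor whose outputs cannot
    distinguish (y_hat, a) from (y_hat, a_tilde), e.g. via an adversary,
    is consistent with equalized odds.
    """
    rng = np.random.default_rng(rng)
    a_tilde = np.empty_like(a)
    for label in np.unique(y):
        idx = np.flatnonzero(y == label)       # positions with this label
        a_tilde[idx] = a[rng.permutation(idx)]  # shuffle only within the group
    return a_tilde
```

Within each label group the permuted copy is a reshuffling of the original values, so all per-label marginals of the sensitive attribute are preserved; FairICP's ICP extends this idea to complex, multidimensional sensitive attributes.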