🤖 AI Summary
When protected attributes (e.g., gender, race) are corrupted by label noise, AUC-based fairness metrics, in particular the inter-group AUC difference (ΔAUC), can degrade sharply. Existing methods assume clean protected attributes, which limits their robustness in practice.
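Concretely, ΔAUC is the gap between the AUCs computed separately on each protected group. A minimal, dependency-free sketch of the metric (function names and toy data are illustrative, not from the paper's code):

```python
def pairwise_auc(pos_scores, neg_scores):
    """AUC as P(score_pos > score_neg) over all pos/neg pairs; ties count 0.5."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

def delta_auc(scores, labels, groups):
    """|AUC(group 0) - AUC(group 1)| for a binary protected attribute."""
    per_group = []
    for g in (0, 1):
        pos = [s for s, y, a in zip(scores, labels, groups) if a == g and y == 1]
        neg = [s for s, y, a in zip(scores, labels, groups) if a == g and y == 0]
        per_group.append(pairwise_auc(pos, neg))
    return abs(per_group[0] - per_group[1])

# Toy data: group 0 is perfectly ranked (AUC 1.0), group 1 is not (AUC 0.75).
scores = [0.9, 0.8, 0.1, 0.2, 0.6, 0.4, 0.5, 0.3]
labels = [1, 1, 0, 0, 1, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(delta_auc(scores, labels, groups))  # 0.25
```

A fair model drives this gap toward zero while keeping each group's AUC high.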
Method: This paper proposes the first theoretically grounded framework for robust AUC fairness optimization. Departing from the clean-attribute assumption of prior work, it brings distributionally robust optimization (DRO) into AUC fairness learning and derives a provable upper bound on ΔAUC under attribute noise. The method combines AUC gradient approximation, a noise-robust loss design, and multi-task fairness constraints to keep the model fair despite noisy protected attributes.
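To see why a bound under attribute noise matters, one can brute-force the worst ΔAUC reachable when a few group labels are flipped. The paper bounds this worst case analytically; the sketch below (illustrative names and toy data, not the authors' code) simply enumerates corruptions to show how quickly the measured gap can move:

```python
import itertools

def group_auc(scores, labels, groups, g):
    """Pairwise AUC restricted to group g; ties count 0.5."""
    pos = [s for s, y, a in zip(scores, labels, groups) if a == g and y == 1]
    neg = [s for s, y, a in zip(scores, labels, groups) if a == g and y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def worst_case_delta_auc(scores, labels, groups, max_flips):
    """Largest ΔAUC over all corruptions flipping at most max_flips
    binary group labels (corruptions that empty a group are skipped)."""
    worst = 0.0
    for k in range(max_flips + 1):
        for idx in itertools.combinations(range(len(groups)), k):
            noisy = list(groups)
            for i in idx:
                noisy[i] ^= 1
            try:
                gap = abs(group_auc(scores, labels, noisy, 0)
                          - group_auc(scores, labels, noisy, 1))
            except ZeroDivisionError:
                continue  # a group lost all positives or all negatives
            worst = max(worst, gap)
    return worst

scores = [0.9, 0.8, 0.1, 0.2, 0.6, 0.4, 0.5, 0.3]
labels = [1, 1, 0, 0, 1, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(worst_case_delta_auc(scores, labels, groups, 0))  # clean labels: 0.25
print(worst_case_delta_auc(scores, labels, groups, 1))  # one flip doubles the gap: 0.5
```

Even a single flipped attribute can double the apparent fairness gap on this toy data, which is the failure mode a DRO-based guarantee is designed to control.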
Results: Extensive experiments on tabular and image datasets show that our approach significantly outperforms state-of-the-art methods, reducing average ΔAUC by 37% while preserving baseline AUC, confirming both the fairness improvement and the predictive utility of the model.
📝 Abstract
The Area Under the ROC Curve (AUC) is a key metric for classification, especially under class imbalance, and there is growing research interest in optimizing AUC rather than accuracy in applications such as medical image analysis and deepfake detection. As a result, fairness in AUC optimization is becoming crucial, since biases can harm protected groups. While various fairness mitigation techniques exist, fairness considerations in AUC optimization remain in their early stages, with most research improving AUC fairness under the assumption of clean protected groups. These studies overlook the impact of noisy protected groups, leading to fairness violations in practice. To address this, we propose the first robust AUC fairness approach under noisy protected groups with theoretical fairness guarantees, using distributionally robust optimization. Extensive experiments on tabular and image datasets show that our method outperforms state-of-the-art approaches in preserving AUC fairness. The code is available at https://github.com/Purdue-M2/AUC_Fairness_with_Noisy_Groups.
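At the core of a DRO formulation is an inner maximization: rather than trusting the observed group proportions, the objective is evaluated under the worst distribution inside an uncertainty ball around them. The toy sketch below uses a total-variation ball over group weights; the ball choice, function names, and data are simplifying assumptions for illustration, not the paper's exact formulation:

```python
def dro_worst_case(losses, nominal, radius):
    """Worst-case expected loss over group distributions within
    total-variation distance `radius` of `nominal`: greedily move up to
    `radius` of probability mass from the lowest-loss groups onto the
    highest-loss group (optimal for this linear program)."""
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    q = list(nominal)
    top = order[-1]            # highest-loss group absorbs the mass
    budget = radius
    for i in order[:-1]:       # drain lowest-loss groups first
        take = min(q[i], budget)
        q[i] -= take
        q[top] += take
        budget -= take
        if budget == 0:
            break
    return sum(w * l for w, l in zip(q, losses))

# Two groups with unequal losses and equal nominal weights:
print(dro_worst_case([1.0, 3.0], [0.5, 0.5], 0.0))  # nominal expectation: 2.0
print(dro_worst_case([1.0, 3.0], [0.5, 0.5], 0.2))  # adversarial reweighting: ~2.4
```

Minimizing this worst-case value, instead of the nominal expectation, is what gives DRO-style methods their robustness to uncertainty in the observed group information.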