Fair Domain Generalization: An Information-Theoretic View

📅 2025-07-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses Fair Domain Generalization (FairDG), the problem of jointly optimizing predictive utility (i.e., minimizing expected risk) and algorithmic fairness (i.e., mitigating bias correlated with sensitive attributes) on unseen target domains. The authors propose a theoretical framework based on mutual-information upper bounds, establishing the first unified generalization error bound covering both domain generalization and fairness in multi-class classification. Building on this, they design PAFDG, a method integrating mutual-information regularization, domain-adversarial training, and Pareto-aware multi-objective optimization to enable tunable trade-offs between utility and fairness. Extensive experiments on multi-source vision and language benchmarks demonstrate that PAFDG significantly reduces cross-domain fairness disparities while maintaining state-of-the-art classification accuracy, consistently outperforming existing domain generalization and fair-learning approaches.
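The mutual-information quantities that the paper's bounds control can be illustrated with a toy plug-in estimator over discrete samples. This is a minimal sketch, not the paper's estimator: it computes I(Z; A) = Σ p(z,a) log [p(z,a) / (p(z)p(a))] for a representation Z and sensitive attribute A, the quantity that mutual-information regularization drives toward zero.

```python
import math
from collections import Counter

def mutual_information(zs, attrs):
    """Plug-in estimate of I(Z; A) in nats for paired discrete samples."""
    n = len(zs)
    pz = Counter(zs)            # empirical marginal of Z
    pa = Counter(attrs)         # empirical marginal of A
    pza = Counter(zip(zs, attrs))  # empirical joint of (Z, A)
    return sum(
        (c / n) * math.log((c / n) / ((pz[z] / n) * (pa[a] / n)))
        for (z, a), c in pza.items()
    )

# Independent Z and A give I ~ 0; perfectly correlated give I = H(A) = log 2.
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # → 0.0
print(mutual_information([0, 1, 0, 1], [0, 1, 0, 1]))  # → 0.6931... (log 2)
```

In PAFDG itself the representations are continuous, so such a discrete estimator would not apply directly; the sketch only shows what "low mutual information with the sensitive attribute" means.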

📝 Abstract
Domain generalization (DG) and algorithmic fairness are two critical challenges in machine learning. However, most DG methods focus only on minimizing expected risk in the unseen target domain without considering algorithmic fairness. Conversely, fairness methods typically do not account for domain shifts, so the fairness achieved during training may not generalize to unseen test domains. In this work, we bridge these gaps by studying the problem of Fair Domain Generalization (FairDG), which aims to minimize both expected risk and fairness violations in unseen target domains. We derive novel mutual information-based upper bounds for expected risk and fairness violations in multi-class classification tasks with multi-group sensitive attributes. These bounds provide key insights for algorithm design from an information-theoretic perspective. Guided by these insights, we introduce PAFDG (Pareto-Optimal Fairness for Domain Generalization), a practical framework that solves the FairDG problem and models the utility-fairness trade-off through Pareto optimization. Experiments on real-world vision and language datasets show that PAFDG achieves superior utility-fairness trade-offs compared to existing methods.
Problem

Research questions and friction points this paper is trying to address.

Bridging domain generalization and algorithmic fairness gaps
Minimizing risk and fairness violations in unseen domains
Modeling utility-fairness trade-off via Pareto optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mutual information bounds for risk and fairness
PAFDG framework for Fair Domain Generalization
Pareto optimization for utility-fairness trade-off