Doubly-Regressing Approach for Subgroup Fairness

📅 2025-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multi-sensitive-attribute settings (e.g., gender, race, age) induce a combinatorial explosion of subgroups, leading to prohibitive computational overhead and severe data sparsity, which critically hinder subgroup fairness evaluation and optimization. To address this, the paper proposes the Doubly Regressing Adversarial learning for subgroup Fairness (DRAF) framework. Its key contributions are: (1) introducing *subgroup-subset fairness*, which relaxes the requirement of enumerating all exponentially many subgroups; (2) adopting the *supremum Integral Probability Metric* (supIPM) as a distributional fairness measure and deriving a differentiable upper-bound surrogate loss, enabling efficient optimization; and (3) jointly modeling the primary task and the marginal distributions of the sensitive attributes via doubly regressing, thereby improving generalization for underrepresented subgroups. Experiments on multiple benchmark datasets show that the method outperforms state-of-the-art baselines in multi-sensitive-attribute and small-subgroup regimes.

📝 Abstract
Algorithmic fairness is a socially crucial topic in real-world applications of AI. Among many notions of fairness, subgroup fairness is widely studied when multiple sensitive attributes (e.g., gender, race, age) are present. However, as the number of sensitive attributes grows, the number of subgroups increases exponentially, creating heavy computational burdens and a data sparsity problem (subgroups that are too small). In this paper, we develop a novel learning algorithm for subgroup fairness which resolves these issues by focusing on subgroups with sufficient sample sizes as well as marginal fairness (fairness for each sensitive attribute). To this end, we formalize a notion of subgroup-subset fairness and introduce a corresponding distributional fairness measure called the supremum Integral Probability Metric (supIPM). Building on this formulation, we propose the Doubly Regressing Adversarial learning for subgroup Fairness (DRAF) algorithm, which reduces a surrogate fairness gap for supIPM with much less computation than directly reducing supIPM. Theoretically, we prove that the proposed surrogate fairness gap is an upper bound of supIPM. Empirically, we show that the DRAF algorithm outperforms baseline methods on benchmark datasets, particularly when the number of sensitive attributes is large and many subgroups are very small.
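To make the supIPM idea concrete, here is a minimal illustrative sketch (not the paper's DRAF algorithm or discriminator class): an empirical supremum, over subgroups with sufficient sample size, of a simple mean-difference IPM between each subgroup's prediction distribution and the overall one. The function name `empirical_supipm` and the `min_size` threshold are assumptions for illustration only.

```python
import numpy as np

def empirical_supipm(scores, groups, min_size=30):
    """Illustrative empirical supIPM-style fairness gap.

    scores: 1-D array of model predictions.
    groups: 2-D array, one row per sample, columns = sensitive attributes.
    min_size: only subgroups with at least this many samples are considered
              (a crude stand-in for the paper's 'sufficient sample size' idea).

    Uses |E[score | subgroup] - E[score]|, i.e., the IPM induced by the
    identity function, rather than a learned discriminator class.
    """
    overall = scores.mean()
    gaps = []
    uniq, counts = np.unique(groups, axis=0, return_counts=True)
    for g, c in zip(uniq, counts):
        if c < min_size:
            continue  # skip sparse subgroups
        mask = (groups == g).all(axis=1)
        gaps.append(abs(scores[mask].mean() - overall))
    return max(gaps) if gaps else 0.0
```

For example, with two equally sized subgroups whose mean predictions are 0.85 and 0.15, the gap to the overall mean of 0.5 is 0.35 for each, so the supremum is 0.35; raising `min_size` above the subgroup sizes excludes all subgroups and the gap collapses to 0.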
Problem

Research questions and friction points this paper is trying to address.

Addresses subgroup fairness with multiple sensitive attributes
Solves computational burden from exponentially growing subgroups
Mitigates data sparsity in small-sized subgroup populations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Doubly regressing adversarial learning for subgroup fairness
Focusing on sufficient sample sizes and marginal fairness
Reducing surrogate fairness gap with less computation
Kyungseon Lee
Department of Statistics, Seoul National University
Kunwoong Kim
Seoul National University
Jihu Lee
Department of Statistics, Seoul National University
Dongyoon Yang
AI Advanced Technology, SK hynix
Yongdai Kim
Seoul National University
statistics · machine learning