FairDD: Fair Dataset Distillation via Synchronized Matching

📅 2024-11-29
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This paper addresses the exacerbation of bias—particularly with respect to protected attributes (e.g., gender, race)—in dataset distillation for image classification. We propose the first fairness-aware distillation framework, whose core innovation is an attribute-aware grouped synchronous matching mechanism: during distillation, both original and synthetic data are partitioned and aligned by protected attributes, enabling joint optimization of distributional and gradient matching objectives to prevent synthetic samples from collapsing toward majority groups. The method requires no modification to existing distillation architectures and is plug-and-play. Extensive evaluation across multiple benchmark datasets and state-of-the-art distillation methods demonstrates that our approach significantly improves fairness—reducing the equal opportunity difference by over 40% on average—while preserving or even improving classification accuracy. To the best of our knowledge, this is the first work to achieve synergistic optimization of fairness and accuracy in dataset distillation.
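The grouped synchronous matching idea can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: the function names are hypothetical, and the mean-feature distance stands in for whatever matching objective D(·,·) the host distillation method uses. The key contrast is that vanilla distribution matching aligns the synthetic set to the whole real distribution, while the FairDD-style variant aligns it to every protected-attribute (PA) group simultaneously.

```python
import numpy as np

def vanilla_dm_loss(real_feats, syn_feats):
    # Vanilla distribution matching: align the synthetic mean to the
    # mean of the entire real set, which is dominated by majority groups.
    return float(np.sum((real_feats.mean(0) - syn_feats.mean(0)) ** 2))

def fairdd_dm_loss(real_feats, pa_labels, syn_feats):
    # Synchronized matching (sketch): align the synthetic mean to EVERY
    # PA group's mean at once, so no single group dominates the objective
    # and the synthetic set cannot collapse onto the majority group.
    loss = 0.0
    syn_mean = syn_feats.mean(0)
    for a in np.unique(pa_labels):
        group_mean = real_feats[pa_labels == a].mean(0)
        loss += float(np.sum((group_mean - syn_mean) ** 2))
    return loss
```

With a single PA group the synchronized loss reduces to the vanilla loss, which matches the claim that FairDD wraps existing matching-based methods without changing their architecture.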

📝 Abstract
Condensing large datasets into smaller synthetic counterparts has demonstrated its promise for image classification. However, previous research has overlooked a crucial concern in image recognition: ensuring that models trained on condensed datasets are unbiased towards protected attributes (PA), such as gender and race. Our investigation reveals that dataset distillation (DD) fails to alleviate the unfairness towards minority groups within original datasets. Moreover, this bias typically worsens in the condensed datasets due to their smaller size. To bridge the research gap, we propose a novel fair dataset distillation (FDD) framework, namely FairDD, which can be seamlessly applied to diverse matching-based DD approaches, requiring no modifications to their original architectures. The key innovation of FairDD lies in synchronously matching synthetic datasets to PA-wise groups of original datasets, rather than indiscriminate alignment to the whole distributions in vanilla DDs, dominated by majority groups. This synchronized matching allows synthetic datasets to avoid collapsing into majority groups and bootstrap their balanced generation to all PA groups. Consequently, FairDD could effectively regularize vanilla DDs to favor biased generation toward minority groups while maintaining the accuracy of target attributes. Theoretical analyses and extensive experimental evaluations demonstrate that FairDD significantly improves fairness compared to vanilla DD methods, without sacrificing classification accuracy. Its consistent superiority across diverse DDs, spanning Distribution and Gradient Matching, establishes it as a versatile FDD approach.
Problem

Research questions and friction points this paper is trying to address.

Addressing bias amplification in dataset distillation towards minority groups
Ensuring synthetic datasets maintain fairness across protected attributes
Synchronizing dataset matching to balance representation across demographic groups
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synchronously matches synthetic datasets to protected attribute groups
Avoids collapsing into majority groups during distillation
Maintains accuracy while improving fairness across datasets
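Since the abstract states the framework spans both Distribution and Gradient Matching, the same per-group idea can be sketched for a gradient-matching objective. This is an illustrative assumption, not the paper's code: a linear model with an analytic mean-squared-error gradient stands in for the network whose gradients are matched, and `fairdd_gm_loss` is a hypothetical name.

```python
import numpy as np

def linear_grad(X, y, w):
    # Gradient of mean squared error for a linear model f(x) = x @ w.
    return X.T @ (X @ w - y) / len(y)

def fairdd_gm_loss(X_real, y_real, pa, X_syn, y_syn, w):
    # Gradient-matching sketch: match the synthetic-set gradient to each
    # PA group's gradient on the real data, rather than to the gradient
    # over the whole (imbalanced) real set.
    g_syn = linear_grad(X_syn, y_syn, w)
    loss = 0.0
    for a in np.unique(pa):
        mask = pa == a
        g_real = linear_grad(X_real[mask], y_real[mask], w)
        loss += float(np.sum((g_real - g_syn) ** 2))
    return loss
```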