Rectifying Soft-Label Entangled Bias in Long-Tailed Dataset Distillation

📅 2025-11-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing dataset distillation methods suffer significant performance degradation under long-tailed distributions, primarily due to soft-label bias, a long-overlooked mechanism that arises jointly from distillation-model bias and intrinsic image bias. This work is the first to systematically characterize this dual-source bias. We propose Adaptive Distillation Soft-label Alignment (ADSA), a lightweight, training-free module that aligns soft labels via an imbalance-aware generalization-error-bound analysis and perturbation-based validation. ADSA is plug-and-play compatible with mainstream distillation frameworks. On ImageNet-1k-LT, it improves tail-class accuracy by up to 11.8% and achieves an overall accuracy of 41.4%, substantially enhancing tail-class generalization and method robustness.

📝 Abstract
Dataset distillation compresses large-scale datasets into compact, highly informative synthetic data, significantly reducing storage and training costs. However, existing research primarily focuses on balanced datasets and struggles to perform under real-world long-tailed distributions. In this work, we emphasize the critical role of soft labels in long-tailed dataset distillation and uncover the underlying mechanisms contributing to performance degradation. Specifically, we derive an imbalance-aware generalization bound for models trained on distilled datasets. We then identify two primary sources of soft-label bias, originating from the distillation model and the distilled images, through systematic perturbation of the data imbalance levels. To address this, we propose ADSA, an Adaptive Soft-label Alignment module that calibrates the entangled biases. This lightweight module integrates seamlessly into existing distillation pipelines and consistently improves performance. On ImageNet-1k-LT with EDC and IPC=50, ADSA improves tail-class accuracy by up to 11.8% and raises overall accuracy to 41.4%. Extensive experiments demonstrate that ADSA provides a robust and generalizable solution under limited label budgets and across a range of distillation techniques. Code is available at: https://github.com/j-cyoung/ADSA_DD.git.
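The abstract does not spell out ADSA's algorithm, but the core idea of correcting class-frequency bias in teacher soft labels can be illustrated with a generic sketch. The snippet below uses standard prior-based logit adjustment (dividing each class probability by its empirical prior and renormalizing); the function name, the temperature `tau`, and the whole recipe are illustrative assumptions, not ADSA's actual method.

```python
import numpy as np

def realign_soft_labels(soft_labels, class_counts, tau=1.0):
    """Re-weight teacher soft labels against class-frequency bias.

    Illustrative sketch only (not the ADSA algorithm): divide each
    class probability by the class prior raised to `tau`, which is
    equivalent to subtracting tau * log(prior) in logit space, then
    renormalize each row back to a probability distribution.
    """
    priors = np.asarray(class_counts, dtype=float)
    priors = priors / priors.sum()              # empirical class priors
    adjusted = soft_labels / priors**tau        # down-weight head classes
    return adjusted / adjusted.sum(axis=1, keepdims=True)

# Toy long-tailed profile: one head class, one mid class, one tail class.
counts = [900, 90, 10]
teacher = np.array([[0.7, 0.2, 0.1]])          # head-biased teacher output
aligned = realign_soft_labels(teacher, counts)
```

After alignment the tail class receives a larger share of the soft-label mass, which mirrors the paper's goal of improving tail-class generalization under a fixed label budget.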
Problem

Research questions and friction points this paper is trying to address.

Addressing soft-label bias in long-tailed dataset distillation
Improving performance on imbalanced real-world data distributions
Calibrating biases from distillation models and synthetic images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Soft-label Alignment module calibrates biases
Lightweight module integrates into existing distillation pipelines
Improves tail-class accuracy and overall performance significantly
Chenyang Jiang
Harbin Institute of Technology, Shenzhen
Hang Zhao
Harbin Institute of Technology, Shenzhen
Xinyu Zhang
Harbin Institute of Technology, Shenzhen
Zhengcen Li
Harbin Institute of Technology, Shenzhen
Qiben Shan
Pengcheng Laboratory
Shaocong Wu
Pengcheng Laboratory
Jingyong Su
Professor, Harbin Institute of Technology at Shenzhen, China
Computer Vision and Multimodal, Data-Centric ML, Medical Image Analysis, Statistics on Manifold