🤖 AI Summary
Existing dataset distillation methods suffer from representation bias and inaccurate Batch Normalization (BN) statistics under long-tailed distributions due to severe class-frequency imbalance. This paper proposes the first unbiased distillation framework tailored for long-tailed scenarios: it abandons conventional trajectory matching and instead introduces an observer–teacher co-training paradigm. Key innovations include dynamic momentum BN calibration, multi-round diversity-enhanced synthetic initialization, high-confidence augmentation filtering, and soft-label re-annotation—jointly enabling statistical alignment and equitable supervision recovery. Evaluated on CIFAR-100-LT and Tiny-ImageNet-LT (IPC=10, imbalance factor=10), our method achieves +15.6% and +11.8% Top-1 accuracy gains over prior state-of-the-art, respectively, and consistently outperforms all existing approaches across four long-tailed benchmarks.
📝 Abstract
Dataset distillation creates a small distilled set that enables efficient training by capturing key information from the full dataset. While existing dataset distillation methods perform well on balanced datasets, they struggle under long-tailed distributions, where imbalanced class frequencies induce biased model representations and corrupt statistical estimates such as Batch Normalization (BN) statistics. In this paper, we rethink long-tailed dataset distillation by revisiting the limitations of trajectory-based methods, and instead adopt a statistical-alignment perspective to jointly mitigate model bias and restore fair supervision. To this end, we introduce three dedicated components that enable unbiased recovery of distilled images and soft relabeling: (1) enhancing expert models (an observer model for recovery and a teacher model for relabeling) to enable reliable statistics estimation and soft-label generation; (2) recalibrating BN statistics via a full forward pass with dynamically adjusted momentum to reduce representation skew; and (3) initializing synthetic images by incrementally selecting high-confidence, diverse augmentations through a multi-round mechanism that promotes coverage and diversity. Extensive experiments on four long-tailed benchmarks show consistent improvements over state-of-the-art methods across varying degrees of class imbalance. Notably, our approach improves Top-1 accuracy by 15.6% on CIFAR-100-LT and 11.8% on Tiny-ImageNet-LT under IPC=10 and IF=10.
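The BN recalibration in component (2) amounts to recomputing running statistics with a full forward pass while the momentum varies per batch. The sketch below illustrates this idea on raw activations with NumPy; the function name, the `(N, C)` activation shape, and the momentum schedule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def recalibrate_bn_stats(batches, momentum_schedule):
    """Recompute BN running mean/variance with a per-batch ("dynamic") momentum.

    `batches`: iterable of (N, C) activation arrays from a full forward pass.
    `momentum_schedule`: one momentum value per batch (hypothetical schedule;
    the paper's exact dynamic adjustment is not reproduced here).
    """
    running_mean = None
    running_var = None
    for x, m in zip(batches, momentum_schedule):
        batch_mean = x.mean(axis=0)
        batch_var = x.var(axis=0)
        if running_mean is None:
            # First batch initializes the running statistics.
            running_mean, running_var = batch_mean, batch_var
        else:
            # Standard EMA update, but with momentum m varying per batch,
            # so later (or rarer) batches can be weighted differently.
            running_mean = (1 - m) * running_mean + m * batch_mean
            running_var = (1 - m) * running_var + m * batch_var
    return running_mean, running_var
```

With a constant momentum this reduces to the usual BN running-statistics update; letting `m` change per batch is what allows the estimate to be debiased toward under-represented (tail) classes.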