Rethinking Long-tailed Dataset Distillation: A Uni-Level Framework with Unbiased Recovery and Relabeling

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing dataset distillation methods suffer from representation bias and inaccurate Batch Normalization (BN) statistics under long-tailed distributions due to severe class-frequency imbalance. This paper proposes the first unbiased distillation framework tailored for long-tailed scenarios: it abandons conventional trajectory matching and instead introduces an observer–teacher co-training paradigm. Key innovations include dynamic-momentum BN calibration, multi-round diversity-enhanced synthetic initialization, high-confidence augmentation filtering, and soft-label re-annotation, which jointly enable statistical alignment and equitable supervision recovery. Evaluated on CIFAR-100-LT and Tiny-ImageNet-LT (IPC=10, imbalance factor=10), the proposed method achieves +15.6% and +11.8% Top-1 accuracy gains over the prior state of the art, respectively, and consistently outperforms existing approaches across four long-tailed benchmarks.
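The soft-label re-annotation step can be illustrated with a standard temperature-smoothed softmax over teacher logits. This is a minimal sketch of the general technique, not the paper's exact procedure; the function name and temperature value are assumptions.

```python
import numpy as np

def soft_relabel(logits, temperature=4.0):
    """Turn teacher logits into soft labels (hypothetical sketch).

    A temperature-smoothed softmax spreads probability mass across
    classes, so tail classes receive non-zero supervision instead of
    hard one-hot targets.
    """
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
```

A higher temperature flattens the distribution, which is why distillation-style relabeling typically uses T > 1.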

📝 Abstract
Dataset distillation creates a small distilled set that enables efficient training by capturing key information from the full dataset. While existing dataset distillation methods perform well on balanced datasets, they struggle under long-tailed distributions, where imbalanced class frequencies induce biased model representations and corrupt statistical estimates such as Batch Normalization (BN) statistics. In this paper, we rethink long-tailed dataset distillation by revisiting the limitations of trajectory-based methods, and instead adopt a statistical alignment perspective to jointly mitigate model bias and restore fair supervision. To this end, we introduce three dedicated components that enable unbiased recovery of distilled images and soft relabeling: (1) enhancing expert models (an observer model for recovery and a teacher model for relabeling) to enable reliable statistics estimation and soft-label generation; (2) recalibrating BN statistics via a full forward pass with dynamically adjusted momentum to reduce representation skew; (3) initializing synthetic images by incrementally selecting high-confidence and diverse augmentations via a multi-round mechanism that promotes coverage and diversity. Extensive experiments on four long-tailed benchmarks show consistent improvements over state-of-the-art methods across varying degrees of class imbalance. Notably, our approach improves top-1 accuracy by 15.6% on CIFAR-100-LT and 11.8% on Tiny-ImageNet-LT under IPC=10 and IF=10.
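Component (2), recalibrating BN statistics via a full forward pass with dynamically adjusted momentum, can be sketched in isolation. The momentum schedule below (a cumulative-average schedule, `m = 1/step`, floored at a base value) is an assumption for illustration; the paper's exact schedule may differ.

```python
import numpy as np

def recalibrate_bn_stats(batches, num_features, base_momentum=0.1):
    """Re-estimate BN running statistics over a full pass of the data.

    Hypothetical sketch: momentum starts at 1/step (an exact cumulative
    average over early batches) and settles at base_momentum, so no single
    imbalanced batch dominates the running estimates.
    """
    running_mean = np.zeros(num_features)
    running_var = np.ones(num_features)
    for step, x in enumerate(batches, start=1):
        m = max(base_momentum, 1.0 / step)  # dynamically adjusted momentum (assumed schedule)
        running_mean = (1 - m) * running_mean + m * x.mean(axis=0)
        running_var = (1 - m) * running_var + m * x.var(axis=0)
    return running_mean, running_var
```

With `m = 1/step` the update reduces to an unweighted cumulative mean, which is the same trick PyTorch's BatchNorm uses when `momentum=None`.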
Problem

Research questions and friction points this paper is trying to address.

Addressing biased model representations in long-tailed dataset distillation
Mitigating corrupted statistical estimates like Batch Normalization skew
Enhancing distilled image recovery and soft relabeling for imbalance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unbiased recovery and soft relabeling for distillation
Recalibrating BN statistics to reduce representation skew
Initializing synthetic images via multi-round augmentation selection
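The multi-round selection idea in the last bullet can be sketched as confidence filtering followed by greedy farthest-point selection in feature space. The thresholds, round count, and greedy criterion here are assumptions, not the paper's exact mechanism.

```python
import numpy as np

def select_augmentations(features, confidences, k, conf_threshold=0.8, rounds=2):
    """Multi-round pick of high-confidence, diverse augmentations (sketch).

    Each round restricts the pool to candidates above a confidence
    threshold, then greedily adds the candidate whose feature vector is
    farthest from everything already selected, promoting coverage.
    """
    selected = []
    per_round = max(1, k // rounds)
    for _ in range(rounds):
        pool = [i for i in range(len(features))
                if confidences[i] >= conf_threshold and i not in selected]
        for _ in range(per_round):
            if not pool or len(selected) >= k:
                break
            if not selected:
                best = max(pool, key=lambda i: confidences[i])  # seed with most confident
            else:
                # farthest-point criterion: maximize distance to the selected set
                best = max(pool, key=lambda i: min(
                    np.linalg.norm(features[i] - features[j]) for j in selected))
            selected.append(best)
            pool.remove(best)
    return selected
```

Seeding each synthetic image from such a set gives the recovery stage a spread-out, reliably labeled starting point rather than a random (and tail-starved) sample.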