AI Summary
Existing dataset distillation methods suffer significant performance degradation in long-tailed scenarios, primarily due to heuristic distribution alignment strategies and uniform treatment of imbalanced classes. This work proposes a Class-aware Spectral Distribution Matching (CSDM) framework, which, for the first time, formalizes the distribution alignment problem from a spectral perspective. By mapping samples into the frequency domain via kernel functions, CSDM constructs a Spectral Distribution Distance (SDD) and leverages amplitude-phase decomposition to enable class-adaptive enhancement of tail classes. On CIFAR-10-LT, with only 10 synthesized images per class, CSDM improves performance by 14.0% over the current state-of-the-art method. Moreover, when the number of tail-class images is reduced from 500 to 25, performance declines by merely 5.7%, demonstrating remarkable stability and effectiveness.
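The summary does not give SDD's exact form. A common way to compare distributions in the frequency domain, consistent with mapping samples there via a kernel, is to sample frequencies from the kernel's spectral measure (Bochner's theorem) and compare empirical characteristic functions. The sketch below is a hypothetical illustration of that idea, not the paper's implementation; the function name, the Gaussian-kernel choice, and all parameters are assumptions.

```python
import numpy as np

def spectral_distance(real, syn, num_freqs=2048, sigma=1.0, seed=0):
    """Hypothetical sketch of a spectral distribution distance:
    compare empirical characteristic functions of two sample sets at
    frequencies drawn from a Gaussian kernel's spectral measure."""
    rng = np.random.default_rng(seed)
    d = real.shape[1]
    # Frequencies ~ N(0, 1/sigma^2 I): the Gaussian kernel's spectrum (Bochner).
    omega = rng.normal(scale=1.0 / sigma, size=(num_freqs, d))
    # Empirical characteristic function phi(w) = E[exp(i w·x)] per frequency.
    phi_real = np.exp(1j * real @ omega.T).mean(axis=0)  # shape (num_freqs,)
    phi_syn = np.exp(1j * syn @ omega.T).mean(axis=0)
    # Mean squared gap over sampled frequencies; for this spectral measure
    # it approximates the squared MMD under the Gaussian kernel.
    return float(np.mean(np.abs(phi_real - phi_syn) ** 2))
```

With this form, identical sample sets give a distance of zero, and shifting one set's mean increases the distance, which is the behavior a distribution-matching loss needs.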
Abstract
Dataset distillation (DD) aims to compress large-scale datasets into compact synthetic counterparts for efficient model training. However, existing DD methods exhibit substantial performance degradation on long-tailed datasets. We identify two fundamental challenges: heuristic design choices for the distribution discrepancy measure and uniform treatment of imbalanced classes. To address these limitations, we propose Class-Aware Spectral Distribution Matching (CSDM), which reformulates distribution alignment via the spectrum of a well-behaved kernel function. This technique maps the original samples into frequency space, yielding the Spectral Distribution Distance (SDD). To mitigate class imbalance, we exploit the unified form of SDD to perform amplitude-phase decomposition, which adaptively prioritizes realism in tail classes. On CIFAR-10-LT, with 10 images per class, CSDM achieves a 14.0% improvement over state-of-the-art DD methods, with only a 5.7% performance drop when the number of images in tail classes decreases from 500 to 25, demonstrating strong stability on long-tailed data.
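The abstract mentions amplitude-phase decomposition with class-adaptive weighting but gives no formula. One minimal way to realize that idea, assuming the spectral embedding is a vector of complex characteristic-function values per class, is to split its gap into an amplitude term and a phase term and upweight one of them for classes with fewer samples. Everything below (the function name, the linear tail weight, which term gets upweighted) is an illustrative assumption, not CSDM's actual loss.

```python
import numpy as np

def class_weighted_spectral_loss(phi_real, phi_syn, n_class, n_max):
    """Hypothetical sketch: decompose the spectral gap between two complex
    characteristic-function embeddings into amplitude and phase terms, then
    upweight the phase term for rarer (tail) classes."""
    # Amplitude gap: difference in magnitudes of the spectral embeddings.
    amp_gap = np.mean((np.abs(phi_real) - np.abs(phi_syn)) ** 2)
    # Phase gap: difference between unit-modulus phase factors.
    phase_gap = np.mean(
        np.abs(np.exp(1j * np.angle(phi_real))
               - np.exp(1j * np.angle(phi_syn))) ** 2
    )
    # Tail weight grows as the class gets rarer: 0 for the largest class,
    # approaching 1 for the smallest; a stand-in for "adaptively
    # prioritizing realism in tail classes".
    tail_w = 1.0 - n_class / n_max
    return float(amp_gap + (1.0 + tail_w) * phase_gap)
```

Under this weighting, the same spectral gap costs more for a class with 25 images than for one with 500, so the optimizer spends more effort matching tail-class statistics.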