🤖 AI Summary
To address information decay and generation distortion in dataset distillation, this paper proposes Data Residual Matching (DRM), a novel paradigm that introduces data-level skip connections in pixel space to explicitly model the residual between original and distilled samples, while jointly leveraging multi-model knowledge distillation to balance feature fidelity and knowledge transfer. DRM is the first to adapt the residual learning principle, previously confined to network architecture design, to the data generation stage. Coupled with a lightweight optimization strategy, it significantly reduces computational overhead, cutting training time and peak GPU memory usage by 50%. On ImageNet-1K at a 0.8% compression ratio, DRM achieves 47.7% and 50.0% top-1 accuracy using single- and multi-model distillation, respectively, surpassing RDED by +5.7% and the state-of-the-art multi-model approaches EDC and CV-DD by +1.4% and +4.0%.
📝 Abstract
Residual connections have been extensively studied and widely applied at the model-architecture level. However, their potential in the more challenging data-centric setting remains unexplored. In this work, we introduce the concept of Data Residual Matching for the first time, leveraging data-level skip connections to facilitate data generation and mitigate the vanishing of data information. For the dataset distillation task, this approach balances the knowledge newly acquired through pixel-space optimization against the core local information already present in the raw data modality. Furthermore, by incorporating optimization-level refinements, our method significantly improves computational efficiency, achieving superior performance while reducing training time and peak GPU memory usage by 50%. Consequently, the proposed method, Fast and Accurate Data Residual Matching for Dataset Distillation (FADRM), establishes a new state of the art, demonstrating substantial improvements over existing methods across multiple dataset benchmarks in both efficiency and effectiveness. For instance, with ResNet-18 as the student model and a 0.8% compression ratio on ImageNet-1K, the method achieves 47.7% test accuracy in single-model dataset distillation and 50.0% in multi-model dataset distillation, surpassing RDED by +5.7% and outperforming the state-of-the-art multi-model approaches EDC and CV-DD by +1.4% and +4.0%. Code is available at: https://github.com/Jiacheng8/FADRM.
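To make the data-level skip-connection idea concrete, here is a minimal NumPy sketch of how a distilled sample might periodically re-inject raw-pixel information during pixel-space optimization. This is an illustration only, not the paper's actual algorithm: the mixing coefficient `alpha`, the merge schedule, and the gradient stand-in are all assumptions, and the real method optimizes against matching losses on teacher networks.

```python
import numpy as np

def residual_merge(x_raw, x_syn, alpha=0.5):
    """Data-level skip connection (illustrative): blend raw pixels back
    into the optimized synthetic sample so core local information from
    the original data is not lost during optimization.
    `alpha` is a hypothetical mixing coefficient, not from the paper."""
    return alpha * x_raw + (1.0 - alpha) * x_syn

rng = np.random.default_rng(0)
x_raw = rng.random((3, 32, 32)).astype(np.float32)  # original sample (C, H, W)
x_syn = x_raw.copy()                                # distilled sample being optimized

for step in range(10):
    # stand-in for a gradient step from a matching loss in pixel space
    grad = rng.normal(scale=0.01, size=x_syn.shape).astype(np.float32)
    x_syn = x_syn - grad
    # periodic data-level skip connection back to the raw sample
    if (step + 1) % 5 == 0:
        x_syn = residual_merge(x_raw, x_syn, alpha=0.5)
```

The skip connection keeps the optimized sample anchored to the raw data, analogous to how architectural residual connections preserve the identity signal across layers.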