🤖 AI Summary
Dataset distillation faces the fundamental challenge of simultaneously achieving diversity, generalization, and representativeness. While diffusion-based methods offer strong generative capacity, they typically neglect the intrinsic representational priors encoded in diffusion models and instead rely on external constraints to improve distilled sample quality. This work establishes a theoretical connection between diffusion priors and distillation objectives. We propose DAP (Diffusion As Priors), a training-free guidance mechanism that uses a Mercer kernel to measure feature-space similarity between synthetic and real data, thereby calibrating and enhancing sample representativeness within the reverse diffusion process. DAP introduces no auxiliary networks or loss functions; it refines synthetic trajectories solely through kernel-guided updates. On benchmarks including ImageNet-1K, DAP significantly outperforms state-of-the-art methods, yielding distilled data with higher fidelity and stronger cross-architecture generalization, empirically validating the critical role of diffusion priors in dataset distillation.
📝 Abstract
Dataset distillation aims to synthesize compact yet informative datasets from large ones. A significant challenge in this field is achieving a trifecta of diversity, generalization, and representativeness in a single distilled dataset. Although recent generative dataset distillation methods adopt powerful diffusion models as their foundation models, the inherent representativeness prior in diffusion models is overlooked. Consequently, these approaches often necessitate the integration of external constraints to enhance data quality. To address this, we propose Diffusion As Priors (DAP), which formalizes representativeness by quantifying the similarity between synthetic and real data in feature space using a Mercer kernel. We then introduce this prior as guidance to steer the reverse diffusion process, enhancing the representativeness of distilled samples without any retraining. Extensive experiments on large-scale datasets, such as ImageNet-1K and its subsets, demonstrate that DAP outperforms state-of-the-art methods in generating high-fidelity datasets while achieving superior cross-architecture generalization. Our work not only establishes a theoretical connection between diffusion priors and the objectives of dataset distillation but also provides a practical, training-free framework for improving the quality of the distilled dataset.
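To make the kernel-based representativeness idea concrete, here is a minimal illustrative sketch, not the paper's actual formulation: it scores a synthetic sample's feature vector by its mean Mercer (RBF) kernel similarity to a bank of real-data features, and derives the analytic gradient of that score, which could serve as a guidance term added to a reverse-diffusion update. The function names, the RBF kernel choice, and the `gamma` bandwidth are assumptions for illustration only.

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    # RBF kernel, a standard Mercer kernel, between two feature vectors.
    return np.exp(-gamma * np.sum((x - y) ** 2))

def representativeness(z_syn, real_feats, gamma=0.5):
    # Mean kernel similarity between a synthetic feature vector z_syn
    # and a bank of real feature vectors; higher = more representative.
    return np.mean([rbf_kernel(z_syn, z, gamma) for z in real_feats])

def guidance_grad(z_syn, real_feats, gamma=0.5):
    # Analytic gradient of the mean RBF similarity w.r.t. z_syn.
    # d/dz exp(-g * ||z - z_i||^2) = 2g * exp(-g * ||z - z_i||^2) * (z_i - z)
    diffs = real_feats - z_syn                        # (N, d)
    k = np.exp(-gamma * np.sum(diffs ** 2, axis=1))   # (N,)
    return 2.0 * gamma * (k[:, None] * diffs).mean(axis=0)
```

In a guidance loop, one small step along `guidance_grad` nudges the synthetic feature toward the real-data distribution without retraining the diffusion model, e.g. `z_syn += eta * guidance_grad(z_syn, real_feats)`.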