🤖 AI Summary
Dataset distillation (DD) lacks a unified theoretical foundation: existing methods pursue heterogeneous surrogate objectives, come with no clear generalization guarantees, and lack well-characterized conditions for remaining effective when the training configuration changes (e.g., optimizer, architecture, data augmentation).
Method: We propose the first unified analytical framework encompassing mainstream DD approaches, establishing a “configuration–dynamics–error” theory. Through generalization-error analysis, we derive a scaling law that explains performance saturation and a coverage law that characterizes configuration robustness. By reformulating gradient matching, distribution matching, and trajectory matching within this framework, and validating the resulting predictions across diverse training configurations, we provide a principled characterization of DD behavior.
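For concreteness, the sketch below shows the three surrogate losses the framework unifies, in PyTorch-style Python. The function names, signatures, and specific discrepancy measures (layer-wise cosine distance, mean-embedding MSE, normalized parameter distance) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the three mainstream DD surrogates, each measuring a
# different discrepancy between a real batch and a learnable synthetic batch.
# All names, shapes, and discrepancy choices are assumptions for illustration.
import torch
import torch.nn.functional as F

def gradient_matching_loss(model, loss_fn, x_real, y_real, x_syn, y_syn):
    """Match the training gradients induced by real vs. synthetic data."""
    g_real = torch.autograd.grad(loss_fn(model(x_real), y_real),
                                 model.parameters())
    g_real = [g.detach() for g in g_real]  # real gradients are targets only
    g_syn = torch.autograd.grad(loss_fn(model(x_syn), y_syn),
                                model.parameters(), create_graph=True)
    # Sum of per-parameter cosine distances between the two gradient sets.
    return sum(1.0 - F.cosine_similarity(gr.flatten(), gs.flatten(), dim=0)
               for gr, gs in zip(g_real, g_syn))

def distribution_matching_loss(embed, x_real, x_syn):
    """Match first-moment feature statistics (mean embedding discrepancy)."""
    return (embed(x_real).mean(dim=0) - embed(x_syn).mean(dim=0)).pow(2).sum()

def trajectory_matching_loss(theta_student, theta_expert_end, theta_expert_start):
    """Match a student's parameters (trained on synthetic data) to the end of
    an expert trajectory segment, normalized by how far the expert moved."""
    num = (theta_student - theta_expert_end).pow(2).sum()
    den = (theta_expert_start - theta_expert_end).pow(2).sum()
    return num / den
```

In each case the loss is minimized over the learnable synthetic batch `x_syn` (and optionally `y_syn`); under the paper's reading, all three act as interchangeable surrogates for reducing the same underlying generalization error.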
Contribution/Results: We prove, for the first time, a linear lower bound on the required distilled dataset size with respect to configuration diversity; because it matches the corresponding upper bound, the linear rate is optimal. This yields interpretable, verifiable theoretical guarantees for robust DD design, bridging theory and practice in data-efficient learning.
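Stated in LaTeX with notation assumed here for illustration (the text does not fix symbols), where K measures configuration diversity and m*(K) is the minimal distilled set size that stays effective across all K configurations, the matching bounds read:

```latex
% Hypothetical notation: K = configuration diversity, m^*(K) = minimal
% distilled dataset size effective across all K configurations,
% c_1, c_2 > 0 constants from the analysis.
\[
  c_1\,K \;\le\; m^*(K) \;\le\; c_2\,K
  \qquad\Longleftrightarrow\qquad
  m^*(K) = \Theta(K).
\]
```

The lower bound rules out any sublinear construction, which is the sense in which the linear rate is optimal.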
📝 Abstract
Dataset distillation (DD) aims to construct compact synthetic datasets that allow models to achieve performance comparable to full-data training while substantially reducing storage and computation. Despite rapid empirical progress, its theoretical foundations remain limited: existing methods (gradient, distribution, and trajectory matching) are built on heterogeneous surrogate objectives and optimization assumptions, which makes it difficult to analyze their common principles or provide general guarantees. Moreover, it is still unclear under what conditions distilled data retains the effectiveness of the full dataset when the training configuration, such as the optimizer, architecture, or augmentation, changes. To answer these questions, we propose a unified theoretical framework, termed configuration–dynamics–error analysis, which reformulates the major DD approaches under a common generalization-error perspective and provides two main results: (i) a scaling law giving a single-configuration upper bound that characterizes how the error decreases as the distilled sample size grows and explains the commonly observed performance saturation effect; and (ii) a coverage law showing that the required distilled sample size scales linearly with configuration diversity, with provably matching upper and lower bounds. In addition, our unified analysis reveals that the various matching methods are interchangeable surrogates for reducing the same generalization error, which clarifies why they all achieve dataset distillation and offers guidance on how the choice of surrogate affects sample efficiency and robustness. Experiments across diverse methods and configurations empirically confirm the derived laws, establishing a theoretical foundation for DD and enabling theory-driven design of compact, configuration-robust distillation methods.
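To make the saturation effect in result (i) concrete, one illustrative reading of the scaling law, with an assumed 1/√m rate typical of generalization-error analysis (the abstract does not state the exact exponent), is:

```latex
% Illustrative form only: m = distilled sample size, eps_infty = irreducible
% saturation floor, C > 0 a constant; the 1/sqrt(m) rate is an assumption,
% not the paper's stated exponent.
\[
  \epsilon(m) \;\le\; \epsilon_\infty + \frac{C}{\sqrt{m}} .
\]
```

Under this form the bound decreases as m grows but flattens near ε∞, matching the commonly observed plateau in distilled-set performance.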