Utility Boundary of Dataset Distillation: Scaling and Configuration-Coverage Laws

📅 2025-12-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Dataset distillation (DD) lacks a unified theoretical foundation; existing methods pursue heterogeneous objectives, exhibit unclear generalization bounds, and lack well-characterized conditions for effectiveness under varying training configurations (e.g., optimizers, architectures, data augmentations). Method: We propose the first unified analytical framework encompassing mainstream DD approaches, establishing a “configuration–dynamics–error” theory. Leveraging generalization error analysis, we derive scaling laws for performance saturation and coverage laws for configuration robustness. By integrating gradient matching, distribution matching, and trajectory matching—rigorously grounded in theoretical analysis and validated across diverse training configurations—we provide a principled characterization of DD behavior. Contribution/Results: We prove, for the first time, a tight linear lower bound (with matching upper bound) on distilled dataset size with respect to configuration diversity. This yields interpretable, verifiable theoretical guarantees for robust DD design, bridging theory and practice in data-efficient learning.
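The two laws can be sketched schematically as follows. This is an illustrative form only, assuming a power-law decay with exponent $\alpha$ and an irreducible error floor $\varepsilon_{\min}$; the symbols $m$ (distilled sample size), $k$ (number of training configurations), $C$, and $\alpha$ are expository assumptions, not the paper's exact statements:

```latex
% Scaling law (single configuration): generalization error decreases
% with distilled sample size m but saturates at an irreducible floor,
% matching the commonly observed performance-saturation effect.
\mathcal{E}(m) \;\le\; \varepsilon_{\min} \;+\; \frac{C}{m^{\alpha}}, \qquad \alpha > 0

% Coverage law: the distilled sample size required for robustness
% across k distinct training configurations grows linearly in k,
% with matching upper and lower bounds.
m^{\star}(k) \;=\; \Theta(k)
```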

📝 Abstract
Dataset distillation (DD) aims to construct compact synthetic datasets that allow models to achieve performance comparable to full-data training while substantially reducing storage and computation. Despite rapid empirical progress, its theoretical foundations remain limited: existing methods (gradient, distribution, trajectory matching) are built on heterogeneous surrogate objectives and optimization assumptions, which makes it difficult to analyze their common principles or provide general guarantees. Moreover, it is still unclear under what conditions distilled data can retain the effectiveness of full datasets when the training configuration, such as optimizer, architecture, or augmentation, changes. To answer these questions, we propose a unified theoretical framework, termed configuration–dynamics–error analysis, which reformulates major DD approaches under a common generalization-error perspective and provides two main results: (i) a scaling law that provides a single-configuration upper bound, characterizing how the error decreases as the distilled sample size increases and explaining the commonly observed performance saturation effect; and (ii) a coverage law showing that the required distilled sample size scales linearly with configuration diversity, with provably matching upper and lower bounds. In addition, our unified analysis reveals that the various matching methods act as interchangeable surrogates that reduce the same generalization error, which clarifies why they can all achieve dataset distillation and shows how surrogate choices affect sample efficiency and robustness. Experiments across diverse methods and configurations empirically confirm the derived laws, advancing a theoretical foundation for DD and enabling theory-driven design of compact, configuration-robust dataset distillation.
Problem

Research questions and friction points this paper is trying to address.

Can gradient-, distribution-, and trajectory-matching DD methods be unified under a single theoretical framework?
How does generalization error decrease as the distilled sample size grows, and why does performance saturate?
How must the distilled sample size scale with the diversity of training configurations (optimizer, architecture, augmentation)?
Innovation

Methods, ideas, or system contributions that make the work stand out.

First unified "configuration–dynamics–error" framework covering mainstream dataset distillation methods
Scaling law: a single-configuration upper bound showing how error decreases with distilled sample size and explaining saturation
Coverage law: required distilled sample size scales linearly with configuration diversity, with matching upper and lower bounds
Zhengquan Luo
Department of Machine Learning, Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE
Zhiqiang Xu
Professor, Academy of Mathematics and Systems Science, Chinese Academy of Sciences
approximation theory · compressed sensing · splines · frame theory · quantization