AI Summary
This work addresses the high cost and limited accessibility of large-scale datasets in visual recognition by establishing, for the first time, a theoretical equivalence between diffusion-based dataset distillation and distribution matching, thereby revealing an inherent efficiency limit of the paradigm. To surpass this limit, the authors propose Dataset Concentration, a novel framework that integrates Noise Optimization (NOpt) to synthesize representative samples and introduces a "Doping" strategy that blends original and synthetic data. This approach enables efficient, lossless dataset condensation in both data-rich and data-scarce regimes. Experiments demonstrate state-of-the-art performance under low-data conditions and show that nearly half of a large dataset can be removed without sacrificing accuracy, substantially improving training efficiency and storage economy.
Abstract
The high cost of, and limited access to, large datasets hinders the development of large-scale visual recognition systems. Dataset Distillation addresses these problems by synthesizing compact surrogate datasets for efficient training, storage, transfer, and privacy preservation. Existing state-of-the-art diffusion-based dataset distillation methods face three issues: lack of theoretical justification, poor efficiency when scaling to high data volumes, and failure in data-free scenarios. To address these issues, we establish a theoretical framework that justifies the use of diffusion models by proving the equivalence between dataset distillation and distribution matching, and reveals an inherent efficiency limit of the dataset distillation paradigm. We then propose a Dataset Concentration (DsCo) framework that uses a diffusion-based Noise-Optimization (NOpt) method to synthesize a small yet representative set of samples, and optionally augments the synthetic data via "Doping", which mixes selected samples from the original dataset into the synthetic set to overcome the efficiency limit of dataset distillation. DsCo is applicable in both data-accessible and data-free scenarios, achieving state-of-the-art performance at low data volumes, and it extends well to high data volumes, where it reduces the dataset size by nearly half with no performance degradation.
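As a rough intuition for the two ingredients, the following is a toy, self-contained sketch, not the paper's implementation: one-dimensional gradient descent on a simple mean-matching loss stands in for diffusion-based noise optimization against the full distribution-matching objective, and random selection stands in for the paper's (unspecified here) doping criterion. All function names are illustrative.

```python
import random

random.seed(0)

def mean(xs):
    return sum(xs) / len(xs)

def nopt(real, n_syn, steps=300, lr=0.5):
    """Toy 'Noise Optimization': start from Gaussian noise and run gradient
    descent on a mean-matching loss, a crude stand-in for optimizing diffusion
    noise under a distribution-matching objective."""
    z = [random.gauss(0.0, 1.0) for _ in range(n_syn)]
    mu = mean(real)
    for _ in range(steps):
        # gradient of (mean(z) - mu)^2 with respect to each sample z_i
        g = 2.0 * (mean(z) - mu) / n_syn
        z = [zi - lr * g for zi in z]
    return z

def dope(real, synthetic, ratio=0.3):
    """Toy 'Doping': blend a fraction of original samples into the synthetic
    set.  Selection is random here; the paper selects samples deliberately."""
    k = int(len(synthetic) * ratio)
    return synthetic + random.sample(real, k)

real = [random.gauss(3.0, 1.0) for _ in range(100)]
syn = nopt(real, n_syn=10)        # 10 synthetic samples matching the real mean
doped = dope(real, syn)           # 10 synthetic + 3 doped real samples
print(abs(mean(syn) - mean(real)) < 1e-3)  # True
print(len(doped))                           # 13
```

Even in this toy form, the division of labor mirrors the abstract: optimization produces a small set whose statistics match the data, while doping reintroduces real samples to recover what synthesis alone cannot capture.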