Dataset Distillation via the Wasserstein Metric

📅 2023-11-30
🏛️ arXiv.org
📈 Citations: 14
Influential: 4
🤖 AI Summary
Dataset distillation aims to retain the discriminative distributional information of a large original dataset using a minimal set of synthetic samples, thereby reducing training cost while preserving model performance. This work is the first to introduce the Wasserstein distance and Wasserstein barycenter into dataset distillation. Leveraging optimal transport theory, we model a geometrically well-defined and robust distributional centroid within the feature space of a pre-trained model, and subsequently optimize synthetic samples via gradient-based updates. Our approach avoids the biases inherent in conventional distribution-matching strategies, significantly enhancing the representativeness of distilled data. Extensive experiments on multiple high-resolution image benchmarks demonstrate state-of-the-art performance: our method achieves superior distillation efficiency and stronger downstream generalization compared to existing approaches.
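To make the core quantities in the summary concrete, here is a minimal sketch of the 2-Wasserstein distance and barycenter for one-dimensional empirical distributions, where optimal transport has a closed form via sorted samples (quantile matching). This is an illustration of the metric the paper builds on, not the paper's actual high-dimensional algorithm; the function names are hypothetical.

```python
import numpy as np

def w2_distance_1d(x, y):
    # 1-D 2-Wasserstein distance between two equal-size empirical samples.
    # In 1-D, the optimal transport plan matches sorted samples pairwise.
    xs, ys = np.sort(x), np.sort(y)
    return np.sqrt(np.mean((xs - ys) ** 2))

def w2_barycenter_1d(samples):
    # The W2 barycenter of 1-D empirical distributions (equal weights)
    # averages their quantile functions, i.e. their sorted samples.
    return np.mean([np.sort(s) for s in samples], axis=0)
```

In higher dimensions no such closed form exists, which is why methods in this space typically rely on entropic (Sinkhorn-style) approximations in a pretrained model's feature space.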
📝 Abstract
Dataset Distillation (DD) emerges as a powerful strategy to encapsulate the expansive information of large datasets into significantly smaller, synthetic equivalents, thereby preserving model performance with reduced computational overhead. Pursuing this objective, we introduce the Wasserstein distance, a metric grounded in optimal transport theory, to enhance distribution matching in DD. Our approach employs the Wasserstein barycenter to provide a geometrically meaningful method for quantifying distribution differences and capturing the centroid of distribution sets efficiently. By embedding synthetic data in the feature spaces of pretrained classification models, we facilitate effective distribution matching that leverages prior knowledge inherent in these models. Our method not only maintains the computational advantages of distribution matching-based techniques but also achieves new state-of-the-art performance across a range of high-resolution datasets. Extensive testing demonstrates the effectiveness and adaptability of our method, underscoring the untapped potential of Wasserstein metrics in dataset distillation.
Problem

Research questions and friction points this paper is trying to address.

Generate a compact synthetic dataset that matches full-dataset performance
Enhance distribution matching with the Wasserstein metric
Preserve intra-class variation while optimizing the synthetic data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses the Wasserstein metric for distribution matching
Computes the Wasserstein barycenter of pretrained-model features
Optimizes synthetic data using BatchNorm statistics
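The last bullet can be illustrated with a simplified regularizer: penalize the gap between the batch statistics of synthetic-data activations and the BatchNorm running statistics stored in a pretrained model. This is a hedged sketch of the general idea, not the paper's exact loss; `bn_stat_loss` and the squared-error form are assumptions.

```python
import numpy as np

def bn_stat_loss(feats, running_mean, running_var):
    # feats: (batch, channels) activations of synthetic data at one layer.
    # Match the batch mean/variance to the pretrained model's BatchNorm
    # running statistics (hypothetical simplified squared-error form).
    mu = feats.mean(axis=0)
    var = feats.var(axis=0)  # population variance, as BatchNorm uses
    return np.sum((mu - running_mean) ** 2) + np.sum((var - running_var) ** 2)
```

In practice such a term would be summed over all BatchNorm layers and added to the Wasserstein matching objective, so gradient updates to the synthetic images keep their feature statistics consistent with the real data the model was trained on.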