Dataset Distillation via Committee Voting

📅 2025-01-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of substantial single-model bias, severe distribution shift, and limited generalization in dataset distillation, this paper proposes Committee Voting for Dataset Distillation (CV-DD). CV-DD employs a multi-model committee to collaboratively generate soft labels, integrates gradient-matching optimization, and introduces an adaptive weighted voting mechanism to synthesize compact, high-fidelity distilled datasets. It establishes the first ensemble distillation paradigm grounded in multi-model predictive distributions, effectively mitigating overfitting and distribution shift. Extensive experiments across multiple benchmark datasets and varying images-per-class (IPC) settings demonstrate that CV-DD consistently outperforms state-of-the-art methods: it achieves an average 2.3% improvement in downstream task accuracy, alongside markedly improved generalization and training stability.
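The committee soft-label idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: the weighted averaging of per-model predictive distributions is an assumed simplification of CV-DD's adaptive voting, and the function name `committee_soft_labels` and the fixed `weights` argument are hypothetical.

```python
import numpy as np

def committee_soft_labels(logits_list, weights=None):
    """Combine per-model logits into committee soft labels.

    logits_list: list of (num_samples, num_classes) logit arrays, one per model.
    weights: optional per-model weights (e.g. derived from validation
             accuracy); uniform if None. A simplification of CV-DD's
             adaptive weighted voting, not the paper's exact rule.
    """
    if weights is None:
        weights = np.ones(len(logits_list))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize so the result stays a distribution

    def softmax(z):
        # Numerically stable row-wise softmax.
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    # Weighted average of each committee member's predictive distribution.
    return sum(w * softmax(z) for w, z in zip(weights, logits_list))

# Example: a committee of two models, 3 samples, 4 classes.
rng = np.random.default_rng(0)
committee_logits = [rng.normal(size=(3, 4)) for _ in range(2)]
soft = committee_soft_labels(committee_logits, weights=[0.7, 0.3])
```

Because the per-model softmax outputs are convex-combined, each row of `soft` is itself a valid probability distribution, which is what makes it usable as a soft label for training on the distilled data.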

📝 Abstract
Dataset distillation aims to synthesize a smaller, representative dataset that preserves the essential properties of the original data, enabling efficient model training with reduced computational resources. Prior work has primarily focused on improving the alignment or matching process between original and synthetic data, or on enhancing the efficiency of distilling large datasets. In this work, we introduce **C**ommittee **V**oting for **D**ataset **D**istillation (CV-DD), a novel and orthogonal approach that leverages the collective wisdom of multiple models or experts to create high-quality distilled datasets. We start by showing how to establish a strong baseline that already achieves state-of-the-art accuracy through leveraging recent advancements and thoughtful adjustments in model design and optimization processes. By integrating distributions and predictions from a committee of models while generating high-quality soft labels, our method captures a wider spectrum of data features, reduces model-specific biases and the adverse effects of distribution shifts, leading to significant improvements in generalization. This voting-based strategy not only promotes diversity and robustness within the distilled dataset but also significantly reduces overfitting, resulting in improved performance on post-eval tasks. Extensive experiments across various datasets and IPCs (images per class) demonstrate that Committee Voting leads to more reliable and adaptable distilled data compared to single/multi-model distillation methods, demonstrating its potential for efficient and accurate dataset distillation. Code is available at: https://github.com/Jiacheng8/CV-DD.
Problem

Research questions and friction points this paper is trying to address.

Data Simplification
Computational Efficiency
Model Performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

CV-DD
Ensemble Modeling
Data Refinement