Task-Specific Generative Dataset Distillation with Difficulty-Guided Sampling

πŸ“… 2025-07-04
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To reduce the heavy reliance of deep neural networks on large-scale labeled data, this paper proposes a task-oriented generative dataset distillation method. To make the synthesized data more effective for downstream classification, the authors introduce a difficulty-aware sampling strategy: sample difficulty is first quantified, then distributional bias is corrected via a logarithmic transformation so that distribution matching can faithfully replicate the original dataset’s difficulty distribution in the distilled data. This strategy improves the task-specific adaptability of the distilled samples. Experiments demonstrate that the method achieves higher classification accuracy than state-of-the-art distillation approaches across multiple benchmark datasets, while compressing the synthetic dataset to less than 1% of the original size. Notably, this work is the first to systematically integrate task-specific difficulty modeling into a generative distillation framework, establishing a new paradigm for efficient and lightweight model training.
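The summary's first step is "quantifying sample difficulty". The paper's exact difficulty measure is not given here; a common proxy is the per-sample cross-entropy loss of a reference classifier, which the sketch below assumes (the function name and choice of proxy are hypothetical):

```python
import numpy as np

def difficulty_scores(probs, labels):
    """Per-sample difficulty as the cross-entropy loss of a reference
    classifier's softmax outputs (a common proxy; the paper's actual
    measure may differ).

    probs: (N, C) array of predicted class probabilities.
    labels: (N,) array of ground-truth class indices.
    Returns an (N,) array; higher value = harder sample.
    """
    eps = 1e-12  # avoid log(0) for confident wrong predictions
    p_true = probs[np.arange(len(labels)), labels]
    return -np.log(p_true + eps)
```

Under this proxy, a sample the reference model classifies confidently gets a low score, while a misclassified or low-confidence sample gets a high score.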

πŸ“ Abstract
To alleviate the reliance of deep neural networks on large-scale datasets, dataset distillation aims to generate compact, high-quality synthetic datasets that achieve performance comparable to the original dataset. The integration of generative models has significantly advanced this field. However, existing approaches primarily focus on aligning the distilled dataset with the original one, often overlooking task-specific information that can be critical for optimal downstream performance. In this paper, focusing on the downstream task of classification, we propose a task-specific sampling strategy for generative dataset distillation that incorporates the concept of difficulty to better account for the requirements of the target task. The final dataset is sampled from a larger image pool with a sampling distribution obtained by matching the difficulty distribution of the original dataset. A logarithmic transformation is applied as a pre-processing step to correct for distributional bias. The results of extensive experiments demonstrate the effectiveness of our method and suggest its potential for enhancing performance on other downstream tasks.
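The abstract's sampling step can be sketched as follows. This is a minimal illustration under assumptions: difficulty scores for both datasets are given, the log transform is `log1p`, and distribution matching is approximated by histogram matching over shared bins (the paper's actual matching procedure may differ):

```python
import numpy as np

def difficulty_guided_sample(pool_difficulty, orig_difficulty, k,
                             n_bins=20, seed=0):
    """Pick ~k indices from a generated image pool so that their
    log-difficulty histogram approximates the original dataset's.
    Hypothetical sketch of the paper's sampling strategy.

    pool_difficulty: (P,) difficulty scores of the image pool.
    orig_difficulty: (N,) difficulty scores of the original dataset.
    k: target number of distilled samples.
    """
    rng = np.random.default_rng(seed)
    # Log transform as a pre-processing step to correct skew.
    log_pool = np.log1p(pool_difficulty)
    log_orig = np.log1p(orig_difficulty)
    # Shared bin edges covering both distributions.
    edges = np.histogram_bin_edges(
        np.concatenate([log_pool, log_orig]), bins=n_bins)
    # Target per-bin counts taken from the original difficulty histogram.
    orig_hist, _ = np.histogram(log_orig, bins=edges)
    target = np.round(orig_hist / orig_hist.sum() * k).astype(int)
    # Assign each pool sample to a bin (0 .. n_bins-1).
    pool_bins = np.digitize(log_pool, edges[1:-1])
    chosen = []
    for b in range(n_bins):
        idx = np.flatnonzero(pool_bins == b)
        take = min(target[b], idx.size)
        if take > 0:
            chosen.extend(rng.choice(idx, size=take, replace=False))
    return np.asarray(chosen)
```

Because per-bin counts are rounded independently, the returned subset size is approximately (not exactly) `k`; a real implementation would redistribute the rounding residue across bins.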
Problem

Research questions and friction points this paper is trying to address.

Reducing reliance on large datasets via synthetic data generation
Incorporating task-specific difficulty in dataset distillation
Improving downstream classification performance with optimized sampling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Task-specific sampling strategy for distillation
Difficulty-guided sampling from image pool
Logarithmic transformation corrects distribution bias
πŸ”Ž Similar Papers
No similar papers found.