Efficient Dataset Distillation via Diffusion-Driven Patch Selection for Improved Generalization

📅 2024-12-13
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
Dataset distillation for large-scale datasets (e.g., ImageNet-1K) and complex models (e.g., ResNet-101) remains computationally expensive and inefficient, especially under conventional generative or multi-step iterative paradigms. Method: This paper proposes a diffusion-driven, single-step discriminative image patch selection framework—bypassing generative modeling and iterative optimization. It leverages pre-trained diffusion models not for synthesis, but for discriminative importance scoring of image patches. Specifically, it integrates noise-prediction-guided pixel-level loss difference localization with intra-class k-means clustering to jointly optimize discriminability and diversity. Contribution/Results: The method achieves high-fidelity distillation in a single forward pass. It outperforms state-of-the-art approaches across multiple benchmarks, exhibits significantly improved generalization to unseen architectures on compact distilled sets, and reduces computational overhead by over 40%.

Technology Category

Application Category

📝 Abstract
Dataset distillation offers an efficient way to reduce memory and computational costs by optimizing a smaller dataset with performance comparable to the full-scale original. However, for large datasets and complex deep networks (e.g., ImageNet-1K with ResNet-101), the extensive optimization space limits performance, reducing its practicality. Recent approaches employ pre-trained diffusion models to generate informative images directly, avoiding pixel-level optimization and achieving notable results. However, these methods often face challenges due to distribution shifts between pre-trained models and target datasets, along with the need for multiple distillation steps across varying settings. To address these issues, we propose a novel framework orthogonal to existing diffusion-based distillation methods, leveraging diffusion models for selection rather than generation. Our method starts by predicting noise generated by the diffusion model based on input images and text prompts (with or without label text), then calculates the corresponding loss for each pair. With the loss differences, we identify distinctive regions of the original images. Additionally, we perform intra-class clustering and ranking on selected patches to maintain diversity constraints. This streamlined framework enables a single-step distillation process, and extensive experiments demonstrate that our approach outperforms state-of-the-art methods across various metrics.
Problem

Research questions and friction points this paper is trying to address.

Efficient dataset distillation
Reduced computational costs
Improved generalization performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion models for patch selection
Single-step distillation process
Intra-class clustering for diversity
🔎 Similar Papers
No similar papers found.