Multi-Modal Dataset Distillation in the Wild

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high storage overhead and the performance degradation caused by noisy, large-scale web data in multimodal model training, this paper proposes the first multimodal dataset distillation framework designed for real-world, in-the-wild data. Methodologically, it introduces learnable fine-grained cross-modal correspondence modeling, combined with adaptive region optimization and a dual-track collaborative learning mechanism, to formulate a noise-robust distillation objective. The framework markedly enhances the information density and discriminability of the distilled data, outperforming state-of-the-art methods by over 15% on average across diverse compression ratios. Extensive experiments also demonstrate strong scalability and deployment flexibility: the framework substantially reduces training costs and accelerates model convergence, establishing a new paradigm for high-quality, low-cost multimodal pretraining that exploits noisy, web-scale data while preserving semantic fidelity and task relevance.

📝 Abstract
Recent multi-modal models have shown remarkable versatility in real-world applications. However, their rapid development encounters two critical data challenges. First, the training process requires large-scale datasets, leading to substantial storage and computational costs. Second, these data are typically web-crawled with inevitable noise, i.e., partially mismatched pairs, severely degrading model performance. To these ends, we propose Multi-modal dataset Distillation in the Wild, i.e., MDW, the first framework to distill noisy multi-modal datasets into compact clean ones for effective and efficient model training. Specifically, MDW introduces learnable fine-grained correspondences during distillation and adaptively optimizes distilled data to emphasize correspondence-discriminative regions, thereby enhancing distilled data's information density and efficacy. Moreover, to capture robust cross-modal correspondence prior knowledge from real data, MDW proposes dual-track collaborative learning to avoid the risky data noise, alleviating information loss with certifiable noise tolerance. Extensive experiments validate MDW's theoretical and empirical efficacy with remarkable scalability, surpassing prior methods by over 15% across various compression ratios, highlighting its appealing practicality for applications with diverse efficacy and resource needs.
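The abstract's central idea, jointly optimizing distilled data together with learnable cross-modal correspondences, can be sketched in a few lines. Everything below (the contrastive loss form, the diagonal "clean pair" prior, the use of learnable embedding-space stand-ins for distilled images and captions, and all hyperparameters) is an illustrative assumption, not MDW's actual objective:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, d = 8, 16  # number of distilled pairs, embedding dimension

# Distilled data as directly learnable tensors (embedding-space stand-ins
# for distilled images and captions in this toy sketch).
img = torch.randn(n, d, requires_grad=True)
txt = torch.randn(n, d, requires_grad=True)
# Learnable soft correspondence logits between every image/text pair.
corr_logits = torch.zeros(n, n, requires_grad=True)

opt = torch.optim.Adam([img, txt, corr_logits], lr=0.1)
losses = []
for _ in range(200):
    opt.zero_grad()
    sim = F.normalize(img, dim=1) @ F.normalize(txt, dim=1).T  # cosine similarities
    target = corr_logits.softmax(dim=1)                        # soft correspondences
    # Align image/text similarities with the learned correspondences, and
    # regularize the correspondences toward the identity (a clean-diagonal prior).
    loss = F.cross_entropy(sim / 0.1, target) \
         + 0.1 * F.cross_entropy(corr_logits, torch.arange(n))
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

Because both the distilled tensors and the correspondence logits receive gradients, the sketch captures the paper's stated principle that correspondences are learned during distillation rather than fixed by the (possibly mismatched) web pairs.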
Problem

Research questions and friction points this paper is trying to address.

Reducing storage and computational costs in multi-modal training
Cleaning noisy web-crawled multi-modal datasets
Enhancing distilled data's information density and efficacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distills noisy multi-modal datasets into compact clean ones
Introduces learnable fine-grained correspondences during distillation
Uses dual-track collaborative learning to avoid data noise
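The dual-track mechanism is not specified in detail on this page; one plausible reading is a co-teaching-style scheme in which two tracks each keep their small-loss (likely clean) pairs and hand them to the other track, so that neither track memorizes its own noise. The toy loss model and all names below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs, noise_rate = 100, 0.3
is_clean = rng.random(n_pairs) > noise_rate  # ground truth (unknown to the tracks)

def pair_losses():
    # Toy model: clean pairs tend to incur a lower matching loss than
    # mismatched (noisy) pairs.
    base = rng.normal(0.5, 0.1, n_pairs)
    return np.where(is_clean, base, base + rng.normal(1.0, 0.2, n_pairs))

loss_a = pair_losses()  # per-pair losses under track A
loss_b = pair_losses()  # per-pair losses under track B

keep = int((1 - noise_rate) * n_pairs)
# Each track selects its small-loss subset *for the other track* to learn from.
select_for_b = np.argsort(loss_a)[:keep]
select_for_a = np.argsort(loss_b)[:keep]

purity_a = is_clean[select_for_a].mean()  # fraction of truly clean pairs kept
purity_b = is_clean[select_for_b].mean()
```

Exchanging the selected subsets is the "collaborative" part: each track is trained on pairs the other track judged reliable, which is what gives co-teaching-style schemes their noise tolerance.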