DataProphet: Demystifying Supervision Data Generalization in Multimodal LLMs

📅 2026-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work challenges the common practice of selecting supervision data for multimodal large language models based on apparent task similarity, a heuristic that lacks empirical validation as a predictor of downstream performance. Through a systematic analysis of transfer performance across 14 vision-language datasets and 7 task categories, the study shows that task similarity is an unreliable indicator of transferability and that intrinsic dataset properties are more decisive. To address this, the authors propose DataProphet, a training-free method for supervision-data selection that integrates multimodal perplexity, semantic similarity, and data diversity. In experiments, DataProphet's rankings achieve a Kendall's tau correlation of 86.0% with rankings based on actual post-training performance gains, and the data it selects yields up to a 6.9% improvement over uniform sampling, 1.4% over the state-of-the-art training-based baseline, and 0.2% above an oracle selection based on empirical performance.

📝 Abstract
Conventional wisdom for selecting supervision data for multimodal large language models (MLLMs) is to prioritize datasets that appear similar to the target benchmark, such as text-intensive or vision-centric tasks. However, it remains unclear whether such intuitive similarity reliably predicts downstream performance gains. In this work, we take a first step toward answering a practical question: can we estimate the influence of a training dataset on a target benchmark before any training is performed? To investigate this question, we conduct an in-depth analysis of transfer across 14 vision-language datasets spanning 7 diverse tasks. Our results show that intuitive task similarity is an unreliable predictor of transferability, and that generalization depends more on the specific dataset than on its broad task category. Motivated by this finding, we propose DATAPROPHET, a simple and effective training-free metric that combines multimodal perplexity, similarity, and data diversity. Experiments show that DATAPROPHET produces supervision-data rankings that strongly correlate with rankings based on actual post-training performance gains, achieving a Kendall's tau of 86.0%. Moreover, DATAPROPHET enables better supervision-data selection, yielding up to 6.9% improvement over uniform selection, 1.4% over a state-of-the-art training-based baseline, and 0.2% above oracle selection based on experimental performance. Our code and data will be released.
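The paper does not publish DATAPROPHET's exact formula here, so the sketch below is an illustrative assumption: it combines the three named training-free signals (multimodal perplexity, similarity to the target benchmark, and data diversity) as a simple weighted sum, then checks how well the resulting ranking agrees with hypothetical post-training gains via Kendall's tau. All dataset names, weights, and numbers are made up for the example.

```python
from dataclasses import dataclass

@dataclass
class DatasetStats:
    name: str
    perplexity: float   # multimodal perplexity on the candidate dataset (lower = easier to model)
    similarity: float   # semantic similarity to the target benchmark, in [0, 1]
    diversity: float    # intra-dataset diversity, in [0, 1]

def prophet_score(d, w_ppl=1.0, w_sim=1.0, w_div=1.0):
    """Combine the three training-free signals into one ranking score.

    Perplexity enters inversely so that lower perplexity raises the score.
    The weighted-sum form and unit weights are assumptions for illustration.
    """
    return w_ppl * (1.0 / d.perplexity) + w_sim * d.similarity + w_div * d.diversity

def kendall_tau(a, b):
    """Kendall's tau-a between two score lists over the same items."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical candidate supervision datasets.
candidates = [
    DatasetStats("caption-heavy", perplexity=8.0,  similarity=0.70, diversity=0.55),
    DatasetStats("ocr-centric",   perplexity=12.0, similarity=0.40, diversity=0.80),
    DatasetStats("vqa-style",     perplexity=6.0,  similarity=0.85, diversity=0.60),
]
predicted = [prophet_score(d) for d in candidates]
observed = [1.9, 1.1, 2.3]  # hypothetical post-training benchmark gains

print(kendall_tau(predicted, observed))  # → 1.0 (identical ordering in this toy case)
```

A tau of 1.0 means the training-free ranking orders the datasets exactly as the observed gains do; the paper's reported 86.0% corresponds to near-perfect but not identical agreement across its 14 datasets.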
Problem

Research questions and friction points this paper is trying to address.

supervision data selection
multimodal LLMs
data generalization
transferability prediction
training-free evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

supervision data selection
multimodal LLMs
training-free metric
data generalization
DATAPROPHET