VisNec: Measuring and Leveraging Visual Necessity for Multimodal Instruction Tuning

📅 2026-03-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language instruction-tuning datasets commonly suffer from visual redundancy and multimodal alignment errors, both of which hinder model performance. To address these issues, this work proposes VisNec, a framework that scores each sample's visual necessity, defined as the difference in prediction loss with and without visual input, to identify samples that genuinely require the image. Coupled with semantic clustering, VisNec selects a high-value, diverse training subset that significantly improves data efficiency. Empirically, using only 15% of the LLaVA-665K dataset, VisNec achieves 100.2% of full-dataset performance; on Vision-Flan-186K, it further reduces data volume while surpassing full-data training by 15.8%.
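The selection procedure described above can be sketched in a few lines. This is a minimal illustration, not the authors' released code: it assumes the two per-sample losses (text-only and text-plus-image) and the semantic cluster assignments have already been computed elsewhere, and it assumes the necessity score is the simple loss difference with a fixed per-cluster budget; the paper may use a different normalization or budget scheme.

```python
from collections import defaultdict

def visual_necessity(loss_text_only: float, loss_with_image: float) -> float:
    """Score how much the image helps: positive = vision-critical,
    near zero = visually redundant, negative = likely misaligned."""
    return loss_text_only - loss_with_image

def select_by_cluster(samples: list[dict], budget_per_cluster: int) -> list[dict]:
    """Pick the highest-necessity samples within each semantic cluster,
    preserving task diversity across clusters.

    Each sample dict is assumed to carry precomputed fields:
    'cluster', 'loss_text_only', 'loss_with_image'.
    """
    clusters: dict[int, list[dict]] = defaultdict(list)
    for s in samples:
        clusters[s["cluster"]].append(s)

    selected = []
    for members in clusters.values():
        members.sort(
            key=lambda s: visual_necessity(s["loss_text_only"], s["loss_with_image"]),
            reverse=True,
        )
        selected.extend(members[:budget_per_cluster])
    return selected

# Toy example with hypothetical precomputed losses:
samples = [
    {"id": "a", "cluster": 0, "loss_text_only": 2.0, "loss_with_image": 0.5},
    {"id": "b", "cluster": 0, "loss_text_only": 0.6, "loss_with_image": 0.55},
    {"id": "c", "cluster": 1, "loss_text_only": 1.0, "loss_with_image": 1.4},
    {"id": "d", "cluster": 1, "loss_text_only": 1.2, "loss_with_image": 0.4},
]
subset = select_by_cluster(samples, budget_per_cluster=1)
# "a" (necessity 1.5) and "d" (0.8) are kept; the redundant "b" (0.05)
# and the misaligned "c" (-0.4) are dropped.
```

In practice the two losses would come from the same vision-language model evaluated on each instruction twice, once with the image and once with it masked or removed, which is the expensive part of the pipeline; the selection itself is cheap.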

📝 Abstract
The effectiveness of multimodal instruction tuning depends not only on dataset scale, but critically on whether training samples genuinely require visual reasoning. However, existing instruction datasets often contain a substantial portion of visually redundant samples (solvable from text alone), as well as multimodally misaligned supervision that can degrade learning. To address this, we propose VisNec (Visual Necessity Score), a principled data selection framework that measures the marginal contribution of visual input during instruction tuning. By comparing predictive loss with and without visual context, VisNec identifies whether a training instance is vision-critical, redundant, or misaligned. To preserve task diversity, we combine VisNec with semantic clustering and select high-necessity samples within each cluster. Across 10 downstream benchmarks, training on only 15% of the LLaVA-665K dataset selected by VisNec achieves 100.2% of full-data performance. On the smaller Vision-Flan-186K dataset, our selection not only further reduces data size but also surpasses full-data training by 15.8%. These results demonstrate that measuring and leveraging visual necessity provides an effective solution for both efficient and robust multimodal instruction tuning. Codes and selected subsets will be released upon acceptance.
Problem

Research questions and friction points this paper is trying to address.

visual redundancy
multimodal misalignment
instruction tuning
data selection
visual necessity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual Necessity
Multimodal Instruction Tuning
Data Selection
Vision-Language Alignment
Efficient Training