🤖 AI Summary
Vision-language model training suffers from high computational overhead and low efficiency due to data redundancy. To address this, we propose a gradient-based influence consensus framework for data selection. Our method constructs a multi-task influence matrix by estimating per-sample gradient influence across diverse vision-language tasks, then aggregates importance scores via majority voting—enabling cross-task collaborative assessment beyond single-task influence analysis. The pipeline comprises gradient influence estimation, multi-task influence matrix modeling, consensus-driven sample selection, and joint vision-language fine-tuning. Evaluated on the LLaVA benchmark, our approach achieves 98.6% of the full-dataset (665K samples) performance using only 20% of the data (133K samples), substantially improving training efficiency. We publicly release LLaVA-ICONS-133K—a high-quality, compact subset—establishing a new paradigm for efficient visual instruction tuning.
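The consensus step above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the per-sample influence scores have already been estimated into an `(n_samples, n_tasks)` matrix, and the function name, vote threshold, and tie-breaking rule are hypothetical choices for clarity.

```python
import numpy as np

def consensus_select(influence, k_frac=0.2, vote_frac=0.2):
    """Select samples by cross-task majority voting on influence scores.

    influence: (n_samples, n_tasks) matrix of estimated gradient influences
               (assumed precomputed; its estimation is not shown here).
    vote_frac: each task "votes" for its top fraction of samples.
    k_frac:    fraction of samples to keep overall (e.g. 0.2, as in the paper).
    """
    n_samples, n_tasks = influence.shape
    top_n = max(1, int(vote_frac * n_samples))
    # Each task votes for the samples with its highest influence scores.
    votes = np.zeros(n_samples, dtype=int)
    for t in range(n_tasks):
        top_idx = np.argsort(influence[:, t])[-top_n:]
        votes[top_idx] += 1
    # Keep the samples with the most cross-task votes; ties are broken
    # here by mean influence (an illustrative choice, not the paper's).
    order = np.lexsort((influence.mean(axis=1), votes))
    k = max(1, int(k_frac * n_samples))
    return np.sort(order[-k:])
```

Voting on task-specific rankings rather than averaging raw scores keeps the selection robust to tasks whose influence magnitudes differ in scale.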
📝 Abstract
Visual Instruction Tuning typically requires a large amount of vision-language training data. This data often contains redundant information that increases computational costs without proportional performance gains. In this work, we introduce ICONS, a gradient-driven Influence CONsensus approach for vision-language data Selection that selects a compact training dataset for efficient multi-task training. The key element of our approach is cross-task influence consensus, which uses majority voting across task-specific influence matrices to identify samples that are consistently valuable across multiple tasks, allowing us to prioritize data that optimizes overall performance. Experiments show that models trained on our selected data (20% of LLaVA-665K) achieve 98.6% of the relative performance obtained with the full dataset. We also release this subset, LLaVA-ICONS-133K, a compact yet highly informative subset of the LLaVA-665K visual instruction tuning data that preserves high-impact training examples for efficient vision-language model development.