🤖 AI Summary
This study investigates the scaling relationship between visual token count and performance in vision-language models (VLMs), as well as question-guided visual–language fusion mechanisms. Method: We formally establish that performance $S(N_l)$ scales with visual token count $N_l$ according to a power law $S(N_l) approx (c/N_l)^alpha$, validated across 15 diverse vision-language benchmarks. We further propose a task-aware visual–language fusion mechanism that leverages user questions as semantic priors to enhance alignment between visual tokens and language representations, without disrupting the underlying scaling trend. The approach integrates theoretical modeling with empirical multi-scale token sampling analysis. Contribution/Results: Our findings demonstrate that VLMs exhibit language-model-like scalability. The proposed fusion mechanism significantly improves question-relevant visual question answering, yielding an average gain of 12.7% across benchmarks—confirming that user queries serve as effective inductive biases for semantic grounding of visual features.
📝 Abstract
The scaling capability has been widely validated in neural language models with respect to the number of parameters and the size of training data. One important question is that does the scaling capability also exists similarly with respect to the number of vision tokens in large vision language Model? This study fills the gap by investigating the relationship between the number of vision tokens and the performance on vision-language models. Our theoretical analysis and empirical evaluations demonstrate that the model exhibits scalable performance (S(N_l)) with respect to the number of vision tokens (N_l), characterized by the relationship (S(N_l) approx (c/N_l)^{alpha}). Furthermore, we also investigate the impact of a fusion mechanism that integrates the user's question with vision tokens. The results reveal two key findings. First, the scaling capability remains intact with the incorporation of the fusion mechanism. Second, the fusion mechanism enhances model performance, particularly when the user's question is task-specific and relevant. The analysis, conducted on fifteen diverse benchmarks spanning a broad range of tasks and domains, validates the effectiveness of the proposed approach.