Scaling Capability in Token Space: An Analysis of Large Vision Language Model

📅 2024-12-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the scaling relationship between visual token count and performance in vision-language models (VLMs), as well as question-guided visual–language fusion mechanisms. Method: We formally establish that performance $S(N_l)$ scales with visual token count $N_l$ according to a power law $S(N_l) approx (c/N_l)^alpha$, validated across 15 diverse vision-language benchmarks. We further propose a task-aware visual–language fusion mechanism that leverages user questions as semantic priors to enhance alignment between visual tokens and language representations, without disrupting the underlying scaling trend. The approach integrates theoretical modeling with empirical multi-scale token sampling analysis. Contribution/Results: Our findings demonstrate that VLMs exhibit language-model-like scalability. The proposed fusion mechanism significantly improves question-relevant visual question answering, yielding an average gain of 12.7% across benchmarks—confirming that user queries serve as effective inductive biases for semantic grounding of visual features.

Technology Category

Application Category

📝 Abstract
The scaling capability has been widely validated in neural language models with respect to the number of parameters and the size of training data. One important question is that does the scaling capability also exists similarly with respect to the number of vision tokens in large vision language Model? This study fills the gap by investigating the relationship between the number of vision tokens and the performance on vision-language models. Our theoretical analysis and empirical evaluations demonstrate that the model exhibits scalable performance (S(N_l)) with respect to the number of vision tokens (N_l), characterized by the relationship (S(N_l) approx (c/N_l)^{alpha}). Furthermore, we also investigate the impact of a fusion mechanism that integrates the user's question with vision tokens. The results reveal two key findings. First, the scaling capability remains intact with the incorporation of the fusion mechanism. Second, the fusion mechanism enhances model performance, particularly when the user's question is task-specific and relevant. The analysis, conducted on fifteen diverse benchmarks spanning a broad range of tasks and domains, validates the effectiveness of the proposed approach.
Problem

Research questions and friction points this paper is trying to address.

Visual Language Models
Visual Tokens
User Queries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual Language Models
Visual Tokens
Performance Enhancement
🔎 Similar Papers
No similar papers found.
T
Tenghui Li
School of Automation, Guangdong University of Technology, Guangzhou 510006, China; Tensor Learning Team, RIKEN Center for Advanced Intelligence Project, Tokyo 103-0027, Japan
Guoxu Zhou
Guoxu Zhou
Guangdong University of Technology
Tensor Analysisblind source separationmachine learning
Xuyang Zhao
Xuyang Zhao
Peking University
statisticsmachine learning
Qibin Zhao
Qibin Zhao
RIKEN AIP
Machine LearningTensor DecompositionTensor Networks