🤖 AI Summary
Multimodal large language models (MLLMs) face prohibitive computational overhead and latency in long-video understanding due to linear growth of visual tokens with video length. To address this, we propose QTSplus, a query-aware lightweight visual token selection module. Its core contributions are: (1) cross-attention-driven relevance scoring conditioned on the textual query; (2) instance-level retention budget prediction adaptive to query complexity; (3) differentiable Top-n selection during training coupled with hard gating at inference; and (4) a compact re-encoder augmented with absolute temporal positional encoding to preserve temporal structure. Integrated into Qwen2.5-VL, QTSplus achieves an 89% visual stream compression rate and reduces end-to-end latency by 28%, while maintaining near-original accuracy across eight benchmarks. Notably, it improves directional and sequential accuracy on the TempCompass benchmark by +20.5 and +5.6 points, respectively.
📝 Abstract
Despite the recent advances in the video understanding ability of multimodal large language models (MLLMs), long video understanding remains a challenge. One of the main issues is that the number of vision tokens grows linearly with video length, which causes an explosion in attention cost, memory, and latency. To address this challenge, we present the Query-aware Token Selector (**QTSplus**), a lightweight yet powerful visual token selection module that serves as an information gate between the vision encoder and LLMs. Given a text query and video tokens, QTSplus dynamically selects the visual evidence most relevant to the input text query by (i) scoring visual tokens via cross-attention, (ii) *predicting* an instance-specific retention budget based on the complexity of the query, and (iii) *selecting* Top-$n$ tokens with a differentiable straight-through estimator during training and a hard gate at inference. Furthermore, a small re-encoder preserves temporal order using absolute time information, enabling second-level localization while maintaining global coverage.
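The three-step gate (i)-(iii) can be sketched in plain Python, inference path only: the differentiable straight-through Top-$n$ used in training and the temporal re-encoder are omitted, and `w_budget` is a hypothetical linear head standing in for the paper's budget predictor, not its exact parameterization.

```python
import math

def qts_select(video_tokens, query_emb, w_budget, min_keep=1):
    """Minimal sketch of query-aware token selection (inference-time hard gate).

    video_tokens: list of T feature vectors (lists of floats).
    query_emb: pooled text-query vector.
    w_budget: hypothetical linear head mapping the query to a retention
    fraction (an assumption for illustration, not the paper's design).
    """
    d = len(query_emb)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    # (i) cross-attention-style relevance: scaled dot product with the query
    scores = [dot(tok, query_emb) / math.sqrt(d) for tok in video_tokens]
    # (ii) instance-level budget: sigmoid of a linear read-out of the query,
    # interpreted as the fraction of visual tokens to retain
    frac = 1.0 / (1.0 + math.exp(-dot(query_emb, w_budget)))
    n = max(min_keep, round(frac * len(video_tokens)))
    # (iii) hard Top-n gate; kept indices are re-sorted so temporal order survives
    keep = sorted(sorted(range(len(scores)), key=lambda i: -scores[i])[:n])
    return [video_tokens[i] for i in keep], keep
```

Re-sorting the kept indices in step (iii) matters: the downstream re-encoder relies on tokens arriving in their original temporal order before absolute time encodings are applied.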
Integrated into Qwen2.5-VL, QTSplus compresses the vision stream by up to **89%** and reduces end-to-end latency by **28%** on long videos. Across eight long video understanding benchmarks, QTSplus maintains near-parity accuracy overall with the original Qwen models while outperforming them by **+20.5** and **+5.6** points on TempCompass direction and order accuracies, respectively. These results show that QTSplus is an effective, general mechanism for scaling MLLMs to real-world long-video scenarios while preserving task-relevant evidence.
We will make all code, data, and trained model weights publicly available.