AI Summary
Multimodal large language models (MLLMs) suffer from weak visual-spatial reasoning and inconsistent cross-frame visual cues in video understanding, primarily because language tokens dominate the attention mechanism and suppress the contributions of visual tokens. To address this, we propose VideoAnchor, a training-free, plug-and-play module that, for the first time, integrates the self-expressive property of sparse subspace clustering into Transformer attention. By modeling subspace affinity, VideoAnchor anchors shared visual structures across frames, enhancing cross-frame visual cue consistency and mitigating language-dominant bias. Crucially, it requires no backbone fine-tuning, preserving model integrity while improving visual grounding and structural coherence. Evaluated on the VSI-Bench and Video-MME benchmarks, VideoAnchor achieves absolute improvements of +3.2% and +4.6%, respectively, demonstrating both effectiveness and broad applicability across diverse MLLMs.
Abstract
Multimodal Large Language Models (MLLMs) have achieved impressive progress in vision-language alignment, yet they remain limited in visual-spatial reasoning. We first identify that this limitation arises from the attention mechanism: visual tokens are overshadowed by language tokens, preventing the model from consistently recognizing the same visual cues across frames. To address this challenge, we draw a novel connection between the self-expressiveness property in sparse subspace clustering and the attention mechanism in Transformers. Building on this insight, we propose VideoAnchor, a plug-and-play module that leverages subspace affinities to reinforce visual cues across frames without retraining, effectively anchoring attention to shared visual structures. Extensive experiments across benchmarks and backbone models show consistent performance gains, e.g., 3.2% on VSI-Bench with InternVL2-8B and 4.6% on the spatial-related tasks of Video-MME with Qwen2.5VL-72B, while qualitative analyses demonstrate more coherent subspace partitions and stronger visual grounding. Our code will be made publicly available at https://github.com/feufhd/VideoAnchor.
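To make the connection between self-expressiveness and attention concrete, here is a minimal NumPy sketch (not the paper's implementation; the regularizer, the ridge relaxation of the sparse program, and the way the affinity is injected into the attention logits are all our illustrative assumptions). In sparse subspace clustering, each data point is reconstructed as a combination of the other points, and the resulting coefficient matrix serves as an affinity between points lying in the same subspace; the sketch adds such an affinity over visual tokens as a bias on softmax attention logits, loosely mirroring how a module like VideoAnchor might anchor attention to shared visual structures.

```python
import numpy as np

def self_expressive_affinity(X, gamma=0.1):
    """Affinity over tokens X (n, d) via self-expressiveness.

    Solves a ridge-regularized least squares min ||X - C X||^2 + gamma ||C||^2,
    a dense stand-in for the sparse program min ||C||_1 s.t. X = C X,
    diag(C) = 0 (here the diagonal is simply zeroed after solving).
    """
    n = X.shape[0]
    G = X @ X.T                            # Gram matrix of token features
    C = np.linalg.solve(G + gamma * np.eye(n), G)
    np.fill_diagonal(C, 0.0)               # a token may not explain itself
    W = np.abs(C) + np.abs(C).T            # symmetrize into an affinity
    return W / (W.max() + 1e-8)            # scale to [0, 1]

def anchored_attention(Q, K, V, W, lam=1.0):
    """Softmax attention with a subspace-affinity bias on the logits."""
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d) + lam * W
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    A = np.exp(logits)
    A /= A.sum(axis=-1, keepdims=True)
    return A @ V
```

Tokens drawn from the same low-dimensional subspace (e.g., patches of the same object across frames) reconstruct each other with large coefficients, so the bias `lam * W` steers attention mass toward those shared structures without any retraining.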