VideoAnchor: Reinforcing Subspace-Structured Visual Cues for Coherent Visual-Spatial Reasoning

📅 2025-09-29
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) suffer from weak visual-spatial reasoning and inconsistent cross-frame visual cues in video understanding, primarily because language tokens dominate the attention mechanism and suppress the contribution of visual tokens. To address this, we propose VideoAnchor, a training-free, plug-and-play module that, for the first time, integrates the self-expressive property of sparse subspace clustering into Transformer attention. By modeling subspace affinity, VideoAnchor anchors shared visual structures across frames, enhancing cross-frame visual cue consistency and mitigating language-dominant bias. Crucially, it requires no backbone fine-tuning, preserving model integrity while improving visual grounding and structural coherence. Evaluated on the VSI-Bench and Video-MME benchmarks, VideoAnchor achieves absolute improvements of +3.2% and +4.6%, respectively, demonstrating both effectiveness and broad applicability across diverse MLLMs.

๐Ÿ“ Abstract
Multimodal Large Language Models (MLLMs) have achieved impressive progress in vision-language alignment, yet they remain limited in visual-spatial reasoning. We first identify that this limitation arises from the attention mechanism: visual tokens are overshadowed by language tokens, preventing the model from consistently recognizing the same visual cues across frames. To address this challenge, we draw a novel connection between the self-expressiveness property in sparse subspace clustering and the attention mechanism in Transformers. Building on this insight, we propose VideoAnchor, a plug-and-play module that leverages subspace affinities to reinforce visual cues across frames without retraining, effectively anchoring attention to shared visual structures. Extensive experiments across benchmarks and backbone models show consistent performance gains, e.g., 3.2% and 4.6% improvements on VSI-Bench and Video-MME (spatial-related tasks) with InternVL2-8B and Qwen2.5VL-72B, while qualitative analyses demonstrate more coherent subspace partitions and stronger visual grounding. Our code will be made publicly available at https://github.com/feufhd/VideoAnchor.
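To make the connection between self-expressiveness and attention concrete, here is a minimal sketch of the general idea: estimate a self-expressive affinity among visual tokens (as in subspace clustering, here with a ridge-regularized stand-in for the sparse objective) and add it as a bias to the attention logits so that tokens from the same subspace reinforce one another. All function names, parameters, and the ridge formulation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def self_expressive_affinity(X, lam=0.1):
    """Solve min_C ||X - X C||^2 + lam ||C||^2 with zero diagonal
    (a ridge stand-in for the sparse self-expressiveness objective),
    then symmetrize |C| into an affinity matrix."""
    n = X.shape[0]
    G = X @ X.T                           # Gram matrix of visual tokens
    C = np.linalg.solve(G + lam * np.eye(n), G)
    np.fill_diagonal(C, 0.0)              # a token should not explain itself
    A = (np.abs(C) + np.abs(C).T) / 2     # symmetric affinity
    return A / (A.max() + 1e-8)           # normalize to [0, 1]

def anchored_attention(Q, K, V, affinity, beta=1.0):
    """Scaled dot-product attention with the subspace affinity
    added as an additive bias before the softmax."""
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d) + beta * affinity
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

# Toy usage: 6 "visual tokens" drawn from two 2-D subspaces of R^4.
rng = np.random.default_rng(0)
B1, B2 = rng.normal(size=(4, 2)), rng.normal(size=(4, 2))
X = np.vstack([B1 @ rng.normal(size=2) for _ in range(3)] +
              [B2 @ rng.normal(size=2) for _ in range(3)])
A = self_expressive_affinity(X)
out = anchored_attention(X, X, X, A)
print(out.shape)
```

In this toy setting, the affinity matrix `A` concentrates on token pairs drawn from the same subspace, so the bias steers attention toward structurally related tokens rather than letting unrelated logits dominate.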
Problem

Research questions and friction points this paper is trying to address.

Addresses visual token overshadowing by language tokens
Enhances visual-spatial reasoning in multimodal language models
Reinforces subspace-structured visual cues across video frames
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages subspace affinities to reinforce visual cues
Anchors attention to shared visual structures across frames
Plug-and-play module without retraining for spatial reasoning
Zhaozhi Wang
University of Chinese Academy of Sciences, Peng Cheng Lab
Tong Zhang
University of Chinese Academy of Sciences
Mingyue Guo
Peng Cheng Lab
Yaowei Wang
The Hong Kong Polytechnic University
Qixiang Ye
University of Chinese Academy of Sciences, University of Maryland
Visual Object Detection, Image Processing