🤖 AI Summary
Large Vision-Language Models (LVLMs) exhibit inconsistent outputs when key visual information undergoes spatial position shifts, revealing a fundamental deficiency in spatial-semantic understanding. This paper identifies the root cause as inherent spatial bias in the positional embeddings of the language model component, downstream of the visual encoder. To address this, the authors propose Balanced Position Assignment (BaPA), a mechanism that assigns identical positional embeddings to all image tokens, thereby eliminating positional sensitivity and promoting balanced visual perception. BaPA improves spatial robustness without retraining, is compatible with mainstream positional encodings (e.g., RoPE), and yields further gains when combined with lightweight fine-tuning. Experiments demonstrate that BaPA significantly improves spatial robustness and overall performance across multiple multimodal benchmarks. Attention analysis further confirms that BaPA fosters more uniform and global cross-modal information fusion.
📝 Abstract
Large Vision-Language Models (LVLMs) have achieved remarkable success across a wide range of multimodal tasks, yet their robustness to spatial variations remains insufficiently understood. In this work, we present a systematic study of the spatial bias of LVLMs, focusing on how models respond when identical key visual information is placed at different locations within an image. Through a carefully designed probing dataset, we demonstrate that current LVLMs often produce inconsistent outputs under such spatial shifts, revealing a fundamental limitation in their spatial-semantic understanding. Further analysis shows that this phenomenon originates not from the vision encoder, which reliably perceives and interprets visual content across positions, but from the unbalanced design of position embeddings in the language model component. In particular, widely adopted position-embedding strategies such as RoPE introduce imbalance during cross-modal interaction, causing image tokens at different positions to exert unequal influence on semantic understanding. To mitigate this issue, we introduce Balanced Position Assignment (BaPA), a simple yet effective mechanism that assigns identical position embeddings to all image tokens, promoting a more balanced integration of visual information. Extensive experiments show that BaPA enhances the spatial robustness of LVLMs without retraining and further boosts their performance across diverse multimodal benchmarks when combined with lightweight fine-tuning. Further analysis of information flow reveals that BaPA yields balanced attention, enabling more holistic visual understanding.
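To make the core idea concrete, here is a minimal, hypothetical sketch of BaPA-style position-id assignment (not the authors' implementation): under standard RoPE every token in the sequence gets a distinct position id, whereas BaPA gives all image tokens identical positional embeddings. The convention below, where the whole image block consumes a single position and text positions continue afterward, is an assumption for illustration only.

```python
def build_position_ids(token_types):
    """Assign position ids to a mixed text/image token sequence.

    token_types: list of "text" or "image" flags in sequence order.
    Standard RoPE would return 0, 1, 2, ... for every token; here all
    image tokens share one position id (BaPA-style), so they receive
    identical positional embeddings in the language model. The choice
    to advance the counter by one for the whole image block is an
    assumed convention, not taken from the paper.
    """
    position_ids = []
    next_pos = 0        # next position id for the underlying RoPE
    image_pos = None    # shared position id for all image tokens
    for t in token_types:
        if t == "image":
            if image_pos is None:
                image_pos = next_pos
                next_pos += 1  # the image block consumes one position
            position_ids.append(image_pos)
        else:
            position_ids.append(next_pos)
            next_pos += 1
    return position_ids

# Example: 2 text tokens, 4 image tokens, 2 text tokens.
# All four image tokens share position 2, eliminating the unequal
# positional influence the paper identifies.
print(build_position_ids(
    ["text", "text", "image", "image", "image", "image", "text", "text"]
))
# -> [0, 1, 2, 2, 2, 2, 3, 4]
```

Because the shared id is injected at the position-embedding stage rather than inside the attention weights, a scheme like this can be dropped into existing RoPE-based language models without retraining, which is consistent with the abstract's claim that BaPA works training-free.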