🤖 AI Summary
This work investigates how vision-language models (VLMs) associate spatial relationships with object attributes, a mechanism that has so far remained poorly understood. Through representational analysis, disentanglement of spatial relations, and enhancement of global visual tokens, the study systematically evaluates how individual components contribute to spatial reasoning. It reveals, for the first time, that the vision encoder plays the dominant role: its output encodes global spatial signals distributed broadly across all image tokens, including background regions, rather than being confined to object-centric areas. Leveraging this insight, augmenting the vision encoder's global spatial representations substantially improves spatial reasoning performance on natural images, challenging the conventional paradigm that focuses exclusively on object regions.
📝 Abstract
Many multimodal tasks, such as image captioning and visual question answering, require vision-language models (VLMs) to associate objects with their properties and spatial relations. Yet it remains unclear where and how such associations are computed within VLMs. In this work, we show that VLMs rely on two concurrent mechanisms to represent such associations. In the language model backbone, intermediate layers represent content-independent spatial relations on top of visual tokens corresponding to objects. However, this mechanism plays only a secondary role in shaping model predictions. Instead, the dominant source of spatial information originates in the vision encoder, whose representations encode the layout of objects and are directly exploited by the language model backbone. Notably, this spatial signal is distributed globally across visual tokens, extending beyond object regions into surrounding background areas. We show that enhancing these vision-derived spatial representations globally across all image tokens improves spatial reasoning performance on naturalistic images. Together, our results clarify how spatial association is computed within VLMs and highlight the central role of vision encoders in enabling spatial reasoning.
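The claim that spatial information is decodable even from background tokens can be illustrated with a linear probe. The sketch below is purely illustrative and is not the paper's actual method: the synthetic token features, the strength of the injected spatial signal, the object/background token split, and the ridge-regression probe are all assumptions made for this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_tokens(label, n_tokens=64, d=16):
    # Synthetic "image tokens": noise plus a weak global spatial signal
    # added to every token, mimicking layout information that is spread
    # across the whole grid rather than confined to object regions.
    signal = np.zeros(d)
    signal[0] = 1.0 if label else -1.0  # hypothetical "A left/right of B" axis
    return rng.normal(size=(n_tokens, d)) + 0.3 * signal

# Build a toy dataset: label 1 = "A left of B", label 0 = "A right of B".
X, y = [], []
for _ in range(400):
    label = int(rng.integers(0, 2))
    toks = make_tokens(label)
    # The probe reads ONLY the mean-pooled "background" tokens
    # (here, arbitrarily, the last 48 of 64 tokens).
    X.append(toks[16:].mean(axis=0))
    y.append(label)
X, y = np.array(X), np.array(y)

# Closed-form ridge-regression linear probe on background tokens.
Xb = np.hstack([X, np.ones((len(X), 1))])
w = np.linalg.solve(Xb.T @ Xb + 1e-3 * np.eye(Xb.shape[1]),
                    Xb.T @ (2 * y - 1))
acc = ((Xb @ w > 0).astype(int) == y).mean()
print(f"probe accuracy from background tokens only: {acc:.2f}")
```

Because the spatial signal is injected globally, the probe recovers the relation well above chance without ever seeing the object tokens, which is the intuition behind the paper's finding.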