Thinking with Blueprints: Assisting Vision-Language Models in Spatial Reasoning via Structured Object Representation

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge that existing vision-language models struggle to balance fine-grained local details with global structural understanding in spatial reasoning, often failing to capture holistic object relationships due to reliance on local image patches or isolated coordinates. To overcome this limitation, the study introduces the cognitive science concept of a "blueprint" into vision-language modeling for the first time, proposing an object-centric, structured JSON-based blueprint representation that explicitly encodes object positions, sizes, and attributes to support explicit spatial reasoning. The model is enhanced through trajectory-supervised fine-tuning with blueprint embeddings, a blueprint-aware reinforcement learning reward mechanism, and targeted perturbation-based data augmentation conditioned on both images and questions. Experiments demonstrate that the proposed approach significantly outperforms current vision-language models and specialized methods across multiple spatial reasoning benchmarks, exhibiting superior generalization and robustness.
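To make the summary concrete, here is a minimal sketch of what an object-centric, JSON-based blueprint and a reasoning step over it might look like. The field names (`name`, `position`, `size`, `attributes`) and the `left_of` predicate are illustrative assumptions, not the paper's actual schema.

```python
import json

# Hypothetical blueprint: a JSON structure recording positions, sizes,
# and attributes of the objects relevant to a spatial question.
blueprint = {
    "objects": [
        {"name": "mug", "position": [420, 310], "size": [60, 80],
         "attributes": ["red", "ceramic"]},
        {"name": "laptop", "position": [180, 290], "size": [300, 200],
         "attributes": ["open", "silver"]},
    ]
}

def left_of(a, b):
    """Simple spatial predicate over blueprint entries (x-coordinate only)."""
    return a["position"][0] < b["position"][0]

# Explicit reasoning over the structured representation rather than raw pixels.
objs = {o["name"]: o for o in blueprint["objects"]}
answer = "laptop" if left_of(objs["laptop"], objs["mug"]) else "mug"
print(json.dumps(blueprint, indent=2))
print("Left-most of the two:", answer)  # laptop (180 < 420)
```

The point of the representation is that once objects are lifted into this structured form, spatial relations become explicit comparisons over coordinates instead of implicit patterns in image patches.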

πŸ“ Abstract
Spatial reasoning -- the ability to perceive and reason about relationships in space -- advances vision-language models (VLMs) from visual perception toward spatial semantic understanding. Existing approaches either revisit local image patches, improving fine-grained perception but weakening global spatial awareness, or mark isolated coordinates, which capture object locations but overlook their overall organization. In this work, we integrate the cognitive concept of an object-centric blueprint into VLMs to enhance spatial reasoning. Given an image and a question, the model first constructs a JSON-style blueprint that records the positions, sizes, and attributes of relevant objects, and then reasons over this structured representation to produce the final answer. To achieve this, we introduce three key techniques: (1) blueprint-embedded reasoning traces for supervised fine-tuning to elicit basic reasoning skills; (2) blueprint-aware rewards in reinforcement learning to encourage the blueprint to include an appropriate number of objects and to align final answers with this causal reasoning; and (3) anti-shortcut data augmentation that applies targeted perturbations to images and questions, discouraging reliance on superficial visual or linguistic cues. Experiments show that our method consistently outperforms existing VLMs and specialized spatial reasoning models.
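Technique (2) in the abstract can be sketched as a scalar reward that combines the three stated criteria: an appropriate number of blueprint objects, consistency between the blueprint-derived answer and the final answer, and answer correctness. The weights and the object-count band below are invented for illustration; the paper's actual reward is not specified here.

```python
# Hedged sketch of a blueprint-aware RL reward (all constants are assumptions):
# - count_ok rewards a blueprint with a sensible number of objects,
# - consistent rewards final answers that align with the blueprint's reasoning,
# - correct rewards matching the gold answer.
def blueprint_reward(blueprint_objects, expected_range,
                     final_answer, blueprint_answer, gold_answer):
    n = len(blueprint_objects)
    lo, hi = expected_range
    count_ok = 1.0 if lo <= n <= hi else 0.0
    consistent = 1.0 if final_answer == blueprint_answer else 0.0
    correct = 1.0 if final_answer == gold_answer else 0.0
    return 0.2 * count_ok + 0.3 * consistent + 0.5 * correct
```

Structuring the reward this way penalizes a policy that produces a correct answer while ignoring (or bloating) its own blueprint, which is the "causal alignment" the abstract describes.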
Problem

Research questions and friction points this paper is trying to address.

spatial reasoning
vision-language models
object-centric representation
structured representation
spatial awareness
Innovation

Methods, ideas, or system contributions that make the work stand out.

object-centric blueprint
structured representation
spatial reasoning
reinforcement learning with blueprint-aware rewards
anti-shortcut data augmentation
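The anti-shortcut data augmentation listed above applies targeted perturbations to images and questions. A minimal text-and-coordinate version of that idea, assuming the blueprint schema used elsewhere on this page, is to mirror an example horizontally and swap "left"/"right" in the question so the underlying relation stays valid while surface cues flip:

```python
import re

# Illustrative anti-shortcut perturbation (not the paper's implementation):
# mirror object x-coordinates and swap left/right wording so a model cannot
# rely on superficial positional or linguistic cues.
def flip_horizontal(example, image_width):
    swapped = {"left": "right", "right": "left"}

    def swap_word(m):
        return swapped[m.group(0).lower()]

    question = re.sub(r"\bleft\b|\bright\b", swap_word,
                      example["question"], flags=re.I)
    objects = [
        {**o, "position": [image_width - o["position"][0], o["position"][1]]}
        for o in example["objects"]
    ]
    return {"question": question, "objects": objects}
```

For example, `flip_horizontal({"question": "Is the mug left of the laptop?", "objects": [{"name": "mug", "position": [100, 50]}]}, 640)` yields the question "Is the mug right of the laptop?" with the mug's x-coordinate mirrored to 540.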