Spatial-LLaVA: Enhancing Large Language Models with Spatial Referring Expressions for Visual Understanding

📅 2025-05-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of multimodal large language models (MLLMs) in understanding visual spatial relations and precisely localizing unique targets within groups of visually similar objects. To tackle this, we propose the first spatial referring expression modeling paradigm, which decouples spatial structure from semantic features and introduces an explicit object-mapping training framework to mitigate semantic bias. Leveraging our newly constructed 90k-image annotation dataset, SUN-Spot v2.0, we employ Set-of-Marks prompting to align image landmarks with textual mentions, and integrate synthetic dialogue data distillation with spatially aware vision-language alignment training. Our method achieves a 3.15% absolute improvement over state-of-the-art on zero-shot visual spatial reasoning benchmarks, significantly enhancing spatial referring accuracy. The approach provides robust support for real-world applications such as autonomous navigation and interactive robotics.
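The summary above describes Set-of-Marks prompting: numbered marks overlaid on image regions are mapped to the object mentions in the caption, so the model can learn spatial referring expressions without leaning on object semantics. As a rough illustration of the idea (not the paper's actual data schema; all field names here are assumptions), a training sample might tie each mark index to its caption mention like this:

```python
# Hypothetical sketch of a Set-of-Marks (SoM) style annotation: each numbered
# mark drawn on the image is paired with the object mention it grounds, so a
# caption can refer to regions by mark index rather than by appearance alone.
# The Mark structure and build_som_caption helper are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Mark:
    mark_id: int                              # numeric label drawn on the image
    bbox: tuple[float, float, float, float]   # (x, y, w, h) of the marked region


def build_som_caption(marks: list[Mark], mentions: dict[int, str]) -> str:
    """Attach each mark index to its object mention, producing a caption in
    which spatial references are grounded by mark, e.g. 'the mug [3]'."""
    parts = []
    for mark in marks:
        phrase = mentions.get(mark.mark_id)
        if phrase:
            parts.append(f"{phrase} [{mark.mark_id}]")
    return ", ".join(parts)


marks = [Mark(1, (0.6, 0.4, 0.2, 0.2)), Mark(3, (0.1, 0.4, 0.1, 0.1))]
mentions = {1: "the laptop", 3: "the mug left of the laptop"}
print(build_som_caption(marks, mentions))
# → the laptop [1], the mug left of the laptop [3]
```

With such an explicit mark-to-mention mapping, two visually identical objects (say, two mugs) remain distinguishable by their mark indices, which is the kind of disambiguation the paper targets.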

📝 Abstract
Multimodal large language models (MLLMs) have demonstrated remarkable abilities in comprehending visual input alongside text input. Typically, these models are trained on extensive data sourced from the internet, which are sufficient for general tasks such as scene understanding and question answering. However, they often underperform on specialized tasks where online data is scarce, such as determining spatial relationships between objects or localizing unique target objects within a group of objects sharing similar features. In response to this challenge, we introduce the SUN-Spot v2.0 dataset, now comprising a total of 90k image-caption pairs and additional annotations on the landmark objects. Each image-caption pair utilizes Set-of-Marks prompting as an additional indicator, mapping each landmark object in the image to the corresponding object mentioned in the caption. Furthermore, we present Spatial-LLaVA, an MLLM trained on conversational data generated by a state-of-the-art language model using the SUN-Spot v2.0 dataset. Our approach ensures a robust alignment between the objects in the images and their corresponding object mentions in the captions, enabling our model to learn spatial referring expressions without bias from the semantic information of the objects. Spatial-LLaVA outperforms previous methods by 3.15% on the zero-shot Visual Spatial Reasoning benchmark dataset. Spatial-LLaVA is specifically designed to precisely understand spatial referring expressions, making it highly applicable for tasks in real-world scenarios such as autonomous navigation and interactive robotics, where precise object recognition is critical.
Problem

Research questions and friction points this paper is trying to address.

Improving spatial relationship understanding in multimodal language models
Addressing scarcity of specialized online data for object localization
Enhancing precise spatial referring expressions for real-world applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses SUN-Spot v2.0 dataset with 90k image-caption pairs
Trains Spatial-LLaVA on conversational data for alignment
Improves spatial reasoning by 3.15% on benchmark
Xuefei Sun
Autonomous Robotics and Perception Group, Computer Science Department, University of Colorado Boulder

Doncey Albin
PhD Student
robotics, system dynamics, computer vision, autonomous vehicles, control systems

Cecilia Mauceri
Jet Propulsion Laboratory
Computer Vision

Dusty Woods
Autonomous Robotics and Perception Group, Computer Science Department, University of Colorado Boulder

Christoffer Heckman
Associate Professor, University of Colorado
robotics, autonomy, perception