AutoSpatial: Visual-Language Reasoning for Social Robot Navigation through Efficient Spatial Reasoning Learning

📅 2025-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited spatial relational understanding of vision-language models (VLMs) in social robot navigation, this paper proposes a hierarchical dual-round visual question answering (VQA) self-annotation framework that combines minimal human supervision with large-scale automated annotation. The method enables joint learning of global scene layout and fine-grained object-level spatial grounding. It incorporates structured spatial representations, chain-of-thought reasoning, and cross-validation by multiple expert multimodal models (GPT-4o, Gemini 2.0 Flash, Claude 3.5 Sonnet), complemented by human evaluation. Experiments demonstrate improvements over a baseline trained only on manually annotated data of up to +10.71% in spatial perception and motion prediction, +16.26% in chain-of-thought reasoning, +20.50% in action decision-making, and +18.73% in natural-language explanation quality (averaged expert cross-validation scores). These gains collectively enhance VLMs' spatial cognition and embodied decision-making for socially situated navigation.
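The evaluation described above averages scores from several expert models across four aspects. A minimal sketch of that aggregation step, assuming a simple per-aspect arithmetic mean (the function name, score scale, and aspect keys are illustrative, not from the paper):

```python
from statistics import mean

def average_expert_scores(expert_scores: dict[str, dict[str, float]]) -> dict[str, float]:
    """Average per-aspect scores across expert models (hypothetical sketch)."""
    aspects = {a for scores in expert_scores.values() for a in scores}
    return {
        aspect: mean(scores[aspect] for scores in expert_scores.values() if aspect in scores)
        for aspect in sorted(aspects)
    }

# Illustrative scores on an assumed 0-10 scale, one dict per expert model.
scores = {
    "gpt-4o":           {"perception": 8.0, "reasoning": 7.5, "action": 8.5, "explanation": 8.0},
    "gemini-2.0-flash": {"perception": 7.0, "reasoning": 8.0, "action": 8.0, "explanation": 7.5},
    "claude-3.5":       {"perception": 7.5, "reasoning": 7.0, "action": 9.0, "explanation": 8.5},
}
averaged = average_expert_scores(scores)
```

The paper's relative-improvement percentages would then be computed from such averaged scores for AutoSpatial versus the manually annotated baseline.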

📝 Abstract
We present AutoSpatial, an efficient method with structured spatial grounding to enhance VLMs' spatial reasoning. By combining minimal manual supervision with large-scale auto-labeling of Visual Question Answering (VQA) pairs, our approach tackles VLMs' limited spatial understanding in social navigation tasks. Applying a hierarchical two-round VQA strategy during training, AutoSpatial achieves both global and detailed understanding of scenarios, demonstrating more accurate spatial perception, movement prediction, Chain-of-Thought (CoT) reasoning, final actions, and explanations than other SOTA approaches. These five components are essential for comprehensive social navigation reasoning. We evaluated our approach using both expert systems (GPT-4o, Gemini 2.0 Flash, and Claude 3.5 Sonnet), which provided cross-validation scores, and human evaluators, who assigned relative rankings to compare model performance across four key aspects. Backed by its enhanced spatial reasoning capabilities, AutoSpatial achieves substantial improvements in averaged expert cross-validation scores over baseline models trained only on manually annotated data: perception & prediction (up to 10.71%), reasoning (up to 16.26%), action (up to 20.50%), and explanation (up to 18.73%).
Problem

Research questions and friction points this paper is trying to address.

Enhance VLMs' spatial reasoning for social robot navigation.
Improve spatial perception and movement prediction accuracy.
Boost reasoning, action, and explanation in navigation tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines minimal supervision with auto-labeling VQA pairs
Uses hierarchical two-round VQA for global and detailed understanding
Enhances spatial reasoning for social robot navigation tasks
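The hierarchical two-round VQA idea above can be sketched as a small annotation loop: round 1 asks a global scene-layout question, and round 2 asks object-level spatial questions conditioned on the round-1 answer. This is a hypothetical reconstruction; the `annotate` callable, question templates, and data shapes stand in for whatever VLM annotator and prompts the paper actually uses.

```python
from dataclasses import dataclass

@dataclass
class VQAPair:
    question: str
    answer: str

def two_round_vqa(image_id: str, objects: list[str], annotate) -> list[VQAPair]:
    """Hypothetical two-round VQA self-annotation for one image.

    `annotate` is any callable mapping a question string to an answer string
    (in practice, a VLM queried with the image).
    """
    pairs = []
    # Round 1: global scene layout.
    q1 = f"[{image_id}] Describe the overall spatial layout of the scene."
    layout = annotate(q1)
    pairs.append(VQAPair(q1, layout))
    # Round 2: fine-grained object-level grounding, conditioned on round 1.
    for obj in objects:
        q2 = (f"[{image_id}] Given the layout '{layout}', where is the "
              f"{obj} relative to the robot, and how is it moving?")
        pairs.append(VQAPair(q2, annotate(q2)))
    return pairs

# Usage with a stub annotator in place of a real VLM:
pairs = two_round_vqa("img_0001", ["pedestrian", "cyclist"], lambda q: "stub answer")
```

Conditioning the second round on the first is what lets a single annotator produce both the global layout and the per-object grounding that the paper trains on.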