Vision Language Models Can Parse Floor Plan Maps

📅 2024-09-19
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the capability of vision-language models (VLMs) to parse architectural floor plans for complex indoor navigation, with a focus on their understanding of spatial labels and geometric-topological relationships—including region shape, adjacency, and connectivity. It represents the first systematic application of VLMs to map parsing, revealing a previously unreported performance degradation in large, open-area layouts. To address this, we propose a structured multimodal reasoning framework grounded in prompt engineering, integrating geometric-semantic modeling with stepwise task planning. Evaluated on nine-step long-horizon navigation tasks, our method achieves a 96% success rate, demonstrating strong generalization in interpreting indoor spatial structure. The work establishes a novel paradigm and key technical pathway for VLM-driven spatial intelligence, advancing the frontier of embodied AI and scene understanding.

📝 Abstract
Vision language models (VLMs) can reason jointly about images and text to tackle many tasks, from visual question answering to image captioning. This paper focuses on map parsing, a novel task that is unexplored in the VLM context and particularly useful to mobile robots. Map parsing requires understanding not only a map's labels but also its geometric configuration, i.e., what areas look like and how they are connected. To evaluate VLMs on map parsing, we prompt them with floorplan maps to generate task plans for complex indoor navigation. Our results demonstrate a remarkable capability: a success rate of 0.96 on tasks requiring a sequence of nine navigation actions, e.g., approaching and going through doors. Beyond intuitive observations (e.g., VLMs do better on smaller maps and simpler navigation tasks), we found the striking result that performance drops in large open areas. We provide practical suggestions to address such challenges, validated by our experimental results. Webpage: https://shorturl.at/OUkEY
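The evaluation setup in the abstract, prompting a VLM with a floorplan to obtain a navigation task plan, can be sketched as a prompt-construction step. This is a minimal illustrative sketch only: the paper prompts VLMs with floor plan images, while here the map is given as a textual room/door graph, and the function name and action vocabulary are assumptions, not taken from the paper.

```python
# Hypothetical sketch: composing a navigation-planning prompt from a floor plan.
# The paper uses floor plan images; the textual room/door graph below is a
# simplification for illustration.

def build_prompt(rooms, doors, start, goal):
    """Compose a task-planning prompt for a VLM.

    rooms: list of room labels, e.g. ["kitchen", "hallway", "office"]
    doors: list of (room_a, room_b) pairs of rooms that share a door
    """
    door_lines = "\n".join(f"- a door connects {a} and {b}" for a, b in doors)
    return (
        "You are a mobile robot. The floor plan contains these rooms: "
        + ", ".join(rooms) + ".\n"
        + "Connectivity:\n" + door_lines + "\n"
        + f"Plan a step-by-step route from {start} to {goal} using only the "
        + "actions 'approach door to <room>' and 'go through door to <room>'."
    )

prompt = build_prompt(
    ["kitchen", "hallway", "office"],
    [("kitchen", "hallway"), ("hallway", "office")],
    "kitchen", "office",
)
print(prompt)
```

The returned string would then be sent to a VLM together with the floor plan image; the paper's structured prompting additionally encodes geometric-semantic information and stepwise task planning.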
Problem

Research questions and friction points this paper is trying to address.

Can VLMs parse floor plan maps well enough to support robot navigation tasks?
Do VLMs understand both map labels and the geometric configurations of spaces?
Why does the quality of generated navigation plans degrade in large open areas?
Innovation

Methods, ideas, or system contributions that make the work stand out.

First systematic application of VLMs to floor plan map parsing
Prompting framework that turns floorplan maps into stepwise navigation task plans
0.96 success rate on nine-step sequences of navigation actions
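The success criterion behind the 0.96 figure, whether a generated sequence of navigation actions is actually executable on the map, can be checked mechanically. The sketch below is a hedged illustration, not the paper's evaluation code: the action format ("approach"/"go_through") and the adjacency-graph representation of the floor plan are assumptions for the example.

```python
# Hypothetical sketch: validating a VLM-generated action sequence against a
# floor plan represented as a room-adjacency (door) graph.

def plan_succeeds(doors, start, goal, actions):
    """Return True if the action sequence legally moves start -> goal.

    doors:   set of frozenset({room_a, room_b}) pairs of rooms sharing a door
    actions: list of ("approach", room) / ("go_through", room) tuples; each
             go_through must follow an approach to the same room and must
             cross an existing door.
    """
    here, approached = start, None
    for verb, room in actions:
        if verb == "approach":
            if frozenset({here, room}) not in doors:
                return False  # no door between the current room and the target
            approached = room
        elif verb == "go_through":
            if room != approached:
                return False  # must approach a door before going through it
            here, approached = room, None
        else:
            return False      # unknown action verb
    return here == goal

doors = {frozenset(p) for p in [("kitchen", "hallway"), ("hallway", "office")]}
plan = [("approach", "hallway"), ("go_through", "hallway"),
        ("approach", "office"), ("go_through", "office")]
print(plan_succeeds(doors, "kitchen", "office", plan))  # → True
```

A validator like this makes the "sequence of nine navigation actions" success rate reproducible: a plan counts as a success only if every approach/go-through pair crosses a real door and the sequence ends in the goal room.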