DIV-Nav: Open-Vocabulary Spatial Relationships for Multi-Object Navigation

📅 2025-10-18
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing semantic mapping-based navigation methods lack robustness for complex natural-language queries (e.g., "find the remote control on the table") due to insufficient modeling of spatial relations and limited vocabulary coverage. Method: We propose a multi-object navigation framework integrating spatial relation reasoning with open-vocabulary semantic mapping. Specifically: (1) natural-language instructions are parsed into structured spatial relation triplets; (2) a semantic confidence map is constructed and intersected to localize composite targets; (3) a large vision-language model (LVLM) performs online verification and correction, augmented by a three-stage relaxation strategy and target-guided frontier exploration. Contribution/Results: This work achieves the first end-to-end coupling of open-vocabulary semantic mapping and explicit spatial relation modeling. It significantly improves navigation accuracy and zero-shot generalization on the MultiON benchmark and has been successfully deployed on a Boston Dynamics Spot robot, enabling real-time, robust multi-object spatial navigation.
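The intersection step (2) can be illustrated with a minimal sketch. This is not the paper's implementation — the map shapes, threshold, and the element-wise product used as a soft intersection are all illustrative assumptions:

```python
import numpy as np

def intersect_confidence_maps(maps, threshold=0.5):
    """Combine per-object semantic confidence maps (H x W grids in [0, 1])
    into a joint map that is high only where every queried object is likely.

    An element-wise product acts as a soft AND under an independence
    assumption; thresholding then yields candidate grid cells.
    """
    joint = np.ones_like(maps[0])
    for m in maps:
        joint *= m  # soft intersection over all object maps
    candidates = np.argwhere(joint >= threshold)  # (row, col) cells
    return joint, candidates

# Toy example: "remote" and "table" confidence maps on a 3x3 grid
remote = np.array([[0.1, 0.9, 0.2],
                   [0.0, 0.8, 0.1],
                   [0.0, 0.1, 0.0]])
table  = np.array([[0.2, 0.9, 0.1],
                   [0.1, 0.9, 0.0],
                   [0.0, 0.2, 0.0]])
joint, cells = intersect_confidence_maps([remote, table], threshold=0.5)
# Only cells where both maps agree survive the threshold.
```

Here the two cells where both the "remote" and "table" maps are confident remain as composite-target candidates; everywhere else the product suppresses the score.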

๐Ÿ“ Abstract
Advances in open-vocabulary semantic mapping and object navigation have enabled robots to perform an informed search of their environment for an arbitrary object. However, such zero-shot object navigation is typically designed for simple queries with an object name like "television" or "blue rug". Here, we consider more complex free-text queries with spatial relationships, such as "find the remote on the table", while still leveraging the robustness of a semantic map. We present DIV-Nav, a real-time navigation system that efficiently addresses this problem through a series of relaxations: i) Decomposing natural language instructions with complex spatial constraints into simpler object-level queries on a semantic map, ii) computing the Intersection of individual semantic belief maps to identify regions where all objects co-exist, and iii) Validating the discovered objects against the original, complex spatial constraints via an LVLM. We further investigate how to adapt the frontier exploration objectives of online semantic mapping to such spatial search queries to more effectively guide the search process. We validate our system through extensive experiments on the MultiON benchmark and real-world deployment on a Boston Dynamics Spot robot using a Jetson Orin AGX. More details and videos are available at https://anonsub42.github.io/reponame/
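The abstract also mentions adapting frontier exploration to spatial search queries. A hedged sketch of how a target-guided frontier objective might look — the scoring weights, names, and linear trade-off below are assumptions for illustration, not the paper's actual objective:

```python
import numpy as np

def score_frontiers(frontiers, joint_map, robot_pos, alpha=1.0, beta=0.1):
    """Rank frontier cells by local joint semantic confidence minus travel cost.

    frontiers : list of (row, col) frontier cells
    joint_map : H x W grid of intersected target confidence in [0, 1]
    robot_pos : (row, col) of the robot
    """
    scores = []
    for f in frontiers:
        semantic = joint_map[f]  # how likely the composite target is near f
        dist = np.hypot(f[0] - robot_pos[0], f[1] - robot_pos[1])
        scores.append(alpha * semantic - beta * dist)  # value minus cost
    best = frontiers[int(np.argmax(scores))]
    return best, scores

# Toy example: a distant frontier with high target confidence wins
grid = np.zeros((3, 3))
grid[2, 2] = 0.9   # strong joint-confidence near the far frontier
grid[0, 0] = 0.1   # weak evidence near the robot
best, scores = score_frontiers([(0, 0), (2, 2)], grid,
                               robot_pos=(0, 0), alpha=1.0, beta=0.1)
```

With these (assumed) weights, the exploration objective trades off distance against evidence that the queried objects co-exist beyond a frontier, rather than exploring purely by information gain.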
Problem

Research questions and friction points this paper is trying to address.

Handling complex spatial relationships in navigation queries
Decomposing natural language instructions into simpler object searches
Validating discovered objects against spatial constraints using LVLM
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes complex spatial queries into simpler object-level searches
Computes intersection of semantic belief maps for co-existence regions
Validates spatial constraints using LVLM for final verification
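As an illustration of the decomposition step, a query can be reduced to a (target, relation, anchor) triplet. The toy regex parser below is purely illustrative — the relation list and parsing approach are assumptions; the paper does not specify this mechanism:

```python
import re

# Illustrative subset of spatial prepositions the toy parser recognizes
RELATIONS = ("next to", "behind", "under", "near", "on", "in")

def parse_spatial_query(query):
    """Parse a query like 'find the remote on the table' into a
    (target, relation, anchor) triplet. Returns None if no relation matches."""
    q = query.lower().strip()
    q = re.sub(r"^(find|go to|locate)\s+", "", q)  # drop the command verb
    # Try longer relations first so "next to" is not shadowed by "to"
    for rel in sorted(RELATIONS, key=len, reverse=True):
        m = re.match(rf"^(?:the\s+)?(.+?)\s+{rel}\s+(?:the\s+)?(.+)$", q)
        if m:
            return m.group(1), rel, m.group(2)
    return None

triplet = parse_spatial_query("find the remote on the table")
```

Each element of the triplet then drives a separate object-level search on the semantic map, with the relation deferred to the LVLM verification stage.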
Jesús Ortega-Peimbert
Division of Robotics, Perception, and Learning, KTH Royal Institute of Technology, Sweden
Finn Lukas Busch
Division of Robotics, Perception, and Learning, KTH Royal Institute of Technology, Sweden
Timon Homberger
Division of Robotics, Perception, and Learning, KTH Royal Institute of Technology, Sweden
Quantao Yang
Division of Robotics, Perception, and Learning, KTH Royal Institute of Technology, Sweden
Olov Andersson
Assistant Professor at KTH Royal Institute of Technology. Previously: ASL@ETH Zurich
Robot Learning · Autonomous Robots · Motion Planning · Mapping · Navigation