🤖 AI Summary
Existing open-vocabulary robotic mapping approaches rely heavily on high-fidelity semantic maps, which become brittle under object motion or in unmapped environments and can lead to localization failure. This paper proposes a zero-shot navigation framework that reduces dependence on dense geometric mapping by reinterpreting the semantic map as a source of environment grounding and context. The method integrates pretrained vision-language model features, LLM-driven spatial relational reasoning (e.g., "remote controls are commonly found beside sofas"), and uncertainty-aware active exploration, enabling online inference and navigation toward objects that have moved or were never mapped. Evaluated in both simulation and real-world settings, the approach improves object retrieval success rate by 12.7% and shortens navigation paths by 19.3% relative to state-of-the-art methods, with the largest gains in dynamic environments.
📝 Abstract
Recent open-vocabulary robot mapping methods enrich dense geometric maps with pre-trained visual-language features, achieving a high level of detail and guiding robots to find objects specified by open-vocabulary language queries. While the scalability of such approaches has received some attention, another fundamental problem is that high-detail object maps quickly become outdated, as objects are frequently moved. In this work, we develop a mapping and navigation system for object-goal navigation that, from the ground up, accounts for the possibility that a queried object may have moved or may not be mapped at all. Instead of striving for high-fidelity mapping detail, we take the view that the main purpose of a map is to provide environment grounding and context, which we combine with the semantic priors of LLMs to reason about object locations, and we deploy an active, online approach to navigate to the objects. Through simulated and real-world experiments, we find that our approach tends to achieve higher retrieval success with shorter path lengths for static objects, and far outperforms prior approaches on dynamic or unmapped object queries. We provide our code and dataset at: https://anonymous.4open.science/r/osmAG-LLM.