HaltNav: Reactive Visual Halting over Lightweight Topological Priors for Robust Vision-Language Navigation

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the fragility of existing vision-and-language navigation methods in dynamic real-world environments, where performance often degrades when local connectivity changes (e.g., closed doors or crowded passages) and when methods rely on verbose instructions or computationally expensive metric maps. To overcome these limitations, the authors propose a hierarchical navigation framework that combines efficient global planning over a lightweight textual topological graph (osmAG) with local adaptation driven by a multimodal large language model (MLLM) that interprets instructions, perceives obstacles, and dynamically generates subgoals. A novel reactive visual halting (RVH) mechanism triggers replanning upon anomaly detection, while generative data augmentation synthesizes challenging negative samples containing realistic obstacles. Together, these components yield significantly improved robustness and success rates in long-horizon navigation under environmental dynamics, without requiring overly detailed instructions.

📝 Abstract
Vision-and-Language Navigation (VLN) is shifting from rigid, step-by-step instruction following toward open-vocabulary, goal-oriented autonomy. Achieving this transition without exhaustive routing prompts requires agents to leverage structural priors. While prior work often assumes computationally heavy 2D/3D metric maps, we instead exploit a lightweight, text-based osmAG (OpenStreetMap Area Graph), a floorplan-level topological representation that is easy to obtain and maintain. However, global planning over a prior map alone is brittle in real-world deployments, where local connectivity can change (e.g., closed doors or crowded passages), leading to execution-time failures. To address this gap, we propose HaltNav, a hierarchical navigation framework that couples the robust global planning of osmAG with the local exploration and instruction-grounding capability of VLN. Our approach features an MLLM-based brain module capable of high-level task grounding and obstruction awareness. Conditioned on osmAG, the brain converts the global route into a sequence of localized execution snippets, providing the VLN executor with prior-grounded, goal-centric sub-instructions. Meanwhile, it detects local anomalies via a mechanism we term Reactive Visual Halting (RVH), which interrupts the local control loop, updates osmAG by invalidating the corresponding topology, and triggers replanning to orchestrate a viable detour. To train this halting capability efficiently, we introduce a data synthesis pipeline that leverages generative models to inject realistic obstacles into otherwise navigable scenes, substantially enriching the pool of hard negative samples. Extensive experiments demonstrate that our hierarchical framework outperforms several baseline methods without requiring tedious language instructions, and significantly improves robustness for long-horizon vision-language navigation under environmental changes.
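The core loop the abstract describes (plan over a topological prior, halt on a detected blockage, invalidate the affected edge, and replan a detour) can be sketched minimally. This is an illustrative toy, not the authors' implementation: the real osmAG is a richer floorplan-level representation, and the halting signal comes from an MLLM rather than the hard-coded event assumed here.

```python
from collections import deque

def shortest_route(graph, start, goal):
    """BFS shortest path over an undirected topological graph {node: set(neighbors)}."""
    queue, parents = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nxt in graph.get(node, ()):
            if nxt not in parents:
                parents[nxt] = node
                queue.append(nxt)
    return None  # no traversable route remains

def invalidate_edge(graph, a, b):
    """RVH-style map update: mark a blocked passage as untraversable in both directions."""
    graph.get(a, set()).discard(b)
    graph.get(b, set()).discard(a)

# Toy floorplan graph: rooms/areas as nodes, passages as edges; "D" is a door node.
graph = {
    "A": {"C"}, "C": {"A", "D", "E"},
    "D": {"C", "G"}, "E": {"C", "F"},
    "F": {"E", "G"}, "G": {"D", "F"},
}
route = shortest_route(graph, "A", "G")   # shortest plan passes through door D
# Halting event: the agent visually detects that door D -> G is closed,
# so the corresponding topology is invalidated and the route is replanned.
invalidate_edge(graph, "D", "G")
detour = shortest_route(graph, "A", "G")  # viable detour via E and F
```

In the paper's framework, each hop of the returned route would be handed to the VLN executor as a localized sub-instruction, with RVH interrupting execution whenever the observed scene contradicts the prior map.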
Problem

Research questions and friction points this paper is trying to address.

Vision-and-Language Navigation
Robust Navigation
Environmental Changes
Topological Priors
Local Obstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reactive Visual Halting
Lightweight Topological Priors
Vision-Language Navigation
Hierarchical Navigation Framework
osmAG
Pingcong Li
Key Laboratory of Intelligent Perception and Human-Machine Collaboration - ShanghaiTech University, Ministry of Education, China
Zihui Yu
Key Laboratory of Intelligent Perception and Human-Machine Collaboration - ShanghaiTech University, Ministry of Education, China
Bichi Zhang
Key Laboratory of Intelligent Perception and Human-Machine Collaboration - ShanghaiTech University, Ministry of Education, China
Sören Schwertfeger
Associate Professor, ShanghaiTech University
Mobile Robotics · Performance Evaluation · Mobile Manipulation · (3D) SLAM · AI