🤖 AI Summary
This work addresses the performance degradation of existing zero-shot robotic navigation methods in real-world environments, where scene reconstructions are often incomplete and noisy. To overcome this limitation, the authors propose SpatialAnt, a framework that actively explores and constructs scene representations by integrating physical-scale recovery with a vision-based prediction mechanism operating directly on noisy point clouds. This design makes robust use of imperfect reconstructions and employs counterfactual reasoning to prune infeasible paths. Requiring only monocular input and leveraging a multimodal large language model, SpatialAnt achieves success rates of 66% on R2R-CE and 50.8% on RxR-CE, and demonstrates a 52% deployment success rate in complex real-world environments.
📝 Abstract
Vision-and-Language Navigation (VLN) has recently benefited from Multimodal Large Language Models (MLLMs), enabling zero-shot navigation. While recent exploration-based zero-shot methods have shown promising results by leveraging global scene priors, they rely on high-quality human-crafted scene reconstructions, which are impractical for real-world robot deployment. When encountering an unseen environment, a robot should build its own priors through pre-exploration. However, these self-built reconstructions are inevitably incomplete and noisy, which severely degrade methods that depend on high-quality scene reconstructions. To address these issues, we propose SpatialAnt, a zero-shot navigation framework designed to bridge the gap between imperfect self-reconstructions and robust execution. SpatialAnt introduces a physical grounding strategy to recover the absolute metric scale for monocular-based reconstructions. Furthermore, rather than treating the noisy self-reconstructed scenes as absolute spatial references, we propose a novel visual anticipation mechanism. This mechanism leverages the noisy point clouds to render future observations, enabling the agent to perform counterfactual reasoning and prune paths that contradict human instructions. Extensive experiments in both simulated and real-world environments demonstrate that SpatialAnt significantly outperforms existing zero-shot methods. We achieve a 66% Success Rate (SR) on R2R-CE and 50.8% SR on RxR-CE benchmarks. Physical deployment on a Hello Robot further confirms the efficiency and efficacy of our framework, achieving a 52% SR in challenging real-world settings.
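The physical grounding step described in the abstract, which recovers the absolute metric scale of a monocular reconstruction, can be illustrated with a minimal sketch. The paper does not publish its exact formulation; the closed-form least-squares alignment below, and the metric anchor depths it assumes (e.g., derived from a known camera mounting height or a few sparse range readings), are illustrative assumptions rather than the authors' method.

```python
import numpy as np

def recover_metric_scale(mono_depths, anchor_depths):
    """Closed-form least-squares scale s minimizing ||s * mono - anchor||^2.

    mono_depths:   up-to-scale depths from a monocular estimator (assumed input)
    anchor_depths: corresponding depths with true metric scale, e.g. from a
                   known camera height or sparse range sensors (assumed input)
    """
    mono = np.asarray(mono_depths, dtype=float)
    anchor = np.asarray(anchor_depths, dtype=float)
    # Setting d/ds ||s*mono - anchor||^2 = 0 gives s = <mono, anchor> / <mono, mono>
    return float(mono @ anchor / (mono @ mono))

# Example: monocular depths are consistently half the true metric scale
s = recover_metric_scale([1.0, 2.0, 4.0], [2.0, 4.0, 8.0])
print(s)  # 2.0
```

Once `s` is estimated, multiplying the whole monocular point cloud by `s` places it in metric units, which is what downstream path planning and rendering require.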