🤖 AI Summary
Monocular RGB navigation in unknown environments suffers from unreliable collision detection due to the absence of explicit depth information.
Method: We propose a learning-based, end-to-end, risk-aware navigation framework. It employs a learnable collision model conditioned on depth estimates from a vision foundation model, jointly modeling collision probability and prediction uncertainty. A risk-aware Model Predictive Control (MPC) planner is then trained end-to-end on paired safe/unsafe trajectory data for robust optimization.
Contribution/Results: Our key innovation lies in integrating depth estimation as structured contextual input into collision modeling and explicitly optimizing predictive variance to enhance risk discrimination. Experiments demonstrate that our method achieves navigation success rates 9× and 7× higher than NoMaD and the ROS navigation stack, respectively, in complex, cluttered environments, significantly improving the reliability and generalization of monocular visual navigation.
📝 Abstract
Navigating unknown environments with a single RGB camera is challenging, as the lack of depth information prevents reliable collision-checking. While some methods use estimated depth to build collision maps, we found that depth estimates from vision foundation models are too noisy for zero-shot navigation in cluttered environments.
We propose an alternative approach: instead of using noisy estimated depth for direct collision-checking, we use it as a rich context input to a learned collision model. This model predicts the distribution of the minimum obstacle clearance the robot can expect for a given control sequence. At inference time, these predictions inform a risk-aware MPC planner that minimizes estimated collision risk. Our joint learning pipeline co-trains the collision model and risk metric on both safe and unsafe trajectories. Crucially, this joint training yields well-calibrated predictive variance in the collision model, which improves navigation in highly cluttered environments. In real-world experiments, our method achieves 9x and 7x higher success rates than NoMaD and the ROS navigation stack, respectively. Ablation studies further validate the effectiveness of our design choices.
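To make the planning idea concrete, here is a minimal sketch (not the paper's implementation) of a risk-aware MPC selection step. It assumes the learned collision model returns, for each candidate control sequence, a Gaussian N(mu, sigma^2) over the minimum obstacle clearance along the rollout; collision risk is then P(clearance < d_safe), and the planner picks the sequence minimizing a goal-tracking cost plus a weighted risk penalty. All names here (`d_safe`, `lambda_risk`, the toy predictor) are hypothetical.

```python
import math


def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


def collision_risk(mu: float, sigma: float, d_safe: float) -> float:
    """P(min clearance < d_safe) under N(mu, sigma^2)."""
    return normal_cdf((d_safe - mu) / max(sigma, 1e-6))


def plan(candidates, predict_clearance, goal_cost,
         d_safe=0.3, lambda_risk=5.0):
    """Pick the control sequence minimizing goal cost + risk penalty.

    predict_clearance(u) stands in for the learned collision model:
    it returns (mu, sigma) of the predicted minimum clearance for u.
    """
    best, best_cost = None, float("inf")
    for u in candidates:
        mu, sigma = predict_clearance(u)
        cost = goal_cost(u) + lambda_risk * collision_risk(mu, sigma, d_safe)
        if cost < best_cost:
            best, best_cost = u, cost
    return best


# Toy example: the "aggressive" sequence reaches the goal more cheaply
# but is predicted to pass dangerously close to an obstacle.
if __name__ == "__main__":
    preds = {"cautious": (1.0, 0.1), "aggressive": (0.25, 0.2)}
    goals = {"cautious": 2.0, "aggressive": 1.0}
    choice = plan(preds.keys(), lambda u: preds[u], lambda u: goals[u])
    print(choice)  # -> cautious
```

Note how the predictive variance matters: a sequence with the same mean clearance but higher uncertainty incurs higher risk near the safety margin, which is precisely why the co-trained variance in the abstract affects planner behavior.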