🤖 AI Summary
Deep-space robotic navigation in unknown, extreme terrains lacks formal safety guarantees—existing generative AI approaches prioritize performance over verifiable safety. Method: We propose a risk-guided dual-system coupled diffusion framework integrating a fast-learning “System 1” with a physics-driven, formally verifiable “System 2”. This is the first approach to enable computational sharing between training and inference phases, jointly ensuring generalization and formal safety. The framework unifies diffusion modeling, cognition-inspired architecture, real-time physics simulation, cross-modal robotic pretraining, and hardware-in-the-loop inference optimization. Contribution/Results: Evaluated on NASA JPL’s Mars analog terrain, our method reduces mission failure rate to 25% of baseline models while preserving target-reaching performance. Crucially, it achieves substantial robustness gains without additional training—demonstrating both theoretical soundness and practical efficacy for autonomous planetary exploration.
📝 Abstract
Safe, reliable navigation in extreme, unfamiliar terrain is required for future robotic space exploration missions. Recent generative-AI methods learn semantically aware navigation policies from large, cross-embodiment datasets, but offer limited safety guarantees. Inspired by human cognitive science, we propose a risk-guided diffusion framework that fuses a fast, learned "System-1" with a slow, physics-based "System-2", sharing computation at both training and inference to couple adaptability with formal safety. Hardware experiments conducted at the NASA JPL's Mars-analog facility, Mars Yard, show that our approach reduces failure rates by up to $4 imes$ while matching the goal-reaching performance of learning-based robotic models by leveraging inference-time compute without any additional training.