🤖 AI Summary
Existing diffusion policies for visual navigation generate actions by denoising from pure Gaussian noise, resulting in redundant denoising steps, inefficient learning, and poor adaptability to sparse, non-Gaussian real-world action distributions. To address this, we propose NaviBridger, a novel framework that introduces denoising diffusion bridge models to visual navigation for the first time, enabling action generation initialized from arbitrary informative prior actions rather than Gaussian noise alone. By examining three complementary source policies, NaviBridger provides high-quality action priors that guide diffusion-based imitation learning. Experiments in simulated and real-world indoor and outdoor environments demonstrate faster inference and more accurate action sequences than state-of-the-art diffusion baselines, and the method remains effective despite the sparsity of the target action distribution.
📝 Abstract
Recent advances in diffusion-based imitation learning, which offer impressive performance in modeling multimodal distributions and training stability, have led to substantial progress in various robot learning tasks. In visual navigation, previous diffusion-based policies typically generate action sequences by denoising from Gaussian noise. However, the target action distribution often diverges significantly from Gaussian noise, leading to redundant denoising steps and increased learning complexity. Additionally, the sparsity of effective action distributions makes it challenging for the policy to generate accurate actions without guidance. To address these issues, we propose NaviBridger, a novel, unified visual navigation framework leveraging denoising diffusion bridge models. This approach enables action generation initialized from any informative prior action, enhancing guidance and efficiency in the denoising process. We explore how diffusion bridges can enhance imitation learning in visual navigation and further examine three source policies for generating prior actions. Extensive experiments in simulated and real-world indoor and outdoor scenarios demonstrate that NaviBridger accelerates policy inference and outperforms baselines in generating target action sequences. Code is available at https://github.com/hren20/NaiviBridger.
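To make the core idea concrete, here is a minimal toy sketch of bridge-style sampling: instead of starting the reverse process from pure Gaussian noise, the sampler is initialized near an informative prior action sequence and refined toward the denoiser's prediction. The names `bridge_sample`, `denoise_fn`, and the linear interpolation schedule are illustrative assumptions, not the paper's actual DDBM formulation or the NaviBridger training objective.

```python
import numpy as np

def bridge_sample(prior_actions, denoise_fn, n_steps=20, init_noise=0.05, seed=0):
    """Toy bridge-style sampler: start from an informative prior action
    sequence (not pure Gaussian noise) and interpolate toward the
    denoiser's prediction as the bridge time t runs from 1 to 0."""
    rng = np.random.default_rng(seed)
    # initialize near the prior instead of at pure Gaussian noise
    x = prior_actions + init_noise * rng.standard_normal(prior_actions.shape)
    for t in np.linspace(1.0, 0.0, n_steps):
        x_pred = denoise_fn(x, t)        # predicted clean action sequence
        x = t * x + (1.0 - t) * x_pred   # deterministic bridge update
    return x

# Toy denoiser standing in for the learned network: it always predicts
# a fixed "expert" trajectory of 8 one-dimensional waypoints.
target = np.linspace(0.0, 1.0, 8).reshape(8, 1)
toy_denoiser = lambda x, t: target

prior = np.zeros((8, 1))  # e.g. actions from a simple rule-based source policy
actions = bridge_sample(prior, toy_denoiser)
```

Because the sampler starts near the prior, far fewer refinement steps are needed than when denoising from scratch, which is the efficiency argument the abstract makes.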