🤖 AI Summary
Diffusion-based navigation policies often suffer from trajectory error accumulation and safety failures in unknown environments due to distribution shift, and are notoriously difficult to fine-tune effectively with reinforcement learning. This work proposes a reinforcement learning fine-tuning framework tailored for diffusion-based navigation policies, adopting Group Relative Policy Optimization (GRPO), a value-network-free algorithm that leverages multi-trajectory sampling for online environmental adaptation. The approach freezes the visual encoder while selectively updating only the high-level decoder layers and action head, preserving pretrained representations while enhancing safety. Evaluated on the PointGoal task in Isaac Sim, the method improves the success rate in unseen scenes from 52.0% to 58.7% and SPL from 0.49 to 0.54, significantly reduces collision frequency, and transfers zero-shot to a real quadrupedal robot platform.
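The selective-update scheme described above (frozen visual encoder, trainable high-level decoder and action head) can be sketched as a name-based parameter filter. This is a minimal illustration, not the paper's implementation; the module names (`visual_encoder`, `decoder.high`, `action_head`) are assumed for the example.

```python
# Hedged sketch of selective fine-tuning: decide per parameter name
# whether it receives gradient updates. The prefixes below are
# illustrative assumptions, not the paper's actual module identifiers.

FROZEN_PREFIXES = ("visual_encoder",)
TRAINABLE_PREFIXES = ("decoder.high", "action_head")

def is_trainable(param_name: str) -> bool:
    """Freeze the visual encoder; update only the high-level decoder and action head."""
    if param_name.startswith(FROZEN_PREFIXES):
        return False
    return param_name.startswith(TRAINABLE_PREFIXES)

# Example parameter names from a hypothetical policy network.
params = [
    "visual_encoder.conv1.weight",
    "decoder.high.layer1.weight",
    "decoder.low.layer1.weight",
    "action_head.weight",
]
trainable = [p for p in params if is_trainable(p)]
# → ["decoder.high.layer1.weight", "action_head.weight"]
```

In a framework such as PyTorch, the same predicate would set `requires_grad` per parameter, so the optimizer only ever sees the decoder and action-head weights.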
📝 Abstract
Diffusion-based robot navigation policies trained on large-scale imitation learning datasets can generate multi-modal trajectories directly from the robot's visual observations, bypassing the traditional localization-mapping-planning pipeline and achieving strong zero-shot generalization. However, their performance remains constrained by the coverage of offline datasets, and when deployed in unseen settings, distribution shift often leads to accumulated trajectory errors and safety-critical failures. Adapting diffusion policies with reinforcement learning is challenging because their iterative denoising structure hinders effective gradient backpropagation, while also making the training of an additional value network computationally expensive and less stable. To address these issues, we propose a reinforcement learning fine-tuning framework tailored for diffusion-based navigation. The method leverages the inherent multi-trajectory sampling mechanism of diffusion models and adopts Group Relative Policy Optimization (GRPO), which estimates relative advantages across sampled trajectories without requiring a separate value network. To preserve pretrained representations while enabling adaptation, we freeze the visual encoder and selectively update the higher decoder layers and action head, enhancing safety-aware behaviors through online environmental feedback. On the PointGoal task in Isaac Sim, our approach improves the Success Rate from 52.0% to 58.7% and SPL from 0.49 to 0.54 on unseen scenes, while reducing collision frequency. Additional experiments show that the fine-tuned policy transfers zero-shot to a real quadruped platform and maintains stable performance in geometrically out-of-distribution environments, suggesting improved adaptability and safe generalization to new domains.
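GRPO's value-network-free advantage estimation can be illustrated with a minimal sketch: for a group of trajectories sampled from the diffusion policy under the same observation, each trajectory's advantage is its reward standardized within the group, so the group mean replaces a learned value baseline. The reward values below are made up for illustration.

```python
from statistics import mean, stdev

def group_relative_advantages(rewards, eps=1e-8):
    """Standardize each trajectory's reward within its sampled group.

    The group mean serves as the baseline, so no value network is needed;
    the standard deviation normalizes the scale of the advantages.
    """
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: rewards for four trajectories sampled for the same goal
# (hypothetical values, e.g. reflecting goal progress and collisions).
advs = group_relative_advantages([1.0, 0.5, -0.5, 1.0])
# Higher-reward trajectories receive positive advantages,
# lower-reward ones negative; the advantages sum to zero.
```

Each trajectory's advantage then weights its policy-gradient term, reinforcing the safer, more successful samples in the group relative to the others.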