Beyond Imitation: Reinforcement Learning Fine-Tuning for Adaptive Diffusion Navigation Policies

📅 2026-03-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Diffusion-based navigation policies often suffer from trajectory error accumulation and safety failures in unknown environments due to distributional shift, and are notoriously difficult to fine-tune effectively with reinforcement learning. This work proposes a reinforcement learning fine-tuning framework tailored for diffusion policies, adopting Group Relative Policy Optimization (GRPO), a value-network-free algorithm that leverages multi-trajectory sampling for online environmental adaptation. The approach freezes the visual encoder while selectively updating only the higher decoder layers and action head, thereby preserving pretrained representations while enhancing safety. Evaluated on the PointGoal task in Isaac Sim, the method improves the success rate on unseen scenes from 52.0% to 58.7% and SPL from 0.49 to 0.54, significantly reduces collision frequency, and demonstrates zero-shot transfer to a real quadruped platform.
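The group-relative advantage at the heart of GRPO can be sketched in a few lines: the rewards of a group of trajectories sampled for the same goal are normalized against the group's own mean and standard deviation, so no learned value network is needed. This is an illustrative sketch under that assumption, not the paper's implementation; the function name and reward values are hypothetical.

```python
from statistics import mean, stdev

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: normalize each trajectory's reward
    against the group's mean and standard deviation, replacing a
    separate value-network baseline. `rewards` is one value per
    trajectory sampled for the same goal."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Rewards from, e.g., G = 4 trajectories sampled for one PointGoal episode.
advs = group_relative_advantages([1.0, 0.5, -0.5, -1.0])
```

By construction the advantages sum to zero within each group, so better-than-average trajectories are reinforced and worse-than-average ones suppressed, without any extra critic training.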

📝 Abstract
Diffusion-based robot navigation policies trained on large-scale imitation learning datasets can generate multi-modal trajectories directly from the robot's visual observations, bypassing the traditional localization-mapping-planning pipeline and achieving strong zero-shot generalization. However, their performance remains constrained by the coverage of offline datasets, and when deployed in unseen settings, distribution shift often leads to accumulated trajectory errors and safety-critical failures. Adapting diffusion policies with reinforcement learning is challenging because their iterative denoising structure hinders effective gradient backpropagation, while also making the training of an additional value network computationally expensive and less stable. To address these issues, we propose a reinforcement learning fine-tuning framework tailored for diffusion-based navigation. The method leverages the inherent multi-trajectory sampling mechanism of diffusion models and adopts Group Relative Policy Optimization (GRPO), which estimates relative advantages across sampled trajectories without requiring a separate value network. To preserve pretrained representations while enabling adaptation, we freeze the visual encoder and selectively update the higher decoder layers and action head, enhancing safety-aware behaviors through online environmental feedback. On the PointGoal task in Isaac Sim, our approach improves the Success Rate from 52.0% to 58.7% and SPL from 0.49 to 0.54 on unseen scenes, while reducing collision frequency. Additional experiments show that the fine-tuned policy transfers zero-shot to a real quadruped platform and maintains stable performance in geometrically out-of-distribution environments, suggesting improved adaptability and safe generalization to new domains.
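For reference, the SPL numbers quoted above follow the standard Success weighted by Path Length definition used in embodied navigation: success on each episode is weighted by the ratio of the shortest-path length to the length of the path actually taken. A minimal sketch (the episode values below are made up for illustration, not taken from the paper):

```python
def spl(episodes):
    """SPL = (1/N) * sum_i S_i * l_i / max(p_i, l_i), where S_i is a
    success flag, l_i the shortest-path length from start to goal,
    and p_i the length of the agent's actual path."""
    total = 0.0
    for success, shortest, taken in episodes:
        if success:
            total += shortest / max(taken, shortest)
    return total / len(episodes)

# Two successes (one with a detour) and one failure.
score = spl([(True, 10.0, 10.0), (True, 10.0, 20.0), (False, 8.0, 5.0)])
```

An SPL of 0.54 against a Success Rate of 58.7% therefore indicates that the successful episodes were completed along paths close to the shortest feasible ones.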
Problem

Research questions and friction points this paper is trying to address.

diffusion-based navigation
distribution shift
reinforcement learning fine-tuning
safety-critical failures
zero-shot generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

diffusion policy
reinforcement learning fine-tuning
Group Relative Policy Optimization
zero-shot transfer
safe navigation
👥 Authors
Junhe Sheng
Nanyang Technological University, Singapore
Ruofei Bai
Nanyang Technological University, Singapore; A*STAR, Singapore
Kuan Xu
Nanyang Technological University
robotics, visual SLAM
Ruimeng Liu
Nanyang Technological University, Singapore
Jie Chen
National University of Singapore, Singapore
Shenghai Yuan
Nanyang Technological University, Singapore
Wei-Yun Yau
A*STAR, Singapore
Lihua Xie
Professor of Electrical Engineering, Nanyang Technological University
Robust control, Networked control, Multi-agent systems