🤖 AI Summary
This work addresses the inefficiency of existing diffusion-based planners in leveraging reward signals during reinforcement fine-tuning, which often results in limited trajectory diversity and poor adaptability across scenarios. To overcome this, we propose PlannerRFT, a framework featuring a dual-branch optimization mechanism that refines the trajectory distribution and adaptively guides the denoising process—without altering the original inference pipeline—enabling efficient closed-loop reinforcement fine-tuning. Integrated with our custom high-speed simulation platform, nuMax, the approach supports large-scale parallel training. Evaluated on autonomous driving trajectory planning tasks, PlannerRFT significantly enhances performance by learning diverse yet realistic driving behaviors, while achieving a tenfold improvement in simulation efficiency.
📝 Abstract
Diffusion-based planners have emerged as a promising approach for human-like trajectory generation in autonomous driving. Recent works incorporate reinforcement fine-tuning to enhance the robustness of diffusion planners through reward-oriented optimization in a generation-evaluation loop. However, they struggle to generate multi-modal, scenario-adaptive trajectories, limiting how efficiently informative rewards can be exploited during fine-tuning. To resolve this, we propose PlannerRFT, a sample-efficient reinforcement fine-tuning framework for diffusion-based planners. PlannerRFT adopts a dual-branch optimization that simultaneously refines the trajectory distribution and adaptively guides the denoising process toward more promising exploration, without altering the original inference pipeline. To support parallel learning at scale, we develop nuMax, an optimized simulator that achieves rollouts 10 times faster than native nuPlan. Extensive experiments show that PlannerRFT yields state-of-the-art performance, with distinct behaviors emerging during the learning process.
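To make the "guided denoising without altering the inference pipeline" idea concrete, here is a minimal toy sketch of reward-gradient guidance in a reverse-diffusion loop. This is an illustrative assumption, not PlannerRFT's actual algorithm: the names (`guided_denoise`, `toy_reward_grad`, `guidance_scale`) and the quadratic reward are hypothetical, and the point is only that the base denoiser update is left untouched while a small reward-oriented nudge is added at each step.

```python
import numpy as np

def toy_reward_grad(traj, target):
    # Gradient of a toy quadratic reward -||traj - target||^2,
    # standing in for a learned reward signal over trajectories.
    return -2.0 * (traj - target)

def guided_denoise(x_T, denoiser, reward_grad, steps=10, guidance_scale=0.1):
    """Hypothetical reward-guided reverse-diffusion loop: the base
    denoiser update is applied unchanged, then the sample is nudged
    along the reward gradient at each step."""
    x = x_T
    for t in reversed(range(steps)):
        x = denoiser(x, t)                       # original denoising update
        x = x + guidance_scale * reward_grad(x)  # reward-oriented guidance
    return x

# Toy usage: the "denoiser" contracts toward zero; guidance pulls the
# sample toward a reward-preferred target instead.
target = np.array([1.0, -0.5])
denoiser = lambda x, t: 0.9 * x
x0 = guided_denoise(np.array([5.0, 5.0]), denoiser,
                    lambda x: toy_reward_grad(x, target))
```

Because the guidance term is additive and applied outside the denoiser call, the base model and its sampling schedule stay exactly as they were, which mirrors the abstract's claim of leaving the original inference pipeline intact.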