Breaking Imitation Bottlenecks: Reinforced Diffusion Powers Diverse Trajectory Generation

📅 2025-07-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing end-to-end autonomous driving approaches predominantly rely on imitation learning from single-expert demonstrations, leading to conservative behavior, insufficient trajectory diversity, and limited generalization. To address these limitations, we propose DIVER—a novel framework that deeply integrates reinforcement learning with conditional diffusion models. DIVER leverages reward signals to guide the diffusion process, generating multimodal, physically feasible trajectories conditioned on high-definition maps and observations of surrounding agents. We introduce a new diversity metric based on trajectory distribution discrepancy to effectively mitigate mode collapse. Departing from conventional L2-based evaluation, DIVER adopts multimodal conditional modeling and reward-driven optimization. Evaluated on NAVSIM, Bench2Drive, and nuScenes benchmarks, DIVER achieves state-of-the-art performance in both open-loop trajectory prediction and closed-loop driving control, significantly improving trajectory diversity and safety.

📝 Abstract
Most end-to-end autonomous driving methods rely on imitation learning from single expert demonstrations, often leading to conservative and homogeneous behaviors that limit generalization in complex real-world scenarios. In this work, we propose DIVER, an end-to-end driving framework that integrates reinforcement learning with diffusion-based generation to produce diverse and feasible trajectories. At the core of DIVER lies a reinforced diffusion-based generation mechanism. First, the model conditions on map elements and surrounding agents to generate multiple reference trajectories from a single ground-truth trajectory, alleviating the limitations of imitation learning that arise from relying solely on single expert demonstrations. Second, reinforcement learning is employed to guide the diffusion process, where reward-based supervision enforces safety and diversity constraints on the generated trajectories, thereby enhancing their practicality and generalization capability. Furthermore, to address the limitations of L2-based open-loop metrics in capturing trajectory diversity, we propose a novel Diversity metric to evaluate the diversity of multi-mode predictions. Extensive experiments on the closed-loop NAVSIM and Bench2Drive benchmarks, as well as the open-loop nuScenes dataset, demonstrate that DIVER significantly improves trajectory diversity, effectively addressing the mode collapse problem inherent in imitation learning.
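The abstract describes reinforcement learning guiding the diffusion process via reward-based supervision. The paper's actual training procedure is not given here; as a minimal sketch, a classifier-guidance-style reverse step that nudges the denoised trajectory along a reward gradient conveys the idea. The names `denoiser`, `reward_grad`, and the guidance and noise scales are all placeholder assumptions, not DIVER's implementation.

```python
import numpy as np

def reward_guided_denoise_step(x_t, t, denoiser, reward_grad,
                               guidance_scale=1.0, noise_scale=0.1):
    """One toy reverse-diffusion step with reward guidance.

    x_t:        noisy trajectory, shape (T, 2) -- T waypoints, (x, y)
    t:          current noise level in (0, 1]
    denoiser:   model estimating the clean trajectory from (x_t, t)
    reward_grad: gradient of a reward (e.g. safety/diversity) w.r.t. the trajectory
    """
    x0_hat = denoiser(x_t, t)                               # estimate clean trajectory
    x0_hat = x0_hat + guidance_scale * reward_grad(x0_hat)  # push toward high reward
    noise = noise_scale * np.sqrt(t) * np.random.randn(*x_t.shape)
    return x0_hat + noise                                   # re-noise for the next step
```

Sampling multiple trajectories with different random seeds (and different reward weightings) is one plausible way such a mechanism yields multimodal outputs from a single expert demonstration.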
Problem

Research questions and friction points this paper is trying to address.

Overcoming conservative behaviors in imitation learning for autonomous driving
Generating diverse feasible trajectories with reinforced diffusion
Enhancing trajectory diversity evaluation beyond L2-based metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforced diffusion integrates RL with diffusion generation
Generates diverse trajectories from single expert demonstrations
Novel Diversity metric evaluates multi-mode predictions
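The paper describes its Diversity metric only as a trajectory-distribution discrepancy, without a formula in this summary. As a hedged illustration of the same intuition, the sketch below scores a set of K predicted modes by their mean pairwise L2 distance; this is a stand-in, not the paper's metric.

```python
import numpy as np

def trajectory_diversity(modes):
    """Mean pairwise L2 distance between predicted trajectory modes.

    modes: array of shape (K, T, 2) -- K modes, T waypoints, (x, y).
    Returns 0.0 when fewer than two modes are given.
    Illustrative stand-in for a distribution-discrepancy-based metric.
    """
    K = modes.shape[0]
    if K < 2:
        return 0.0
    dists = []
    for i in range(K):
        for j in range(i + 1, K):
            # average point-wise distance between two trajectories
            dists.append(np.linalg.norm(modes[i] - modes[j], axis=-1).mean())
    return float(np.mean(dists))
```

Under such a score, mode collapse (all K modes near-identical) drives the value toward zero, which is exactly the failure mode the paper argues L2-based open-loop metrics fail to expose.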
Ziying Song
Beijing Jiaotong University
Object Detection · Computer Vision · Deep Learning
Lin Liu
Beijing Key Laboratory of Traffic Data Mining and Embodied Intelligence, School of Computer Science and Technology, Beijing Jiaotong University
Hongyu Pan
Alibaba DAMO Academy, Autonomous Driving Lab
Computer Vision · Detection · Segmentation · Point Cloud · Motion · End2End
Bencheng Liao
Horizon Robotics
Mingzhe Guo
Beijing Key Laboratory of Traffic Data Mining and Embodied Intelligence, School of Computer Science and Technology, Beijing Jiaotong University
Lei Yang
School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore
Yongchang Zhang
Horizon Robotics
Shaoqing Xu
University of Macau, BUAA, Xiaomi EV
3D Computer Vision · 3D Generation · Vision and Language Model · End2End · World Model
Caiyan Jia
Beijing Key Laboratory of Traffic Data Mining and Embodied Intelligence, School of Computer Science and Technology, Beijing Jiaotong University
Yadan Luo
ARC DECRA and Senior Lecturer, University of Queensland
Generalization · 3D Vision · Autonomous Driving