🤖 AI Summary
Existing end-to-end autonomous driving approaches predominantly rely on imitation learning from single-expert demonstrations, leading to conservative behavior, insufficient trajectory diversity, and limited generalization. To address these limitations, we propose DIVER, a novel framework that deeply integrates reinforcement learning with conditional diffusion models. DIVER leverages reward signals to guide the diffusion process, generating multimodal, physically feasible trajectories conditioned on high-definition maps and the states of surrounding agents. Because conventional L2-based open-loop evaluation cannot capture trajectory diversity, we further introduce a Diversity metric based on trajectory distribution discrepancy. Evaluated on the NAVSIM, Bench2Drive, and nuScenes benchmarks, DIVER achieves state-of-the-art performance in both open-loop trajectory prediction and closed-loop driving control, significantly improving trajectory diversity and safety while mitigating mode collapse.
📝 Abstract
Most end-to-end autonomous driving methods rely on imitation learning from single-expert demonstrations, which often yields conservative, homogeneous behaviors that generalize poorly to complex real-world scenarios. In this work, we propose DIVER, an end-to-end driving framework that integrates reinforcement learning with diffusion-based generation to produce diverse and feasible trajectories. At the core of DIVER lies a reinforced diffusion-based generation mechanism. First, the model conditions on map elements and surrounding agents to generate multiple reference trajectories from a single ground-truth trajectory, alleviating the limitation of imitation learning's reliance on a single expert demonstration. Second, reinforcement learning guides the diffusion process: reward-based supervision enforces safety and diversity constraints on the generated trajectories, improving their practicality and generalization. Furthermore, since L2-based open-loop metrics fail to capture trajectory diversity, we propose a novel Diversity metric for evaluating multi-mode predictions. Extensive experiments on the closed-loop NAVSIM and Bench2Drive benchmarks, as well as the open-loop nuScenes dataset, demonstrate that DIVER significantly improves trajectory diversity, effectively addressing the mode collapse problem inherent in imitation learning.
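The abstract does not give the exact formulation of the proposed Diversity metric, but the idea of measuring discrepancy between predicted trajectory modes can be illustrated with a minimal sketch. The function below (a hypothetical stand-in, not the paper's definition) scores a set of K predicted trajectories by their mean pairwise point-wise L2 discrepancy, so that collapsed modes score near zero and well-spread modes score higher:

```python
import numpy as np

def trajectory_diversity(trajs: np.ndarray) -> float:
    """Mean pairwise L2 discrepancy among K predicted trajectory modes.

    trajs: array of shape (K, T, 2) -- K modes, T timesteps, (x, y) points.
    Returns 0.0 when fewer than two modes are given.
    NOTE: an illustrative proxy, not DIVER's actual Diversity metric.
    """
    k = trajs.shape[0]
    if k < 2:
        return 0.0
    total, pairs = 0.0, 0
    for i in range(k):
        for j in range(i + 1, k):
            # Average Euclidean distance between corresponding waypoints
            # of two modes; identical modes contribute 0.
            total += np.linalg.norm(trajs[i] - trajs[j], axis=-1).mean()
            pairs += 1
    return total / pairs
```

Under this proxy, the mode collapse described above shows up directly: if imitation learning drives all K modes toward the single expert trajectory, the score tends to zero, whereas a genuinely multimodal predictor keeps it bounded away from zero.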