DriveDPO: Policy Learning via Safety DPO For End-to-End Autonomous Driving

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional end-to-end imitation learning methods for autonomous driving struggle to distinguish visually plausible yet unsafe trajectories, and existing rule-based safety scoring approaches are decoupled from policy optimization. Method: We propose DriveDPO, a safety-aware direct preference optimization framework that jointly models human imitation fidelity and rule-regularized safety scores as a trajectory-level preference distribution, enabling end-to-end safe policy learning via iterative DPO. The approach unifies imitation learning, safety-guided scoring, and trajectory-level preference optimization to achieve safety-aware policy distillation. Results: On the NAVSIM benchmark, DriveDPO achieves a PDMS of 90.0, the current state of the art, demonstrating significant improvements in driving safety and stability, particularly in complex scenarios.

📝 Abstract
End-to-end autonomous driving has substantially progressed by directly predicting future trajectories from raw perception inputs, bypassing traditional modular pipelines. However, mainstream methods trained via imitation learning suffer from critical safety limitations, as they fail to distinguish trajectories that appear human-like but are potentially unsafe. Some recent approaches attempt to address this by regressing multiple rule-driven scores, but this decouples supervision from policy optimization, resulting in suboptimal performance. To tackle these challenges, we propose DriveDPO, a Safety Direct Preference Optimization policy learning framework. First, we distill a unified policy distribution from human imitation similarity and rule-based safety scores for direct policy optimization. We then introduce an iterative Direct Preference Optimization stage formulated as trajectory-level preference alignment. Extensive experiments on the NAVSIM benchmark demonstrate that DriveDPO achieves a new state-of-the-art PDMS of 90.0. Furthermore, qualitative results across diverse challenging scenarios highlight DriveDPO's ability to produce safer and more reliable driving behaviors.
Problem

Research questions and friction points this paper is trying to address.

Addressing safety limitations in end-to-end autonomous driving imitation learning
Overcoming suboptimal performance from decoupling supervision and policy optimization
Distinguishing human-like but potentially unsafe trajectories in autonomous driving
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distills unified policy from imitation and safety scores
Uses iterative Direct Preference Optimization for alignment
Directly optimizes policy via trajectory-level preference learning
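The paper's exact formulation is not given in this summary, but the standard DPO objective that trajectory-level preference alignment builds on can be sketched in plain Python. Here the "winning" trajectory is the one preferred under the combined imitation-similarity and safety score; the function name and scalar inputs are illustrative, not the authors' implementation.

```python
import math

def trajectory_dpo_loss(logp_pi_w, logp_pi_l, logp_ref_w, logp_ref_l, beta=0.1):
    """DPO loss for one trajectory preference pair (illustrative sketch).

    logp_pi_w / logp_pi_l   : log-prob of winning / losing trajectory under the policy
    logp_ref_w / logp_ref_l : same log-probs under the frozen reference policy
    beta                    : temperature controlling deviation from the reference
    """
    # Implicit reward margin between the preferred (safer) and
    # dispreferred trajectory, measured relative to the reference policy.
    margin = beta * ((logp_pi_w - logp_ref_w) - (logp_pi_l - logp_ref_l))
    # Negative log-sigmoid of the margin: minimized when the policy assigns
    # relatively higher likelihood to the safer trajectory.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

At margin zero the loss equals log 2; as the policy shifts probability mass toward the safer trajectory, the loss decreases, which is the mechanism by which rule-based safety preferences are folded directly into policy optimization rather than regressed as auxiliary scores.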
Shuyao Shang
NLPR, Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Yuntao Chen
Miromind
agentic AI · multimodal models · computer vision
Yuqi Wang
NLPR, Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Yingyan Li
Institute of Automation, Chinese Academy of Sciences
computer vision
Zhaoxiang Zhang
Institute of Automation, Chinese Academy of Sciences
computer vision · pattern recognition · biologically-inspired learning