TakeAD: Preference-based Post-optimization for End-to-end Autonomous Driving with Expert Takeover Data

📅 2025-12-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
In end-to-end autonomous driving, the mismatch between open-loop imitation learning (IL) training and closed-loop deployment leads to frequent human interventions. To address this, we propose a post-optimization framework that leverages expert intervention data, modeling each intervention as a preference signal, and jointly iterates Dataset Aggregation (DAgger) with Direct Preference Optimization (DPO). DAgger expands the dataset with high-quality, intervention-rich trajectories, while DPO aligns the policy with human preferences without hand-crafted rewards or extra environment interaction. The policy's closed-loop robustness and recovery capability improve progressively over iterations. On the Bench2Drive benchmark, the method significantly outperforms pure IL baselines, and ablation studies confirm complementary gains from DAgger (data coverage) and DPO (preference alignment), together narrowing the gap between open-loop training and closed-loop execution.
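To make the joint iteration concrete, here is a minimal Python sketch of the loop described above. The helper names (rollout_with_takeover, imitation_update, dpo_update) are hypothetical placeholders, not the authors' actual API; only the control flow, alternating DAgger data aggregation with DPO preference alignment, reflects the method as summarized.

```python
# Minimal sketch of the iterative DAgger + DPO post-optimization loop.
# All helper functions are hypothetical; the structure mirrors the
# alternation described in the summary.

def post_optimize(policy, expert, dataset, num_iters=3):
    for _ in range(num_iters):
        # DAgger stage: run the current policy in closed loop and let the
        # expert take over on failures; log those intervention segments.
        segments = rollout_with_takeover(policy, expert)
        dataset.extend(segments)                  # aggregate intervention-rich data
        policy = imitation_update(policy, dataset)

        # DPO stage: each takeover implicitly prefers the expert's recovery
        # trajectory over the policy trajectory that triggered it.
        pairs = [(s.expert_traj, s.policy_traj) for s in segments]
        policy = dpo_update(policy, pairs)        # no reward model, no extra env steps
    return policy
```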

📝 Abstract
Existing end-to-end autonomous driving methods typically rely on imitation learning (IL) but face a key challenge: the misalignment between open-loop training and closed-loop deployment. This misalignment often triggers driver-initiated takeovers and system disengagements during closed-loop execution. How to leverage the expert takeover data from these disengagement scenarios to effectively expand the IL policy's capability is a valuable yet unexplored problem. In this paper, we propose TakeAD, a novel preference-based post-optimization framework that fine-tunes the pre-trained IL policy with this disengagement data to enhance closed-loop driving performance. First, we design an efficient expert takeover data collection pipeline inspired by the human takeover mechanisms of real-world autonomous driving systems. The post-optimization framework then integrates iterative Dataset Aggregation (DAgger) for imitation learning with Direct Preference Optimization (DPO) for preference alignment. The DAgger stage equips the policy with the fundamental capability to handle disengagement states through direct imitation of expert interventions. The DPO stage subsequently refines the policy's behavior to better align with expert preferences in disengagement scenarios. Through multiple iterations, the policy progressively learns recovery strategies for disengagement states, thereby mitigating the open-loop gap. Experiments on the closed-loop Bench2Drive benchmark demonstrate our method's effectiveness compared with pure IL methods, with comprehensive ablations confirming the contribution of each component.
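The abstract does not spell out the DPO objective. Below is the standard DPO loss (Rafailov et al., 2023) adapted to trajectory pairs, which is our reading of the setup: the expert takeover trajectory is treated as preferred and the policy's pre-takeover trajectory as dispreferred. How TakeAD actually computes trajectory log-probabilities is an assumption here.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_w: torch.Tensor,      # log pi_theta(tau_w | s): preferred (expert) trajectory
             logp_l: torch.Tensor,      # log pi_theta(tau_l | s): dispreferred (policy) trajectory
             ref_logp_w: torch.Tensor,  # same trajectories under the frozen reference policy
             ref_logp_l: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Generic DPO objective on trajectory preference pairs:
    L = -log sigmoid(beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l)))."""
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -F.logsigmoid(beta * margin).mean()
```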
Problem

Research questions and friction points this paper is trying to address.

Addresses misalignment between open-loop training and closed-loop deployment in autonomous driving
Leverages expert takeover data from disengagement scenarios to improve driving policies
Enhances closed-loop performance by fine-tuning imitation learning with preference optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tunes the pre-trained IL policy with expert takeover data (see the data sketch after this list)
Integrates DAgger imitation learning with DPO preference alignment
Learns recovery strategies through iterative post-optimization
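The paper's takeover data collection pipeline is only described at a high level here, but the record it needs to produce can be sketched as follows. The field and function names are hypothetical illustrations of what each takeover must capture for the DAgger and DPO stages, not the paper's actual schema.

```python
from dataclasses import dataclass
from typing import Any, List, Tuple

@dataclass
class TakeoverSegment:
    """Hypothetical record of one disengagement event, holding what the
    two post-optimization stages consume (names are illustrative)."""
    obs: List[Any]          # observations around the disengagement state
    policy_traj: List[Any]  # trajectory the IL policy was executing (dispreferred)
    expert_traj: List[Any]  # corrective trajectory driven by the expert (preferred)

def to_preference_pairs(segments: List[TakeoverSegment]) -> List[Tuple]:
    # DAgger imitates expert_traj directly; DPO additionally exploits the
    # implicit label that expert_traj is preferred over policy_traj.
    return [(s.obs, s.expert_traj, s.policy_traj) for s in segments]
```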
👥 Authors
Deqing Liu
The State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China, and also with the School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
Yinfeng Gao
School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
Deheng Qian
Unknown affiliation
Qichao Zhang
Institute of Automation, Chinese Academy of Sciences
Artificial Intelligence, Reinforcement Learning, Game Theory, Adaptive Dynamic Programming
Xiaoqing Ye
School of Computing and Artificial Intelligence, Southwest Jiaotong University
Granular Computing, Recommender System, Business Intelligence
Junyu Han
Chongqing Chang’an Technology Co., Ltd.
Yupeng Zheng
Institute of Automation, Chinese Academy of Sciences
Xueyi Liu
Institute of Automation, Chinese Academy of Sciences
Zhongpu Xia
The State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China, and also with the School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
Dawei Ding
Yau Mathematical Sciences Center, Tsinghua University
Quantum Telepathy, Quantum Computing at the Physical Layer
Yifeng Pan
Chongqing Chang’an Technology Co., Ltd.
Dongbin Zhao
Institute of Automation, Chinese Academy of Sciences
Deep Reinforcement Learning, Adaptive Dynamic Programming, Game AI, Smart Driving, Robotics