🤖 AI Summary
In end-to-end autonomous driving, the mismatch between open-loop imitation learning (IL) training and closed-loop deployment leads to frequent human interventions. To address this, we propose a post-optimization framework that leverages expert intervention data, modeling interventions as preference signals, and jointly iterates Dataset Aggregation (DAgger) and Direct Preference Optimization (DPO). DAgger expands the dataset with high-quality, intervention-rich trajectories, while DPO aligns the policy with human preferences without requiring hand-crafted rewards or environment interaction. This approach progressively enhances closed-loop robustness and recovery capability. On the Bench2Drive benchmark, our method significantly outperforms pure IL baselines. Ablation studies confirm complementary gains from DAgger (data coverage) and DPO (preference alignment), which together bridge the open-loop to closed-loop gap.
📝 Abstract
Existing end-to-end autonomous driving methods typically rely on imitation learning (IL) but face a key challenge: the misalignment between open-loop training and closed-loop deployment. This misalignment often triggers driver-initiated takeovers and system disengagements during closed-loop execution. How to leverage the expert takeover data from these disengagement scenarios to effectively expand the IL policy's capability is a valuable yet unexplored problem. In this paper, we propose TakeAD, a novel preference-based post-optimization framework that fine-tunes the pre-trained IL policy on this disengagement data to enhance closed-loop driving performance. First, we design an efficient expert takeover data collection pipeline inspired by human takeover mechanisms in real-world autonomous driving systems. The post-optimization framework then integrates iterative Dataset Aggregation (DAgger) for imitation learning with Direct Preference Optimization (DPO) for preference alignment. The DAgger stage equips the policy with the fundamental capability to handle disengagement states through direct imitation of expert interventions. Subsequently, the DPO stage refines the policy's behavior to better align with expert preferences in disengagement scenarios. Through multiple iterations, the policy progressively learns recovery strategies for disengagement states, thereby mitigating the open-loop gap. Experiments on the closed-loop Bench2Drive benchmark demonstrate our method's effectiveness compared with pure IL methods, with comprehensive ablations confirming the contribution of each component.
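To make the DPO stage concrete, the sketch below implements the standard per-pair DPO objective applied to this setting: the expert takeover trajectory is treated as the preferred sample and the pre-intervention policy trajectory as the dispreferred one, scored against a frozen reference policy. The function name, argument names, and numeric values are illustrative assumptions, not the paper's implementation.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one preference pair (a minimal sketch, not the paper's code).

    logp_w, logp_l: current-policy log-probabilities of the preferred
        (expert takeover) and dispreferred (pre-intervention) trajectories.
    ref_logp_w, ref_logp_l: the same log-probabilities under the frozen
        reference policy (e.g. the DAgger-trained checkpoint).
    beta: temperature controlling how strongly the policy may deviate
        from the reference.
    """
    # Implicit reward margin between preferred and dispreferred samples.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log sigmoid(margin): small when the policy already prefers the
    # expert trajectory relative to the reference, large otherwise.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# With identical policy and reference log-probs the margin is 0,
# so the loss is -log(1/2) = log 2.
neutral = dpo_loss(0.0, 0.0, 0.0, 0.0)

# Raising the preferred trajectory's likelihood above the reference
# (and lowering the dispreferred one's) shrinks the loss.
improved = dpo_loss(-1.0, -2.0, -1.5, -1.5, beta=1.0)
```

In the full iteration, each DAgger round would collect fresh takeover pairs and refresh the reference policy before re-running this preference-alignment step.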