🤖 AI Summary
Existing end-to-end autonomous driving methods over-rely on ego-state supervision and lack explicit modeling of planning intent, leaving decision-making insufficiently robust. This paper proposes a planning-oriented end-to-end framework: (1) a planning teacher model that shares the student's output structure but draws on a different input source (structured scene representations), generating diverse, semantically plausible trajectory proposals; (2) multi-instance imitation learning via knowledge distillation, jointly optimized with reinforcement learning and generative modeling to refine the state-to-decision mapping. The core innovation is explicitly embedding planning priors into end-to-end training, enabling intent-level understanding of complex traffic scenarios. Evaluated on nuScenes and NAVSIM, the approach reduces collision rate by 50% and improves closed-loop performance by 3 points over the baseline.
📝 Abstract
End-to-end autonomous driving has recently seen rapid development, exerting a profound influence on both industry and academia. However, existing work focuses excessively on ego-vehicle status as its sole learning objective and lacks planning-oriented understanding, which limits the robustness of the overall decision-making process. In this work, we introduce DistillDrive, an end-to-end knowledge-distillation-based autonomous driving model that leverages diversified instance imitation to enhance multi-mode motion feature learning. Specifically, we employ a planning model based on structured scene representations as the teacher, leveraging its diversified planning instances as multi-objective learning targets for the end-to-end model. Moreover, we incorporate reinforcement learning to enhance the optimization of state-to-decision mappings, and use generative modeling to construct planning-oriented instances, fostering intricate interactions within the latent space. We validate our model on the nuScenes and NAVSIM datasets, achieving a 50% reduction in collision rate and a 3-point improvement in closed-loop performance compared to the baseline model. Code and model are publicly available at https://github.com/YuruiAI/DistillDrive.
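To make the "diversified planning instances as multi-objective learning targets" idea concrete, here is a minimal sketch of a multi-instance distillation loss: each of the teacher's diverse trajectory proposals supervises the closest student trajectory mode. The function name, tensor shapes, and nearest-mode matching rule are illustrative assumptions, not the paper's exact formulation.

```python
import torch


def multi_instance_distill_loss(student_trajs: torch.Tensor,
                                teacher_trajs: torch.Tensor) -> torch.Tensor:
    """Hypothetical multi-instance imitation loss (a sketch, not DistillDrive's).

    student_trajs: (S, T, 2) -- S student modes, T future timesteps, (x, y)
    teacher_trajs: (K, T, 2) -- K diverse teacher planning instances
    """
    # Pairwise trajectory distance between every student mode and teacher instance
    diff = student_trajs.unsqueeze(1) - teacher_trajs.unsqueeze(0)  # (S, K, T, 2)
    dist = diff.norm(dim=-1).mean(dim=-1)                           # (S, K)
    # Each teacher instance distills into its nearest student mode,
    # so multiple targets can shape multiple modes simultaneously.
    return dist.min(dim=0).values.mean()


student = torch.randn(6, 12, 2)   # 6 candidate modes, 12 future steps
teacher = torch.randn(3, 12, 2)   # 3 diverse teacher proposals
loss = multi_instance_distill_loss(student, teacher)
```

A min-over-modes assignment like this lets different teacher proposals pull different student modes apart, which is one common way to keep multi-mode outputs from collapsing to a single trajectory.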