🤖 AI Summary
Under imitation learning, end-to-end autonomous driving suffers from causal confusion and degraded generalization in closed-loop evaluation because expert demonstration trajectories contain noise. To address this, we propose a perception-guided self-supervised training paradigm: for the first time, we introduce a contrastive learning mechanism that uses perception outputs (e.g., lane markings and surrounding vehicle motion) to construct positive and negative samples, explicitly modeling causal relationships between environmental context and driving actions and thereby mitigating spurious correlations inherent in expert data. Our method requires no additional annotations; it leverages perception outputs as supervisory signals within a standard end-to-end architecture, seamlessly integrating self-supervision with closed-loop feedback. Evaluated on the Bench2Drive benchmark, our approach achieves a driving score of 78.08 and an average success rate of 48.64%, significantly outperforming state-of-the-art methods. These results empirically validate the critical role of causal modeling in enhancing closed-loop robustness.
📝 Abstract
End-to-end autonomous driving systems, predominantly trained through imitation learning, have demonstrated considerable effectiveness in leveraging large-scale expert driving data. Despite their success in open-loop evaluations, these systems often exhibit significant performance degradation in closed-loop scenarios due to causal confusion. This confusion is fundamentally exacerbated by the imitation learning paradigm's overreliance on expert trajectories, which often contain unattributable noise that interferes with modeling the causal relationships between environmental contexts and appropriate driving actions. To address this fundamental limitation, we propose Perception-Guided Self-Supervision (PGS), a simple yet effective training paradigm that leverages perception outputs as the primary supervisory signals, explicitly modeling causal relationships in decision-making. The proposed framework aligns both the inputs and outputs of the decision-making module with perception results, such as lane centerlines and the predicted motions of surrounding agents, by introducing positive and negative self-supervision for the ego trajectory. This alignment is specifically designed to mitigate causal confusion arising from the inherent noise in expert trajectories. Equipped with perception-driven supervision, our method, built on a standard end-to-end architecture, achieves a Driving Score of 78.08 and a mean success rate of 48.64% on the challenging closed-loop Bench2Drive benchmark, significantly outperforming existing state-of-the-art methods, including those employing more complex network architectures and inference pipelines. These results underscore the effectiveness and robustness of the proposed PGS framework and point to a promising direction for addressing causal confusion and enhancing real-world generalization in autonomous driving.
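To make the idea of positive and negative self-supervision for the ego trajectory concrete, here is a minimal illustrative sketch, not the paper's actual loss. It assumes a perception-derived positive (a lane centerline) and a perception-derived negative (a path drifting toward a predicted agent's motion), and uses a hypothetical margin-based hinge formulation; all function names and the margin value are assumptions for illustration.

```python
import numpy as np

def trajectory_distance(a, b):
    """Mean Euclidean distance between corresponding waypoints."""
    return float(np.linalg.norm(a - b, axis=-1).mean())

def pgs_contrastive_loss(pred, positive, negatives, margin=1.0):
    """Illustrative margin-based loss: pull the predicted ego trajectory
    toward the perception-derived positive (e.g., a lane centerline) and
    push it away from negatives (e.g., paths crossing a predicted agent's
    motion). This mirrors the positive/negative self-supervision idea;
    the paper's actual formulation may differ."""
    d_pos = trajectory_distance(pred, positive)
    loss = d_pos  # attraction term toward the positive sample
    for neg in negatives:
        d_neg = trajectory_distance(pred, neg)
        loss += max(0.0, margin + d_pos - d_neg)  # hinge repulsion term
    return loss

# Toy example: 5 waypoints in 2D (x, y).
t = np.linspace(0.0, 4.0, 5)
centerline = np.stack([t, np.zeros_like(t)], axis=1)   # positive sample
pred = np.stack([t, 0.1 * np.ones_like(t)], axis=1)    # stays near centerline
drift = np.stack([t, 2.0 * np.ones_like(t)], axis=1)   # drifts into agent path

loss_aligned = pgs_contrastive_loss(pred, centerline, [drift])
loss_drifting = pgs_contrastive_loss(drift, centerline, [pred])
print(loss_aligned < loss_drifting)  # → True
```

A trajectory consistent with the perception outputs scores a lower loss than one that violates them, which is the supervisory signal the framework exploits in place of noisy expert trajectories alone.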