🤖 AI Summary
Evaluating lane detection models under domain shift without ground-truth labels in the target domain remains challenging, as existing image-classification–oriented unsupervised evaluation methods are inapplicable.
Method: This paper proposes LanePerf, the first label-free performance estimation framework for lane detection, featuring a dual-path architecture that jointly leverages image-level contextual features and structured geometric features of lanes. It integrates a pretrained vision encoder with a DeepSets-based architecture to robustly model variable-length sets of lane instances, including degenerate cases such as zero-lane scenes.
Contribution/Results: Evaluated on OpenLane, the method achieves a mean absolute error (MAE) of 0.117 and a Spearman's rank correlation coefficient of 0.727, significantly outperforming five state-of-the-art performance estimation baselines adapted from image classification. The framework provides a practical, deployable solution for verifying the robustness of automated driving systems under severe domain shifts.
📝 Abstract
Lane detection is a critical component of Advanced Driver-Assistance Systems (ADAS) and Automated Driving Systems (ADS), providing essential spatial information for lateral control. However, domain shifts often undermine model reliability when models are deployed in new environments. Ensuring the robustness and safety of lane detection models typically requires collecting and annotating target-domain data, which is resource-intensive. Estimating model performance without ground-truth labels offers a promising alternative for efficient robustness assessment, yet it remains underexplored in lane detection. While previous work has addressed performance estimation in image classification, those methods are not directly applicable to lane detection. This paper first adapts five well-performing performance estimation methods from image classification to lane detection, establishing a set of baselines. Addressing the limitations of prior approaches that rely solely on softmax scores or lane features, we further propose a new Lane Performance Estimation Framework (LanePerf), which integrates image and lane features using a pretrained image encoder and a DeepSets-based architecture, effectively handling zero-lane detection scenarios and large domain-shift cases. Extensive experiments on the OpenLane dataset, covering diverse domain shifts (scenes, weather, hours), demonstrate that LanePerf outperforms all baselines, achieving a lower MAE of 0.117 and a higher Spearman's rank correlation coefficient of 0.727. These findings pave the way for robust, label-free performance estimation in ADAS, supporting more efficient testing and improved safety in challenging driving scenarios.
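The abstract's DeepSets-based lane branch can be illustrated with a minimal sketch. The idea of DeepSets is to encode each set element independently (phi), pool with a permutation-invariant operation such as a sum, then decode the pooled vector (rho), so a scene with any number of detected lanes, including zero, maps to a fixed-size embedding. All names, feature dimensions, random weights, and the zero-lane fallback below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hedged sketch of a DeepSets-style aggregator over a variable number of lane
# instances. Dimensions and weights are arbitrary stand-ins for a learned model.
rng = np.random.default_rng(0)
D_LANE, D_HID = 8, 16  # assumed per-lane geometric feature dim, hidden dim

W_phi = rng.normal(size=(D_LANE, D_HID)) * 0.1  # per-instance encoder (phi)
W_rho = rng.normal(size=(D_HID, D_HID)) * 0.1   # post-pooling decoder (rho)

def deepsets_lane_embedding(lanes: np.ndarray) -> np.ndarray:
    """lanes: (N, D_LANE) array of per-lane features; N may be 0."""
    if lanes.shape[0] == 0:
        # Degenerate zero-lane scene: pooling over an empty set yields a zero
        # vector, so a fixed-size embedding still exists and the image branch
        # can carry the estimate alone.
        pooled = np.zeros(D_HID)
    else:
        pooled = np.maximum(lanes @ W_phi, 0).sum(axis=0)  # phi, then sum-pool
    return np.maximum(pooled @ W_rho, 0)                   # rho

# Sum-pooling makes the embedding invariant to the order of lane instances.
lanes = rng.normal(size=(4, D_LANE))
emb_a = deepsets_lane_embedding(lanes)
emb_b = deepsets_lane_embedding(lanes[::-1].copy())  # same lanes, reversed
assert np.allclose(emb_a, emb_b)
assert deepsets_lane_embedding(np.empty((0, D_LANE))).shape == (D_HID,)
```

In the full framework this fixed-size lane embedding would be concatenated with the pretrained image encoder's features before regressing the performance estimate; that fusion step is omitted here.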