🤖 AI Summary
This study addresses the significant degradation in generalization that end-to-end autonomous driving models suffer when deployed in unseen cities, caused primarily by differences in road topology and driving conventions such as left- versus right-hand traffic. For the first time, it systematically evaluates the zero-shot cross-city transfer of self-supervised visual representations (I-JEPA, DINOv2, and MAE) under strict geographic partitioning, using both the nuScenes open-loop benchmark and the NAVSIM closed-loop simulation protocol. The results show that self-supervised pretraining substantially narrows the generalization gap: in Boston-to-Singapore transfer, the L2 displacement ratio improves from 9.77× to 1.20×, the collision ratio drops from 19.43× to 0.75×, and closed-loop PDMS scores increase by up to 4%. The work underscores the critical role of self-supervised representations in cross-city generalization and advocates zero-shot transfer under geographic isolation as a standard robustness test for autonomous driving systems.
📝 Abstract
End-to-end autonomous driving models are typically trained on multi-city datasets using supervised ImageNet-pretrained backbones, yet their ability to generalize to unseen cities remains largely unexamined. When training and evaluation data are geographically mixed, models may implicitly rely on city-specific cues, masking failure modes that would surface under the real domain shift of deployment in a new location. In this work, we investigate zero-shot cross-city generalization in end-to-end trajectory planning and ask whether self-supervised visual representations improve transfer across cities. We conduct a comprehensive study by integrating self-supervised backbones (I-JEPA, DINOv2, and MAE) into planning frameworks, and we evaluate them under strict geographic splits on nuScenes in the open-loop setting and on NAVSIM in the closed-loop protocol. Our experiments reveal a substantial generalization gap when models with traditional supervised backbones are transferred across cities with different road topologies and driving conventions, particularly from right-side to left-side driving environments. Self-supervised representation learning reduces this gap. In open-loop evaluation, a supervised backbone exhibits severe metric inflation when transferring from Boston to Singapore (9.77× in L2 displacement, 19.43× in collision rate), whereas domain-specific self-supervised pretraining reduces these ratios to 1.20× and 0.75×, respectively. In closed-loop evaluation, self-supervised pretraining improves PDMS by up to 4% for every single-city training configuration. These results show that representation learning strongly influences the robustness of cross-city planning, and they establish zero-shot geographic transfer as a necessary test for end-to-end autonomous driving systems.
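The transfer ratios quoted above (9.77×, 1.20×, and so on) compare an error metric measured on the held-out city against the same metric on the training city, so a ratio near 1.0 means no cross-city gap. A minimal sketch of that reading, assuming the ratio is simply unseen-city error divided by in-city error (the function name and the numbers in the usage example are illustrative, not taken from the paper):

```python
def transfer_ratio(unseen_city_error: float, in_city_error: float) -> float:
    """Cross-city transfer ratio for an error metric (e.g. L2 displacement
    or collision rate): unseen-city value divided by training-city value.
    1.0 means no generalization gap; larger values mean metric inflation
    on the held-out city."""
    if in_city_error <= 0:
        raise ValueError("in-city error must be positive")
    return unseen_city_error / in_city_error


# Illustrative usage with made-up numbers:
print(transfer_ratio(2.0, 1.0))  # error doubles on the unseen city -> 2.0
print(transfer_ratio(3.0, 4.0))  # error is *lower* on the unseen city -> 0.75
```

Note that a ratio below 1.0 (as with the 0.75× collision figure) indicates the model actually performs better on the held-out city than on its training city for that metric.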