🤖 AI Summary
Vision-language models (VLMs) for end-to-end autonomous driving suffer from insufficient robustness in real-world scenarios due to sensor failures (e.g., environmental interference) and prompt corruption (e.g., human misconfiguration or transmission errors). Method: We propose RoboDriveBench, the first robustness benchmark for VLM-based end-to-end trajectory prediction, covering 11 realistic corruption types. To address these challenges, we design RoboDriveVLM, a novel framework that maps multimodal sensor inputs (LiDAR, radar, camera) into a unified latent space and introduces test-time adaptation (TTA) based on cross-modal knowledge distillation for dynamic perturbation compensation. Contribution/Results: Evaluated on 64,559 trajectory prediction instances, RoboDriveVLM significantly improves prediction stability under multiple corruptions. RoboDriveBench provides a standardized, reproducible evaluation protocol, while RoboDriveVLM establishes a verifiable robustness-enhancement paradigm for deploying VLMs in safety-critical autonomous driving systems.
📝 Abstract
Current Vision-Language Model (VLM)-based end-to-end autonomous driving systems often leverage large language models to generate driving decisions directly from their understanding of the current scene. However, such systems introduce multiple risks in real-world driving scenarios. To evaluate whether VLMs are truly viable for autonomous driving, we introduce RoboDriveBench, the first robustness benchmark focused on end-to-end trajectory prediction tasks. This benchmark systematically evaluates two critical categories of real-world challenges for VLM-based end-to-end autonomous driving systems through 11 simulated corruption types: 6 sensor corruptions caused by environmental variations, and 5 prompt corruptions resulting from human intervention or data transmission failures. Each corruption type includes 250 unique driving scenarios and 5,689 frames, resulting in 64,559 total trajectory prediction cases per evaluation. To overcome these real-world challenges, we propose a novel VLM-based autonomous driving framework called RoboDriveVLM, which enhances robustness by mapping additional multimodal data (e.g., LiDAR and radar) into a unified latent space. Furthermore, we introduce a new Test-Time Adaptation (TTA) method based on cross-modal knowledge distillation to improve the robustness of VLM-based autonomous driving systems. Through extensive experiments, our work highlights the limitations of current VLM-based end-to-end autonomous driving systems and provides a more reliable solution for real-world deployment. Source code and datasets will be released.
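The abstract describes test-time adaptation via cross-modal knowledge distillation: when one modality (e.g., the camera) is corrupted, its latent representation is nudged toward the latent produced by an uncorrupted modality (e.g., LiDAR or radar) in the shared space. The sketch below is a minimal illustration of that idea under assumed shapes and names (`tta_step`, the MSE distillation loss, and the linear alignment head are our own illustrative choices, not the paper's actual implementation):

```python
import numpy as np

def tta_step(student_latent, teacher_latent, proj, lr=0.1):
    """One hypothetical TTA step: adapt the student's alignment head so its
    projected latent moves toward the teacher modality's latent.
    Distillation loss = mean squared error in the shared latent space."""
    pred = proj @ student_latent                      # project student into shared space
    err = pred - teacher_latent                       # cross-modal residual
    loss = float(np.mean(err ** 2))
    # gradient of the MSE w.r.t. the projection matrix
    grad = 2.0 * np.outer(err, student_latent) / err.size
    return proj - lr * grad, loss

rng = np.random.default_rng(0)
d = 8
teacher = rng.normal(size=d)    # e.g., latent from an intact LiDAR/radar branch
student = rng.normal(size=d)    # e.g., latent from a corrupted camera branch
proj = np.eye(d)                # adaptable alignment head, initialized to identity

losses = []
for _ in range(50):
    proj, loss = tta_step(student, teacher, proj)
    losses.append(loss)

assert losses[-1] < losses[0]   # adaptation shrinks the cross-modal gap
```

In the full system the "student" would be the VLM's perception pathway and the update would run online per frame; here a plain gradient step on a toy linear head stands in for that machinery.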