A Hierarchical Test Platform for Vision Language Model (VLM)-Integrated Real-World Autonomous Driving

📅 2025-06-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address domain shift, insufficient evaluation, and irreproducible closed-loop testing when deploying vision-language models (VLMs) in safety-critical autonomous driving, this work introduces the first hierarchical, real-road-oriented closed-loop testing platform. We propose a modular in-vehicle middleware and a perception-planning-control decoupled architecture, enabling plug-and-play integration of VLMs with conventional autonomous driving modules. Our platform bridges the sim-to-real gap via parametric scenario orchestration, low-latency communication protocols, and programmable physical test tracks. Extensive real-vehicle validation across diverse operational conditions demonstrates significant improvements in VLM comprehension accuracy for complex semantic instructions and generalization to dynamic scenes. The platform achieves 100% test reproducibility and end-to-end latency under 120 ms, establishing a rigorous, scalable benchmark for VLM-driven autonomous driving research and deployment.

📝 Abstract
Vision-Language Models (VLMs) have demonstrated notable promise in autonomous driving by offering the potential for multimodal reasoning through pretraining on extensive image-text pairs. However, adapting these models from broad web-scale data to the safety-critical context of driving presents a significant challenge, commonly referred to as domain shift. Existing simulation-based and dataset-driven evaluation methods, although valuable, often fail to capture the full complexity of real-world scenarios and cannot easily accommodate repeatable closed-loop testing with flexible scenario manipulation. In this paper, we introduce a hierarchical real-world test platform specifically designed to evaluate VLM-integrated autonomous driving systems. Our approach includes a modular, low-latency on-vehicle middleware that allows seamless incorporation of various VLMs, a clearly separated perception-planning-control architecture that can accommodate both VLM-based and conventional modules, and a configurable suite of real-world testing scenarios on a closed track that facilitates controlled yet authentic evaluations. We demonstrate the effectiveness of the proposed platform's testing and evaluation capabilities with a case study involving a VLM-enabled autonomous vehicle, highlighting how our test framework supports robust experimentation under diverse conditions.
Problem

Research questions and friction points this paper is trying to address.

Adapting VLMs to the safety-critical context of autonomous driving
Overcoming domain shift from web-scale data to real-world driving
Lack of reproducible real-world testing for VLM-integrated autonomous systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular low-latency middleware for VLM integration
Decoupled perception-planning-control architecture accommodating both VLM-based and conventional modules
Configurable closed-track real-world testing scenarios
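The decoupled perception-planning-control design above can be illustrated with a minimal sketch. This is not the paper's actual middleware; all class names, interfaces, and the toy control law are assumptions made for illustration. The key idea shown is that the planner is a swappable interface, so a VLM-backed module and a conventional rule-based module can be exchanged without touching perception or control.

```python
from dataclasses import dataclass
from typing import List, Protocol

# Illustrative data types; the paper's real message formats are not public.
@dataclass
class Perception:
    obstacles: List[str]
    scene_description: str

@dataclass
class Plan:
    maneuver: str
    target_speed_mps: float

class Planner(Protocol):
    """Common planning interface: VLM-based or conventional modules plug in here."""
    def plan(self, perception: Perception, instruction: str) -> Plan: ...

class RuleBasedPlanner:
    """Conventional planner: ignores language, reacts only to obstacles."""
    def plan(self, perception: Perception, instruction: str) -> Plan:
        if perception.obstacles:
            return Plan(maneuver="stop", target_speed_mps=0.0)
        return Plan(maneuver="cruise", target_speed_mps=10.0)

class VLMPlanner:
    """Stand-in for a VLM-backed planner; a real one would query a model
    with the camera frame and the natural-language instruction."""
    def plan(self, perception: Perception, instruction: str) -> Plan:
        if "slow" in instruction.lower() or perception.obstacles:
            return Plan(maneuver="yield", target_speed_mps=3.0)
        return Plan(maneuver="cruise", target_speed_mps=10.0)

class Pipeline:
    """Decoupled perception -> planning -> control; the planner is swappable."""
    def __init__(self, planner: Planner):
        self.planner = planner

    def step(self, perception: Perception, instruction: str = "") -> dict:
        plan = self.planner.plan(perception, instruction)
        # Control stage: map the plan to an actuator command (toy proportional law).
        return {"maneuver": plan.maneuver,
                "throttle": min(1.0, plan.target_speed_mps / 10.0)}

scene = Perception(obstacles=[], scene_description="clear road")
print(Pipeline(RuleBasedPlanner()).step(scene))
print(Pipeline(VLMPlanner()).step(scene, "please slow down near the school"))
```

Because both planners satisfy the same `Planner` protocol, swapping one for the other is a one-line change to the `Pipeline` constructor, which is the plug-and-play property the platform's middleware is described as providing.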