🤖 AI Summary
This work addresses two key limitations of end-to-end autonomous driving: poor generalization to unseen objects and heterogeneous sensor configurations, and a lack of natural human-vehicle interaction. To this end, we propose Graph Structured Visual Question Answering (Graph VQA), a novel task that emulates the multi-step reasoning of human drivers (localization → interaction estimation → decision-making and planning). We introduce the first Vision-Language Model (VLM)-based driving reasoning agent, supported by DriveLM-Data, a large-scale, cross-domain dataset spanning simulation (CARLA) and real-world scenes (nuScenes). We further design DriveLM-Agent, a unified architecture that jointly performs Graph VQA and end-to-end driving by modeling perception, prediction, and planning question-answer pairs through graph-structured reasoning. Experiments demonstrate that our approach matches state-of-the-art driving-specific models on standard end-to-end benchmarks, while achieving significant zero-shot generalization gains on unseen objects and novel sensor configurations. All code, data, and models are publicly released.
📝 Abstract
We study how vision-language models (VLMs) trained on web-scale data can be integrated into end-to-end driving systems to boost generalization and enable interactivity with human users. While recent approaches adapt VLMs to driving via single-round visual question answering (VQA), human drivers reason about decisions in multiple steps: starting from the localization of key objects, they estimate object interactions before taking action. Our key insight is that our proposed task, Graph VQA, which models graph-structured reasoning through perception, prediction, and planning question-answer pairs, provides a suitable proxy for mimicking this human reasoning process. We instantiate datasets (DriveLM-Data) built upon nuScenes and CARLA, and propose a VLM-based baseline approach (DriveLM-Agent) for jointly performing Graph VQA and end-to-end driving. The experiments demonstrate that Graph VQA provides a simple, principled framework for reasoning about a driving scene, and that DriveLM-Data provides a challenging benchmark for this task. Our DriveLM-Agent baseline performs end-to-end autonomous driving competitively with state-of-the-art driving-specific architectures, and its benefits are most pronounced when it is evaluated zero-shot on unseen objects or sensor configurations. We hope this work can serve as a starting point for applying VLMs to autonomous driving. To facilitate future research, all code, data, and models are publicly available.
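To make the graph-structured reasoning concrete, the sketch below shows one way a perception → prediction → planning question-answer chain could be represented, where each node's answer conditions downstream questions. This is a hypothetical illustration, not the actual DriveLM-Data schema; the `QANode` class, field names, and example questions are assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a Graph VQA structure. The real DriveLM schema
# differs; node fields and questions here are illustrative assumptions.
@dataclass
class QANode:
    stage: str                     # "perception", "prediction", or "planning"
    question: str
    answer: str
    parents: list = field(default_factory=list)  # upstream QA nodes

def context_for(node):
    """Collect upstream question-answer pairs that condition this node,
    mirroring how earlier reasoning steps feed later decisions."""
    seen, order = set(), []
    def visit(n):
        for p in n.parents:
            if id(p) not in seen:
                seen.add(id(p))
                visit(p)        # visit ancestors before appending p
                order.append(p)
    visit(node)
    return [(n.question, n.answer) for n in order]

# A minimal perception -> prediction -> planning chain.
perc = QANode("perception", "What objects are ahead?",
              "A pedestrian at the crosswalk.")
pred = QANode("prediction", "Will the pedestrian cross?",
              "Yes, they are stepping off the curb.", [perc])
plan = QANode("planning", "What should the ego vehicle do?",
              "Brake and yield.", [pred])

# The planning question is answered with the full upstream QA context.
for q, a in context_for(plan):
    print(q, "->", a)
```

The point of the graph (rather than a flat list) is that a node may have multiple parents, e.g. a planning question conditioned on several predicted object interactions at once.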