GraphPilot: Grounded Scene Graph Conditioning for Language-Based Autonomous Driving

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current language-based autonomous driving models lack explicit modeling of spatial structure and dynamic interactions among traffic entities, limiting their reasoning capability in complex scenarios. To address this, we propose a model-agnostic scene graph conditioning method: traffic scene graphs—generated by a graph neural network and aligned with vision-language models—are serialized into multi-granularity structured representations, and relational supervision is incorporated during training via structured prompt templates. This enables the model to internalize spatial and interaction priors without requiring scene graphs at inference time, and the approach supports end-to-end multimodal training. Evaluated on the LangAuto benchmark, it achieves significant performance gains: +15.6% in driving score for LMDrive and +17.5% for BEVDriver, demonstrating both effectiveness and generalizability across diverse language-driven driving architectures.

📝 Abstract
Vision-language models have recently emerged as promising planners for autonomous driving, where success hinges on topology-aware reasoning over spatial structure and dynamic interactions from multimodal input. However, existing models are typically trained without supervision that explicitly encodes these relational dependencies, limiting their ability to infer how agents and other traffic entities influence one another from raw sensor data. In this work, we bridge this gap with a novel model-agnostic method that conditions language-based driving models on structured relational context in the form of traffic scene graphs. We serialize scene graphs at various abstraction levels and formats, and incorporate them into the models via structured prompt templates, enabling a systematic analysis of when and how relational supervision is most beneficial. Extensive evaluations on the public LangAuto benchmark show that scene graph conditioning of state-of-the-art approaches yields large and persistent improvement in driving performance. Notably, we observe up to a 15.6% increase in driving score for LMDrive and 17.5% for BEVDriver, indicating that models can better internalize and ground relational priors through scene graph-conditioned training, even without requiring scene graph input at test-time. Code, fine-tuned models, and our scene graph dataset are publicly available at https://github.com/iis-esslingen/GraphPilot.
Problem

Research questions and friction points this paper is trying to address.

Improving autonomous driving models' spatial reasoning with scene graphs
Enhancing relational understanding between traffic agents from sensor data
Bridging gap in topology-aware reasoning for language-based driving systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conditions driving models on traffic scene graphs
Serializes scene graphs at multiple abstraction levels
Uses structured prompt templates for relational supervision
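The serialization and prompting steps above can be sketched as follows. This is a minimal illustration of the general idea of turning scene graph triples into a text prompt at different granularities; the entity names, relation labels, and template format are assumptions for illustration, not the paper's actual schema.

```python
def serialize_scene_graph(nodes, edges, granularity="coarse"):
    """Render (subject, relation, object) triples as prompt lines.

    nodes: dict of id -> {"type": str, "dist_m": float}
    edges: list of (subject_id, relation, object_id) triples
    granularity: "coarse" emits types only; "fine" adds metric attributes
    (both level names are illustrative, not the paper's terminology).
    """
    lines = []
    for subj, rel, obj in edges:
        s, o = nodes[subj], nodes[obj]
        relation = rel.replace("_", " ")
        if granularity == "coarse":
            lines.append(f"{s['type']} {relation} {o['type']}")
        else:  # "fine": attach distances as an example of a richer level
            lines.append(
                f"{s['type']} ({s['dist_m']:.0f} m) {relation} "
                f"{o['type']} ({o['dist_m']:.0f} m)"
            )
    return "\n".join(lines)


def build_prompt(instruction, graph_text):
    """Embed the serialized graph in a structured prompt template."""
    return (
        "Scene relations:\n"
        f"{graph_text}\n"
        f"Instruction: {instruction}\n"
        "Plan the next driving action."
    )


# Toy scene: a pedestrian crossing in front of the ego vehicle.
nodes = {
    0: {"type": "ego vehicle", "dist_m": 0.0},
    1: {"type": "pedestrian", "dist_m": 8.0},
    2: {"type": "traffic light", "dist_m": 15.0},
}
edges = [(1, "crossing_in_front_of", 0), (2, "controls", 0)]

print(build_prompt(
    "turn left at the intersection",
    serialize_scene_graph(nodes, edges, granularity="fine"),
))
```

During training, prompts built this way would carry the relational supervision; at test time the model can be queried with the instruction alone, matching the paper's finding that the priors are internalized.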