Predicting the Road Ahead: A Knowledge Graph based Foundation Model for Scene Understanding in Autonomous Driving

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Purely end-to-end approaches remain limited in modeling long-term temporal evolution and in generalizing semantically across driving scenarios. This paper proposes FM4SU, a symbolic foundation model that integrates knowledge graphs—encoding road topology, traffic rules, and interactions between traffic participants—with a pre-trained language model (T5). FM4SU first transforms multi-source perception data into an interpretable bird's-eye-view (BEV) symbolic representation, then serializes it into a structured token sequence for the language model. This enables explainable, reasoning-capable modeling of scene evolution and accurate next-scene prediction. By grounding spatio-temporal dynamics in symbolic semantics, FM4SU addresses key bottlenecks in long-range dependency modeling and cross-scenario generalization. On the nuScenes benchmark, the fine-tuned T5 model reaches 86.7% next-scene prediction accuracy, and fine-tuned models substantially outperform their non-fine-tuned counterparts across downstream tasks.

📝 Abstract
The autonomous driving field has seen remarkable advancements in various topics, such as object recognition, trajectory prediction, and motion planning. However, current approaches face limitations in effectively comprehending the complex evolution of driving scenes over time. This paper proposes FM4SU, a novel methodology for training a symbolic foundation model (FM) for scene understanding in autonomous driving. It leverages knowledge graphs (KGs) to capture sensory observations along with domain knowledge such as road topology, traffic rules, and complex interactions between traffic participants. A bird's eye view (BEV) symbolic representation is extracted from the KG for each driving scene, including the spatio-temporal information among the objects across scenes. The BEV representation is serialized into a sequence of tokens and given to pre-trained language models (PLMs), which learn an inherent understanding of the co-occurrence among driving scene elements and generate predictions of the next scenes. We conducted a number of experiments using the nuScenes dataset and KG in various scenarios. The results demonstrate that fine-tuned models achieve significantly higher accuracy in all tasks. The fine-tuned T5 model achieved a next scene prediction accuracy of 86.7%. This paper concludes that FM4SU offers a promising foundation for developing more comprehensive models for scene understanding in autonomous driving.
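The core preprocessing step described in the abstract — extracting a symbolic BEV grid per scene and flattening it into a token sequence for a PLM — can be sketched as follows. The grid size, cell labels, and row-based serialization scheme here are illustrative assumptions, not the paper's exact format.

```python
# Hypothetical sketch: serialize a bird's-eye-view (BEV) symbolic grid
# into a token string for a seq2seq language model. Cell vocabulary
# ("lane", "car", "ego", "free") and row separators are assumptions.

def serialize_bev(grid):
    """Flatten a 2D grid of symbolic cell labels into a single
    space-separated token string, row by row, with row separators."""
    rows = []
    for r, row in enumerate(grid):
        rows.append(f"row{r}: " + " ".join(row))
    return " | ".join(rows)

# A toy 3x3 scene: the ego vehicle, one other car, and lane/free cells.
scene = [
    ["lane", "car",  "lane"],
    ["lane", "ego",  "lane"],
    ["lane", "free", "lane"],
]

tokens = serialize_bev(scene)
print(tokens)
# row0: lane car lane | row1: lane ego lane | row2: lane free lane
```

Row-major serialization keeps spatially adjacent cells close together in the token sequence, which lets the language model pick up co-occurrence patterns among neighboring scene elements.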
Problem

Research questions and friction points this paper is trying to address.

Enhancing scene understanding in autonomous driving using knowledge graphs
Predicting next driving scenes with spatio-temporal BEV representations
Improving accuracy of autonomous driving tasks via fine-tuned foundation models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge graphs capture driving scene dynamics
BEV symbolic representation serialized for PLMs
Fine-tuned T5 model achieves high prediction accuracy
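Framing next-scene prediction as a text-to-text task, as a seq2seq PLM like T5 consumes it, might look like the sketch below. The prompt wording, the `[SCENE]` separator, and the scene strings are illustrative assumptions, not the paper's actual input format.

```python
# Hypothetical sketch: build (input, target) text pairs for fine-tuning
# a seq2seq model on next-scene prediction. The task prefix and the
# "[SCENE]" separator are assumptions for illustration.

def make_example(history, next_scene):
    """Build a (source, target) training pair from a list of serialized
    past scenes and the serialized scene to be predicted."""
    src = "predict next scene: " + " [SCENE] ".join(history)
    tgt = next_scene
    return src, tgt

history = ["lane ego lane", "lane ego car"]
src, tgt = make_example(history, "lane car ego")
print(src)   # predict next scene: lane ego lane [SCENE] lane ego car
print(tgt)   # lane car ego
```

Pairs in this shape can be tokenized and fed directly to an encoder-decoder model's standard fine-tuning loop, with the serialized next scene as the decoding target.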