From Static to Dynamic: a Survey of Topology-Aware Perception in Autonomous Driving

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional static high-definition maps suffer from high construction costs, infrequent updates, and poor generalization, which severely limits the scalability of autonomous driving systems. To address these limitations, this paper proposes a topology-aware dynamic environment understanding framework. The framework integrates vectorized map construction, topological graph neural network modeling, multimodal large language model-guided semantic reasoning, and real-time sensor fusion to achieve compact, interpretable representations of lane geometry and road semantics, along with relational reasoning over structural and semantic elements. Compared to static-map paradigms, the approach significantly reduces perception latency (enabling millisecond-level map updates) and improves cross-region generalization and online adaptability. Experimental results demonstrate superior performance over state-of-the-art methods in dynamic scene modeling accuracy, map lightweighting (compression ratio above 60%), and semantic consistency. This work establishes a novel paradigm for scalable, adaptive, and interpretable autonomous driving perception, supported by both theoretical foundations and empirical validation.

📝 Abstract
The key to achieving autonomous driving lies in topology-aware perception, the structured understanding of the driving environment with an emphasis on lane topology and road semantics. This survey systematically reviews four core research directions under this theme: vectorized map construction, topological structure modeling, prior knowledge fusion, and language model-based perception. Across these directions, we observe a unifying trend: a paradigm shift from static, pre-built maps to dynamic, sensor-driven perception. Specifically, traditional static maps have provided semantic context for autonomous systems. However, they are costly to construct, difficult to update in real time, and lack generalization across regions, limiting their scalability. In contrast, dynamic representations leverage on-board sensor data for real-time map construction and topology reasoning. Each of the four research directions contributes to this shift through compact spatial modeling, semantic relational reasoning, robust domain knowledge integration, and multimodal scene understanding powered by pre-trained language models. Together, they pave the way for more adaptive, scalable, and explainable autonomous driving systems.
Problem

Research questions and friction points this paper is trying to address.

Shifting from static maps to dynamic sensor-driven perception
Overcoming costly static map limitations through real-time topology reasoning
Enhancing autonomous driving with adaptive and scalable environmental understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic sensor-driven perception replaces static maps
Compact spatial modeling with semantic relational reasoning
Multimodal scene understanding using pre-trained language models
👥 Authors
Yixiao Chen
Ruining Yang (PhD Student, Northeastern University; deep learning, trajectory prediction, data efficiency)
Xin Chen (Shandong University)
Jia He (Shandong University)
Dongliang Xu (Shandong University)
Yue Yao (Shandong University)