Driving with Context: Online Map Matching for Complex Roads Using Lane Markings and Scenario Recognition

📅 2025-05-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address poor robustness and frequent misalignment in online map matching for complex multi-lane road scenarios, this paper proposes an online Standard Definition (SD) map matching method jointly leveraging lane marking detection and driving scene recognition. We innovatively unify dynamic lane tracking and scene classification into a single probabilistic framework, constructing a lane-augmented SD map. Within a Hidden Markov Model (HMM), we introduce two novel probabilistic factors—lane association probability and scene emission probability—to enable high-accuracy, real-time vehicle localization. The method integrates multi-lane tracking, ICP-based point cloud registration, and a lightweight deep learning model for scene recognition. Evaluated on the Zenseact Open Dataset and a Shanghai field-test dataset, our approach achieves F1 scores of 98.04% and 94.60%, respectively—outperforming state-of-the-art methods. The implementation is publicly available.

📝 Abstract
Accurate online map matching is fundamental to vehicle navigation and the activation of intelligent driving functions. Current online map matching methods are prone to errors in complex road networks, especially in multilevel road areas. To address this challenge, we propose an online Standard Definition (SD) map matching method that constructs a Hidden Markov Model (HMM) with multiple probability factors. Our proposed method achieves accurate map matching even in complex road networks by carefully leveraging lane markings and scenario recognition in the design of the probability factors. First, the lane markings are generated by a multi-lane tracking method and associated with the SD map using the HMM to build an enriched SD map. In areas covered by the enriched SD map, the vehicle can re-localize itself by performing Iterative Closest Point (ICP) registration on the lane markings. The probability factor accounting for lane marking detection is then obtained from the association probability between adjacent lanes and roads. Second, a driving scenario recognition model generates the emission probability factor for scenario recognition, which improves map matching performance on elevated roads and the ordinary urban roads underneath them. We validate our method through extensive road tests in Europe and China, and the experimental results show that our proposed method effectively improves online map matching accuracy compared to other existing methods, especially in multilevel road areas. Specifically, the experiments show that our proposed method achieves $F_1$ scores of 98.04% and 94.60% on the Zenseact Open Dataset and on test data from multilevel road areas in Shanghai, respectively, significantly outperforming benchmark methods. The implementation is available at https://github.com/TRV-Lab/LMSR-OMM.
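The abstract describes an HMM whose emission probability combines several factors (position, lane association, scenario recognition), decoded online to pick the most likely road segment. As a rough illustration of that idea only (not the paper's implementation; all segment names, probabilities, and the multiplicative factor combination below are invented for the example), a log-domain Viterbi decoder might look like:

```python
import math

def viterbi_map_match(candidates, emission, transition):
    """Most likely road-segment sequence for a trace (log-domain Viterbi).

    candidates[t]: candidate segment ids at time step t
    emission(t, r): combined emission probability of segment r at step t
    transition(p, r): transition probability from segment p to segment r
    """
    V = [{r: math.log(emission(0, r)) for r in candidates[0]}]
    back = [{}]
    for t in range(1, len(candidates)):
        V.append({})
        back.append({})
        for r in candidates[t]:
            # best predecessor under the transition model
            p_best = max(candidates[t - 1],
                         key=lambda p: V[t - 1][p] + math.log(transition(p, r)))
            V[t][r] = (V[t - 1][p_best] + math.log(transition(p_best, r))
                       + math.log(emission(t, r)))
            back[t][r] = p_best
    # backtrack from the best final segment
    r = max(V[-1], key=V[-1].get)
    path = [r]
    for t in range(len(candidates) - 1, 0, -1):
        r = back[t][r]
        path.append(r)
    return path[::-1]

# Toy scenario: two overlapping candidates, an elevated road "A" above a
# surface road "B". Position alone cannot separate them; a scene factor can.
cands = [["A", "B"]] * 3
gps = {"A": 0.5, "B": 0.5}      # identical positional likelihood (assumed)
scene = {"A": 0.9, "B": 0.1}    # scene classifier favours the elevated road
emission = lambda t, r: gps[r] * scene[r]
transition = lambda p, r: 0.8 if p == r else 0.2
path = viterbi_map_match(cands, emission, transition)
# path is ["A", "A", "A"]: the scene factor resolves the ambiguity
```

The point of the toy example is the disambiguation mechanism: where two stacked roads have near-identical positional likelihoods, an extra emission factor (here a made-up scene score) breaks the tie.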
Problem

Research questions and friction points this paper is trying to address.

Improving online map matching accuracy in complex road networks
Utilizing lane markings and scenario recognition for precise localization
Enhancing performance in multilevel road areas with HMM
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Hidden Markov Model with multiple probability factors
Leverages lane markings for enriched SD map construction
Applies scenario recognition for elevated road differentiation
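The re-localization step mentioned in the abstract registers detected lane markings against the enriched SD map with ICP. A toy 2D point-to-point ICP sketch (illustrative only; the lane polyline and offsets below are made up, and the paper's registration details are not reproduced here):

```python
import math

def icp_2d(source, target, iters=10):
    """Toy 2D point-to-point ICP: rigidly align `source` onto `target`."""
    src = [tuple(p) for p in source]
    for _ in range(iters):
        # 1. nearest-neighbour correspondences
        pairs = [(p, min(target,
                         key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2))
                 for p in src]
        n = len(pairs)
        # 2. centroids of the two matched sets
        mx = sum(p[0] for p, _ in pairs) / n
        my = sum(p[1] for p, _ in pairs) / n
        qx = sum(q[0] for _, q in pairs) / n
        qy = sum(q[1] for _, q in pairs) / n
        # 3. closed-form 2D rotation: atan2 of the cross/dot sums
        s = sum((p[0] - mx) * (q[1] - qy) - (p[1] - my) * (q[0] - qx)
                for p, q in pairs)
        c = sum((p[0] - mx) * (q[0] - qx) + (p[1] - my) * (q[1] - qy)
                for p, q in pairs)
        th = math.atan2(s, c)
        # 4. apply the incremental rigid transform to the source points
        src = [(math.cos(th) * (x - mx) - math.sin(th) * (y - my) + qx,
                math.sin(th) * (x - mx) + math.cos(th) * (y - my) + qy)
               for x, y in src]
    return src

# a lane-marking polyline offset by a small translation snaps back on
lane = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.5)]
shifted = [(x + 0.3, y - 0.2) for x, y in lane]
aligned = icp_2d(shifted, lane)
```

In this sketch the map-side lane geometry plays the role of `target` and the vehicle's detected markings are `source`; the recovered rigid transform is what corrects the vehicle pose.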
Xin Bi
School of Automotive Studies, Tongji University, Shanghai 201804, China
Zhichao Li
School of Automotive Studies, Tongji University, Shanghai 201804, China
Yuxuan Xia
Researcher at Shanghai Jiao Tong University
Sensor fusion · Multiple object tracking · SLAM
Panpan Tong
School of Automotive Studies, Tongji University, Shanghai 201804, China
Lijuan Zhang
Zenseact, Lindholmspiren 2, Gothenburg 41756, Sweden
Yang Chen
Zenseact, Lindholmspiren 2, Gothenburg 41756, Sweden
Junsheng Fu
Technical Expert, Zenseact
Autonomous Driving · Computer Vision · Sensor Fusion