RLOMM: An Efficient and Robust Online Map Matching Framework with Reinforcement Learning

📅 2025-02-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Online map matching requires efficient, robust, and accurate real-time alignment between streaming trajectories and road networks, yet existing methods struggle to simultaneously satisfy all three criteria. To address this, we propose the first Online Markov Decision Process (OMDP) modeling framework tailored for online scenarios, integrating future-aware reinforcement learning policies with dynamic feedback optimization. We design a heterogeneous graph neural network coupled with recurrent neural networks to capture multi-granularity topological relationships between trajectories and road segments, and introduce contrastive learning to achieve cross-modal representation alignment. Evaluated on three real-world datasets, our method achieves a 12.6% improvement in matching accuracy over state-of-the-art approaches while reducing inference latency by 41%. Moreover, it demonstrates significantly enhanced robustness against noisy, sparse, and anomalous trajectories.

Technology Category

Application Category

📝 Abstract
Online map matching is a fundamental problem in location-based services, aiming to incrementally match trajectory data step-by-step onto a road network. However, existing methods fail to meet the needs for efficiency, robustness, and accuracy required by large-scale online applications, making this task still a challenging problem. This paper introduces a novel framework that achieves high accuracy and efficient matching while ensuring robustness in handling diverse scenarios. To improve efficiency, we begin by modeling the online map matching problem as an Online Markov Decision Process (OMDP) based on its inherent characteristics. This approach helps efficiently merge historical and real-time data, reducing unnecessary calculations. Next, to enhance the model's robustness, we design a reinforcement learning method, enabling robust handling of real-time data from dynamically changing environments. In particular, we propose a novel model learning process and a comprehensive reward function, allowing the model to make reasonable current matches from a future-oriented perspective, and to continuously update and optimize during the decision-making process based on feedback. Lastly, to address the heterogeneity between trajectories and roads, we design distinct graph structures, facilitating efficient representation learning through graph and recurrent neural networks. To further align trajectory and road data, we introduce contrastive learning to decrease their distance in the latent space, thereby promoting effective integration of the two. Extensive evaluations on three real-world datasets confirm that our method significantly outperforms existing state-of-the-art solutions in terms of accuracy, efficiency and robustness.
Problem

Research questions and friction points this paper is trying to address.

Enhance online map matching efficiency
Improve robustness in dynamic environments
Address trajectory-road data heterogeneity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning for robust handling
Online Markov Decision Process modeling
Contrastive learning for data integration
🔎 Similar Papers
No similar papers found.