Learning Multi-Modal Mobility Dynamics for Generalized Next Location Recommendation

๐Ÿ“… 2025-12-27
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿค– AI Summary
Existing next-location recommendation methods suffer from limited generalization: unimodal models are constrained by data sparsity and bias, while multimodal models struggle to bridge the semantic gap between static modality representations and dynamic spatiotemporal mobility patterns. To address this, we propose a Unified Spatiotemporal Relational Graph (STRG) to jointly model multimodal dynamics, and design an STKG-guided cross-modal alignment and gated fusion mechanism that bridges a large language modelโ€“enhanced Spatiotemporal Knowledge Graph (STKG) with real-world mobility trajectories. Our core innovation lies in unifying multimodal semantics, spatiotemporal evolution, and knowledge priors within a single coherent framework. Extensive experiments on six public benchmarks demonstrate significant improvements over state-of-the-art methods: consistent performance gains under normal scenarios and substantially enhanced generalization capability under anomalous conditions.

๐Ÿ“ Abstract
The precise prediction of human mobility has produced significant socioeconomic impacts, such as location recommendations and evacuation suggestions. However, existing methods suffer from limited generalization capability: unimodal approaches are constrained by data sparsity and inherent biases, while multi-modal methods struggle to capture mobility dynamics due to the semantic gap between static multi-modal representations and spatial-temporal dynamics. Therefore, we leverage multi-modal spatial-temporal knowledge to characterize mobility dynamics for the location recommendation task, dubbed Multi-Modal Mobility (M³ob). First, we construct a unified spatial-temporal relational graph (STRG) for multi-modal representation, by leveraging the functional semantics and spatial-temporal knowledge captured by the large language model (LLM)-enhanced spatial-temporal knowledge graph (STKG). Second, we design a gating mechanism to fuse spatial-temporal graph representations of different modalities, and propose an STKG-guided cross-modal alignment to inject spatial-temporal dynamic knowledge into the static image modality. Extensive experiments on six public datasets show that our proposed method not only achieves consistent improvements in normal scenarios but also exhibits significant generalization ability in abnormal scenarios.
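The STKG-guided cross-modal alignment described in the abstract can be sketched, under heavy simplification, as a loss that pulls each static image embedding toward its spatial-temporal knowledge embedding. The vectors, function names, and the 1 − cosine-similarity form below are illustrative assumptions, not the paper's actual objective.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def alignment_loss(image_embs, stkg_embs):
    """Average (1 - cosine) distance between paired image and STKG
    embeddings; minimizing it injects dynamic knowledge into the
    static image modality. Toy stand-in for the paper's objective."""
    losses = [1.0 - cosine(img, kg) for img, kg in zip(image_embs, stkg_embs)]
    return sum(losses) / len(losses)

# Hypothetical 2-D embeddings for two locations
imgs = [[1.0, 0.0], [0.0, 1.0]]
stkg = [[1.0, 0.0], [1.0, 0.0]]
print(alignment_loss(imgs, stkg))  # first pair aligned, second orthogonal
```

Perfectly aligned pairs contribute 0 to the loss; orthogonal pairs contribute 1, so gradient descent on this quantity rotates image embeddings toward their knowledge-graph counterparts.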
Problem

Research questions and friction points this paper is trying to address.

Addresses limited generalization in mobility prediction methods
Integrates multi-modal spatial-temporal knowledge for location recommendation
Enhances cross-modal alignment to capture mobility dynamics effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constructs unified spatial-temporal graph using LLM-enhanced knowledge
Fuses multi-modal representations via gating mechanism and cross-modal alignment
Injects spatial-temporal dynamics into static image modality for generalization
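The gated fusion named in the bullets above could, as a minimal sketch, be a sigmoid gate that convexly mixes two modality embeddings. The weights, dimensions, and function names below are hypothetical placeholders, not values from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fuse(h_text, h_image, w_text, w_image, bias):
    """Scalar-gated fusion of two modality embeddings:
        g = sigmoid(w_text . h_text + w_image . h_image + bias)
        fused = g * h_text + (1 - g) * h_image
    All weights are illustrative placeholders, not learned values."""
    score = (sum(w * h for w, h in zip(w_text, h_text))
             + sum(w * h for w, h in zip(w_image, h_image)) + bias)
    g = sigmoid(score)
    return [g * t + (1.0 - g) * i for t, i in zip(h_text, h_image)]

# Toy 3-D embeddings for one location (hypothetical values)
fused = gated_fuse(h_text=[1.0, 0.0, -1.0],
                   h_image=[0.0, 1.0, 1.0],
                   w_text=[0.5, 0.5, 0.5],
                   w_image=[0.1, 0.1, 0.1],
                   bias=0.0)
print(fused)
```

A gate near 1 trusts the first modality, a gate near 0 the second, letting the model weight modalities per location instead of concatenating them blindly; real implementations typically use a per-dimension (vector) gate rather than this scalar one.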
Junshu Dai
Zhejiang University
Yu Wang
Zhejiang University
Tongya Zheng
Assistant Professor, Hangzhou City University
graph neural networks, temporal networks, spatiotemporal learning, trajectory simulation
Wei Ji
Nanjing University
Qinghong Guo
Zhejiang University
Ji Cao
Zhejiang University
Jie Song
Zhejiang University
Canghong Jin
Hangzhou City University
Data Mining, Big Data
Mingli Song
Zhejiang University