AI Summary
Existing next-location recommendation methods suffer from limited generalization: unimodal models are constrained by data sparsity and bias, while multimodal models struggle to bridge the semantic gap between static modality representations and dynamic spatiotemporal mobility patterns. To address this, we propose a Unified Spatiotemporal Relational Graph (STRG) to jointly model multimodal dynamics, and design an STKG-guided cross-modal alignment and gated fusion mechanism that bridges a large language model-enhanced Spatiotemporal Knowledge Graph (STKG) with real-world mobility trajectories. Our core innovation lies in unifying multimodal semantics, spatiotemporal evolution, and knowledge priors within a single coherent framework. Extensive experiments on six public benchmarks demonstrate significant improvements over state-of-the-art methods: consistent performance gains under normal scenarios and substantially enhanced generalization capability under anomalous conditions.
Abstract
The precise prediction of human mobility has significant socioeconomic impact, enabling applications such as location recommendation and evacuation guidance. However, existing methods suffer from limited generalization capability: unimodal approaches are constrained by data sparsity and inherent biases, while multi-modal methods struggle to capture mobility dynamics due to the semantic gap between static multi-modal representations and spatial-temporal dynamics. Therefore, we leverage multi-modal spatial-temporal knowledge to characterize mobility dynamics for the location recommendation task, dubbed Multi-Modal Mobility (M$^3$ob). First, we construct a unified spatial-temporal relational graph (STRG) for multi-modal representation, leveraging the functional semantics and spatial-temporal knowledge captured by a large language model (LLM)-enhanced spatial-temporal knowledge graph (STKG). Second, we design a gating mechanism to fuse the spatial-temporal graph representations of different modalities, and propose an STKG-guided cross-modal alignment that injects spatial-temporal dynamic knowledge into the static image modality. Extensive experiments on six public datasets show that our method not only achieves consistent improvements in normal scenarios but also exhibits significantly better generalization in abnormal scenarios.
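The abstract names two mechanisms, a gating mechanism that fuses per-modality graph representations and an STKG-guided cross-modal alignment, without giving their formulations. Below is a minimal PyTorch sketch of how such components are commonly realized; the module names, dimensions, and the InfoNCE-style alignment loss are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of the two mechanisms named in the abstract:
# (1) a gate that fuses per-modality spatial-temporal graph embeddings, and
# (2) an STKG-guided contrastive alignment that pulls static image embeddings
#     toward dynamic STKG embeddings of the same location.
# All names, dimensions, and loss choices are assumptions, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedFusion(nn.Module):
    """Fuse two modality embeddings with a learned per-dimension gate."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h_text: torch.Tensor, h_image: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([h_text, h_image], dim=-1)))
        return g * h_text + (1.0 - g) * h_image  # convex combination per dimension

def stkg_alignment_loss(h_image: torch.Tensor,
                        h_stkg: torch.Tensor,
                        temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss aligning image and STKG embeddings,
    where row i of each tensor refers to the same location."""
    h_image = F.normalize(h_image, dim=-1)
    h_stkg = F.normalize(h_stkg, dim=-1)
    logits = h_image @ h_stkg.t() / temperature   # (N, N) cosine similarities
    targets = torch.arange(h_image.size(0))       # positive pairs on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: 8 locations, 64-dimensional embeddings per modality.
fusion = GatedFusion(dim=64)
h_text, h_image, h_stkg = (torch.randn(8, 64) for _ in range(3))
fused = fusion(h_text, h_image)             # (8, 64) fused representation
loss = stkg_alignment_loss(h_image, h_stkg)
```

In this reading, the sigmoid gate produces a per-dimension convex combination of the two modality embeddings, and the alignment loss treats the STKG embedding of the same location as the positive pair, which is one standard way to transfer dynamic spatial-temporal knowledge into a static modality.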