🤖 AI Summary
Predicting individuals' next locations generalises poorly to unseen locations, as conventional methods rely solely on historical trajectories and fail to explicitly model spatial structure and urban semantics. To address this, we apply CaLLiPer -- a multimodal representation learning framework that fuses geographic coordinates and point-of-interest (POI) semantics through contrastive learning -- to location embedding for individual mobility prediction. The resulting location representations are spatially explicit, semantically enriched, and inductive by design, so they remain effective for newly emerging locations. Evaluated on four public mobility datasets under both conventional and inductive settings, CaLLiPer-based models consistently outperform strong baselines, with the largest gains in inductive scenarios. All code and data are publicly released to ensure reproducibility and facilitate future research.
📝 Abstract
Predicting individuals' next locations is a core task in human mobility modelling, with wide-ranging implications for urban planning, transportation, public policy and personalised mobility services. Traditional approaches largely depend on location embeddings learned from historical mobility patterns, limiting their ability to encode explicit spatial information, integrate rich urban semantic context, and accommodate previously unseen locations. To address these challenges, we explore the application of CaLLiPer -- a multimodal representation learning framework that fuses spatial coordinates and semantic features of points of interest through contrastive learning -- for location embedding in individual mobility prediction. CaLLiPer's embeddings are spatially explicit, semantically enriched, and inductive by design, enabling robust prediction performance even in scenarios involving emerging locations. Through extensive experiments on four public mobility datasets under both conventional and inductive settings, we demonstrate that CaLLiPer consistently outperforms strong baselines, particularly excelling in inductive scenarios. Our findings highlight the potential of multimodal, inductive location embeddings to advance the capabilities of human mobility prediction systems. We also release the code and data (https://github.com/xlwang233/Into-the-Unknown) to foster reproducibility and future research.
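The abstract describes contrastive alignment of spatial coordinates with POI semantic features. As a rough illustration only, the sketch below pairs a sinusoidal coordinate encoder with toy POI-semantic vectors under an InfoNCE-style objective; the function names, dimensions, and encoder choice are assumptions for illustration, not CaLLiPer's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(coords, dim=8, scale=1e4):
    # One hedged choice of spatially explicit location encoder:
    # sinusoidal features of (x, y) coordinates at multiple frequencies.
    freqs = scale ** (-np.arange(dim // 4) / (dim // 4))
    angles = coords[:, :, None] * freqs           # (N, 2, dim//4)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(len(coords), -1)           # (N, dim)

def info_nce(z_loc, z_sem, tau=0.1):
    # Contrastive (InfoNCE) loss aligning each location's spatial
    # embedding with its own POI-semantic embedding, against all
    # other locations in the batch as negatives.
    z_loc = z_loc / np.linalg.norm(z_loc, axis=1, keepdims=True)
    z_sem = z_sem / np.linalg.norm(z_sem, axis=1, keepdims=True)
    logits = z_loc @ z_sem.T / tau                # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # positives on the diagonal

# Toy batch: 4 locations with 2-D coordinates and stand-in 8-D
# "POI semantic" vectors (in practice these would come from a
# text or category encoder over nearby POIs).
coords = rng.uniform(0, 1000, size=(4, 2))
z_sem = rng.normal(size=(4, 8))
z_loc = positional_encoding(coords, dim=8)
loss = info_nce(z_loc, z_sem)
print(float(loss))
```

Because the coordinate encoder needs no training data for a new location, an embedding for a previously unseen place can be produced directly from its coordinates and surrounding POIs, which is what makes this kind of representation inductive.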