🤖 AI Summary
Existing approaches typically target isolated tasks, limiting their capacity to comprehensively model human mobility and resulting in poor generalizability. To address this, we propose GDHME (General-purpose and Dynamic Human Mobility Embedding), the first framework to integrate continuous-time dynamic graph encoding with autoregressive self-supervised learning, jointly characterizing fine-grained interactions among individuals, geographic regions, and time. Leveraging real-world cellular trajectory data, GDHME employs continuous-time graph neural networks and dynamic graph representation learning to enable cross-task knowledge transfer and uncover latent semantic patterns. Offline experiments demonstrate its ability to automatically learn discriminative node representations. GDHME has been integrated into the JiuTian ChuanLiu large-scale foundation model and was showcased at the 2023 China Mobile Worldwide Partner Conference, validating its effectiveness and broad applicability across multi-city urban sensing tasks.
📄 Abstract
As a window for urban sensing, human mobility contains rich spatiotemporal information that reflects both residents' behavior preferences and the functions of urban areas. The analysis of human mobility has attracted the attention of many researchers. However, existing methods often address specific tasks from a particular perspective, leading to insufficient modeling of human mobility and limited applicability of the learned knowledge to various downstream applications. To address these challenges, this paper proposes to feed massive amounts of human mobility data into a spatiotemporal model, discover the latent semantics behind mobility behavior, and support various urban sensing tasks. Specifically, a large-scale, wide-coverage human mobility dataset is collected through the ubiquitous base station system, and a framework named General-purpose and Dynamic Human Mobility Embedding (GDHME) for urban sensing is introduced. The framework follows the self-supervised learning paradigm and contains two major stages. In stage 1, GDHME treats people and regions as nodes within a dynamic graph, unifying human mobility data as people-region-time interactions. An encoder operating in continuous time dynamically computes evolving node representations, capturing the dynamic states of both people and regions. Moreover, an autoregressive self-supervised task is specially designed to guide the learning of general-purpose node embeddings. In stage 2, these representations are used to support various downstream tasks. To evaluate the effectiveness of GDHME, we further construct a multi-task urban sensing benchmark. Offline experiments demonstrate GDHME's ability to automatically learn valuable node features from vast amounts of data. Furthermore, our framework is used to deploy the JiuTian ChuanLiu Big Model, a system that was presented at the 2023 China Mobile Worldwide Partner Conference.
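The stage-1 idea above, treating mobility as a stream of people-region-time interaction events whose node states decay and update in continuous time, can be sketched in a toy form. Everything below (the event stream, the exponential time decay, the mixing update, and the dot-product score) is an illustrative stand-in for the paper's learned encoder and autoregressive objective, not the actual GDHME model:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Hypothetical toy event stream: (person, region, timestamp) interactions,
# mirroring the view of mobility as people-region-time edges in a dynamic graph.
events = [("u1", "r1", 0.0), ("u1", "r2", 1.5), ("u2", "r1", 2.0), ("u1", "r1", 3.0)]

emb = {}        # current embedding per node (person or region)
last_seen = {}  # timestamp of each node's most recent update

def get(node):
    """Lazily initialize a node's embedding on first appearance."""
    if node not in emb:
        emb[node] = rng.normal(scale=0.1, size=DIM)
        last_seen[node] = 0.0
    return emb[node]

def update(node, message, t, decay=0.1):
    """Continuous-time update: decay the stale state by the elapsed time,
    then mix in the interaction message (a stand-in for a learned memory cell)."""
    h = get(node) * np.exp(-decay * (t - last_seen[node]))
    emb[node] = 0.5 * h + 0.5 * message
    last_seen[node] = t

# Process the event stream in time order; each interaction updates both endpoints,
# so person and region states co-evolve.
for person, region, t in events:
    p, r = get(person).copy(), get(region).copy()
    update(person, r, t)   # the person's state absorbs the visited region
    update(region, p, t)   # the region's state absorbs its visitor

def score(person, region):
    """Autoregressive self-supervised signal: how well the person's current
    embedding predicts a candidate next region (higher = more likely)."""
    return float(get(person) @ get(region))
```

In a trained system, the mixing step would be a learned network and the scores would feed a next-interaction prediction loss; the resulting `emb` vectors are the general-purpose representations consumed by stage-2 downstream tasks.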