🤖 AI Summary
To address insufficient trajectory realism caused by semantic distortion in synthetic trajectories and the absence of geographic constraints, this paper proposes a geospatially aware autoregressive generation framework built upon the GPT architecture. The method integrates gravity-guided sampling and an explicit road connectivity matrix to model geographic accessibility, and introduces a Reinforcement Learning from Trajectory Feedback (RLTF) fine-tuning mechanism to ensure semantic coherence, geographic compliance, and controllable trajectory generation. Evaluated on multiple real-world datasets, the approach consistently outperforms state-of-the-art methods across key metrics, including origin-destination (OD) similarity, trip length distribution, activity radius, link-level spatial distribution, and adherence to the gravity law, demonstrating substantial improvements in the spatial plausibility and real-world consistency of generated mobility trajectories.
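The gravity-guided sampling idea mentioned above can be illustrated with a minimal sketch: origin-destination pairs are drawn with probability following the gravity law, i.e. proportional to the product of two regions' "masses" (e.g. visit counts) divided by a power of their distance. All names and parameters here (`pops`, `coords`, `beta`) are illustrative assumptions, not the paper's actual implementation.

```python
import math
import random

def gravity_sample_od(pops, coords, n_trips, beta=2.0, seed=0):
    """Sample origin-destination region pairs with probability
    P(i -> j) proportional to pops[i] * pops[j] / dist(i, j) ** beta.

    pops:   per-region masses, e.g. visit counts (illustrative input)
    coords: per-region centroid coordinates as (x, y) tuples
    """
    rng = random.Random(seed)
    pairs, weights = [], []
    for i, (pi, ci) in enumerate(zip(pops, coords)):
        for j, (pj, cj) in enumerate(zip(pops, coords)):
            if i == j:
                continue  # forbid zero-length trips
            d = math.dist(ci, cj)
            pairs.append((i, j))
            weights.append(pi * pj / d ** beta)
    # weighted sampling of OD pairs according to the gravity weights
    return rng.choices(pairs, weights=weights, k=n_trips)
```

Trajectories whose endpoints are sampled this way reproduce the heavy concentration of trips between large, nearby regions that real mobility data exhibits.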
📝 Abstract
Generative models have shown promising results in capturing human mobility characteristics and generating synthetic trajectories. However, it remains challenging to ensure that the generated geospatial mobility data are semantically realistic (e.g., form consistent location sequences) and reflect real-world characteristics such as geospatial constraints. To address these issues, we reformulate human mobility modeling as an autoregressive generation task, leveraging the Generative Pre-trained Transformer (GPT) architecture. To make generation controllable and alleviate the above challenges, we propose MobilityGPT, a geospatially aware generative model. We introduce a gravity-based sampling method to train the transformer for semantic sequence similarity. We then constrain the training process with a road connectivity matrix that encodes which link sequences are traversable, thereby keeping generated trajectories within geospatial limits. Lastly, we construct a preference dataset for fine-tuning MobilityGPT via a Reinforcement Learning from Trajectory Feedback (RLTF) mechanism, which minimizes the travel-distance discrepancy between training and synthetically generated trajectories. Experiments on real-world datasets demonstrate MobilityGPT's superior performance over state-of-the-art methods in generating high-quality mobility trajectories that are closest to real data in terms of origin-destination similarity, trip length, travel radius, link distribution, and gravity distribution.
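The road-connectivity constraint described in the abstract can be sketched as a masked decoding step: at each position, the model's scores for candidate next links are masked so that only links reachable from the current one keep nonzero probability. The function below is a hedged illustration, assuming the transformer yields a list of per-link `logits` and that `adjacency` is a 0/1 road connectivity matrix with at least one outgoing neighbor per link; none of these names come from the paper's code.

```python
import math
import random

def masked_next_link(logits, current_link, adjacency, temperature=1.0, seed=0):
    """One autoregressive decoding step: sample the next road link,
    with the connectivity matrix masking out unreachable links.

    logits:       per-link scores from the model (illustrative stand-in)
    adjacency:    adjacency[i][j] == 1 iff link j is reachable from link i
                  (each row is assumed to have at least one 1)
    """
    rng = random.Random(seed)
    # set scores of unreachable links to -inf so they get zero probability
    masked = [
        score / temperature if adjacency[current_link][j] else -math.inf
        for j, score in enumerate(logits)
    ]
    # numerically stable softmax over the masked scores
    m = max(masked)
    exps = [math.exp(x - m) for x in masked]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]
```

Because unreachable links receive exactly zero probability, every generated trajectory is guaranteed to follow the road graph, regardless of what the unconstrained model would have preferred.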