TrajGPT-R: Generating Urban Mobility Trajectory with Reinforcement Learning-Enhanced Generative Pre-trained Transformer

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limited access to real-world urban mobility trajectories imposed by privacy constraints, which hinders dynamic urban modeling and planning. The authors formulate trajectory generation as an offline reinforcement learning problem and propose a two-stage approach: first, inverse reinforcement learning (IRL) infers individual movement preferences from historical data by learning a reward function; then, a pre-trained generative Transformer is fine-tuned with this reward function to improve the long-term consistency and diversity of synthetic trajectories. By integrating IRL with a pre-trained Transformer, alongside trajectory tokenization and vocabulary compression, the method mitigates the sparse-reward and long-horizon credit-assignment issues that plague RL-based autoregressive generation. Evaluated on multiple real-world datasets, the approach significantly outperforms existing models in both realism and diversity of generated trajectories, providing high-quality synthetic data for traffic simulation and urban planning.
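The tokenization-with-vocabulary-compression step mentioned above can be pictured with a minimal sketch: only (grid cell, time slot) pairs that actually occur in the data receive token ids, shrinking the vocabulary the Transformer must model. All function names and the grid/slot encoding here are illustrative assumptions, not the paper's implementation.

```python
# Sketch of trajectory tokenization with vocabulary compression:
# assign dense token ids only to observed (cell, slot) pairs,
# rather than enumerating the full cell x slot product space.

def build_vocab(trajectories):
    """Map each observed (cell, slot) pair to a dense token id."""
    vocab = {}
    for traj in trajectories:
        for step in traj:
            if step not in vocab:
                vocab[step] = len(vocab)
    return vocab

def tokenize(traj, vocab):
    """Convert a trajectory of (cell, slot) steps into token ids."""
    return [vocab[step] for step in traj]

# Two toy trajectories over a spatial grid: (cell_id, hour_slot) steps.
trajs = [
    [(12, 8), (45, 9), (45, 12), (12, 18)],
    [(12, 8), (99, 10), (45, 12)],
]
vocab = build_vocab(trajs)
tokens = [tokenize(t, vocab) for t in trajs]
```

With, say, a 100-cell grid and 24 hourly slots the naive vocabulary would hold 2,400 tokens, but only the 5 pairs seen in the data are assigned ids here, which is the compression effect the summary refers to.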

📝 Abstract
Mobility trajectories are essential for understanding urban dynamics and enhancing urban planning, yet access to such data is frequently hindered by privacy concerns. This research introduces a transformative framework for generating large-scale urban mobility trajectories, employing a novel application of a transformer-based model pre-trained and fine-tuned through a two-phase process. Initially, trajectory generation is conceptualized as an offline reinforcement learning (RL) problem, with a significant reduction in vocabulary space achieved during tokenization. The integration of Inverse Reinforcement Learning (IRL) allows for the capture of trajectory-wise reward signals, leveraging historical data to infer individual mobility preferences. Subsequently, the pre-trained model is fine-tuned using the constructed reward model, effectively addressing the challenges inherent in traditional RL-based autoregressive methods, such as long-term credit assignment and handling of sparse reward environments. Comprehensive evaluations on multiple datasets illustrate that our framework markedly surpasses existing models in terms of reliability and diversity. Our findings not only advance the field of urban mobility modeling but also provide a robust methodology for simulating urban data, with significant implications for traffic management and urban development planning. The implementation is publicly available at https://github.com/Wangjw6/TrajGPT_R.
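One way to read the fine-tuning stage described in the abstract is as a REINFORCE-style update in which a trajectory-wise reward from the IRL-learned reward model is broadcast to every generated token, offset by a batch baseline, so that the end-of-sequence reward is not sparse at the token level. The sketch below mocks the reward model with a stand-in function; the real reward is learned from historical data, and these names are assumptions for illustration only.

```python
# Sketch of trajectory-wise reward broadcasting for fine-tuning.
# mock_irl_reward stands in for the IRL-learned reward model.

def mock_irl_reward(token_traj):
    """Stand-in reward: favors diverse, non-repetitive trajectories."""
    return len(set(token_traj)) / max(len(token_traj), 1)

def token_weights(batch):
    """Per-token REINFORCE weights: trajectory reward minus baseline."""
    rewards = [mock_irl_reward(t) for t in batch]
    baseline = sum(rewards) / len(rewards)  # simple mean baseline
    return [[r - baseline] * len(t) for r, t in zip(rewards, batch)]

# A diverse trajectory and a degenerate, repetitive one.
batch = [[0, 1, 2, 3], [0, 0, 0]]
weights = token_weights(batch)
```

Every token of the diverse trajectory gets a positive weight and every token of the repetitive one a negative weight, which is the long-horizon credit-assignment signal the abstract says a purely autoregressive, sparse-reward setup lacks.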
Problem

Research questions and friction points this paper is trying to address.

urban mobility trajectory
privacy
trajectory generation
urban planning
data simulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning
Generative Pre-trained Transformer
Inverse Reinforcement Learning
Urban Mobility Trajectory
Offline RL