🤖 AI Summary
To address low fidelity in synthetic human trajectory generation under privacy-sensitive and data-scarce scenarios—where existing methods suffer from statistical bias and narrow evaluation practices—this paper proposes MIRAGE, a neural temporal point process framework that jointly models exploratory behavior and preference-driven return dynamics. The authors introduce a four-task evaluation protocol—extending beyond the Datasaurus paradigm—that assesses both distributional similarity (e.g., spatio-temporal statistics) and downstream utility (e.g., POI recommendation, stay prediction). Evaluated on three real-world trajectory datasets, MIRAGE achieves a 59.0–67.7% improvement in distributional fidelity and 10.9–33.4% gains across downstream tasks, consistently outperforming state-of-the-art baselines.
📝 Abstract
Human trajectory data plays a crucial role in applications such as crowd management and epidemic prevention, yet it is challenging to obtain due to practical constraints and privacy concerns. In this context, synthetic human trajectory data is generated to resemble real-world trajectories as closely as possible, typically judged by summary statistics and distributional similarity. However, such similarity measures oversimplify complex human mobility patterns (a.k.a. the "Datasaurus" issue), introducing intrinsic biases into both generative model design and benchmarks of the generated trajectories. Against this background, we propose MIRAGE, a huMan-Imitative tRAjectory GenErative model designed as a neural Temporal Point Process integrating an Exploration and Preferential Return model. It imitates the human decision-making process in trajectory generation rather than fitting specific statistical distributions as traditional methods do, thus avoiding the Datasaurus issue. We also propose a comprehensive task-based evaluation protocol beyond Datasaurus to systematically benchmark trajectory generative models on four typical downstream tasks, integrating multiple techniques and evaluation metrics per task, to assess the ultimate utility of the generated trajectories. We conduct a thorough evaluation of MIRAGE on three real-world user trajectory datasets against a sizeable collection of baselines. Results show that, compared to the best baselines, MIRAGE-generated trajectory data not only achieves the best statistical and distributional similarity, with a 59.0-67.7% improvement, but also yields the best performance in the task-based evaluation, with a 10.9-33.4% improvement. A series of ablation studies further validates the key design choices of MIRAGE.
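For intuition, the classical Exploration and Preferential Return (EPR) rule that MIRAGE builds on (in neural form) can be sketched as follows. This is a minimal illustration of the well-known EPR mechanism, not the paper's actual model: with probability ρ·S^(−γ), where S is the number of distinct locations visited so far, the agent explores a new location; otherwise it returns to a previously visited location with probability proportional to visit frequency. The function name, parameter defaults, and the `"EXPLORE"` sentinel are illustrative assumptions.

```python
import random


def epr_next_location(visit_counts, rho=0.6, gamma=0.21, rng=random):
    """Pick the next location via the Exploration and Preferential Return rule.

    visit_counts: dict mapping location id -> number of prior visits.
    With probability rho * S**(-gamma), where S is the number of distinct
    locations visited so far, explore somewhere new; otherwise return to a
    known location sampled proportionally to its visit count.
    Returns the sentinel "EXPLORE" when the caller should generate a new,
    previously unseen location.
    """
    S = len(visit_counts)
    if S == 0 or rng.random() < rho * S ** (-gamma):
        return "EXPLORE"  # caller samples a brand-new location
    locations = list(visit_counts)
    weights = [visit_counts[loc] for loc in locations]  # preferential return
    return rng.choices(locations, weights=weights, k=1)[0]
```

MIRAGE replaces the fixed ρ and γ of this hand-crafted rule with learned, history-dependent intensities inside a neural temporal point process, so both *when* and *where* the next visit happens are modeled jointly.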