🤖 AI Summary
This work addresses a limitation of existing large language model (LLM)-based human mobility simulation methods: they lack mechanisms for modeling collective coordination and thus fail to capture emergent group behaviors. To overcome this, the authors propose the M2LSimu framework, which integrates group-level mobility measures, extracted from shared observational data, as guiding signals within a multi-granularity prompting strategy. This approach jointly models individual cognitive processes and aggregate statistical patterns. Under a constrained computational budget, M2LSimu optimizes multiple group-level objectives simultaneously and significantly outperforms state-of-the-art LLM baselines on two public datasets, improving both the individual realism and the group-level statistical consistency of the generated trajectories.
📝 Abstract
Large-scale human mobility simulation is critical for many scientific domains, including urban science, epidemiology, and transportation analysis. Recent works treat large language models (LLMs) as human agents, simulating realistic mobility trajectories by modeling individual-level cognitive processes. However, these approaches generate individual trajectories independently, without any population-level coordination mechanism, and thus fail to capture the emergence of collective behaviors. To address this issue, we design M2LSimu, a mobility-measure-guided multi-prompt adjustment framework that uses mobility measures derived from shared data to refine individual-level prompts for realistic mobility generation. Our framework first applies coarse-grained adjustment strategies guided by mobility measures and then progressively enables fine-grained individual-level adaptation, satisfying multiple population-level mobility objectives under a limited budget. Experiments show that M2LSimu significantly outperforms state-of-the-art LLM-based methods on two public datasets.
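To make the coarse-to-fine, measure-guided adjustment loop concrete, here is a minimal sketch. Everything in it is an illustrative stand-in, not the paper's actual interface: `PromptConfig`, `simulate_agent`, the single "travel radius" knob, and the histogram measure are all hypothetical simplifications (the real framework prompts an LLM per agent and uses richer mobility measures), but the control flow mirrors the described idea — a cheap shared coarse adjustment guided by a group-level measure, followed by per-agent fine tweaks accepted only when the population-level gap shrinks, all under a fixed budget.

```python
# Illustrative sketch only: a toy stand-in for measure-guided,
# coarse-to-fine prompt adjustment (not the paper's implementation).
import random
from dataclasses import dataclass

@dataclass
class PromptConfig:
    """Hypothetical per-agent prompt parameters: one shared coarse
    knob ('radius') plus a per-agent fine correction ('bias')."""
    radius: float = 1.0
    bias: float = 0.0

def simulate_agent(cfg, n_trips=50, rng=None):
    """Stand-in for LLM trajectory generation: trip distances drawn
    from an exponential whose scale is set by the prompt config."""
    rng = rng or random
    scale = max(0.1, cfg.radius + cfg.bias)
    return [rng.expovariate(1.0 / scale) for _ in range(n_trips)]

def measure(trips, bins=(0.5, 1, 2, 4, 8)):
    """Group-level mobility measure: normalized trip-distance histogram."""
    counts = [0] * (len(bins) + 1)
    for d in trips:
        counts[sum(d > b for b in bins)] += 1
    total = sum(counts)
    return [c / total for c in counts]

def l1_gap(p, q):
    """Distance between generated and target group-level measures."""
    return sum(abs(a - b) for a, b in zip(p, q))

def adjust(configs, target, budget=20, seed=0):
    """Coarse phase: pick one shared radius that best matches the target
    measure. Fine phase: per-agent bias tweaks, each spending one unit
    of budget and kept only if the population-level gap shrinks."""
    rng = random.Random(seed)
    gen = lambda: [d for c in configs for d in simulate_agent(c, rng=rng)]
    best_r = min(
        (0.5, 1.0, 2.0, 3.0),
        key=lambda r: l1_gap(measure(
            [d for c in configs
             for d in simulate_agent(PromptConfig(r, c.bias), rng=rng)]),
            target))
    for c in configs:
        c.radius = best_r
    gap = l1_gap(measure(gen()), target)
    for c in configs[:budget]:          # limited adjustment budget
        old = c.bias
        c.bias = old + rng.uniform(-0.5, 0.5)
        new_gap = l1_gap(measure(gen()), target)
        if new_gap < gap:
            gap = new_gap               # keep the improving tweak
        else:
            c.bias = old                # revert a non-improving tweak
    return gap
```

The design point the sketch illustrates is the ordering: the coarse shared update is cheap and moves the whole population toward the target measure, so the expensive per-agent budget is spent only on residual fine-grained mismatch.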