🤖 AI Summary
Problem: Synthetic population generation in activity-based models (ABMs) struggles to simultaneously ensure feasibility and diversity, particularly in distinguishing rare but plausible attribute combinations (sampling zeros) from structurally impossible ones (structural zeros). Method: We propose the first generative framework integrating large language models (LLMs) with Bayesian networks (BNs), enforcing topological-order constraints on autoregressive generation. The approach leverages lightweight, open-source LLM fine-tuning and few-shot learning for end-to-end, low-cost, and reproducible synthesis. Contribution/Results: Evaluated on a megacity-scale scenario, our method achieves 95% feasibility, outperforming state-of-the-art deep generative models (DGMs) by 15 percentage points while preserving diversity. It supports single-machine deployment and is fully open-sourced.
📝 Abstract
Generating a synthetic population that is both feasible and diverse is crucial for ensuring the validity of downstream activity schedule simulation in activity-based models (ABMs). While deep generative models (DGMs), such as variational autoencoders and generative adversarial networks, have been applied to this task, they often struggle to balance the inclusion of rare but plausible combinations (i.e., sampling zeros) with the exclusion of implausible ones (i.e., structural zeros). To improve feasibility while maintaining diversity, we propose a fine-tuning method for large language models (LLMs) that explicitly controls the autoregressive generation process through topological orderings derived from a Bayesian network (BN). Experimental results show that our hybrid LLM-BN approach outperforms both traditional DGMs and proprietary LLMs used with few-shot learning (e.g., ChatGPT-4o). Specifically, our approach achieves approximately 95% feasibility, significantly higher than the ~80% observed in DGMs, while maintaining comparable diversity, making it well-suited for practical applications. Importantly, the method is based on a lightweight open-source LLM, enabling fine-tuning and inference in standard personal computing environments. This makes the approach cost-effective and scalable for large-scale applications, such as synthesizing populations in megacities, without relying on expensive infrastructure. By initiating the ABM pipeline with high-quality synthetic populations, our method improves overall simulation reliability and reduces downstream error propagation. The source code is publicly available for both research and practical use.
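To make the core idea concrete, the sketch below illustrates one way a BN-derived topological ordering can constrain autoregressive generation: attributes are linearized so that every attribute appears after its BN parents, and each person record is serialized in that order as a training string for LLM fine-tuning. The attribute names, BN structure, and serialization format here are illustrative assumptions, not details taken from the paper.

```python
from collections import deque

# Hypothetical BN structure over population attributes:
# an edge parent -> child means the child's distribution depends on the parent.
BN_EDGES = {
    "age": ["employment", "education"],
    "education": ["employment", "income"],
    "employment": ["income"],
    "income": [],
}

def topological_order(edges):
    """Kahn's algorithm: order attributes so every parent precedes its children."""
    indegree = {node: 0 for node in edges}
    for children in edges.values():
        for child in children:
            indegree[child] += 1
    queue = deque(node for node, deg in indegree.items() if deg == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for child in edges[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    if len(order) != len(edges):
        raise ValueError("BN structure contains a cycle")
    return order

def serialize_record(record, order):
    """Linearize one person's attributes in topological order, producing a
    training string so the LLM conditions each attribute on its BN parents."""
    return "; ".join(f"{attr}={record[attr]}" for attr in order)

order = topological_order(BN_EDGES)
person = {"age": "35-44", "education": "bachelor",
          "employment": "full-time", "income": "60-80k"}
print(serialize_record(person, order))
```

Serializing in topological order means that when the fine-tuned LLM generates left to right, each attribute is conditioned on the values of its BN parents, which is one plausible mechanism for suppressing structurally impossible combinations while still permitting rare but valid ones.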