What Makes LLM Agent Simulations Useful for Policy? Insights From an Iterative Design Engagement in Emergency Preparedness

📅 2025-09-26
🤖 AI Summary
Social simulation in policy-making suffers from limited credibility and trust across domains. Method: This study introduces a verifiable human-AI co-modeling paradigm grounded in large language model (LLM) agents, developed through a year-long iterative collaboration with a university emergency response team. The system comprises 13,000 LLM agents simulating crowd mobility and information diffusion dynamics during large-scale event emergencies. Contribution/Results: We propose three design principles: (1) initiating modeling with high-fidelity, verifiable scenarios to establish cross-disciplinary trust; (2) using preliminary simulations to elicit domain experts’ tacit knowledge; and (3) treating simulation development and policy refinement as a co-evolutionary process. The framework has directly informed operational improvements—including volunteer training optimization, evacuation protocol revision, and infrastructure layout adjustment—demonstrating end-to-end validation from simulation to policy implementation.

📝 Abstract
There is growing interest in using Large Language Models as agents (LLM agents) for social simulations to inform policy, yet real-world adoption remains limited. This paper addresses the question: How can LLM agent simulations be made genuinely useful for policy? We report on a year-long iterative design engagement with a university emergency preparedness team. Across multiple design iterations, we developed a system of 13,000 LLM agents that simulate crowd movement and communication during a large-scale gathering under various emergency scenarios. These simulations informed actual policy implementation, shaping volunteer training, evacuation protocols, and infrastructure planning. Analyzing this process, we identify three design implications: start with verifiable scenarios and build trust gradually, use preliminary simulations to elicit tacit knowledge, and treat simulation and policy development as evolving together. These implications highlight actionable pathways toward LLM agent simulations that are genuinely useful for policy.
Problem

Research questions and friction points this paper is trying to address.

Developing trustworthy LLM agent simulations for policy decisions
Using agent simulations to inform emergency preparedness planning
Creating actionable pathways for LLM simulations in policy development
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simulated crowd movement with 13,000 LLM agents
Used verifiable scenarios to build trust gradually
Integrated simulations with evolving policy development
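The paper's system itself is not described at the code level here, but the core ideas above (agents deciding how to move while information about an emergency diffuses to nearby agents) can be sketched in miniature. Everything below is an assumption for illustration, not the authors' implementation: the grid world, the agent fields, and `decide_move()`, which stands in for a per-agent LLM call.

```python
import random

def decide_move(agent, neighbors, alarm_heard):
    """Placeholder for an LLM call; a real system might prompt:
    'You are a visitor at a large event. You heard an alarm: {alarm_heard}.
     People nearby are heading to the exits: {...}. What do you do next?'"""
    if alarm_heard or any(n["evacuating"] for n in neighbors):
        return "move_to_exit"
    return "stay"

def step(agents, alarm_radius=5.0, exit_pos=(0.0, 0.0)):
    """One simulation tick: each agent decides, moves, and spreads information."""
    for a in agents:
        # Neighbors within a Manhattan-distance radius (hypothetical choice).
        neighbors = [b for b in agents if b is not a
                     and abs(b["x"] - a["x"]) + abs(b["y"] - a["y"]) < alarm_radius]
        if decide_move(a, neighbors, a["heard_alarm"]) == "move_to_exit":
            a["evacuating"] = True
            a["x"] += 0.5 if exit_pos[0] > a["x"] else -0.5
            a["y"] += 0.5 if exit_pos[1] > a["y"] else -0.5
        # Information diffusion: evacuating agents alert their neighbors.
        if a["evacuating"]:
            for n in neighbors:
                n["heard_alarm"] = True

random.seed(0)
# 100 agents instead of 13,000; only agent 0 initially hears the alarm.
agents = [{"x": random.uniform(0, 50), "y": random.uniform(0, 50),
           "heard_alarm": i == 0, "evacuating": False} for i in range(100)]
for _ in range(20):
    step(agents)
print(sum(a["evacuating"] for a in agents), "of", len(agents), "agents evacuating")
```

At the paper's scale, each `decide_move()` would be an LLM inference, which is what makes eliciting experts' tacit knowledge possible: the prompt, not a hand-coded rule, encodes how people behave.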