Integrating LLM in Agent-Based Social Simulation: Opportunities and Challenges

📅 2025-07-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the core challenges and integration pathways for incorporating large language models (LLMs) into agent-based social simulation (ABSS). Recognizing LLMs’ limitations in theory of mind and social inference—key cognitive modeling capabilities—it proposes a “rules + LLM” hybrid architecture: embedding LLMs within established simulation platforms (e.g., GAMA, NetLogo) to enhance agent behavioral expressivity and social interaction fidelity, while retaining rule-based modules to ensure transparency, interpretability, and reproducibility. The study systematically evaluates state-of-the-art implementations—including Smallville and AgentSociety—to delineate LLMs’ appropriate use cases in generative social interaction and their constraints in predictive modeling. Its primary contribution is a methodological framework for LLM-augmented ABSS that jointly optimizes behavioral fidelity, explainability, and reproducibility. Furthermore, it provides empirical evidence and design principles for robust LLM integration in social computing.
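The "rules + LLM" hybrid pattern described above can be sketched in a few lines: a deterministic rule core drives the agent's state transitions (keeping the dynamics transparent and reproducible), while a pluggable language-model callable supplies expressive, in-character utterances. Everything here is a hypothetical illustration, not code from the paper; the `llm` parameter stands in for whatever model client a modeler would wire in.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class HybridAgent:
    """Hypothetical hybrid agent: rule-based dynamics + LLM-generated dialogue."""
    name: str
    energy: int = 10
    # Stubbed LLM callable; swap in a real model client for expressive output.
    llm: Callable[[str], str] = lambda prompt: "..."
    log: list = field(default_factory=list)

    def step(self) -> str:
        # Rule-based core: a deterministic, inspectable state transition.
        if self.energy > 0:
            self.energy -= 1
            action = "work"
        else:
            self.energy = 10
            action = "rest"
        # LLM layer: adds behavioral expressivity, but is kept out of the
        # simulation dynamics, so runs stay reproducible.
        utterance = self.llm(
            f"{self.name} chose to {action}. Say one sentence in character."
        )
        self.log.append((action, utterance))
        return action

agent = HybridAgent("alice", energy=1, llm=lambda p: "Time to recharge.")
print(agent.step())  # rule fires "work" (energy 1 -> 0)
print(agent.step())  # energy exhausted, rule fires "rest"
```

Because the LLM only decorates rule-selected actions here, replacing it with a stub changes nothing about the simulated trajectory, which is the transparency/reproducibility property the paper argues for.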

📝 Abstract
This position paper examines the use of Large Language Models (LLMs) in social simulation, analyzing both their potential and their limitations from a computational social science perspective. The first part reviews recent findings on the ability of LLMs to replicate key aspects of human cognition, including Theory of Mind reasoning and social inference, while also highlighting significant limitations such as cognitive biases, lack of true understanding, and inconsistencies in behavior. The second part surveys emerging applications of LLMs in multi-agent simulation frameworks, focusing on system architectures, scale, and validation strategies. Notable projects such as Generative Agents (Smallville) and AgentSociety are discussed in terms of their design choices, empirical grounding, and methodological innovations. Particular attention is given to the challenges of behavioral fidelity, calibration, and reproducibility in large-scale LLM-driven simulations. The final section distinguishes between contexts where LLMs, like other black-box systems, offer direct value, such as interactive simulations and serious games, and those where their use is more problematic, notably in explanatory or predictive modeling. The paper concludes by advocating for hybrid approaches that integrate LLMs into traditional agent-based modeling platforms (GAMA, NetLogo, etc.), enabling modelers to combine the expressive flexibility of language-based reasoning with the transparency and analytical rigor of classical rule-based systems.
Problem

Research questions and friction points this paper is trying to address.

Examines LLMs' potential and limitations in social simulation
Analyzes behavioral fidelity and reproducibility challenges in LLM-driven simulations
Advocates hybrid approaches combining LLMs with rule-based systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrating LLMs in multi-agent simulation frameworks
Hybrid approaches combining LLMs with rule-based systems
Addressing behavioral fidelity and reproducibility challenges
🔎 Similar Papers
2024-10-06 · Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (System Demonstrations) · Citations: 13