🤖 AI Summary
The low intelligence of computational agents in current social experiments constrains their applicability and ecological validity.
Method: We propose the first large language model (LLM)-based intelligent agent framework designed specifically for social experiment pre-enactment. It integrates social science theory to construct high-fidelity interactive environments and supports intervention design alongside automated acquisition and analysis of multimodal data—including behavioral logs, survey responses, and dialogue transcripts. LLMs serve as the cognitive core for individual decision-making, systematically embedded within experimental paradigms and data engineering pipelines.
Contribution/Results: Evaluated across three canonical social experiments, the framework faithfully reproduces both quantitative outcomes (e.g., obedience and cooperation rates) and qualitative patterns (e.g., discourse strategies and group dynamics), demonstrating strong validity and scalability. This work overcomes the intelligence bottleneck of traditional agent-based modeling, establishing an interpretable, iterative “silicon-based pre-enactment” paradigm for social science research.
📝 Abstract
Computational social experiments, which typically employ agent-based modeling to create testbeds for piloting social experiments, not only provide a computational solution to the major challenges faced by traditional experimental methods, but have also gained widespread attention across various research fields. Despite their significance, their broader impact is largely limited by the underdeveloped intelligence of their core component, i.e., agents. To address this limitation, we develop a framework grounded in well-established social science theories and practices, consisting of three key elements: (i) large language model (LLM)-driven experimental agents, serving as "silicon participants", (ii) methods for implementing various interventions or treatments, and (iii) tools for collecting behavioral, survey, and interview data. We evaluate its effectiveness by replicating three representative experiments, with results demonstrating strong alignment, both quantitatively and qualitatively, with real-world evidence. This work provides the first framework for designing LLM-driven agents to pilot social experiments, underscoring the transformative potential of LLMs and their agents in computational social science.