🤖 AI Summary
This work addresses the instability and high sensitivity of large language models (LLMs) used as simulated participants in social science research, where it is difficult to disentangle base-model capabilities from experimental design effects. The authors reframe participant simulation as an agent-design problem over complete experimental protocols and propose an evaluation framework centered on fidelity to scientific inference. They introduce the first dynamic benchmark platform that replicates human experiments end to end and quantifies behavioral alignment between humans and LLM agents. The platform implements a four-stage Filter–Extract–Execute–Evaluate pipeline that integrates LLM agents, the original experimental scripts, and the original statistical analysis workflows. Validated on 12 classic studies comprising more than 6,000 trials, with human samples ranging from tens to over 2,100 participants, the framework proves effective across individual cognition, strategic interaction, and social psychology.
📝 Abstract
Large language models (LLMs) are increasingly used as simulated participants in social science experiments, but their behavior is often unstable and highly sensitive to design choices. Prior evaluations frequently conflate base-model capabilities with experimental instantiation, obscuring whether outcomes reflect the model itself or the agent setup. We instead frame participant simulation as an agent-design problem over full experimental protocols, where an agent is defined by a base model and a specification (e.g., participant attributes) that encodes behavioral assumptions. We introduce HUMANSTUDY-BENCH, a benchmark and execution engine that orchestrates LLM-based agents to reconstruct published human-subject experiments through a Filter–Extract–Execute–Evaluate pipeline: it replays trial sequences and runs the original analysis code in a shared runtime, preserving the original statistical procedures end to end. To evaluate fidelity at the level of scientific inference, we propose metrics that quantify how closely human and agent behaviors agree. We instantiate 12 foundational studies as the initial suite of this dynamic benchmark, spanning individual cognition, strategic interaction, and social psychology, and covering more than 6,000 trials with human samples ranging from tens to over 2,100 participants.
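The four-stage pipeline described above can be sketched in miniature. This is an illustrative Python sketch only, not the platform's actual API: the names (`run_pipeline`, `Trial`, the direction-agreement score) and the stand-in agent are assumptions, and the real system would call an LLM and run the study's original statistical analysis in place of the toy mean comparison here.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Trial:
    stimulus: str
    human_response: float

def filter_study(trials, min_trials=2):
    # Filter: keep only studies with enough usable trials to replay.
    return trials if len(trials) >= min_trials else []

def extract_protocol(trials):
    # Extract: turn each trial into a participant-facing prompt.
    return [f"You are a study participant. Respond to: {t.stimulus}" for t in trials]

def execute(prompts, agent):
    # Execute: replay the trial sequence through the agent, in order.
    return [agent(p) for p in prompts]

def evaluate(trials, agent_responses):
    # Evaluate: apply the same summary statistic to both samples and
    # report a simple alignment signal (do the effects point the same way?).
    human_effect = mean(t.human_response for t in trials)
    agent_effect = mean(agent_responses)
    return {
        "human_effect": human_effect,
        "agent_effect": agent_effect,
        "direction_agrees": (human_effect > 0) == (agent_effect > 0),
    }

def run_pipeline(trials, agent):
    kept = filter_study(trials)
    prompts = extract_protocol(kept)
    responses = execute(prompts, agent)
    return evaluate(kept, responses)

# Demo with a stubbed agent standing in for an LLM call.
trials = [Trial("Choose A or B.", 0.8), Trial("Choose A or B.", 0.6)]
stub_agent = lambda prompt: 0.7
result = run_pipeline(trials, stub_agent)
# result["direction_agrees"] → True
```

In the full system, `evaluate` would instead rerun the original study's statistical tests on both the human and agent samples, so agreement is judged at the level of scientific inference rather than raw means.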