HumanStudy-Bench: Towards AI Agent Design for Participant Simulation

📅 2026-01-31
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work addresses the instability and high sensitivity of large language models (LLMs) when used as simulated participants in social science research, where it is often difficult to disentangle model capabilities from experimental design effects. The authors reframe participant simulation as an agent design problem within a complete experimental protocol and propose an evaluation framework centered on fidelity to scientific inference. They introduce the first dynamic benchmark platform that enables end-to-end replication of human experiments and quantifies behavioral alignment between humans and LLM agents. The platform implements a four-stage Filter–Extract–Execute–Evaluate pipeline, integrating LLM agents, original experimental scripts, and statistical analysis workflows. The initial suite instantiates 12 classic studies spanning individual cognition, strategic interaction, and social psychology, covering more than 6,000 trials with human samples ranging from tens to over 2,100 participants.
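As a rough sketch of how such a pipeline could be wired together, the Python below walks one study through the four stages. The stage names come from the summary above; every class, function, and dictionary key here (`Protocol`, `run_study`, `replayable`, ...) is a hypothetical illustration, not the platform's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Protocol:
    trials: list        # trial sequence reconstructed from the published study
    analysis: Callable  # the study's original statistical analysis, as code

def run_study(study: dict, agent: Callable) -> dict:
    """Replay one published study through Filter-Extract-Execute-Evaluate."""
    # Filter: keep only studies whose full protocol can be reconstructed.
    if not study.get("replayable"):
        raise ValueError("study cannot be replayed end to end")
    # Extract: pull the trial sequence and the original analysis script.
    protocol = Protocol(trials=study["trials"], analysis=study["analysis"])
    # Execute: the LLM agent answers every trial in the original order.
    agent_responses = [agent(trial) for trial in protocol.trials]
    # Evaluate: run the same statistical procedure on human and agent data,
    # so any divergence reflects behavior, not a changed analysis.
    return {
        "human": protocol.analysis(study["human_responses"]),
        "agent": protocol.analysis(agent_responses),
    }
```

The key design point is the Evaluate stage: because the identical analysis function is applied to both samples, any human-agent divergence is attributable to behavior rather than to a changed statistical procedure.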

📝 Abstract
Large language models (LLMs) are increasingly used as simulated participants in social science experiments, but their behavior is often unstable and highly sensitive to design choices. Prior evaluations frequently conflate base-model capabilities with experimental instantiation, obscuring whether outcomes reflect the model itself or the agent setup. We instead frame participant simulation as an agent-design problem over full experimental protocols, where an agent is defined by a base model and a specification (e.g., participant attributes) that encodes behavioral assumptions. We introduce HumanStudy-Bench, a benchmark and execution engine that orchestrates LLM-based agents to reconstruct published human-subject experiments via a Filter–Extract–Execute–Evaluate pipeline, replaying trial sequences and running the original analysis pipeline in a shared runtime that preserves the original statistical procedures end to end. To evaluate fidelity at the level of scientific inference, we propose new metrics to quantify how much human and agent behaviors agree. We instantiate 12 foundational studies as an initial suite in this dynamic benchmark, spanning individual cognition, strategic interaction, and social psychology, and covering more than 6,000 trials with human samples ranging from tens to over 2,100 participants.
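The abstract's agent definition (a base model plus a specification encoding behavioral assumptions) can be made concrete with a minimal sketch. The field names below (`persona`, `instructions`) are assumptions about what a specification might contain, not the paper's schema.

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    """Behavioral assumptions, varied independently of the base model."""
    persona: dict = field(default_factory=dict)  # e.g., participant attributes
    instructions: str = ""                       # framing of the experimental task

@dataclass
class Agent:
    base_model: str  # identifier of the underlying LLM
    spec: AgentSpec  # the experimental instantiation under evaluation

    def render_prompt(self, trial: str) -> str:
        # A real agent would send this prompt to base_model; this sketch only
        # shows how the specification and a trial stimulus combine.
        attrs = ", ".join(f"{k}: {v}" for k, v in self.spec.persona.items())
        return f"You are a participant ({attrs}).\n{self.spec.instructions}\n{trial}"
```

Holding `base_model` fixed while varying `spec`, or vice versa, is what lets a benchmark separate base-model capability from experimental instantiation.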
Problem

Research questions and friction points this paper is trying to address.

participant simulation
large language models
agent design
behavioral fidelity
social science experiments
Innovation

Methods, ideas, or system contributions that make the work stand out.

agent design
participant simulation
LLM-based agents
experimental fidelity
benchmarking
👥 Authors
Xuan Liu
University of California San Diego
Haoyang Shang
Independent Researcher
Zizhang Liu
Tsinghua University
Xinyan Liu
Independent Researcher
Yunze Xiao
Language Technology Institute, Carnegie Mellon University
Natural Language Processing · Computational Social Science · Anthropomorphism
Yiwen Tu
University of California San Diego
Haojian Jin
University of California San Diego
Human-Computer Interaction · Ubiquitous Computing · Security & Privacy · Mobile Computing