ReplicatorBench: Benchmarking LLM Agents for Replicability in Social and Behavioral Sciences

📅 2026-02-11
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of existing replication benchmarks, which predominantly focus on computational reproducibility of published papers while overlooking real-world scenarios where new data are unavailable, and which fail to identify or holistically evaluate non-replicable studies. To bridge this gap, the authors propose ReplicatorBench, an end-to-end benchmark that, for the first time, integrates human-verified replicable and non-replicable research claims across the social and behavioral sciences. Built upon a large language model (LLM)-based agent framework, ReplicatorAgent incorporates web search, sandboxed execution, and support for multiple programming languages to systematically assess agent performance across three critical replication stages: data extraction, experimental design, and result interpretation. Experiments demonstrate that while current LLM agents can effectively execute computational experiments, they still face significant challenges in key aspects such as acquiring new data.

๐Ÿ“ Abstract
The literature has witnessed an emerging interest in AI agents for automated assessment of scientific papers. Existing benchmarks focus primarily on the computational aspect of this task, testing agents' ability to reproduce or replicate research outcomes when having access to the code and data. This setting, while foundational, (1) fails to capture the inconsistent availability of new data for replication as opposed to reproduction, and (2) lacks ground-truth diversity by focusing only on reproducible papers, thereby failing to evaluate an agent's ability to identify non-replicable research. Furthermore, most benchmarks only evaluate outcomes rather than the replication process. In response, we introduce ReplicatorBench, an end-to-end benchmark including human-verified replicable and non-replicable research claims in the social and behavioral sciences for evaluating AI agents in research replication across three stages: (1) extraction and retrieval of replication data; (2) design and execution of computational experiments; and (3) interpretation of results, allowing a test of AI agents' capability to mimic the activities of human replicators in the real world. To set a baseline of AI agents' capability, we develop ReplicatorAgent, an agentic framework equipped with necessary tools, such as web search and iterative interaction with sandboxed environments, to accomplish the tasks in ReplicatorBench. We evaluate ReplicatorAgent across four underlying large language models (LLMs), as well as different design choices of programming language and levels of code access. Our findings reveal that while current LLM agents are capable of effectively designing and executing computational experiments, they struggle with retrieving resources, such as new data, necessary to replicate a claim. All code and data are publicly available at https://github.com/CenterForOpenScience/llm-benchmarking.
Problem

Research questions and friction points this paper is trying to address.

replicability
LLM agents
social and behavioral sciences
benchmarking
scientific replication
Innovation

Methods, ideas, or system contributions that make the work stand out.

replicability
LLM agents
scientific benchmarking
social and behavioral sciences
end-to-end evaluation