🤖 AI Summary
Empirical research often yields conflicting conclusions due to variation in analytical workflows, yet traditional multi-team replication efforts are costly and difficult to scale. This work proposes an autonomous AI analyst framework powered by large language models that, on a fixed dataset, generates diverse analytical pathways by varying prompts and model configurations. An integrated AI auditing mechanism filters out invalid analyses, enabling low-cost, large-scale simulation of human analytical diversity. The approach systematically reveals how preprocessing choices, modeling strategies, and inference procedures substantially influence effect sizes, p-values, and judgments of hypothesis support. Furthermore, it demonstrates that analytical outcomes can be steered by changing the AI's assigned role or underlying model, and that these shifts persist even after invalid analyses are excluded.
📝 Abstract
The conclusions of empirical research depend not only on data but on a sequence of analytic decisions that published results seldom make explicit. Past "many-analyst" studies have demonstrated this: independent teams testing the same hypothesis on the same dataset regularly reach conflicting conclusions. But such studies require months of coordination among dozens of research groups and are therefore rarely conducted. In this work, we show that fully autonomous AI analysts built on large language models (LLMs) can reproduce this kind of structured analytic diversity cheaply and at scale. We task the AI analysts with testing a pre-specified hypothesis on a fixed dataset, varying the underlying model and prompt framing across replicate runs. Each AI analyst independently constructs and executes a full analysis pipeline; an AI auditor then screens each run for methodological validity. Across three datasets spanning experimental and observational designs, the AI-produced analyses display wide dispersion in effect sizes, $p$-values, and binary judgments of hypothesis support, frequently large enough to reverse whether the hypothesis is judged supported. This dispersion is structured: recognizable analytic choices in preprocessing, model specification, and inference differ systematically across LLM and persona conditions. Critically, the effects are *steerable*: reassigning the analyst persona or LLM shifts the distribution of outcomes, even after methodologically deficient runs are excluded.
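To make the replicate-run design concrete, here is a minimal sketch of the loop the abstract describes: many analyst runs over a fixed dataset and hypothesis, crossed over LLM backend and persona, followed by an auditor filter. All names (`run_analyst`, `audit_run`, `MODELS`, `PERSONAS`) are hypothetical, and the simulated outputs merely stand in for the LLM-generated analysis pipelines; the paper's actual prompts, models, and audit criteria are not specified here.

```python
# Hypothetical sketch of the many-AI-analyst protocol (not the paper's code).
import itertools
import random

MODELS = ["model-a", "model-b"]            # stand-ins for LLM backends
PERSONAS = ["cautious statistician", "exploratory data scientist"]

def run_analyst(model, persona, dataset, hypothesis, seed):
    """One autonomous analyst run. In the real framework the LLM writes and
    executes a full analysis pipeline; here random draws simulate the
    dispersion in its reported statistics."""
    rng = random.Random(hash((model, persona, seed)))
    effect = rng.gauss(0.2, 0.15)                       # reported effect size
    p_value = min(1.0, abs(rng.gauss(0.05, 0.05)))      # reported p-value
    return {"model": model, "persona": persona,
            "effect": effect, "p": p_value,
            "supported": p_value < 0.05}

def audit_run(result):
    """Stand-in for the AI auditor, which screens each run for methodological
    validity. A real auditor would inspect the generated code and choices;
    this placeholder only sanity-checks the reported p-value."""
    return 0.0 <= result["p"] <= 1.0

dataset, hypothesis = "fixed_dataset.csv", "treatment increases outcome"
runs = [run_analyst(m, p, dataset, hypothesis, seed)
        for m, p in itertools.product(MODELS, PERSONAS)
        for seed in range(25)]
valid = [r for r in runs if audit_run(r)]  # exclude flagged runs before comparing
print(sum(r["supported"] for r in valid), "of", len(valid),
      "valid runs judge the hypothesis supported")
```

Grouping the surviving runs by `model` and `persona` is then enough to see whether the distribution of effect sizes and support judgments shifts across conditions, which is the steerability claim the abstract makes.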