🤖 AI Summary
This study investigates whether large language models (LLMs) can reliably predict the outcomes of field experiments in economics and the social sciences. We develop the first systematic evaluation framework for this task, conducting zero-shot and few-shot predictions across 319 canonical field experiments. Our method introduces structured experimental-design prompting and domain-knowledge injection to strengthen model reasoning about causal interventions. The LLMs achieve 78% prediction accuracy, significantly outperforming conventional baselines, and provide the first empirical evidence that LLMs can validly forecast the results of real-world behavioral interventions. We further identify systematic performance disparities along gender, ethnicity, and social-norm dimensions. This work extends LLMs to causal inference in the social sciences and proposes an interpretable, socially aware sensitivity-analysis paradigm, establishing both a methodological foundation and a practical pathway for AI-augmented empirical social science.
📝 Abstract
Large language models (LLMs) have demonstrated unprecedented emergent capabilities, including content generation, translation, and the simulation of human behavior. Field experiments, despite their high cost, are widely employed in economics and the social sciences to study real-world human behavior through carefully designed manipulations and treatments. However, whether and how these models can be used to predict the outcomes of field experiments remains unclear. In this paper, we propose and evaluate an automated LLM-based framework that produces predictions of field experiment outcomes. Applying this framework to 319 experiments drawn from renowned studies in the economics literature yields a notable prediction accuracy of 78%. Interestingly, we find that performance is highly skewed across experiments. We attribute this skewness to several factors, including gender differences, ethnicity, and social norms.
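
To make the framework concrete, here is a minimal sketch of what a zero-shot prediction loop of this kind might look like. It is not the authors' implementation: the `FieldExperiment` schema, the `build_prompt` wording, and the `query_llm` callable are hypothetical placeholders standing in for the paper's structured experimental-design prompting and whatever LLM API is used.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class FieldExperiment:
    """Minimal description of one field experiment (hypothetical schema)."""
    setting: str          # e.g. "charitable giving via direct-mail solicitation"
    treatment: str        # e.g. "1:1 matching grant announced in the letter"
    control: str          # e.g. "standard solicitation letter"
    outcome_measure: str  # e.g. "donation rate"


def build_prompt(exp: FieldExperiment) -> str:
    """Structured experimental-design prompt: lay out setting, arms, and
    outcome measure, then ask for a directional prediction."""
    return (
        "You are forecasting the result of a field experiment.\n"
        f"Setting: {exp.setting}\n"
        f"Treatment arm: {exp.treatment}\n"
        f"Control arm: {exp.control}\n"
        f"Outcome measure: {exp.outcome_measure}\n"
        "Relative to control, will the treatment INCREASE, DECREASE, or leave "
        "the outcome UNCHANGED? Answer with one word."
    )


def predict_outcomes(
    experiments: list[FieldExperiment],
    actual_directions: list[str],       # ground-truth directions from the papers
    query_llm: Callable[[str], str],    # hypothetical wrapper around an LLM API
) -> float:
    """Zero-shot prediction loop; returns simple directional accuracy."""
    hits = 0
    for exp, truth in zip(experiments, actual_directions):
        answer = query_llm(build_prompt(exp)).strip().upper()
        hits += answer == truth.upper()
    return hits / len(experiments)
```

Under this sketch, the reported 78% would correspond to the fraction of the 319 experiments whose predicted direction matches the published result; a few-shot variant would prepend worked examples of other experiments and their outcomes to the prompt.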