🤖 AI Summary
This paper identifies a fundamental causal validity issue in using large language models (LLMs) to simulate behavioral experiments: prompt ambiguity violates the unconfoundedness assumption, inducing confounding, and the conventional remedy of controlling covariates in the prompt makes those covariates artificially salient, introducing focalism. Covariate control therefore improves internal validity at the cost of ecological validity, a trade-off largely absent from traditional experimental design. The authors formally characterize this challenge from a causal inference perspective, locating the root cause in ambiguous prompt design rather than model incapacity, which means it cannot be fully resolved by better training data or fine-tuning. To address it, they propose rethinking established practice, for example through "non-blind" designs that disclose the experimental logic to the simulated subject, and they introduce an evaluation framework that jointly assesses ecological validity and unconfoundedness. The approach combines theoretical modeling with comparative prompt-strategy experiments in a demand-estimation setting, benchmarked against an actual experiment. The results show that standard blind prompts yield implausible, confounded estimates, whereas explicitly disclosing the experimental design substantially mitigates confounding and corrects biases in the simulated counterfactual outcomes.
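To make the prompt-strategy comparison concrete, the Python sketch below contrasts three ways of posing the same simulated purchase decision: a conventional blind prompt that states only the treatment, a covariate-controlled prompt that pins down background variables (at the risk of making them focal), and a design-disclosing prompt that unblinds the experiment. The product, price points, and wording are hypothetical illustrations, not the paper's actual materials.

```python
# Hypothetical sketch of three prompting strategies for an LLM-simulated subject.
# The product, prices, and wording are illustrative placeholders.

PRICES = [3.99, 4.99, 5.99]  # the treatment variable varied across simulated subjects


def blind_prompt(price: float) -> str:
    """Standard practice: the simulated subject sees only the treatment value."""
    return (
        f"You are shopping for a 12 oz bag of ground coffee priced at ${price:.2f}. "
        "Do you buy it? Answer yes or no."
    )


def covariate_controlled_prompt(price: float) -> str:
    """Pins down unspecified variables, but makes them artificially salient (focalism)."""
    return (
        f"You are shopping for a 12 oz bag of ground coffee priced at ${price:.2f}. "
        "Assume the brand, quality, store, and your budget are identical to your usual purchase. "
        "Do you buy it? Answer yes or no."
    )


def design_disclosed_prompt(price: float) -> str:
    """Unblinds the design: only the price varies; everything else is held fixed."""
    return (
        "You are a participant in a pricing experiment. Only the price below is varied; "
        "treat every other aspect of the product and situation as unchanged across conditions. "
        f"The 12 oz bag of ground coffee costs ${price:.2f}. Do you buy it? Answer yes or no."
    )


if __name__ == "__main__":
    for build in (blind_prompt, covariate_controlled_prompt, design_disclosed_prompt):
        for p in PRICES:
            print(build(p), "\n")
```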
📝 Abstract
Large Language Models (LLMs) have shown impressive potential to simulate human behavior. We identify a fundamental challenge in using them to simulate experiments: when LLM-simulated subjects are blind to the experimental design (as is standard practice with human subjects), variations in treatment systematically affect unspecified variables that should remain constant, violating the unconfoundedness assumption. Using demand estimation as a context and an actual experiment as a benchmark, we show this can lead to implausible results. While confounding may in principle be addressed by controlling for covariates, this can compromise ecological validity in the context of LLM simulations: controlled covariates become artificially salient in the simulated decision process, which introduces focalism. This trade-off between unconfoundedness and ecological validity is usually absent in traditional experimental design and represents a unique challenge in LLM simulations. We formalize this challenge theoretically, showing it stems from ambiguous prompting strategies, and hence cannot be fully addressed by improving training data or by fine-tuning. Alternative approaches that unblind the experimental design to the LLM show promise. Our findings suggest that effectively leveraging LLMs for experimental simulations requires fundamentally rethinking established experimental design practices rather than simply adapting protocols developed for human subjects.
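For intuition about the unconfoundedness violation described above, here is a minimal, hypothetical diagnostic: alongside each simulated choice, elicit an unspecified background variable (for example, the product quality the simulated subject imagined) and check whether it drifts systematically with the treatment. The elicited numbers below are placeholder values for illustration only and do not reproduce the paper's benchmark or estimation procedure.

```python
# Minimal diagnostic sketch: does an unspecified covariate drift with the treatment?
# All values below are illustrative placeholders, not results from the paper.
from statistics import correlation  # requires Python 3.10+


def confounding_diagnostic(prices, perceived_quality):
    """Correlation between the manipulated price and an elicited background variable.

    Under unconfoundedness this should be close to zero; a systematic drift with the
    treatment signals that the simulated subject is filling in the unspecified variable
    from the treatment itself.
    """
    return correlation(prices, perceived_quality)


if __name__ == "__main__":
    # Each pair stands in for one simulated subject: the assigned price and the quality
    # rating (1-10) the LLM reported imagining when making its choice.
    prices = [3.99, 3.99, 4.99, 4.99, 5.99, 5.99]
    perceived_quality = [5, 6, 7, 6, 8, 9]  # placeholder: quality inferred from price
    print(f"price-quality correlation: {confounding_diagnostic(prices, perceived_quality):.2f}")
```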