🤖 AI Summary
This paper addresses key limitations of randomized experiments (high implementation cost and substantial estimation uncertainty) by proposing a novel estimator that integrates predictions from multiple foundation models with observed experimental data. Built on ideas from double robustness and model ensembling, the estimator remains statistically valid even when the foundation models are arbitrarily misspecified. It is proven to be consistent and asymptotically normal, with an asymptotic variance no larger than that of the standard estimator based on experimental data alone, so augmenting the experiment with model predictions can only help, never hurt, asymptotic precision. Empirical validation on several real-world randomized experiments shows substantial precision gains, equivalent to reducing the sample size needed to match the standard estimator's precision by up to 20%.
📝 Abstract
Randomized experiments are the preferred approach for evaluating the effects of interventions, but they are costly and often yield estimates with substantial uncertainty. In contrast, in silico experiments leveraging foundation models offer a cost-effective alternative that can potentially attain higher statistical precision. However, the benefits of in silico experiments come with a significant risk: statistical inferences are not valid if the models fail to accurately predict experimental responses to interventions. In this paper, we propose a novel approach that integrates the predictions from multiple foundation models with experimental data while preserving valid statistical inference. Our estimator is consistent and asymptotically normal, with asymptotic variance no larger than that of the standard estimator based on experimental data alone. Importantly, these statistical properties hold even when model predictions are arbitrarily biased. Empirical results across several randomized experiments show that our estimator offers substantial precision gains, equivalent to a reduction of up to 20% in the sample size needed to match the precision of the standard estimator based on experimental data alone.
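The abstract's core idea, combining possibly biased model predictions with experimental data while keeping inference valid, can be illustrated with a control-variate (prediction-powered) construction: the model-based term enters only through a correction that is mean-zero under randomization, so arbitrary prediction bias cancels in expectation. The paper's actual estimator, its handling of multiple models, and any tuned combination weights are not reproduced here; this is a minimal single-model sketch under an assumed Bernoulli(p) treatment assignment, with all function and variable names hypothetical.

```python
import numpy as np


def augmented_ate(y, t, preds, p=0.5):
    """Difference-in-means ATE estimate augmented with model predictions.

    y:     (n,) observed outcomes from the experiment
    t:     (n,) binary treatment indicators, assigned with probability p
    preds: (n, 2) model-predicted outcomes under control (col 0) and
           treatment (col 1) for every unit

    The correction term below has expectation zero under randomization
    regardless of how biased `preds` is, so the estimator stays
    consistent; precision improves when predictions track outcomes.
    """
    # Standard estimator based on experimental data alone.
    dim = y[t == 1].mean() - y[t == 0].mean()
    # Model-implied effect averaged over all units...
    model_all = (preds[:, 1] - preds[:, 0]).mean()
    # ...minus an inverse-probability-weighted estimate of the same
    # quantity from the realized assignments; mean-zero by design.
    model_obs = (t / p * preds[:, 1] - (1 - t) / (1 - p) * preds[:, 0]).mean()
    return dim + (model_all - model_obs)
```

A quick way to see the robustness claim: feed in predictions with a large constant bias and check that the estimate still lands near the true effect, because the bias appears in both `model_all` and `model_obs` and cancels.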