Efficient Randomized Experiments Using Foundation Models

📅 2025-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses key limitations of randomized experiments—high implementation cost and substantial estimation uncertainty—by proposing an estimator that integrates predictions from multiple foundation models with observed experimental data. Grounded in the double-robustness principle and multi-model ensembling, the estimator remains statistically valid even when the foundation models are arbitrarily misspecified: it is consistent and asymptotically normal, with asymptotic variance no greater than that of the estimator based on experimental data alone. Empirical validation across several real-world randomized experiments shows substantial precision gains, with the proposed method matching the precision of the standard experimental estimator while requiring up to 20% fewer samples.

📝 Abstract
Randomized experiments are the preferred approach for evaluating the effects of interventions, but they are costly and often yield estimates with substantial uncertainty. On the other hand, in silico experiments leveraging foundation models offer a cost-effective alternative that can potentially attain higher statistical precision. However, the benefits of in silico experiments come with a significant risk: statistical inferences are not valid if the models fail to accurately predict experimental responses to interventions. In this paper, we propose a novel approach that integrates the predictions from multiple foundation models with experimental data while preserving valid statistical inference. Our estimator is consistent and asymptotically normal, with asymptotic variance no larger than the standard estimator based on experimental data alone. Importantly, these statistical properties hold even when model predictions are arbitrarily biased. Empirical results across several randomized experiments show that our estimator offers substantial precision gains, equivalent to a reduction of up to 20% in the sample size needed to match the same precision as the standard estimator based on experimental data alone.
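The abstract describes an estimator that folds possibly biased foundation-model predictions into an experimental estimate while keeping inference valid, with variance no larger than the experiment-only estimator. One simple construction with this flavor—a minimal sketch for intuition, not the paper's actual estimator—uses prediction-based terms as mean-zero control variates on top of a Horvitz–Thompson estimate: because treatment is randomized independently of the covariates, the correction term has exact mean zero no matter how wrong the models are. The function name `augmented_ate`, the variance-minimizing coefficient choice, and the default `p=0.5` are all illustrative assumptions.

```python
import numpy as np

def augmented_ate(y, t, preds, p=0.5):
    """Illustrative variance-reduced ATE estimate (control-variate style).

    y     : (n,) observed outcomes from a randomized experiment
    t     : (n,) binary treatment indicators with known P(t=1) = p
    preds : (n, K) outcome predictions from K models, computed from
            pre-treatment information only (hypothetical inputs)

    Since t is independent of the covariates, each column of w * preds
    has exact mean zero, so the correction cannot bias the estimate
    even if every model is arbitrarily misspecified.
    """
    y, t = np.asarray(y, float), np.asarray(t, float)
    preds = np.atleast_2d(np.asarray(preds, float))
    if preds.shape[0] != len(y):          # accept (n,) or (K, n) inputs
        preds = preds.T
    w = t / p - (1 - t) / (1 - p)         # Horvitz-Thompson weights
    s = w * y                             # unbiased per-unit ATE scores
    c = w[:, None] * preds                # mean-zero control variates
    k = preds.shape[1]
    # variance-minimizing coefficients: lambda = Cov(c)^-1 Cov(c, s)
    cc = np.cov(c, rowvar=False).reshape(k, k)
    cs = np.array([np.cov(c[:, j], s)[0, 1] for j in range(k)])
    lam = np.linalg.solve(cc + 1e-8 * np.eye(k), cs)
    return s.mean() - c.mean(axis=0) @ lam
```

With informative predictions the correction absorbs much of the outcome variance; with uninformative predictions the fitted coefficients shrink toward zero and the estimate falls back to the plain Horvitz–Thompson average, which is one intuition for why such estimators cannot do worse asymptotically.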
Problem

Research questions and friction points this paper is trying to address.

Improving statistical precision in randomized experiments
Integrating foundation-model predictions with experimental data without invalidating inference
Reducing the required sample size while preserving statistical validity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ensembles predictions from multiple foundation models
Preserves valid statistical inference even under arbitrary model bias
Improves precision, reducing the required sample size by up to 20%
Piersilvio De Bartolomeis
ETH Zürich
Trial Augmentation, Causal Inference, Machine Learning, Reinforcement Learning

Javier Abad
ETH Zürich
Machine Learning, Causal Inference, Safety, Privacy

Guanbo Wang
CAUSALab, Harvard T.H. Chan School of Public Health; Department of Epidemiology, Harvard T.H. Chan School of Public Health

Konstantin Donhauser
ETH Zürich
High-dimensional Statistics, Statistical Machine Learning

Raymond M. Duch
Department of Politics and International Relations, University of Oxford

Fanny Yang
ETH Zürich
Machine Learning, Statistical Learning, Optimization, High-dimensional Statistics

Issa J. Dahabreh
CAUSALab, Harvard T.H. Chan School of Public Health; Department of Biostatistics, Harvard T.H. Chan School of Public Health