🤖 AI Summary
Traditional asymptotic theory (e.g., the Central Limit Theorem) often fails for complex research problems, and prediction-driven inference lacks flexible, assumption-light tools. Method: We propose PPBoot, a method that directly embeds standard bootstrap resampling into the Prediction-Powered Inference (PPI) framework, eliminating reliance on asymptotic normality assumptions. PPBoot integrates arbitrary black-box prediction models with bootstrap-based uncertainty quantification and incorporates bias correction to enable valid statistical inference for arbitrary estimators. Contribution/Results: Compared to CLT-dependent PPI(++) methods, PPBoot achieves comparable or superior performance in multivariate regression and causal effect estimation. Crucially, it remains robust in settings where the CLT breaks down, such as small samples and non-i.i.d. data, while maintaining conceptual simplicity and ease of implementation. PPBoot thus substantially broadens the applicability and reliability of prediction-powered inference.
📝 Abstract
We introduce PPBoot: a bootstrap-based method for prediction-powered inference. PPBoot is applicable to arbitrary estimation problems and is very simple to implement, essentially only requiring one application of the bootstrap. Through a series of examples, we demonstrate that PPBoot often performs nearly identically to (and sometimes better than) the earlier PPI(++) method based on asymptotic normality, when the latter is applicable, without requiring any asymptotic characterizations. Given its versatility, PPBoot could simplify and expand the scope of application of prediction-powered inference to problems where central limit theorems are hard to prove.
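To make the idea concrete, here is a minimal sketch of a prediction-powered bootstrap for the simplest estimand, a mean. The function name `ppboot_mean_ci`, the percentile-interval construction, and the specific resampling details are our illustrative assumptions, not necessarily the paper's exact algorithm: each bootstrap draw combines an estimate on model predictions over the large unlabeled sample with a bias correction from the small labeled sample, where true labels are available.

```python
import numpy as np

def ppboot_mean_ci(y_lab, yhat_lab, yhat_unlab, B=2000, alpha=0.1, seed=0):
    """Hedged sketch of a prediction-powered bootstrap CI for a mean.

    y_lab:      true labels on the small labeled sample (size n)
    yhat_lab:   model predictions on the same labeled sample
    yhat_unlab: model predictions on the large unlabeled sample (size N)
    Returns a percentile (1 - alpha) confidence interval.
    """
    rng = np.random.default_rng(seed)
    n, N = len(y_lab), len(yhat_unlab)
    stats = np.empty(B)
    for b in range(B):
        i = rng.integers(0, n, n)   # resample labeled (y, yhat) pairs jointly
        j = rng.integers(0, N, N)   # resample unlabeled predictions
        # estimate on predictions, debiased by the labeled truth-vs-prediction gap
        stats[b] = yhat_unlab[j].mean() + (y_lab[i] - yhat_lab[i]).mean()
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

Because the estimator inside the loop is arbitrary (here a mean, but it could be a regression coefficient or a causal contrast), the same single-bootstrap recipe applies without deriving any asymptotic variance formula.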