On Nonasymptotic Confidence Intervals for Treatment Effects in Randomized Experiments

πŸ“… 2026-01-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work constructs nonasymptotic confidence intervals for treatment effects in randomized experiments whose effective sample size matches that of asymptotic, central-limit-theorem-based intervals, closing a longstanding gap in finite-sample inference. By systematically exploiting negative dependence and variance adaptivity, the authors derive the first nonasymptotic intervals with the same effective sample size as their asymptotic counterparts, and show that the resulting rates attain an information-theoretic lower bound, establishing their optimality. This bridges the gap between asymptotic efficiency and finite-sample guarantees, offering a theoretically optimal and practically viable approach to rigorous uncertainty quantification in experimental settings.

πŸ“ Abstract
We study nonasymptotic (finite-sample) confidence intervals for treatment effects in randomized experiments. In the existing literature, the effective sample sizes of nonasymptotic confidence intervals tend to be looser than the corresponding central-limit-theorem-based confidence intervals by a factor depending on the square root of the propensity score. We show that this performance gap can be closed, designing nonasymptotic confidence intervals that have the same effective sample size as their asymptotic counterparts. Our approach involves systematic exploitation of negative dependence or variance adaptivity (or both). We also show that the nonasymptotic rates that we achieve are unimprovable in an information-theoretic sense.
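To make the gap described in the abstract concrete, the sketch below compares a standard CLT-based interval for the difference in means against a classical nonasymptotic baseline built from Hoeffding's inequality (valid for outcomes bounded in [0, 1], combined across arms by a union bound). This is not the paper's construction; the simulated experiment, the true effect of 0.2, and the Bernoulli(0.5) treatment assignment are all illustrative assumptions. It simply shows why classical finite-sample bounds come out wider than their asymptotic counterparts, which is the performance gap the paper closes.

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical randomized experiment: Bernoulli(p = 0.5) treatment
# assignment, outcomes bounded in [0, 1], true treatment effect 0.2.
n, p = 2000, 0.5
treat, control = [], []
for _ in range(n):
    base = random.random() * 0.6              # control potential outcome in [0, 0.6]
    if random.random() < p:
        treat.append(min(base + 0.2, 1.0))    # treated outcome, shifted by the effect
    else:
        control.append(base)

diff = statistics.mean(treat) - statistics.mean(control)
alpha = 0.05

# Asymptotic (CLT-based) 95% interval for the difference in means.
se = math.sqrt(statistics.variance(treat) / len(treat)
               + statistics.variance(control) / len(control))
clt = (diff - 1.96 * se, diff + 1.96 * se)

# Classical nonasymptotic baseline: Hoeffding's inequality per arm,
# each at level alpha/2, combined by a union bound. Note it ignores
# the (small) outcome variance, which is why it is wider.
def hoeffding_radius(m, delta):
    return math.sqrt(math.log(2.0 / delta) / (2.0 * m))

r = hoeffding_radius(len(treat), alpha / 2) + hoeffding_radius(len(control), alpha / 2)
hoeff = (diff - r, diff + r)

print(f"difference in means: {diff:.3f}")
print(f"CLT interval:        ({clt[0]:.3f}, {clt[1]:.3f})")
print(f"Hoeffding interval:  ({hoeff[0]:.3f}, {hoeff[1]:.3f})  (wider)")
```

Because Hoeffding's bound only uses the range of the outcomes, not their variance, its width scales like the worst case; variance-adaptive constructions of the kind the paper studies are one route to recovering the narrower, CLT-like width with a finite-sample guarantee.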
Problem

Research questions and friction points this paper is trying to address.

nonasymptotic confidence intervals
treatment effects
randomized experiments
effective sample size
propensity score
Innovation

Methods, ideas, or system contributions that make the work stand out.

nonasymptotic confidence intervals
negative dependence
variance adaptivity
treatment effects
randomized experiments
Ricardo J. Sandoval
University of California, Berkeley
Sivaraman Balakrishnan
Carnegie Mellon University
Avi Feller
UC Berkeley
Michael I. Jordan
Professor of Electrical Engineering and Computer Sciences and Professor of Statistics, UC Berkeley
machine learning, computer science, statistics, artificial intelligence, optimization
Ian Waudby-Smith
University of California, Berkeley