AI Summary
This work proposes a novel approach to constructing nonasymptotic confidence intervals in randomized experiments whose effective sample size matches that of asymptotic methods based on the central limit theorem, a longstanding challenge in finite-sample inference. By systematically exploiting negative dependence and variance-adaptive techniques, the authors derive the first nonasymptotic confidence intervals that achieve the same effective sample size as their asymptotic counterparts. The resulting intervals not only exhibit comparable empirical performance but also attain an information-theoretic lower bound, establishing their optimality for statistical inference with finite samples. This advancement bridges the gap between asymptotic efficiency and finite-sample guarantees, offering a theoretically optimal and practically viable approach to rigorous uncertainty quantification in experimental settings.
Abstract
We study nonasymptotic (finite-sample) confidence intervals for treatment effects in randomized experiments. In the existing literature, nonasymptotic confidence intervals tend to be looser than the corresponding central-limit-theorem-based confidence intervals, with effective sample sizes that are smaller by a factor depending on the square root of the propensity score. We show that this performance gap can be closed by designing nonasymptotic confidence intervals that have the same effective sample size as their asymptotic counterparts. Our approach involves systematic exploitation of negative dependence or variance adaptivity (or both). We also show that the nonasymptotic rates that we achieve are unimprovable in an information-theoretic sense.
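To make the gap concrete, here is an illustrative sketch (not the paper's construction) comparing the half-width of a standard CLT-based interval for a difference in means against a Hoeffding-style nonasymptotic interval, for a simulated randomized experiment with outcomes bounded in [0, 1]. All names and parameter choices below are hypothetical; the Hoeffding bound is used only as a representative classical finite-sample baseline that ignores outcome variance.

```python
# Illustrative comparison: CLT-based vs. Hoeffding-style nonasymptotic
# confidence-interval half-widths for a difference in means.
# This is NOT the paper's method; it only exhibits the variance-ignoring
# looseness of a classical finite-sample bound.
import math
import random

random.seed(0)

def clt_halfwidth(treated, control, z=1.96):
    """Asymptotic (Neyman-style) half-width: z * sqrt(s1^2/n1 + s0^2/n0)."""
    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return z * math.sqrt(sample_var(treated) / len(treated)
                         + sample_var(control) / len(control))

def hoeffding_halfwidth(n1, n0, alpha=0.05):
    """Nonasymptotic half-width from Hoeffding's inequality for the
    difference of two independent means of [0, 1]-valued outcomes:
    P(|D - E[D]| >= t) <= 2 exp(-2 t^2 / (1/n1 + 1/n0))."""
    return math.sqrt((1 / n1 + 1 / n0) * math.log(2 / alpha) / 2)

# Simulated two-arm experiment with binary outcomes (hypothetical rates).
n1 = n0 = 500
treated = [1.0 if random.random() < 0.15 else 0.0 for _ in range(n1)]
control = [1.0 if random.random() < 0.10 else 0.0 for _ in range(n0)]

clt_hw = clt_halfwidth(treated, control)
hoeffding_hw = hoeffding_halfwidth(n1, n0)
```

Because the Hoeffding interval pays for the worst-case outcome range rather than the realized (small) variance, its half-width exceeds the CLT-based one here; variance-adaptive nonasymptotic bounds (e.g., empirical-Bernstein-type) shrink this gap, which is the regime the paper's constructions address.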