🤖 AI Summary
This work proposes a novel asymptotic framework for post-hoc valid inference that overcomes the rigidity of traditional statistical testing, which requires pre-specified significance levels. By extending e-value methodology from nonasymptotic to asymptotic settings, the paper enables the construction of confidence sets and p-values at any significance level chosen after observing the data. The approach achieves sharper inference than existing nonasymptotic methods under weaker assumptions (in particular, without strong moment conditions), thereby avoiding the conservativeness inherent in nonasymptotic procedures. This advancement establishes a rigorous theoretical foundation for flexible, data-driven statistical analysis while preserving inferential validity in large-sample regimes.
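As background for why e-values license a data-dependent level (a standard fact about e-values, not a claim specific to this paper): an e-value is a nonnegative statistic $E$ with $\mathbb{E}_{H_0}[E] \le 1$, and Markov's inequality makes $p = 1/E$ valid at every level simultaneously:

$$
\Pr_{H_0}\!\left(\frac{1}{E} \le \alpha\right)
= \Pr_{H_0}\!\left(E \ge \frac{1}{\alpha}\right)
\le \alpha \,\mathbb{E}_{H_0}[E]
\le \alpha
\qquad \text{for all } \alpha \in (0,1] \text{ simultaneously.}
$$

Because the bound holds for every $\alpha$ at once, the significance level can be fixed after inspecting $p = 1/E$; the paper's contribution is to carry this kind of guarantee over to the asymptotic regime.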
📝 Abstract
We derive inferential procedures for large sample sizes that remain valid under data-dependent significance levels (so-called "post-hoc valid inference"). Classical statistical tools require that the significance level (the "type-I error") be selected prior to seeing or analyzing any data. This restriction leads to some drawbacks. For instance, if an analyst generates an inconclusive confidence interval, repeating the process with a larger significance level is not an option; the result is final. Recently, e-values have emerged as the solution to this problem, being both necessary and sufficient tools for performing various forms of post-hoc inference. All such results, however, have thus far been nonasymptotic. As a result, they inherit familiar limitations of nonasymptotic inferential procedures, such as requiring strong moment assumptions and being conservative in general. This paper develops a theory of post-hoc inference in the asymptotic setting, yielding asymptotic post-hoc confidence sets and asymptotic post-hoc p-values that make weaker assumptions and are sharper than their nonasymptotic counterparts.
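The abstract's contrast between nonasymptotic and asymptotic intervals can be made concrete with a toy comparison (a minimal sketch, not the paper's construction; function names are illustrative): a Hoeffding interval is valid at every sample size but requires bounded data and is conservative, while a CLT interval needs only a finite variance and is sharper. Note that the classical CLT interval below is *not* post-hoc valid in its level; making asymptotic intervals robust to a data-dependent choice of alpha is precisely what the paper addresses.

```python
import numpy as np
from scipy import stats

def hoeffding_ci(x, alpha):
    """Nonasymptotic CI for the mean of [0, 1]-bounded data via Hoeffding's
    inequality. Valid at every n, but needs boundedness and is conservative."""
    n = len(x)
    half_width = np.sqrt(np.log(2 / alpha) / (2 * n))
    m = np.mean(x)
    return m - half_width, m + half_width

def clt_ci(x, alpha):
    """Asymptotic CLT-based CI: only needs a finite variance and is sharper,
    but its coverage guarantee is for a level alpha fixed in advance."""
    n = len(x)
    z = stats.norm.ppf(1 - alpha / 2)
    half_width = z * np.std(x, ddof=1) / np.sqrt(n)
    m = np.mean(x)
    return m - half_width, m + half_width

rng = np.random.default_rng(0)
x = rng.beta(2, 5, size=1000)      # bounded sample; true mean = 2/7
for alpha in (0.10, 0.05, 0.01):
    print(alpha, hoeffding_ci(x, alpha), clt_ci(x, alpha))
```

Running this shows the CLT interval is markedly narrower at each level, illustrating the sharpness the abstract attributes to asymptotic procedures.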