🤖 AI Summary
This paper addresses the “F-screening” problem in linear regression, in which coefficient inference is reported only if the global F-test is significant; conditioning on F-rejection renders standard p-values and confidence intervals invalid. We propose the first data-free method for conditionally valid inference, requiring only conventional regression output (e.g., coefficients, standard errors, and the F-statistic). Our approach introduces analytically tractable conditional p-values, bias-corrected point estimates, and adjusted confidence intervals derived from the F-screening selection mechanism, unifying selective inference with conditional hypothesis testing. The method enables retrospective analysis without sample splitting, rigorously controls the selective Type I error, and achieves nominal coverage. Empirical evaluations demonstrate substantially higher statistical power than sample-splitting alternatives, and a re-analysis replicates findings from two biomedical studies.
📝 Abstract
Suppose that a data analyst wishes to report the results of a least squares linear regression only if the overall null hypothesis, $H_0^{1:p}: \beta_1 = \beta_2 = \ldots = \beta_p = 0$, is rejected. This practice, which we refer to as F-screening (since the overall null hypothesis is typically tested using an $F$-statistic), is common across a number of applied fields. Unfortunately, it poses a problem: standard guarantees for the inferential outputs of linear regression, such as Type 1 error control of hypothesis tests and nominal coverage of confidence intervals, hold unconditionally, but fail to hold conditional on rejection of the overall null hypothesis. In this paper, we develop an inferential toolbox for the coefficients in a least squares model that is valid conditional on rejection of the overall null hypothesis. We develop selective p-values that lead to tests that control the selective Type 1 error, i.e., the Type 1 error conditional on having rejected the overall null hypothesis. Furthermore, they can be computed without access to the raw data, i.e., using only the standard outputs of a least squares linear regression, and are therefore suitable for use in a retrospective analysis of a published study. We also develop confidence intervals that attain nominal selective coverage, and point estimates that account for having rejected the overall null hypothesis. We show empirically that our selective procedure is preferable to an alternative approach that relies on sample splitting, and we demonstrate its performance via re-analysis of two datasets from the biomedical literature.
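The failure mode that motivates the paper can be seen in a small simulation. The sketch below (an illustration, not the paper's method; all variable names and the simulation settings are my own choices) generates data under the global null $H_0^{1:p}$, applies F-screening at level $\alpha = 0.05$, and then measures how often the ordinary per-coefficient t-test p-values fall below $\alpha$ among the screened datasets. Unconditionally these p-values are uniform, but conditional on F-rejection the empirical rejection rate exceeds the nominal level, which is exactly the loss of selective Type 1 error control described in the abstract.

```python
# Simulation: under the global null, per-coefficient t-tests are valid
# unconditionally, but anti-conservative conditional on F-screening.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p, alpha = 50, 5, 0.05   # sample size, number of predictors, nominal level
n_sims = 2000

selected_pvals = []         # coefficient p-values from datasets that pass the F-screen
for _ in range(n_sims):
    X = rng.standard_normal((n, p))
    y = rng.standard_normal(n)            # global null: beta_1 = ... = beta_p = 0
    X1 = np.column_stack([np.ones(n), X]) # design matrix with intercept

    beta_hat, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta_hat
    rss = resid @ resid
    tss = np.sum((y - y.mean()) ** 2)
    df1, df2 = p, n - p - 1
    F = ((tss - rss) / df1) / (rss / df2)  # overall F-statistic

    # F-screening: keep this dataset only if the overall null is rejected.
    if F > stats.f.ppf(1 - alpha, df1, df2):
        sigma2 = rss / df2
        cov = sigma2 * np.linalg.inv(X1.T @ X1)
        t = beta_hat[1:] / np.sqrt(np.diag(cov)[1:])   # t-stats for the slopes
        selected_pvals.extend(2 * stats.t.sf(np.abs(t), df2))

selective_type1 = np.mean(np.array(selected_pvals) < alpha)
print(f"Selective Type 1 error at nominal {alpha}: {selective_type1:.3f}")
```

Because the same data that triggered the F-rejection are reused for the coefficient tests, the printed rate is well above 0.05, illustrating why the paper's conditionally valid p-values are needed.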