🤖 AI Summary
Existing online multiple testing methods underutilize feedback—both immediate and delayed—leading to suboptimal statistical power and conservative thresholding.
Method: We propose the Feedback-Augmented Generalized Alpha-Investing Framework (GAIF), which dynamically incorporates revealed ground-truth labels during sequential hypothesis testing to adaptively adjust significance thresholds. GAIF is the first to integrate online conformal prediction into p-value construction, yielding independent, well-calibrated p-values, and introduces a feedback-driven model selection criterion.
Contribution/Results: We prove that GAIF rigorously controls the false discovery rate (FDR) at level α even in finite samples. Extensive simulations and real-data analyses demonstrate that, while maintaining FDR ≤ α, GAIF achieves 15–40% higher statistical power than state-of-the-art online multiple testing methods.
📝 Abstract
We study online multiple testing with feedback, where decisions are made sequentially and the true state of the hypothesis is revealed after the decision has been made, either instantly or with a delay. We propose GAIF, a feedback-enhanced generalized alpha-investing framework that dynamically adjusts thresholds using revealed outcomes, ensuring finite-sample false discovery rate (FDR)/marginal FDR control. Extending GAIF to online conformal testing, we construct independent conformal $p$-values and introduce a feedback-driven model selection criterion to identify the best model/score, thereby improving statistical power. We demonstrate the effectiveness of our methods through numerical simulations and real-data applications.