🤖 AI Summary
Existing multiple testing methods often suffer from limited statistical power and imprecise false discovery rate (FDR) control because they rely on only a subset of the available data (e.g., labeled null or alternative instances). To address this, we propose a unified conformal inference framework that, for the first time, systematically integrates null samples, alternative samples, and unlabeled data to construct nonconformity scores and calibrate p-values via full permutation. Unlike conventional approaches, our framework requires no additional data splitting, automatically selects the optimal conformal method among candidates, and is both model-agnostic and computationally efficient. Experiments demonstrate that our method achieves strict FDR control while substantially improving statistical power and robustness across scenarios. It provides a scalable paradigm for uncertainty quantification in high-dimensional multiple testing, advancing beyond traditional sample-partitioning strategies.
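For reference, the two objects the summary invokes can be pinned down in symbols. The display below gives the textbook definitions of the FDR target and of a conformal p-value built from n null calibration scores; these are standard background, not the paper's specific full-permutation construction, which generalizes the latter.

```latex
% FDR of a rejection set R, with H_0 the index set of true nulls:
\mathrm{FDR} \;=\; \mathbb{E}\!\left[\frac{|\mathcal{R}\cap\mathcal{H}_0|}{\max(|\mathcal{R}|,\,1)}\right] \;\le\; \alpha,
\qquad
% conformal p-value for test point X_{n+j} under score function s:
p_j \;=\; \frac{1+\#\{\,i\le n : s(X_i)\ge s(X_{n+j})\,\}}{n+1}.
```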
📝 Abstract
Conformalized multiple testing offers a model-free way to control predictive uncertainty in decision-making. Existing methods typically use only part of the available data to build score functions tailored to specific settings. We propose a unified framework that puts data utilization at the center: it uses all available data (null, alternative, and unlabeled) to construct scores and calibrate p-values through a full permutation strategy. This unified use of all available data significantly improves power by enhancing nonconformity score quality and maximizing calibration set size while rigorously controlling the false discovery rate. Crucially, our framework provides a systematic design principle for conformal testing and enables automatic selection of the best conformal procedure among candidates without extra data splitting. Extensive numerical experiments demonstrate that our enhanced methods deliver superior efficiency and adaptability across diverse scenarios.
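As a concrete illustration of the baseline pipeline that conformalized multiple testing builds on, here is a minimal sketch: split-conformal p-values computed against held-out null scores, then Benjamini-Hochberg for FDR control. All function names and the toy Gaussian data are assumptions for illustration only; the paper's unified framework additionally exploits alternative and unlabeled samples and a full-permutation calibration, which this sketch does not implement.

```python
import numpy as np

def conformal_pvalues(calib_scores, test_scores):
    """Split-conformal p-values: rank each test score among null calibration scores.

    p_j = (1 + #{i : calib_scores[i] >= test_scores[j]}) / (n + 1),
    super-uniform under the null when test nulls are exchangeable with calibration.
    """
    calib = np.sort(np.asarray(calib_scores))
    n = calib.size
    # Number of calibration scores >= each test score.
    ge = n - np.searchsorted(calib, np.asarray(test_scores), side="left")
    return (1.0 + ge) / (n + 1.0)

def benjamini_hochberg(pvals, alpha=0.1):
    """Benjamini-Hochberg: reject the k smallest p-values, where k is the
    largest index with p_(k) <= alpha * k / m. Returns rejected indices."""
    pvals = np.asarray(pvals)
    m = pvals.size
    order = np.argsort(pvals)
    passed = np.nonzero(pvals[order] <= alpha * (np.arange(1, m + 1) / m))[0]
    return order[: passed[-1] + 1] if passed.size else np.array([], dtype=int)

# Toy example (illustrative): high scores indicate alternatives/outliers.
rng = np.random.default_rng(0)
calib = rng.normal(size=500)                          # labeled null scores
test = np.concatenate([rng.normal(size=80),           # unseen nulls
                       rng.normal(loc=3.0, size=20)]) # alternatives
rejections = benjamini_hochberg(conformal_pvalues(calib, test), alpha=0.1)
```

The (1 + ·)/(n + 1) correction makes each p-value valid in finite samples. Note that this baseline uses only the null calibration set to form p-values, which is exactly the restricted data usage the abstract argues leaves power on the table.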