AI Summary
Existing approximate co-sufficient sampling (aCSS) methods are constrained to maximum-likelihood estimation with linear or smooth regularization, lack any theoretical characterization of statistical power, and cannot accommodate nonlinear regularizers (e.g., group lasso, nonconvex penalties), robust estimation, or nonparametric models. This paper proposes a generalized aCSS framework that, for the first time, unifies existing conditional sampling methods under a weighted sampling mechanism and supports arbitrary differentiable regularizers as well as robust and nonparametric estimators. For hypothesis testing with unknown null distributions, the paper rigorously establishes the validity of the procedure and characterizes its power optimality in certain high-dimensional settings. Theoretically, this is the first power analysis for aCSS that applies to nonconvex optimization and robust estimation; empirically, the method substantially improves testing power in complex, high-dimensional models.
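To make the testing logic concrete, here is a minimal sketch of the aCSS-style Monte Carlo test: draw synthetic copies of the data that are (approximately) exchangeable with the observed sample under the null, then compare a test statistic across the copies. The sampler below is a deliberately simplified plug-in stand-in, since the actual aCSS method samples the copies conditional on a noise-perturbed estimator rather than resampling from a plug-in fit; all function names and parameters (`mc_pvalue`, `sample_copy`, `sigma`) are illustrative, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_pvalue(x, sample_copy, test_stat, n_copies=200):
    """Exchangeability-based Monte Carlo p-value.

    If the copies returned by `sample_copy` are exchangeable with the
    observed data `x` under the null, this p-value is exactly valid;
    aCSS-type methods make it approximately so when the null parameter
    is unknown and must be estimated.
    """
    t_obs = test_stat(x)
    t_copies = np.array([test_stat(sample_copy(x)) for _ in range(n_copies)])
    return (1 + np.sum(t_copies >= t_obs)) / (n_copies + 1)

# Toy stand-in sampler for i.i.d. N(theta, 1) data: perturb the MLE of
# theta with Gaussian noise (echoing aCSS's noise-perturbed estimator),
# then resample. NOTE: real aCSS draws copies from the conditional law
# of the data *given* the perturbed estimator; this plug-in version is
# only a simplified illustration.
def sample_copy(x, sigma=0.5):
    theta_hat = x.mean() + sigma * rng.standard_normal() / np.sqrt(len(x))
    return theta_hat + rng.standard_normal(len(x))

x = rng.standard_normal(100)  # data generated under the null N(0, 1)
skew = lambda z: np.mean(((z - z.mean()) / z.std()) ** 3)  # test statistic
print(mc_pvalue(x, sample_copy, test_stat=skew))  # ~Uniform under the null
```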
Abstract
Approximate co-sufficient sampling (aCSS) offers a principled route to hypothesis testing when null distributions are unknown, yet current implementations are confined to maximum-likelihood estimators with smooth or linear regularization and provide little theoretical insight into power. We present a generalized aCSS framework that widens the method's scope to nonlinear regularization, such as the group lasso and nonconvex penalties, as well as robust and nonparametric estimators, and we introduce a weighted sampling scheme that adds flexibility and unifies existing conditional sampling methods. Our theoretical analysis rigorously establishes validity and, for the first time, characterizes the power optimality of aCSS procedures in certain high-dimensional settings.
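As one concrete instance of the nonlinear regularization the framework now covers, the sketch below fits a noise-perturbed group-lasso estimator by proximal gradient descent. The Gaussian perturbation term mirrors the noise-augmented estimators used in aCSS-type constructions, but the exact objective, the scaling conventions for `sigma` and `lam`, and the solver are assumptions for illustration only, not the paper's specification.

```python
import numpy as np

def group_soft_threshold(v, t):
    """Proximal operator of t * ||v||_2 (block soft-thresholding)."""
    norm = np.linalg.norm(v)
    return np.zeros_like(v) if norm <= t else (1 - t / norm) * v

def perturbed_group_lasso(X, y, groups, lam, sigma, n_iter=500, seed=0):
    """Noise-perturbed group-lasso estimator (illustrative sketch).

    Approximately solves
        min_theta  0.5 * ||y - X theta||^2 / n
                   + sigma * w^T theta / n        (Gaussian perturbation w)
                   + lam * sum_g ||theta_g||_2    (group-lasso penalty)
    by proximal gradient descent. The perturbation scaling is an
    assumption; conventions differ across aCSS-style constructions.
    """
    n, p = X.shape
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(p)
    step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the loss
    theta = np.zeros(p)
    for _ in range(n_iter):
        grad = -X.T @ (y - X @ theta) / n + sigma * w / n
        z = theta - step * grad
        for g in groups:                   # groupwise proximal step
            z[g] = group_soft_threshold(z[g], step * lam)
        theta = z
    return theta

# Toy usage: two groups of three coefficients, only the first group active.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 6))
y = X @ np.array([1.0, -1.0, 0.5, 0.0, 0.0, 0.0]) + rng.standard_normal(200)
groups = [np.arange(0, 3), np.arange(3, 6)]
print(perturbed_group_lasso(X, y, groups, lam=0.1, sigma=0.5))
```

The group penalty is nonlinear in theta (a sum of Euclidean norms over blocks), which is exactly the kind of regularizer the original aCSS theory, built around linear or smooth penalties, could not handle.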