🤖 AI Summary
This work addresses the challenge of high-dimensional variable selection under stringent multiple-error control—such as k-FWER and false discovery proportion (FDP)—while leveraging group structure among variables, a feature inadequately exploited by existing methods. The authors propose Group Stepdown SLOPE, which integrates the Lehmann–Romano stepdown procedure into the SLOPE framework, offering finite-sample control of k-FWER and FDP, together with their group-level analogues gk-FWER and gFDP, under orthogonal designs, and a calibrated extension to non-orthogonal designs. The method combines a closed-form regularization sequence, Gaussian approximation, Monte Carlo calibration, and convex optimization, ensuring strong theoretical guarantees alongside computational scalability. Simulation studies demonstrate that Group Stepdown SLOPE achieves substantially higher statistical power than current stepdown approaches while maintaining nominal error rates.
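To make the stepdown ingredient concrete, here is a minimal sketch of the classical Lehmann–Romano stepdown procedure for k-FWER control over a set of p-values. The function name `lr_stepdown` and the implementation details are illustrative, not the authors' code; the sketch uses the standard Lehmann–Romano critical values α_i = kα/(m + k − i) for i ≥ k (and kα/m below), which is the rule the paper embeds into SLOPE.

```python
import numpy as np

def lr_stepdown(pvals, k=1, alpha=0.05):
    """Lehmann-Romano stepdown for k-FWER control (illustrative sketch).

    Critical values (1-based index i over m hypotheses):
        alpha_i = k * alpha / (m + k - i)   for i >= k,
        alpha_i = k * alpha / m             for i <  k.
    Returns a boolean rejection mask aligned with the input p-values.
    """
    pvals = np.asarray(pvals, dtype=float)
    m = len(pvals)
    order = np.argsort(pvals)
    sorted_p = pvals[order]
    i = np.arange(1, m + 1)
    crit = k * alpha / (m + k - np.maximum(i, k))
    # Stepdown: reject the r smallest p-values, where r is the largest
    # index such that sorted_p[j] <= crit[j] for all j <= r.
    below = sorted_p <= crit
    r = m if below.all() else int(np.argmin(below))
    reject = np.zeros(m, dtype=bool)
    reject[order[:r]] = True
    return reject
```

For example, with k = 2 and m = 4 the first two critical values are both 2α/4, so two small p-values can be rejected jointly even when neither would survive a Bonferroni-type bound alone.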
📝 Abstract
High-dimensional feature selection is routinely required to balance statistical power with strict control of multiple-error metrics such as the k-Family-Wise Error Rate (k-FWER) and the False Discovery Proportion (FDP), yet some existing frameworks, such as Sorted L-One Penalized Estimation (SLOPE), are confined to the narrower goal of controlling the expected False Discovery Rate (FDR) and cannot exploit the group structure of the covariates. We introduce Group Stepdown SLOPE, a unified optimization procedure that embeds the Lehmann–Romano stepdown rules into SLOPE to achieve finite-sample guarantees under k-FWER and FDP thresholds. Specifically, we derive closed-form regularization sequences under orthogonal designs that provably bound k-FWER and FDP at user-specified levels, and extend these results to grouped settings via gk-SLOPE and gF-SLOPE, which control the analogous group-level errors gk-FWER and gFDP. For non-orthogonal general designs, we provide a calibrated data-driven sequence inspired by Gaussian approximation and Monte Carlo correction, preserving convexity and scalability. Extensive simulations are conducted across sparse, correlated, and group-structured regimes. Empirical results corroborate our theoretical findings that the proposed methods achieve nominal error control while yielding markedly higher power than competing stepdown procedures, thereby confirming the practical value of the theoretical advances.
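For readers unfamiliar with SLOPE itself, the sketch below evaluates its sorted-L1 penalty and builds the BH-style decreasing regularization sequence λ_i = Φ⁻¹(1 − iq/(2p)) from the original SLOPE literature. The function names `bh_lambda` and `sorted_l1` are illustrative; the paper's own closed-form sequences for k-FWER/FDP control are analogous decreasing sequences but are not reproduced here.

```python
from statistics import NormalDist
import numpy as np

def bh_lambda(p, q=0.1):
    """BH-style SLOPE regularization sequence (illustrative):
    lambda_i = Phi^{-1}(1 - i*q/(2p)), i = 1..p, which is nonincreasing."""
    nd = NormalDist()
    return np.array([nd.inv_cdf(1 - i * q / (2 * p)) for i in range(1, p + 1)])

def sorted_l1(beta, lam):
    """Sorted-L1 (SLOPE) penalty: sum_i lam_i * |beta|_(i),
    where |beta|_(1) >= ... >= |beta|_(p) are the magnitudes in
    decreasing order, so the largest coefficient pays the largest lambda."""
    mags = np.sort(np.abs(np.asarray(beta, dtype=float)))[::-1]
    return float(np.sum(np.asarray(lam, dtype=float) * mags))
```

Because the penalty matches larger λ's to larger coefficients, SLOPE adapts the effective threshold to the number of apparent discoveries, which is what makes stepdown-style error control possible inside a single convex program.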