Beyond False Discovery Rate: A Stepdown Group SLOPE Approach for Grouped Variable Selection

📅 2026-03-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of high-dimensional variable selection under stringent multiple error control—such as k-FWER and false discovery proportion (FDP)—while leveraging group structure among variables, a feature inadequately exploited by existing methods. The authors propose Group Stepdown SLOPE, which uniquely integrates the Lehmann–Romano stepdown procedure into the SLOPE framework, offering finite-sample control of both generalized k-FWER (gk-FWER) and generalized FDP (gFDP) under both orthogonal and non-orthogonal designs. The method combines a closed-form regularization sequence, Gaussian approximation, Monte Carlo calibration, and convex optimization, ensuring strong theoretical guarantees alongside computational scalability. Simulation studies demonstrate that Group Stepdown SLOPE achieves substantially higher statistical power than current stepdown approaches while maintaining nominal error rates.
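The paper's closed-form regularization sequence is not reproduced on this page, but its ingredients are standard. As an illustrative sketch only (not the authors' exact construction), the classical Lehmann–Romano stepdown levels for k-FWER control over m hypotheses can be mapped through two-sided Gaussian quantiles into a nonincreasing, SLOPE-style penalty sequence; the function names `lr_alphas` and `slope_lambdas` are hypothetical.

```python
from statistics import NormalDist

def lr_alphas(m: int, k: int, alpha: float) -> list[float]:
    """Lehmann-Romano stepdown levels for k-FWER control at level alpha:
    alpha_i = k*alpha / (m + k - i), i = 1..m (nondecreasing in i;
    k = 1 recovers the Holm constants alpha / (m + 1 - i))."""
    return [k * alpha / (m + k - i) for i in range(1, m + 1)]

def slope_lambdas(m: int, k: int, alpha: float) -> list[float]:
    """Map each alpha_i to a two-sided standard-Gaussian quantile,
    yielding a nonincreasing penalty sequence of the kind SLOPE-type
    estimators consume (illustrative, not the paper's sequence)."""
    phi_inv = NormalDist().inv_cdf
    return [phi_inv(1 - a / 2) for a in lr_alphas(m, k, alpha)]
```

Because the alpha_i are nondecreasing in i, the resulting lambda_i are automatically nonincreasing, which is exactly the monotonicity the sorted-L1 penalty requires.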

📝 Abstract
High-dimensional feature selection routinely requires balancing statistical power with strict control of multiple-error metrics such as the k-Family-Wise Error Rate (k-FWER) and the False Discovery Proportion (FDP), yet existing frameworks such as Sorted L-One Penalized Estimation (SLOPE) are confined to the narrower goal of controlling the expected False Discovery Rate (FDR) and cannot exploit the group structure of the covariates. We introduce Group Stepdown SLOPE, a unified optimization procedure that embeds the Lehmann-Romano stepdown rules into SLOPE to achieve finite-sample guarantees under k-FWER and FDP thresholds. Specifically, we derive closed-form regularization sequences under orthogonal designs that provably bound k-FWER and FDP at user-specified levels, and extend these results to grouped settings via gk-SLOPE and gF-SLOPE, which control the analogous group-level errors gk-FWER and gFDP. For non-orthogonal general designs, we provide a calibrated data-driven sequence based on Gaussian approximation and Monte Carlo correction, preserving convexity and scalability. Extensive simulations across sparse, correlated, and group-structured regimes corroborate the theory: the proposed methods achieve nominal error control while yielding markedly higher power than competing stepdown procedures, confirming the practical value of the theoretical advances.
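The abstract's claim that convexity and scalability are preserved rests on a standard fact about SLOPE-type estimators: the sorted-L1 penalty admits an exact proximal operator computable in near-linear time via pool-adjacent-violators. A minimal NumPy sketch of that standard prox (an illustration of the general technique, not code from the paper):

```python
import numpy as np

def prox_sorted_l1(v: np.ndarray, lam: np.ndarray) -> np.ndarray:
    """Prox of the sorted-L1 (SLOPE) penalty:
    argmin_x 0.5 * ||x - v||^2 + sum_i lam_i * |x|_(i),
    where lam is nonnegative and nonincreasing and |x|_(i) are the
    magnitudes of x sorted in decreasing order."""
    sign = np.sign(v)
    u = np.abs(v)
    order = np.argsort(-u)            # indices sorting |v| decreasingly
    z = u[order] - lam                # shifted values to be projected
    # Pool-adjacent-violators: project z onto nonincreasing sequences,
    # keeping (sum, count) blocks and merging while averages increase.
    sums, counts = [], []
    for zi in z:
        s, c = zi, 1.0
        while sums and sums[-1] / counts[-1] <= s / c:
            s += sums.pop()
            c += counts.pop()
        sums.append(s)
        counts.append(c)
    x_sorted = np.concatenate(
        [np.full(int(c), max(s / c, 0.0)) for s, c in zip(sums, counts)]
    )
    x = np.empty_like(x_sorted)
    x[order] = x_sorted               # undo the sort
    return sign * x
```

With a constant lam the operator reduces to ordinary soft-thresholding, and with a strictly decreasing lam it produces the coefficient clustering SLOPE is known for; group variants apply an analogous operator to group norms.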
Problem

Research questions and friction points this paper is trying to address.

grouped variable selection
k-FWER
FDP
false discovery rate
high-dimensional feature selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Group Stepdown SLOPE
k-FWER
FDP
grouped variable selection
convex optimization
Xuelin Zhang
College of Informatics, Huazhong Agricultural University, Wuhan, 430070, Hubei Province, China
Jingxuan Liang
College of Informatics, Huazhong Agricultural University, Wuhan, 430070, Hubei Province, China
Xinyue Liu
Amazon
Data Mining · Machine Learning
Hong Chen
Department of Mathematics and Statistics, Huazhong Agricultural University
Learning Theory · Machine Learning
Biqin Song
College of Informatics, Huazhong Agricultural University, Wuhan, 430070, Hubei Province, China