🤖 AI Summary
This study addresses a persistent difficulty in variable selection: when predictors are highly correlated, conventional methods struggle to control the false discovery rate (FDR) and often erroneously discard entire groups of related variables, thereby impairing predictive performance. To overcome this, the authors propose a hierarchical ensemble variable selection framework: variables are first grouped via hierarchical clustering, and each group is then tested for the presence of any non-zero effect, allowing any member to serve as a proxy. This approach integrates ensemble selection and hierarchical clustering into an FDR-controlling framework, whereas prior methods of this kind were limited to familywise error rate (FWER) control, and it employs a generalized Benjamini–Hochberg/Yekutieli step-up procedure to account for logical dependencies among the composite set-level hypotheses. Simulations and empirical analyses demonstrate that the method maintains FDR control at the nominal level while substantially improving statistical power, yielding richer and more predictive variable selections.
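For orientation, the classical Benjamini–Hochberg linear step-up procedure that the paper generalizes can be sketched as follows. This is only the standard version for simple hypotheses; the paper's contribution is extending it to composite, logically dependent set-level hypotheses, which this sketch does not attempt.

```python
import numpy as np

def bh_stepup(pvals, alpha=0.05):
    """Classical Benjamini-Hochberg linear step-up procedure.

    Returns a boolean mask of rejected hypotheses. The paper's method
    generalizes this step-up idea to composite set-level hypotheses with
    logical dependencies; this is only the textbook single-variable form.
    """
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)                            # sort p-values ascending
    thresholds = alpha * np.arange(1, m + 1) / m     # stepwise thresholds alpha*k/m
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True                         # reject the k smallest p-values
    return reject

# Example: a mix of small (likely signal) and larger (likely null) p-values
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.7]
print(bh_stepup(pvals, alpha=0.05))
```

Note the step-up character: the procedure finds the largest k with p_(k) <= alpha*k/m and rejects all k smallest p-values, even those that individually exceed their own threshold.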
📝 Abstract
Controlling the false discovery rate (FDR) in variable selection becomes challenging when predictors are correlated, as existing methods often exclude all members of correlated groups and consequently perform poorly for prediction. We introduce a new setwise variable-selection framework that identifies clusters of potential predictors rather than forcing selection of a single variable. By allowing any member of a selected set to serve as a surrogate predictor, our approach supports strong predictive performance while maintaining rigorous FDR control. We construct sets via hierarchical clustering of predictors based on correlation, then test whether each set contains any non-null effects. Similar clustering and setwise selection have been applied in the familywise error rate (FWER) control regime, but previous research has been unable to overcome the inherent challenges of extending this to the FDR control framework. To control the FDR, we develop substantial generalizations of linear step-up procedures, extending the Benjamini-Hochberg and Benjamini-Yekutieli methods to accommodate the logical dependencies among these composite hypotheses. We prove that these procedures control the FDR at the nominal level and highlight their broader applicability. Simulation studies and real-data analyses show that our methods achieve higher power than existing approaches while preserving FDR control, yielding more informative variable selections and improved predictive models.
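The clustering-then-testing pipeline described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration only: the distance metric (1 minus absolute correlation), the cutoff `t=0.5`, and the set-level test (Bonferroni-adjusted minimum of marginal correlation p-values) are simple stand-ins, not the authors' actual procedures.

```python
import numpy as np
from scipy import stats
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)

# Simulate correlated predictors: 3 latent factors, each observed 3 times
n = 200
latent = rng.normal(size=(n, 3))
X = np.repeat(latent, 3, axis=1) + 0.3 * rng.normal(size=(n, 9))
y = latent[:, 0] + rng.normal(size=n)        # only the first factor affects y

# Step 1: hierarchical clustering with 1 - |correlation| as the distance
dist = 1.0 - np.abs(np.corrcoef(X, rowvar=False))
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
groups = fcluster(Z, t=0.5, criterion="distance")   # cut dendrogram at 0.5

# Step 2: one p-value per set, testing "any non-null member?" via a
# Bonferroni-adjusted minimum of marginal tests (a stand-in composite test)
set_pvals = {}
for g in np.unique(groups):
    idx = np.where(groups == g)[0]
    ps = [stats.pearsonr(X[:, j], y).pvalue for j in idx]
    set_pvals[g] = min(1.0, len(ps) * min(ps))

print(set_pvals)
```

In this toy setup the cluster containing the first latent factor's proxies gets a tiny set-level p-value while the other clusters do not; feeding these set-level p-values into a step-up procedure, and then letting any member of a selected set act as the surrogate predictor, mirrors the workflow the abstract describes.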