🤖 AI Summary
Stability selection relies on a manually specified stability threshold, making variable selection results highly sensitive to that choice and leaving false discovery rates (FDR) uncontrolled. To address this, we propose the Exclusion Automatic Threshold Selection (EATS) algorithm, a fully data-adaptive method that determines the stability threshold automatically, without prior assumptions or cross-validation. Grounded in the theoretically motivated Automatic Threshold Selection (ATS) principle, EATS aims for statistical robustness and interpretability. The method integrates resampling-based statistics, selection-probability modeling, and an exclusion-driven threshold search strategy. Extensive simulations across multiple base algorithms and diverse scenarios show that EATS improves selection consistency and achieves stronger FDR control than fixed-threshold alternatives. Moreover, EATS is plug-and-play, requires no user tuning, and is readily applicable to existing stability selection frameworks.
📝 Abstract
Stability selection has gained popularity as a method for enhancing the performance of variable selection algorithms while controlling false discovery rates. However, achieving these desirable properties depends on correctly specifying the stability threshold parameter, which can be challenging. An arbitrary choice of this parameter can substantially alter the set of selected variables, since the variables' selection probabilities are inherently data-dependent. To address this issue, we propose Exclusion Automatic Threshold Selection (EATS), a data-adaptive algorithm that streamlines stability selection by automating the threshold specification process. Additionally, we introduce Automatic Threshold Selection (ATS), the motivation behind EATS. We evaluate our approach through an extensive simulation study, benchmarking against commonly used variable selection algorithms and several fixed stability threshold values.
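The pipeline the abstract describes, resampling, selection-probability estimation, and thresholding, can be illustrated with a minimal generic sketch. Everything below is illustrative: the synthetic data, the correlation-based base selector (a stand-in for the lasso or any other variable selection algorithm), and the fixed threshold `pi_thr` are assumptions for demonstration, not the paper's EATS/ATS procedure, which would choose the threshold data-adaptively.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 3 informative features out of 20 (illustrative only).
n, p = 200, 20
X = rng.standard_normal((n, p))
y = X[:, 0] + X[:, 1] + X[:, 2] + 0.5 * rng.standard_normal(n)

def base_selector(X_sub, y_sub, k=5):
    """Pick the k features most correlated with the response.

    A deliberately simple stand-in for the base variable selection
    algorithm (e.g. the lasso) used inside stability selection.
    """
    scores = np.abs(X_sub.T @ (y_sub - y_sub.mean())) / len(y_sub)
    return np.argsort(scores)[-k:]

# Stability selection: estimate each feature's selection probability
# over B random subsamples of half the data.
B = 100
counts = np.zeros(p)
for _ in range(B):
    idx = rng.choice(n, size=n // 2, replace=False)
    counts[base_selector(X[idx], y[idx])] += 1
sel_prob = counts / B

# A fixed stability threshold; EATS would determine this value
# automatically from the data instead of requiring it up front.
pi_thr = 0.6
stable_set = np.where(sel_prob >= pi_thr)[0]
print(sorted(stable_set.tolist()))
```

With a strong signal in the first three features, their selection frequencies sit near 1, while noise features are picked only occasionally and fall well below the threshold; the sensitivity the abstract highlights appears when signal strengths are moderate and the stable set changes as `pi_thr` is varied.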