🤖 AI Summary
Hyperparameter tuning for Group SLOPE and Sparse Group SLOPE is computationally expensive due to the need to traverse the full regularization path. Method: This paper introduces, for the first time, theoretically guaranteed strong screening rules for both models—extended to the broader class of generalized group OWL penalties (e.g., OSCAR). The rules leverage KKT condition analysis, duality gap estimation, and group-structured optimization to safely discard irrelevant variables prior to model fitting, drastically reducing input dimensionality. Contribution/Results: Experiments on synthetic and real genomic datasets demonstrate that the proposed rules accelerate training by several-fold on high-dimensional genetic data (e.g., p ≈ 10⁴–10⁵) with zero false negatives. This enables, for the first time, efficient and scalable application of Group SLOPE in ultra-high-dimensional settings—previously infeasible due to prohibitive computational cost.
📝 Abstract
Tuning the regularization parameter in penalized regression models is an expensive task, requiring multiple models to be fit along a path of parameters. Strong screening rules drastically reduce computational costs by lowering the dimensionality of the input prior to fitting. We develop strong screening rules for group-based Sorted L-One Penalized Estimation (SLOPE) models: Group SLOPE and Sparse-group SLOPE. The developed rules are applicable to the wider family of group-based OWL models, including OSCAR. Our experiments on both synthetic and real data show that the screening rules significantly accelerate the fitting process. The screening rules make it accessible for group SLOPE and sparse-group SLOPE to be applied to high-dimensional datasets, particularly those encountered in genetics.