Sharp Trade-Offs in High-Dimensional Inference via 2-Level SLOPE

📅 2025-07-11
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
In high-dimensional linear regression, Sorted L-One Penalized Estimation (SLOPE) improves upon the LASSO but requires a complex penalty sequence that hinders hyperparameter tuning. Method: We focus on 2-level SLOPE, a subclass with only three hyperparameters, which simplifies the penalty sequence while retaining SLOPE's adaptive ℓ₁ regularization: larger coefficients receive stronger penalties, better balancing the true positive proportion (TPP) and false discovery proportion (FDP). Contribution/Results: For the first time, we provide a sharp, theoretically tight characterization of the TPP–FDP trade-off curve in the high-dimensional asymptotic regime, surpassing prior analyses of general SLOPE that yield only loose bounds. Through analytical derivation and numerical validation, 2-level SLOPE demonstrates significant improvements over both the LASSO and general SLOPE in challenging settings, including strongly correlated predictors, high noise, and non-sparse signals, while offering theoretical rigor, easy hyperparameter selection, and practical scalability.
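For context, here is a minimal sketch of the objective in standard SLOPE notation; the two-value parametrization below matches the abstract's description of three hyperparameters, but the symbol names $\lambda_{\mathrm{high}}, \lambda_{\mathrm{low}}, s$ are ours, not the paper's.

$$\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^p} \; \tfrac{1}{2}\|y - X\beta\|_2^2 + \sum_{i=1}^{p} \lambda_i |\beta|_{(i)}, \qquad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0,$$

where $|\beta|_{(1)} \ge \cdots \ge |\beta|_{(p)}$ are the coefficient magnitudes in decreasing order. In 2-level SLOPE the sequence takes only two values, $\lambda_i = \lambda_{\mathrm{high}}$ for $i \le \lfloor sp \rfloor$ and $\lambda_i = \lambda_{\mathrm{low}}$ otherwise, so tuning reduces to the three scalars $(\lambda_{\mathrm{high}}, \lambda_{\mathrm{low}}, s)$ with $\lambda_{\mathrm{high}} \ge \lambda_{\mathrm{low}} \ge 0$ and $s \in (0, 1)$.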

📝 Abstract
Among techniques for high-dimensional linear regression, Sorted L-One Penalized Estimation (SLOPE) generalizes the LASSO via an adaptive $\ell_1$ regularization that applies heavier penalties to larger coefficients in the model. To achieve such adaptivity, SLOPE requires the specification of a complex hierarchy of penalties, i.e., a monotone penalty sequence in $\mathbb{R}^p$, in contrast to a single penalty scalar for LASSO. Tuning this sequence when $p$ is large poses a challenge, as brute force search over a grid of values is computationally prohibitive. In this work, we study the 2-level SLOPE, an important subclass of SLOPE, with only three hyperparameters. We demonstrate both empirically and analytically that 2-level SLOPE not only preserves the advantages of general SLOPE -- such as improved mean squared error and overcoming the Donoho-Tanner power limit -- but also exhibits computational benefits by reducing the penalty hyperparameter space. In particular, we prove that 2-level SLOPE admits a sharp, theoretically tight characterization of the trade-off between true positive proportion (TPP) and false discovery proportion (FDP), contrasting with general SLOPE where only upper and lower bounds are known. Empirical evaluations further underscore the effectiveness of 2-level SLOPE in settings where predictors exhibit high correlation, when the noise is large, or when the underlying signal is not sparse. Our results suggest that 2-level SLOPE offers a robust, scalable alternative to both LASSO and general SLOPE, making it particularly suited for practical high-dimensional data analysis.
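As an illustration of how such an estimator can be computed, here is a minimal proximal-gradient sketch, not the paper's implementation: the two-value penalty construction follows the abstract's description, the prox step uses the standard reduction of the sorted-ℓ₁ prox to a decreasing isotonic regression (Bogdan et al., 2015), and all function names and parameter values are ours.

```python
import numpy as np
from sklearn.isotonic import isotonic_regression

def prox_sorted_l1(v, lam):
    """Prox of the sorted-l1 norm: argmin_x 0.5*||x - v||^2 + sum_i lam_i * |x|_(i),
    where lam is nonincreasing and nonnegative."""
    sign, mag = np.sign(v), np.abs(v)
    order = np.argsort(mag)[::-1]                      # indices sorting |v| decreasingly
    # On sorted magnitudes the prox is the decreasing isotonic regression of
    # (|v|_sorted - lam), thresholded at zero.
    x_sorted = np.maximum(isotonic_regression(mag[order] - lam, increasing=False), 0.0)
    x = np.empty_like(v)
    x[order] = x_sorted                                # undo the sort
    return sign * x

def two_level_lambda(p, lam_high, lam_low, s):
    """2-level penalty sequence: lam_high on the top floor(s*p) slots, lam_low below."""
    k = int(np.floor(s * p))
    return np.concatenate([np.full(k, lam_high), np.full(p - k, lam_low)])

def slope_ista(X, y, lam, n_iter=500):
    """Plain proximal gradient (ISTA) on 0.5*||y - X beta||^2 + sorted-l1 penalty."""
    L = np.linalg.norm(X, 2) ** 2                      # Lipschitz constant of the gradient
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        beta = prox_sorted_l1(beta - grad / L, lam / L)
    return beta

# Toy usage: sparse signal, Gaussian design, three-scalar tuning knob.
rng = np.random.default_rng(0)
n, p = 200, 500
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta_true = np.zeros(p)
beta_true[:20] = 3.0
y = X @ beta_true + 0.5 * rng.standard_normal(n)
lam = two_level_lambda(p, lam_high=2.0, lam_low=0.5, s=0.1)
beta_hat = slope_ista(X, y, lam)
```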
Problem

Research questions and friction points this paper is trying to address.

Reducing hyperparameter tuning complexity in high-dimensional regression
Balancing the true positive proportion (TPP) and false discovery proportion (FDP) effectively (definitions sketched after this list)
Improving computational efficiency while preserving SLOPE advantages
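For reference, the two quantities being traded off, in their standard definitions (with $\beta$ the true coefficient vector and $\hat{\beta}$ the estimate):

$$\mathrm{TPP} = \frac{\#\{j : \hat{\beta}_j \neq 0,\ \beta_j \neq 0\}}{\#\{j : \beta_j \neq 0\}}, \qquad \mathrm{FDP} = \frac{\#\{j : \hat{\beta}_j \neq 0,\ \beta_j = 0\}}{\max\{\#\{j : \hat{\beta}_j \neq 0\},\ 1\}}.$$

A sharp trade-off characterization gives, for each achievable TPP level, the exact infimum of FDP in the high-dimensional asymptotic limit, rather than upper and lower bounds that may not meet.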
Innovation

Methods, ideas, or system contributions that make the work stand out.

2-level SLOPE reduces the penalty hyperparameter space from a monotone sequence in $\mathbb{R}^p$ to three scalars (see the grid-search sketch below)
2-level SLOPE preserves SLOPE's adaptive regularization benefits
2-level SLOPE admits a sharp TPP–FDP trade-off characterization
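To make the reduced hyperparameter space concrete, here is a hypothetical tuning loop: a three-scalar grid is enumerable, whereas a grid over a full monotone sequence in $\mathbb{R}^p$ is not. The function name and grid values are illustrative, not from the paper.

```python
import itertools
import numpy as np

def two_level_grid(lam_values, s_values):
    """Enumerate candidate (lam_high, lam_low, s) triples for 2-level SLOPE.
    Only ordered pairs with lam_high >= lam_low yield a valid monotone sequence."""
    for hi, lo in itertools.product(lam_values, repeat=2):
        if hi >= lo:
            for s in s_values:
                yield hi, lo, s

# 10 penalty levels and 5 split fractions give 55 ordered pairs * 5 = 275 candidates,
# each cheap to score by cross-validation; a comparable grid over a full
# nonincreasing sequence in R^p is computationally prohibitive for large p.
candidates = list(two_level_grid(np.linspace(0.1, 3.0, 10), [0.05, 0.1, 0.2, 0.3, 0.5]))
print(len(candidates))  # 275
```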