A Stable Lasso

📅 2025-11-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Lasso suffers from unstable variable selection in high-dimensional settings due to collinearity among predictors. To address this, we propose a correlation-aware weighted Lasso: predictors are first ranked based on their marginal predictive power and pairwise correlations; then, adaptive weights—monotonically increasing with ranking—are incorporated into the Lasso penalty. This approach requires no additional modeling assumptions, substantially improves selection stability, and extends naturally to other regularized estimators (e.g., elastic net, SCAD). Extensive simulations and analyses of multiple real-world datasets demonstrate consistent gains in variable selection accuracy across diverse correlation structures and signal-to-noise ratios, with negligible computational overhead. Our key contribution is the systematic integration of a correlation-informed ranking-and-weighting mechanism into the regularization framework—yielding a general, interpretable, and stable paradigm for high-dimensional variable selection.
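The mechanism described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it uses plain marginal correlation with the response as a stand-in for the paper's correlation-adjusted ranking, and a square-root weight function as one arbitrary choice of monotonically increasing weights. It relies on the standard identity that a Lasso with per-feature penalty weights w_j is equivalent to an ordinary Lasso on the rescaled design X_j / w_j.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]          # three true signals
y = X @ beta + rng.standard_normal(n)

# Rank predictors by |marginal correlation with y| -- a simple stand-in
# for the paper's correlation-adjusted predictive-power ranking.
marginal = np.abs(np.corrcoef(X.T, y)[:-1, -1])
ranks = np.argsort(np.argsort(-marginal)) + 1  # rank 1 = most predictive

# Weights increase with rank, so weakly ranked predictors are penalized more.
w = np.sqrt(ranks)

# Weighted Lasso  min ||y - Xb||^2 + lam * sum_j w_j |b_j|
# is equivalent to a standard Lasso on the column-rescaled design X_j / w_j,
# with coefficients mapped back by dividing by w.
lasso = Lasso(alpha=0.1).fit(X / w, y)
beta_hat = lasso.coef_ / w
```

Because the weighting enters only through a rescaling of the design matrix, the same trick extends directly to other penalized estimators (elastic net, SCAD solvers) at essentially no extra computational cost, consistent with the summary's claim of negligible overhead.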

📝 Abstract
The Lasso has been widely used as a method for variable selection, valued for its simplicity and empirical performance. However, Lasso's selection stability deteriorates in the presence of correlated predictors. Several approaches have been developed to mitigate this limitation. In this paper, we provide a brief review of existing approaches, highlighting their limitations. We then propose a simple technique to improve the selection stability of Lasso by integrating a weighting scheme into the Lasso penalty function, where the weights are defined as an increasing function of a correlation-adjusted ranking that reflects the predictive power of predictors. Empirical evaluations on both simulated and real-world datasets demonstrate the efficacy of the proposed method. Additional numerical results demonstrate the effectiveness of the proposed approach in stabilizing other regularization-based selection methods, indicating its potential as a general-purpose solution.
Problem

Research questions and friction points this paper is trying to address.

The Lasso suffers from unstable variable selection with correlated predictors
Existing approaches have limitations in improving selection stability
Lack of a simple, general mechanism for stabilizing selection across regularized methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates weighting scheme into Lasso penalty function
Uses correlation-adjusted ranking for predictor weights
Stabilizes regularization-based selection methods generally