Exact Coordinate Descent for High-Dimensional Regularized Huber Regression

📅 2025-10-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
High-dimensional sparse Huber regression suffers from slow and unstable optimization under heavy-tailed covariates and highly correlated designs, leading to ill-conditioned Hessian matrices. Method: We propose the first first-order exact coordinate descent algorithm specifically tailored for penalized Huber loss. It integrates marginal incremental analysis, monotonicity-based derivative screening, and a variable selection rule leveraging partial residual grids—collectively accelerating convergence and enhancing robustness in ill-conditioned settings. Contribution/Results: Theoretically, we establish the first non-asymptotic linear convergence rate guarantee for this problem. Empirically, our algorithm significantly outperforms state-of-the-art methods in both computational efficiency and estimation accuracy under heavy-tailed distributions and strongly correlated designs. It provides a scalable, numerically stable, and theoretically grounded new paradigm for high-dimensional robust regression.

Technology Category

Application Category

📝 Abstract
We develop an exact coordinate descent algorithm for high-dimensional regularized Huber regression. In contrast to composite gradient descent methods, our algorithm fully exploits the advantages of coordinate descent when the underlying model is sparse. Moreover, unlike existing second-order approximation methods previously introduced in the literature, it remains effective even when the Hessian becomes ill-conditioned due to high correlations among covariates drawn from heavy-tailed distributions. The key idea is that, for each coordinate, marginal increments arise only from inlier observations, while the derivatives remain monotonically increasing over a grid constructed from the partial residuals. Building on conventional coordinate descent strategies, we further propose variable screening rules that selectively determine which variables to update at each iteration, thereby accelerating convergence. To the best of our knowledge, this is the first work to develop a first-order coordinate descent algorithm for penalized Huber loss minimization. We bound the nonasymptotic convergence rate of the proposed algorithm by extending arguments developed for the Lasso and formally characterize the operation of the proposed screening rule. Extensive simulation studies under heavy-tailed and highly-correlated predictors, together with a real data application, demonstrate both the practical efficiency of the method and the benefits of the computational enhancements.
Problem

Research questions and friction points this paper is trying to address.

Develops exact coordinate descent for high-dimensional regularized Huber regression
Addresses ill-conditioned Hessian issues in heavy-tailed correlated covariates
Proposes variable screening rules to accelerate convergence efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exact coordinate descent for regularized Huber regression
Variable screening rules to accelerate convergence
Effective with ill-conditioned Hessian from heavy-tailed data
🔎 Similar Papers
No similar papers found.