🤖 AI Summary
High-dimensional covariance estimation suffers from overfitting and challenging inference because the sample size is small relative to the dimension (n ≪ p) and model selection is inherently discrete. To address this, we propose a novel paradigm, “eigen-gap sparsity”, which introduces the eigen-gaps (the spacings between adjacent eigenvalues) as a continuous, differentiable sparsity measure, unifying structural parsimony and isotropic shrinkage of the covariance matrix. Theoretically, we establish an intrinsic connection between eigenvalue equalization and the accuracy–parsimony trade-off. Methodologically, within a penalized-likelihood framework, we design a projected gradient descent algorithm on a monotone cone, which amounts to an isotonic regression of mutually attracted sample eigenvalues. Empirical results show that our approach significantly improves estimation stability and generalization in low-sample regimes, consistently outperforming thresholding estimators, the graphical Lasso, and Ledoit–Wolf shrinkage.
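As a rough, non-authoritative sketch of the algorithmic idea described above (the squared eigen-gap penalty, the step size, and names such as `eigengap_covariance` and `pava_nonincreasing` are assumptions chosen here for illustration, not the authors' implementation), one can alternate a gradient step on a penalized Gaussian likelihood of the eigenvalues with an isotonic-regression projection onto the monotone cone:

```python
import numpy as np

def pava_nonincreasing(y):
    """Euclidean projection of y onto the monotone cone
    {x : x_1 >= x_2 >= ... >= x_p} via pool-adjacent-violators."""
    z = -np.asarray(y, dtype=float)   # solve the non-decreasing problem on -y
    blocks = []                       # each block holds [mean, size]
    for v in z:
        blocks.append([v, 1])
        # Merge adjacent blocks while they violate the non-decreasing order.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2 = blocks.pop()
            m1, w1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    return -np.concatenate([np.full(w, m) for m, w in blocks])

def eigengap_covariance(S, alpha=1.0, step=1e-2, n_iter=500, floor=1e-6):
    """Penalized-likelihood fit of the eigenvalues of S under a (hypothetical)
    squared eigen-gap penalty; the eigenvectors of S are kept fixed."""
    s, U = np.linalg.eigh(S)
    s = s[::-1]                       # sample eigenvalues in decreasing order
    lam = s.copy()
    for _ in range(n_iter):
        # Gradient of the Gaussian negative log-likelihood in the eigenbasis:
        # sum_i log(lam_i) + s_i / lam_i.
        grad = 1.0 / lam - s / lam ** 2
        # Gradient of the eigen-gap penalty alpha * sum_i (lam_i - lam_{i+1})^2,
        # which pulls adjacent eigenvalues toward each other.
        gap = lam[:-1] - lam[1:]
        grad[:-1] += 2.0 * alpha * gap
        grad[1:] -= 2.0 * alpha * gap
        # Gradient step, then isotonic-regression projection onto the monotone cone.
        lam = pava_nonincreasing(lam - step * grad)
        lam = np.maximum(lam, floor)  # numerical safeguard: keep eigenvalues positive
    return U @ np.diag(lam[::-1]) @ U.T  # reassemble with the original eigenvectors

# Toy usage (n > p here only so that all sample eigenvalues are strictly
# positive in this simple sketch; the paper targets the harder n << p regime).
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 10))     # n = 40 samples, p = 10
S = X.T @ X / X.shape[0]
Sigma_hat = eigengap_covariance(S, alpha=0.5)
```

The squared-gap term is only one possible differentiable stand-in for the eigen-gap penalty; the point of the sketch is the alternation between a likelihood gradient step and an isotonic-regression projection, which is where the link between parsimony and shrinkage of mutually attracted eigenvalues appears.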
📝 Abstract
Covariance estimation is a central problem in statistics. An important issue is that there are rarely enough samples $n$ to accurately estimate the $p(p+1)/2$ coefficients in dimension $p$. Parsimonious covariance models are therefore preferred, but the discrete nature of model selection makes inference computationally challenging. In this paper, we propose a relaxation of covariance parsimony termed “eigengap sparsity” and motivated by the good accuracy-parsimony tradeoff of eigenvalue-equalization in covariance matrices. This new penalty can be included in a penalized-likelihood framework that we propose to solve with a projected gradient descent on a monotone cone. The algorithm turns out to resemble an isotonic regression of mutually attracted sample eigenvalues, drawing an interesting link between covariance parsimony and shrinkage.
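In symbols, with $s_1 \ge \dots \ge s_p$ the sample eigenvalues and $\lambda_1 \ge \dots \ge \lambda_p$ the eigenvalues of the estimate (assumed here, purely for illustration, to share the sample eigenvectors, with an unspecified eigengap penalty $\phi$ and weight $\alpha$), the penalized-likelihood problem sketched in the abstract can be pictured as

$$\min_{\lambda_1 \ge \cdots \ge \lambda_p > 0} \;\; \sum_{i=1}^{p} \left( \log \lambda_i + \frac{s_i}{\lambda_i} \right) \; + \; \alpha \sum_{i=1}^{p-1} \phi\!\left(\lambda_i - \lambda_{i+1}\right),$$

where the monotonicity constraint is handled by projecting each gradient iterate back onto the cone $\lambda_1 \ge \cdots \ge \lambda_p$, i.e., by an isotonic regression step, while the gap penalty draws adjacent eigenvalues toward one another.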