Eigengap Sparsity for Covariance Parsimony

📅 2025-04-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
High-dimensional covariance estimation suffers from parameter overfitting and inference challenges due to insufficient sample size (n ≪ p) and the discrete nature of model selection. To address this, we propose a novel paradigm—“eigen-gap sparsity”—which introduces the eigen-gap (i.e., the minimal spacing between adjacent eigenvalues) as a continuous, differentiable sparsity measure, unifying structural parsimony and isotropic shrinkage of the covariance matrix. Theoretically, we establish an intrinsic connection between eigenvalue equalization and the statistical accuracy–simplicity trade-off. Methodologically, within a penalized likelihood framework, we design a projected gradient descent algorithm on a monotone cone, equivalent to isotonic regression on the sample eigenvalues. Empirical results demonstrate that our approach significantly improves estimation stability and generalization in low-sample regimes, consistently outperforming thresholding estimators, graphical Lasso, and Ledoit–Wolf shrinkage.

Technology Category

Application Category

📝 Abstract
Covariance estimation is a central problem in statistics. An important issue is that there are rarely enough samples $n$ to accurately estimate the $p (p+1) / 2$ coefficients in dimension $p$. Parsimonious covariance models are therefore preferred, but the discrete nature of model selection makes inference computationally challenging. In this paper, we propose a relaxation of covariance parsimony termed"eigengap sparsity"and motivated by the good accuracy-parsimony tradeoff of eigenvalue-equalization in covariance matrices. This new penalty can be included in a penalized-likelihood framework that we propose to solve with a projected gradient descent on a monotone cone. The algorithm turns out to resemble an isotonic regression of mutually-attracted sample eigenvalues, drawing an interesting link between covariance parsimony and shrinkage.
Problem

Research questions and friction points this paper is trying to address.

Estimating high-dimensional covariance with limited samples
Relaxing discrete model selection via eigengap sparsity
Balancing accuracy and parsimony in covariance matrices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Eigengap sparsity for covariance parsimony
Penalized-likelihood with projected gradient descent
Isotonic regression of sample eigenvalues
🔎 Similar Papers
No similar papers found.