BLOC: A Global Optimization Framework for Sparse Covariance Estimation with Non-Convex Penalties

📅 2026-03-30
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work addresses two challenges in estimating high-dimensional sparse covariance matrices: the optimization difficulties posed by non-convex penalties and black-box loss functions, and the lack of guaranteed positive definiteness. To overcome these issues, the authors propose the BLOC framework, which employs an angular Cholesky mapping to transform constrained optimization over the correlation matrix manifold into an unconstrained problem in Euclidean space. By integrating gradient-free pattern search, adaptive coordinate polling, and multithreaded parallelization, BLOC enables global optimization of non-convex, non-smooth, or black-box objectives. It is the first method to bring black-box global optimization to sparse covariance estimation while ensuring every iterate remains a valid correlation matrix. Theoretically, the estimator is shown to be consistent, to achieve minimax-optimal convergence rates, and to recover sparsity patterns without requiring Gaussian assumptions. Experiments demonstrate BLOC's superior estimation accuracy, structure recovery, scalability, and robustness in both low- and high-dimensional settings, including a proteomic network analysis.
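To make the mapping concrete, here is a minimal sketch of the standard angular (hypersphere) Cholesky parameterization the summary describes: $d(d-1)/2$ angles in $(0, \pi)$ are mapped to a lower-triangular factor with unit-norm rows, so every angle vector yields a valid correlation matrix. Function names are illustrative, not the paper's implementation.

```python
import numpy as np

def angles_to_cholesky(theta, d):
    """Map d*(d-1)/2 angles in (0, pi) to a lower-triangular factor L
    whose rows have unit Euclidean norm, so R = L @ L.T is positive
    definite with a unit diagonal, i.e. a valid correlation matrix."""
    L = np.zeros((d, d))
    L[0, 0] = 1.0
    idx = 0
    for i in range(1, d):
        prod_sin = 1.0  # running product of sines for row i
        for j in range(i):
            L[i, j] = np.cos(theta[idx]) * prod_sin
            prod_sin *= np.sin(theta[idx])
            idx += 1
        L[i, i] = prod_sin  # positive because sin(theta) > 0 on (0, pi)
    return L

def angles_to_correlation(theta, d):
    """Compose the angular map with L @ L.T to get the correlation matrix."""
    L = angles_to_cholesky(theta, d)
    return L @ L.T

# Any point in the open hyperrectangle (0, pi)^{d(d-1)/2} is feasible.
rng = np.random.default_rng(0)
d = 4
theta = rng.uniform(0.2, np.pi - 0.2, d * (d - 1) // 2)
R = angles_to_correlation(theta, d)
assert np.allclose(np.diag(R), 1.0) and np.all(np.linalg.eigvalsh(R) > 0)
```

Because the feasible set becomes a box, the positive-definite, unit-diagonal constraint never has to be checked or projected during the search.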
📝 Abstract
We introduce BLOC (Black-box Optimization over Correlation matrices), a general framework for sparse covariance estimation with non-convex penalties. BLOC operates on the manifold of correlation matrices and reparameterizes it via an angular Cholesky mapping, transforming the positive-definite, unit-diagonal constraint into an unconstrained search over a Euclidean hyperrectangle. This enables gradient-free global optimization of diverse objectives, including non-differentiable or black-box losses, using a pattern search routine with adaptive coordinate polling and run-wise restarts to escape local minima, leveraging up to $d(d-1)$ parallel threads when optimizing a $d$-dimensional correlation matrix. The method is penalty-agnostic and ensures that every iterate is a valid correlation matrix, from which covariance estimates are obtained. We establish convergence guarantees, including stationarity, probabilistic escape from poor local minima, and sublinear rates under smooth convex losses. From a statistical perspective, we prove consistency, convergence rates, and sparsistency for penalized correlation estimators under general conditions, extending sparse covariance theory beyond the Gaussian setting. Empirically, BLOC with non-convex penalties such as SCAD and MCP outperforms leading estimators in both low- and high-dimensional regimes, achieving lower estimation error and improved sparsity recovery. A parallel implementation enhances scalability, and a proteomic network application demonstrates robust, positive-definite sparse covariance estimation.
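The search itself can be pictured with a toy, single-threaded sketch: coordinate pattern search over the angle hyperrectangle $(0, \pi)^{d(d-1)/2}$, here minimizing a Frobenius loss to the sample correlation plus the SCAD penalty of Fan and Li (2001) with its usual default $a = 3.7$, and with random multistarts standing in for run-wise restarts. It reuses `angles_to_correlation` from the sketch above; the step schedule, tolerances, and restart count are illustrative assumptions, not BLOC's actual polling or parallelization logic.

```python
import numpy as np

def scad(x, lam, a=3.7):
    """SCAD penalty (Fan & Li, 2001), applied elementwise."""
    ax = np.abs(x)
    return np.where(
        ax <= lam, lam * ax,
        np.where(ax <= a * lam,
                 (2 * a * lam * ax - ax**2 - lam**2) / (2 * (a - 1)),
                 lam**2 * (a + 1) / 2))

def penalized_loss(theta, S, d, lam):
    """Black-box objective: Frobenius distance to the sample correlation
    S plus a SCAD penalty on the off-diagonal entries. Pattern search
    never asks for gradients, so any loss could be dropped in here."""
    R = angles_to_correlation(theta, d)  # from the sketch above
    off = R[np.triu_indices(d, k=1)]
    return np.sum((R - S) ** 2) + 2.0 * np.sum(scad(off, lam))

def pattern_search(f, theta0, lo=1e-6, hi=np.pi - 1e-6,
                   step0=0.5, tol=1e-6, max_sweeps=500):
    """Coordinate pattern search on a hyperrectangle: poll +/- step along
    each coordinate, accept any improvement, and halve the step after an
    unsuccessful full sweep."""
    theta, best, step = theta0.copy(), f(theta0), step0
    for _ in range(max_sweeps):
        improved = False
        for k in range(theta.size):   # the trial points of a sweep are
            for delta in (step, -step):  # independent, hence parallelizable
                cand = theta.copy()
                cand[k] = np.clip(cand[k] + delta, lo, hi)
                val = f(cand)
                if val < best:
                    theta, best, improved = cand, val, True
                    break
        if not improved:
            step *= 0.5  # refine the mesh
            if step < tol:
                break
    return theta, best

def multistart_estimate(S, lam, n_starts=5, seed=0):
    """Random multistarts as a stand-in for BLOC's run-wise restarts."""
    d = S.shape[0]
    m = d * (d - 1) // 2
    rng = np.random.default_rng(seed)
    f = lambda th: penalized_loss(th, S, d, lam)
    runs = [pattern_search(f, rng.uniform(0.2, np.pi - 0.2, m))
            for _ in range(n_starts)]
    theta, _ = min(runs, key=lambda r: r[1])
    return angles_to_correlation(theta, d)
```

Each poll sweep makes two trial evaluations per angle, i.e. up to $d(d-1)$ independent objective evaluations across the $d(d-1)/2$ angles, which is where the abstract's thread count comes from.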
Problem

Research questions and friction points this paper is trying to address.

sparse covariance estimation
non-convex penalties
correlation matrices
global optimization
positive-definite constraint
Innovation

Methods, ideas, or system contributions that make the work stand out.

non-convex penalties
correlation matrix manifold
angular Cholesky mapping
gradient-free global optimization
sparsistency
Priyam Das
Department of Biostatistics, Virginia Commonwealth University
Trambak Banerjee
University of Kansas, School of Business
Prajamitra Bhuyan
Department of Operations Management, Indian Institute of Management, India