🤖 AI Summary
This work addresses the estimation bias in block-sparse signal recovery under unknown block partitions by proposing two non-convex regularization methods, LogLOP-$\ell_2/\ell_1$ and AdaLOP-$\ell_2/\ell_1$. For the first time, the log-sum penalty and the Minimax Concave Penalty (MCP) are extended to the block-sparse setting. The proposed formulations adopt a variational representation that accommodates arbitrary data fidelity terms, eliminating the reliance on a least-squares loss. The resulting models are solved efficiently via the Alternating Direction Method of Multipliers (ADMM). Extensive experiments on synthetic data, angular power spectrum estimation, and nanopore current denoising show that the proposed approaches achieve significantly higher estimation accuracy than state-of-the-art methods, offering both flexibility and precision.
📝 Abstract
We propose two nonconvex regularization methods, LogLOP-$\ell_2/\ell_1$ and AdaLOP-$\ell_2/\ell_1$, for recovering block-sparse signals with unknown block partitions. These methods address the underestimation bias of existing convex approaches by extending the log-sum penalty and the Minimax Concave Penalty (MCP) to the block-sparse domain via novel variational formulations. Unlike Generalized Moreau Enhancement (GME) and Bayesian methods, which depend on a squared-error data fidelity term, our proposed methods are compatible with a broad range of data fidelity terms. We develop efficient Alternating Direction Method of Multipliers (ADMM)-based algorithms for these formulations that exhibit stable empirical convergence. Numerical experiments on synthetic data, angular power spectrum estimation, and denoising of nanopore currents demonstrate that our methods outperform state-of-the-art baselines in estimation accuracy.
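To make the idea of a block-sparse log-sum penalty concrete, here is a minimal illustrative sketch. It is not the paper's LogLOP-$\ell_2/\ell_1$ formulation: the block partition, the smoothing constant `eps`, and the weight `lam` are all hypothetical choices, and a fixed partition is assumed purely for illustration (the paper's point is precisely that the partition is unknown). The sketch only shows the generic structure of a log-sum penalty applied to block $\ell_2$ norms.

```python
import numpy as np

def block_log_sum_penalty(x, blocks, eps=1e-2, lam=1.0):
    """Illustrative penalty: lam * sum_b log(1 + ||x_b||_2 / eps).

    `blocks` is a hypothetical fixed partition of the indices of x;
    the actual methods in the paper do not assume the partition is known.
    """
    return lam * sum(np.log1p(np.linalg.norm(x[b]) / eps) for b in blocks)

# Toy signal with one active block out of three.
x = np.array([0.0, 0.0, 3.0, 4.0, 0.0, 0.0])
blocks = [slice(0, 2), slice(2, 4), slice(4, 6)]
val = block_log_sum_penalty(x, blocks, eps=1.0, lam=1.0)
# Zero blocks contribute log1p(0) = 0; the active block has norm 5,
# so val = log(1 + 5) = log(6).
```

Because the logarithm grows sublinearly in the block norm, large active blocks are penalized much less than under a convex $\ell_2/\ell_1$ mixed norm, which is the mechanism behind the reduced underestimation bias the abstract describes.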