🤖 AI Summary
This work addresses a key limitation of traditional ℓ₀-norm-based sparsity methods, which treat all nonzero components equally and thereby tend to overestimate signal complexity—particularly when numerous weak coefficients are present. To overcome this, the paper introduces a novel paradigm termed “effective sparsity,” centered on the Effective Number of NonZeros (ENZ), a continuous, stable, and perturbation-insensitive regularization metric derived from normalized entropy. ENZ unifies Shannon and Rényi entropies to quantify the concentration of significant coefficients. Theoretically, the study establishes, for the first time, an intrinsic connection between ENZ and the ℓ₀ norm and provides uniqueness and stability guarantees under the Restricted Isometry Property (RIP) for noisy linear inverse problems. Experimental results demonstrate that the proposed framework substantially outperforms conventional sparsity-based approaches in both reconstruction accuracy and robustness.
📝 Abstract
Classical sparsity-promoting methods rely on the ℓ₀ norm, which treats all nonzero components as equally significant. In practical inverse problems, however, solutions often contain many small-amplitude components that have little effect on reconstruction yet lead to an overestimation of signal complexity. We address this limitation by shifting the paradigm from discrete cardinality to effective sparsity. Our approach introduces the effective number of nonzeros (ENZ), a unified class of normalized entropy-based regularizers, including Shannon and Rényi forms, that quantifies the concentration of significant coefficients. We show that, unlike the classical ℓ₀ norm, the ENZ provides a stable and continuous measure of effective sparsity that is insensitive to negligible perturbations. For noisy linear inverse problems, we establish theoretical guarantees under the Restricted Isometry Property (RIP), proving that ENZ-based recovery is unique and stable. We also derive a decomposition showing that the ENZ equals the support cardinality times a distributional efficiency term, thereby linking entropy with ℓ₀ regularization. Numerical experiments show that this effective-sparsity framework outperforms traditional cardinality-based methods in robustness and accuracy.
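To make the idea concrete, here is a minimal sketch of an entropy-based effective-nonzero count, assuming ENZ takes the standard "effective number" (Hill-number) form: coefficient magnitudes are normalized into a probability vector p, and ENZ is exp(H(p)) for Shannon entropy, or (Σᵢ pᵢ^α)^{1/(1−α)} for the Rényi variant. The function name `enz`, the ℓ₁ magnitude normalization, and the thresholding constant are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def enz(x, alpha=1.0, eps=1e-12):
    """Illustrative 'effective number of nonzeros' sketch.

    Normalizes |x| to a probability vector p, then returns the
    Renyi effective number (Σ p_i^alpha)^(1/(1-alpha)); the
    alpha -> 1 limit is the Shannon case exp(H(p)).
    """
    p = np.abs(np.asarray(x, dtype=float))
    s = p.sum()
    if s == 0.0:
        return 0.0  # the zero vector has no effective nonzeros
    p = p / s
    p = p[p > eps]  # drop numerically-zero mass before taking logs
    if np.isclose(alpha, 1.0):
        # Shannon case: exponential of entropy (a.k.a. perplexity)
        return float(np.exp(-np.sum(p * np.log(p))))
    return float(np.sum(p ** alpha) ** (1.0 / (1.0 - alpha)))

# A vector of k equal-magnitude nonzeros has ENZ exactly k,
# while many tiny extra components barely move the ENZ even
# though they inflate the l0 count.
x = np.concatenate([np.ones(4), 1e-6 * np.ones(100)])
print(np.count_nonzero(x))  # l0 count: 104
print(enz(x))               # ENZ: close to 4
```

This illustrates the stability claim in the abstract: negligible perturbations change the ℓ₀ norm drastically but leave the ENZ nearly unchanged, and for an exactly k-sparse flat vector the ENZ recovers k (the distributional-efficiency factor equals 1).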