🤖 AI Summary
Sparse regularization is crucial for signal recovery and feature selection, yet mainstream strongly sparsity-inducing penalties (e.g., SCAD, MCP) are non-differentiable, hindering integration with gradient-based optimizers. To address this, we propose WEEP, a novel, fully differentiable, *L*-smooth, and unbiased nonconvex sparse regularizer constructed via the weakly-convex envelope. Our key contribution is the first systematic application of weakly-convex envelope theory to differentiable sparse modeling: through piecewise penalty design and rigorous theoretical analysis, we guarantee both differentiability and strong sparsity induction. WEEP works natively with standard first-order optimizers, requiring neither proximal operators nor heuristic modifications. Extensive experiments on signal and image denoising show that WEEP consistently outperforms baselines, including ℓ₁, SCAD, and GIST, with simultaneous gains in reconstruction accuracy and convergence speed. These results validate WEEP's balance of statistical efficacy and computational efficiency in sparse modeling.
📝 Abstract
Sparse regularization is fundamental to signal processing for efficient signal recovery and feature extraction. However, it faces a long-standing dilemma: the most powerful sparsity-inducing penalties are non-differentiable, which conflicts with the gradient-based optimizers that dominate the field. We introduce WEEP (Weakly-convex Envelope of Piecewise Penalty), a novel, fully differentiable sparse regularizer derived from the weakly-convex envelope framework. WEEP provides strong, unbiased sparsity while retaining full differentiability and *L*-smoothness, making it natively compatible with any gradient-based optimizer and resolving the conflict between statistical performance and computational tractability. We demonstrate superior performance over the ℓ₁-norm and other established non-convex sparse regularizers on challenging signal and image denoising tasks.
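The abstract does not reproduce WEEP's closed form, but the workflow it enables can be sketched: because the penalty is smooth, denoising reduces to plain gradient descent on the penalized least-squares objective, with no proximal operator. The snippet below illustrates this with a generic smooth, nonconvex, low-bias penalty (a smoothed-ℓ₀ surrogate standing in for WEEP); the penalty parameters `lam` and `eps`, the step size, and the test signal are all hypothetical.

```python
import numpy as np

# Hypothetical stand-in for WEEP: the smooth, nonconvex penalty
# p(u) = lam * u^2 / (u^2 + eps). Like WEEP it is differentiable
# everywhere and nearly flat for large |u| (low estimation bias);
# it is NOT the paper's penalty, just a generic smoothed-l0 surrogate.
lam, eps = 0.5, 0.5  # penalty weight and smoothing width (illustrative)

def penalty_grad(u):
    # d/du [lam * u^2 / (u^2 + eps)] = 2 * lam * eps * u / (u^2 + eps)^2
    return 2.0 * lam * eps * u / (u**2 + eps) ** 2

rng = np.random.default_rng(0)
x_true = np.zeros(100)
x_true[[10, 40, 70]] = [5.0, -4.0, 6.0]      # sparse ground truth
y = x_true + 0.3 * rng.standard_normal(100)  # noisy observation

# Gradient descent on 0.5*||u - y||^2 + sum_i p(u_i): smoothness of the
# penalty means any first-order optimizer applies directly.
u = y.copy()
step = 0.3  # below 2/L, with L <= 1 + 2*lam/eps bounding the gradient's Lipschitz constant
for _ in range(1000):
    u -= step * ((u - y) + penalty_grad(u))

mse_noisy = np.mean((y - x_true) ** 2)
mse_denoised = np.mean((u - x_true) ** 2)
```

Near zero the penalty acts like a ridge term and shrinks noise entries toward zero, while its flat tails leave the large spikes essentially untouched, mirroring the "strong sparsity without bias" behavior the abstract claims for WEEP.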