WEEP: A Differentiable Nonconvex Sparse Regularizer via Weakly-Convex Envelope

📅 2025-07-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Sparse regularization is crucial for signal recovery and feature selection, yet the strongest sparsity-inducing penalties (e.g., SCAD, MCP) are non-differentiable, which hinders their integration with gradient-based optimizers. To address this, the authors propose WEEP, a fully differentiable, *L*-smooth, and unbiased nonconvex sparse regularizer constructed via the weakly-convex envelope. The key contribution is a systematic application of weakly-convex envelope theory to differentiable sparse modeling: through a piecewise penalty design and theoretical analysis, WEEP guarantees both differentiability and strong sparsity induction, and it works with standard first-order optimizers without proximal operators or heuristic modifications. Experiments on signal and image denoising show that WEEP outperforms baselines, including the ℓ₁-norm, SCAD, and GIST, with gains in both reconstruction accuracy and convergence speed, indicating a favorable balance between statistical efficacy and computational efficiency in sparse modeling.
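The summary does not reproduce WEEP's closed form, so the sketch below is only an illustration of the workflow it describes: plugging a smooth, bounded, nonconvex sparse penalty directly into plain gradient descent for denoising. The Gaussian-shaped penalty here is a generic stand-in with the advertised properties (differentiable, gradient-Lipschitz, nearly unbiased on large entries), not the paper's actual regularizer.

```python
import numpy as np

# Stand-in smooth nonconvex penalty (NOT the paper's WEEP formula):
#   phi(x) = lam * (1 - exp(-x^2 / (2*sig^2)))
# It is differentiable and bounded, so large entries are barely penalized
# (low bias), and its gradient is (lam/sig^2)-Lipschitz, so plain
# gradient descent applies without any proximal operator.
def penalty_grad(x, lam, sig):
    return lam * x / sig**2 * np.exp(-x**2 / (2 * sig**2))

def denoise(y, lam=0.05, sig=0.1, iters=300):
    """Minimize 0.5*||x - y||^2 + sum_i phi(x_i) by gradient descent."""
    L = 1.0 + lam / sig**2           # smoothness constant of the objective
    x = y.copy()
    for _ in range(iters):
        x -= (1.0 / L) * ((x - y) + penalty_grad(x, lam, sig))
    return x

# Sparse ground truth: a few large spikes plus Gaussian noise.
rng = np.random.default_rng(0)
x_true = np.zeros(200)
x_true[rng.choice(200, 10, replace=False)] = 2.0 * rng.choice([-1.0, 1.0], 10)
y = x_true + 0.1 * rng.standard_normal(200)

x_hat = denoise(y)
print("noisy MSE:", np.mean((y - x_true) ** 2))
print("denoised MSE:", np.mean((x_hat - x_true) ** 2))
```

Small entries are shrunk strongly toward zero while the large spikes are left nearly untouched, so the denoised MSE drops well below the noisy MSE; this is the "strong sparsity without bias, using only gradients" behavior the summary attributes to WEEP.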

📝 Abstract
Sparse regularization is fundamental in signal processing for efficient signal recovery and feature extraction. However, it faces a fundamental dilemma: the most powerful sparsity-inducing penalties are often non-differentiable, conflicting with gradient-based optimizers that dominate the field. We introduce WEEP (Weakly-convex Envelope of Piecewise Penalty), a novel, fully differentiable sparse regularizer derived from the weakly-convex envelope framework. WEEP provides strong, unbiased sparsity while maintaining full differentiability and L-smoothness, making it natively compatible with any gradient-based optimizer. This resolves the conflict between statistical performance and computational tractability. We demonstrate superior performance compared to the L1-norm and other established non-convex sparse regularizers on challenging signal and image denoising tasks.
Problem

Research questions and friction points this paper is trying to address.

Resolves conflict between non-differentiable sparsity penalties and gradient optimizers
Introduces WEEP for strong sparsity with full differentiability
Improves signal and image denoising over L1-norm and non-convex regularizers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentiable nonconvex sparse regularizer WEEP
Weakly-convex envelope framework derivation
Maintains sparsity and L-smoothness
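The "weakly-convex envelope framework derivation" noted above is not spelled out in this card. As general background only (these are standard facts about weakly-convex functions and Moreau envelopes, not the paper's specific construction), the smoothing mechanism such envelopes provide is:

```latex
f \text{ is } \rho\text{-weakly convex} \iff f + \tfrac{\rho}{2}\|\cdot\|_2^2 \text{ is convex},
\qquad
e_{\lambda} f(x) = \min_{y}\, f(y) + \tfrac{1}{2\lambda}\|x - y\|_2^2 .
```

For $0 < \lambda < 1/\rho$, the envelope $e_{\lambda} f$ is differentiable with $\nabla e_{\lambda} f(x) = \tfrac{1}{\lambda}\bigl(x - \mathrm{prox}_{\lambda f}(x)\bigr)$, and this gradient is Lipschitz. This is the standard route by which an envelope turns a non-differentiable penalty into an $L$-smooth surrogate compatible with gradient-based optimizers.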