AI Summary
This work addresses image reconstruction from limited data, proposing a class of amplitude-equivariant learnable convex regularizers. Methodologically, it approximates power-of-seminorm functionals with polyhedral norms, yielding a dual-parameterization architecture that unifies a synthesis form (an $\ell_1$ penalty with a learnable dictionary) and an analysis form (an $\ell_\infty$ penalty with a trainable regularization operator). The paper establishes the universality of both forms for approximating such convex regularization functionals; furthermore, within a tight-frame setting, it designs a weighted $\ell_1$ architecture that is both theoretically tractable and easy to optimize. Experiments show that the proposed regularizer outperforms conventional sparsity-based compressed-sensing methods on biomedical image denoising and reconstruction, while preserving essentially the same convergence and robustness guarantees.
Abstract
This paper addresses the task of learning convex regularizers to guide the reconstruction of images from limited data. By imposing that the reconstruction be amplitude-equivariant, we narrow down the class of admissible functionals to those that can be expressed as a power of a seminorm. We then show that such functionals can be approximated to arbitrary precision with the help of polyhedral norms. In particular, we identify two dual parameterizations of such systems: (i) a synthesis form with an $\ell_1$-penalty that involves some learnable dictionary; and (ii) an analysis form with an $\ell_\infty$-penalty that involves a trainable regularization operator. After providing geometric insights and proving that the two forms are universal, we propose an implementation that relies on a specific architecture (tight frame with a weighted $\ell_1$ penalty) that is easy to train. We illustrate its use for denoising and the reconstruction of biomedical images. We find that the proposed framework outperforms the sparsity-based methods of compressed sensing, while it offers essentially the same convergence and robustness guarantees.
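To make the synthesis parameterization concrete, here is a minimal toy sketch of denoising with a (weighted) $\ell_1$ penalty over a Parseval tight frame, solved by ISTA. The specific frame (a Mercedes-Benz frame for $\mathbb{R}^2$), the unit weights, and the solver choice are illustrative assumptions for this sketch; they are not the paper's actual learned architecture or training procedure.

```python
import numpy as np

def soft(v, t):
    # Elementwise soft-thresholding: the proximal operator of a weighted l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def denoise_synthesis(y, D, w, lam, n_iter=500):
    """Solve min_z 0.5*||D z - y||^2 + lam * sum_k w_k |z_k| with ISTA,
    then return the synthesis-form estimate x = D z."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1 / Lipschitz constant of the data term
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = soft(z - step * D.T @ (D @ z - y), step * lam * w)
    return D @ z

# Mercedes-Benz Parseval tight frame for R^2: three unit vectors 120 degrees
# apart, scaled so that the frame operator W^T W equals the identity.
angles = np.pi / 2 + 2 * np.pi / 3 * np.arange(3)
W = np.sqrt(2.0 / 3.0) * np.stack([np.cos(angles), np.sin(angles)], axis=1)  # 3x2 analysis operator
D = W.T  # 2x3 synthesis operator (dictionary); here D @ D.T = I_2

y = np.array([1.0, -0.5])  # toy "noisy" observation
x_hat = denoise_synthesis(y, D, w=np.ones(3), lam=0.1)
```

Because the overall map `y -> x_hat` is the proximal operator of a convex (polyhedral) regularizer, it is nonexpansive, which is the source of the robustness guarantees mentioned in the abstract; with `lam=0` the data term alone is minimized and `x_hat` reproduces `y` exactly.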