Unsupervised Training of Convex Regularizers using Maximum Likelihood Estimation

📅 2024-04-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
For ill-posed inverse problems such as image reconstruction in unsupervised settings, where ground-truth images are unavailable, this paper proposes training a convex neural network regularizer directly on noisy measurements. Methodologically, it combines maximum marginal likelihood estimation (MMLE) with a provably convex neural regularizer in a principled Bayesian framework intrinsically linked to classical variational regularization, enabling training without any ground truth and supported by a detailed theoretical convergence analysis. The approach improves on prior MMLE-based work in both model expressiveness and dataset size. Experiments across diverse corruption operators, including blur and downsampling, show that the learned prior is near competitive with the analogous supervised training method while generalizing significantly better than end-to-end methods, and it requires neither paired data nor handcrafted priors.
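The reconstruction side of the method can be illustrated with a minimal sketch: a toy input-convex network used as the regularizer inside a variational objective. Everything below (sizes, weights, step sizes) is an illustrative assumption, not the paper's architecture; the point is only that convexity of the regularizer is verifiable by construction.

```python
import numpy as np

# Hypothetical sketch, not the paper's code: a tiny input-convex network R(x)
# used as the regularizer in a variational reconstruction
#     argmin_x  0.5 * ||A x - y||^2 + lam * R(x).
# R is convex in x by construction: the hidden-to-hidden weights W1 and the
# output weights w are non-negative, and softplus is convex and non-decreasing.

rng = np.random.default_rng(0)
d, h = 8, 16                                # toy signal size, hidden width
W0 = 0.5 * rng.normal(size=(h, d))          # first layer: unconstrained
U1 = 0.5 * rng.normal(size=(h, d))          # skip connection: unconstrained
W1 = 0.5 * np.abs(rng.normal(size=(h, h)))  # non-negative -> preserves convexity
w  = 0.5 * np.abs(rng.normal(size=h))       # non-negative output weights

def softplus(t):
    return np.logaddexp(0.0, t)             # smooth, convex, non-decreasing

def R(x):
    z1 = softplus(W0 @ x)
    z2 = softplus(W1 @ z1 + U1 @ x)
    return float(w @ z2)

def num_grad(f, x, eps=1e-5):               # finite differences keep the demo short
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

# Toy forward operator and noisy measurement.
A = rng.normal(size=(d, d)) / np.sqrt(d)
y = A @ rng.normal(size=d) + 0.05 * rng.normal(size=d)

lam, step = 0.05, 0.02

def obj(x):
    return 0.5 * np.sum((A @ x - y) ** 2) + lam * R(x)

x = np.zeros(d)
for _ in range(400):                        # gradient descent on a convex objective
    x = x - step * (A.T @ (A @ x - y) + lam * num_grad(R, x))
```

Because the whole objective is convex, any reasonable first-order scheme reaches a global minimizer; the paper's contribution is learning the weights of such a regularizer from noisy measurements alone.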

📝 Abstract
Imaging is a standard example of an inverse problem, where the task of reconstructing a ground truth from a noisy measurement is ill-posed. Recent state-of-the-art approaches for imaging use deep learning, spearheaded by unrolled and end-to-end models and trained on various image datasets. However, many such methods require the availability of ground truth data, which may be unavailable or expensive, leading to a fundamental barrier that cannot be bypassed by choice of architecture. Unsupervised learning presents an alternative paradigm that bypasses this requirement, as such models can be learned directly on noisy data and do not require any ground truths. A principled Bayesian approach to unsupervised learning is to maximize the marginal likelihood with respect to the given noisy measurements, which is intrinsically linked to classical variational regularization. We propose an unsupervised approach using maximum marginal likelihood estimation to train a convex neural network-based image regularization term directly on noisy measurements, improving upon previous work in both model expressiveness and dataset size. Experiments demonstrate that the proposed method produces priors that are near competitive when compared to the analogous supervised training method for various image corruption operators, maintaining significantly better generalization properties when compared to end-to-end methods. Moreover, we provide a detailed theoretical analysis of the convergence properties of our proposed algorithm.
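The MMLE principle in the abstract can be made concrete with a toy model in which the marginal likelihood is available in closed form: a Gaussian prior with unknown precision `theta` and a denoising likelihood. The paper's actual algorithm handles a neural regularizer, for which the required expectations must be estimated stochastically; the sketch below only shows that the prior parameter can be fit from noisy measurements alone, with no ground truth ever observed. All names and constants are illustrative assumptions.

```python
import numpy as np

# Toy illustration of the MMLE principle (not the paper's algorithm): with a
# Gaussian prior p_theta(x) = N(0, I/theta) and measurements y = x + noise of
# known variance sigma2, each y_i is marginally N(0, 1/theta + sigma2), so
# log p_theta(y) is closed-form and theta can be fit by gradient descent on
# the negative log marginal likelihood -- using noisy data only.

rng = np.random.default_rng(0)
d, sigma2 = 2000, 0.25            # number of measurements, known noise variance
theta_true = 1.0                  # true prior precision (used only to simulate)
x = rng.normal(scale=1.0 / np.sqrt(theta_true), size=d)
y = x + rng.normal(scale=np.sqrt(sigma2), size=d)   # noisy measurements

def neg_log_marginal(theta):
    v = 1.0 / theta + sigma2      # marginal variance of each y_i
    return 0.5 * d * np.log(2 * np.pi * v) + 0.5 * np.sum(y ** 2) / v

def grad(theta, eps=1e-6):        # numerical gradient keeps the sketch short
    return (neg_log_marginal(theta + eps)
            - neg_log_marginal(theta - eps)) / (2 * eps)

theta = 0.2                       # deliberately wrong initialization
for _ in range(500):
    theta = max(theta - 1e-4 * grad(theta), 1e-3)   # projected gradient step

# The MMLE solution coincides with the moment estimate 1/(mean(y^2) - sigma2).
theta_mmle = 1.0 / (np.mean(y ** 2) - sigma2)
```

In the paper's setting the prior is `exp(-R_theta(x))` for a convex neural `R_theta`, so neither the marginal likelihood nor its gradient is closed-form, which is what motivates the stochastic estimation scheme and the accompanying convergence analysis.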
Problem

Research questions and friction points this paper is trying to address.

Image Deblurring
Deep Learning
Limited Training Data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised Training
Maximum Likelihood Estimation
Convex Regularizers