Implicit Regularization of the Deep Inverse Prior Trained with Inertia

📅 2025-06-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Self-supervised deep inverse priors lack theoretical guarantees on convergence and exact recovery in inverse problems. Method: This paper proposes an inertial training framework combining viscous damping with Hessian-driven damping. By modeling the learning dynamics as a continuous-time dynamical system, it establishes rigorous convergence and exact recovery guarantees for deep inverse priors. Contributions/Results: In continuous time, the dynamics achieve an optimal accelerated exponential convergence rate; the discrete inertial algorithm retains similar recovery guarantees, though at a less sharp linear convergence rate. The framework reveals an implicit regularization effect induced by inertia and introduces an adaptive step-size strategy to enhance robustness under limited data. Experiments demonstrate improvements in reconstruction accuracy and stability for low-data inverse problems, including denoising and super-resolution, over existing self-supervised methods.
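The "continuous-time dynamical system" mentioned above is not written out in this summary. Assuming the standard form that combines a viscous damping coefficient $\alpha$ with a geometric Hessian-driven damping coefficient $\beta$ over the training loss $\mathcal{L}(\theta)$ (an inference from the description, not a formula quoted from the paper), the parameter trajectory $\theta(t)$ would obey something like:

```latex
\ddot{\theta}(t) + \alpha\,\dot{\theta}(t)
  + \beta\,\nabla^{2}\mathcal{L}(\theta(t))\,\dot{\theta}(t)
  + \nabla\mathcal{L}(\theta(t)) = 0 .
```

Here the $\alpha$ term acts like friction on the velocity, while the $\beta$ term damps oscillations preferentially along high-curvature directions of the loss.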

📝 Abstract
Solving inverse problems with neural networks comes with very few theoretical guarantees, particularly regarding recovery. In this work we provide convergence and recovery guarantees for self-supervised neural networks applied to inverse problems, such as the Deep Image/Inverse Prior, trained with inertia featuring both viscous and geometric Hessian-driven dampings. We study both the continuous-time case, i.e., the trajectory of a dynamical system, and the discrete case, which leads to an inertial algorithm with an adaptive step-size. In the continuous-time case, we show that the network can be trained with an optimal accelerated exponential convergence rate, improving on the rate obtained with gradient flow. We also show that training a network with our inertial algorithm enjoys similar recovery guarantees, though with a less sharp linear convergence rate.
Problem

Research questions and friction points this paper is trying to address.

Providing convergence guarantees for self-supervised neural networks in inverse problems
Studying inertial training with viscous and Hessian-driven dampings in continuous and discrete cases
Demonstrating accelerated exponential convergence rates for network training compared to gradient flow
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised neural networks for inverse problems
Inertial training with viscous and Hessian dampings
Optimal accelerated exponential convergence rate
Nathan Buskulic
Machine Learning Genoa Center
Inverse problems · Neural networks · Optimization
Jalal Fadili
GREYC, Normandie Univ., UNICAEN, ENSICAEN, CNRS, 6 Boulevard Maréchal Juin, Caen, 14000 France
Yvain Quéau
GREYC, Normandie Univ., UNICAEN, ENSICAEN, CNRS, 6 Boulevard Maréchal Juin, Caen, 14000 France