🤖 AI Summary
Self-supervised deep inverse priors lack theoretical guarantees on convergence and exact recovery in inverse problems.
Method: This paper proposes the first inertial training framework for deep inverse priors, combining viscous and geometric Hessian-driven damping. By modeling the learning dynamics as a continuous-time dynamical system, it establishes rigorous convergence and exact recovery guarantees.
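For intuition, inertial dynamics with both damping terms typically take the following form; this is a generic sketch, where the weights θ(t), the training loss L, and the damping coefficients α (viscous) and β (Hessian-driven) are placeholder notation, not necessarily the paper's exact system:

```latex
% Generic inertial dynamics on the network weights theta(t):
% alpha  -- viscous damping coefficient
% beta   -- geometric (Hessian-driven) damping coefficient
% L      -- training loss
\ddot{\theta}(t)
  + \alpha\,\dot{\theta}(t)                              % viscous damping
  + \beta\,\nabla^{2} L(\theta(t))\,\dot{\theta}(t)      % Hessian-driven damping
  + \nabla L(\theta(t)) = 0
```

The Hessian-driven term damps oscillations along high-curvature directions, which is what makes an accelerated yet stable trajectory plausible.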
Contributions/Results: In continuous time, the dynamics achieve an optimal, accelerated exponential convergence rate, while the discrete inertial algorithm retains similar recovery guarantees with a linear (though less sharp) convergence rate. The framework also reveals an implicit regularization effect induced by inertia and introduces an adaptive step-size strategy for robustness in low-data regimes. Experiments on low-data inverse problems, including denoising and super-resolution, show significant gains in reconstruction accuracy and stability over existing self-supervised methods. A sketch of such a discrete inertial update appears below.
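The following is a minimal sketch of what a discrete inertial iteration with an adaptive step size could look like. Everything here is an illustrative assumption, not the paper's algorithm: the function names, the backtracking-free adaptive rule, and the use of a gradient difference to approximate the Hessian-driven damping term (∇L(θ_k) − ∇L(θ_{k−1}) ≈ ∇²L · velocity).

```python
import numpy as np

def inertial_train(grad, theta0, alpha=3.0, beta=0.1, gamma0=1e-2, iters=500):
    """Heavy-ball-style inertial iteration with a crude adaptive step size.

    Illustrative sketch only (not the paper's exact scheme): grad(theta)
    returns the loss gradient, alpha plays the role of viscous damping,
    and the difference of consecutive gradients stands in for the
    Hessian-driven damping term.
    """
    theta_prev = theta0.copy()
    theta = theta0.copy()
    g_prev = grad(theta0)
    for _ in range(iters):
        g = grad(theta)
        # Simple adaptive rule: shrink the step when the gradient is large.
        gamma = gamma0 / (1.0 + np.linalg.norm(g))
        velocity = theta - theta_prev                      # inertia
        theta_next = (theta
                      + (1.0 - alpha * gamma) * velocity   # viscous damping
                      - beta * gamma * (g - g_prev)        # Hessian-driven damping
                      - gamma * g)                         # gradient step
        theta_prev, theta, g_prev = theta, theta_next, g
    return theta

# Toy usage on a least-squares loss L(theta) = 0.5 * ||A @ theta - y||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
y = A @ rng.standard_normal(10)
theta_hat = inertial_train(lambda t: A.T @ (A @ t - y), np.zeros(10))
```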
📝 Abstract
Solving inverse problems with neural networks enjoys very few theoretical guarantees, in particular regarding recovery. In this work, we provide convergence and recovery guarantees for self-supervised neural networks applied to inverse problems, such as the Deep Image/Inverse Prior, trained with inertia featuring both viscous and geometric Hessian-driven damping. We study both the continuous-time case, i.e., the trajectory of a dynamical system, and the discrete case, which leads to an inertial algorithm with an adaptive step size. We show that in the continuous-time case the network can be trained with an optimal accelerated exponential convergence rate, compared with the rate obtained under gradient flow. We also show that training a network with our inertial algorithm enjoys similar recovery guarantees, though with a less sharp linear convergence rate.
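For context on the rate comparison, the classical acceleration picture under a generic μ-strong-convexity-type condition is schematically as follows; the paper's actual assumptions are network-specific and its constants may differ:

```latex
% Gradient flow vs. an optimally damped inertial flow on a
% mu-strongly-convex loss L (schematic; not the paper's exact constants).
\dot{\theta} = -\nabla L(\theta)
  \;\Longrightarrow\;
  L(\theta(t)) - L^{\star} \le e^{-2\mu t}\,\bigl(L(\theta(0)) - L^{\star}\bigr),
\qquad
\ddot{\theta} + 2\sqrt{\mu}\,\dot{\theta} + \nabla L(\theta) = 0
  \;\Longrightarrow\;
  L(\theta(t)) - L^{\star} = O\!\bigl(e^{-\sqrt{\mu}\,t}\bigr).
```

Since √μ exceeds 2μ whenever μ < 1/4, the inertial flow's exponent is the better one precisely in the ill-conditioned regime, which is the sense in which inertia "accelerates" the training dynamics.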