Restarted contractive operators to learn at equilibrium

📅 2025-06-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In imaging inverse problems, bilevel hyperparameter learning suffers from distorted gradients and poor approximation of deep equilibrium solutions, because unrolled networks must be truncated to a finite depth. Method: the paper proposes an equilibrium-point learning framework built on a restarted contractive operator. It combines Jacobian-Free Backpropagation (JFB) with a restart strategy, removing the dependence on a finite unrolling depth and approximating the gradient of the infinite-depth equilibrium solution to arbitrary precision, so that hyperparameters such as step sizes can be learned reliably. Contribution/Results: the method extends naturally to settings beyond the ideal theoretical assumptions, including weights in weighted norms, step sizes and regularization levels of Plug-and-Play (PnP) schemes, and joint training with a DRUNet denoiser. Experiments across multiple imaging tasks show robust, state-of-the-art performance, with improved hyperparameter learning accuracy and better model generalization.
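
For orientation, the two gradients at stake can be written in generic notation (the symbols below are illustrative, not taken verbatim from the paper). Let T_θ be a θ-parameterized contraction with equilibrium point x*(θ) = T_θ(x*(θ)), and let ℓ be the outer hyperparameter-learning loss. The implicit DEQ hypergradient is

    ∇_θ ℓ = (∂_θ T_θ(x*))ᵀ (I − ∂_x T_θ(x*))⁻ᵀ ∇_x ℓ(x*),

and JFB simply drops the inverse-Jacobian factor:

    ∇_θ^JFB ℓ = (∂_θ T_θ(x*))ᵀ ∇_x ℓ(x*).

If ∂_x T_θ has norm L < 1, the Neumann series (I − ∂_x T_θ)⁻¹ = Σ_{k≥0} (∂_x T_θ)ᵏ bounds the gap between the two gradients by a factor of order L/(1−L), which is the Lipschitz control the abstract refers to.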

📝 Abstract
Bilevel optimization offers a methodology to learn hyperparameters in imaging inverse problems, yet its integration with automatic differentiation techniques remains challenging. On the one hand, inverse problems are typically solved by iterating, arbitrarily many times, some elementary scheme that maps any point to the minimizer of an energy functional, known as an equilibrium point. On the other hand, introducing parameters to be learned in the energy functional yields architectures very reminiscent of Neural Networks (NNs), known as Unrolled NNs, and thus suggests the use of Automatic Differentiation (AD) techniques. Yet, applying AD requires the NN to be of relatively small depth, making it necessary to truncate an unrolled scheme to a finite number of iterations. First, we show that, at the minimizer, the optimal gradient descent step computed in the Deep Equilibrium (DEQ) framework admits an approximation, known as Jacobian-Free Backpropagation (JFB), that is much easier to compute and can be made arbitrarily good by controlling Lipschitz properties of the truncated unrolled scheme. Second, we introduce an algorithm that combines a restart strategy with JFB computed by AD, and we show that the learned steps can be made arbitrarily close to those of the optimal DEQ framework. Third, we complement the theoretical analysis by applying the proposed method to a variety of problems in imaging that progressively depart from the theoretical framework. In particular, we show that this method is effective for training weights in weighted norms; step sizes and regularization levels of Plug-and-Play schemes; and a DRUNet denoiser embedded in Forward-Backward iterates.
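
A minimal PyTorch-style sketch of the restart-plus-JFB mechanism described above; the function and variable names are illustrative assumptions, not the paper's code. The fixed-point iteration runs without gradient tracking, then the scheme is "restarted" with a single tracked step at the equilibrium, and backpropagation through that one step yields the JFB gradient.

    import torch

    def jfb_restart_gradient(T, x0, outer_loss, max_iter=500, tol=1e-6):
        # Inner loop: iterate the contraction T to (near) equilibrium with
        # gradient tracking disabled, so memory does not grow with depth.
        x = x0.detach()
        with torch.no_grad():
            for _ in range(max_iter):
                x_new = T(x)
                if torch.norm(x_new - x) <= tol * (1.0 + torch.norm(x)):
                    x = x_new
                    break
                x = x_new
        # Restart: one tracked application of T at the equilibrium point.
        # Backpropagating through this single step is the JFB approximation
        # of the implicit DEQ gradient with respect to T's parameters.
        x_eq = T(x)
        loss = outer_loss(x_eq)
        loss.backward()
        return loss

Only one application of T is differentiated, so the memory cost is independent of the number of inner iterations, and the accuracy of the gradient is governed by the contraction factor of T rather than by a truncation depth.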
Problem

Research questions and friction points this paper is trying to address.

Integrating bilevel optimization with automatic differentiation for imaging inverse problems
Approximating the optimal gradient step of the Deep Equilibrium (DEQ) framework via Jacobian-Free Backpropagation (JFB)
Applying a restart strategy with JFB to train parameters in imaging problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Jacobian-Free Backpropagation for gradient approximation
A restart strategy combined with Automatic Differentiation
Application to imaging problems beyond the theoretical framework, such as Plug-and-Play schemes (see the sketch below)
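
As one concrete instance of the imaging applications listed above, here is a hedged sketch of Plug-and-Play Forward-Backward iterations, where a denoiser replaces the proximal operator of the regularizer; all names are illustrative assumptions, not the paper's code.

    import torch

    def pnp_forward_backward(y, A, At, denoiser, gamma, sigma, n_iter=100):
        # Forward-Backward splitting with a plugged-in denoiser:
        # a gradient step on the data-fidelity term 0.5 * ||A x - y||^2,
        # followed by a denoising step acting as an implicit prior.
        # gamma (step size) and sigma (denoiser strength) are the kinds of
        # hyperparameters the paper learns by bilevel optimization.
        x = At(y)                               # warm start from back-projection
        for _ in range(n_iter):
            grad = At(A(x) - y)                 # gradient of the data term
            x = denoiser(x - gamma * grad, sigma)
        return x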
🔎 Similar Papers
No similar papers found.
Leo Davy
Laboratoire de Physique de l’ENS Lyon, CNRS UMR 5672, F-69007 Lyon, France
Luis M. Briceño-Arias
Universidad Técnica Federico Santa María, Departamento de Matemática, 8940897, San Joaquín, Santiago de Chile
Nelly Pustelnik
CNRS researcher, Laboratoire de Physique de l'ENS Lyon
nelly.pustelnik@ens-lyon.fr