🤖 AI Summary
For the inverse problem of Poisson imaging, this paper introduces Deep Equilibrium Models (DEQs) into the KL-divergence data-fidelity setting for the first time, proposing Mirror Descent-based DEQs (MD-DEQ), a non-Euclidean implicit network grounded in mirror descent. MD-DEQ tailors the Bregman-divergence geometry to Poisson statistics, enabling end-to-end learning of neural regularizers while ensuring fixed-point convergence without explicit iterative unrolling. Unlike conventional model-based methods and Bregman Plug-and-Play (PnP), MD-DEQ avoids sensitive initializations and careful hyperparameter tuning, offering fully parameter-free inference. Experiments demonstrate that MD-DEQ achieves reconstruction accuracy competitive with Bregman PnP approaches while exhibiting superior training stability. The code is publicly available.
📄 Abstract
Deep Equilibrium Models (DEQs) are implicit neural networks with fixed points, which have recently gained attention for learning image regularization functionals, particularly in settings involving Gaussian fidelities, where assumptions on the forward operator ensure contractiveness of standard (proximal) Gradient Descent operators. In this work, we extend the application of DEQs to Poisson inverse problems, where the data fidelity term is more appropriately modeled by the Kullback-Leibler divergence. To this end, we introduce a novel DEQ formulation based on Mirror Descent, defined in terms of a tailored non-Euclidean geometry that naturally adapts to the structure of the data term. This enables the learning of neural regularizers within a principled training framework. We derive sufficient conditions that guarantee the convergence of the learned reconstruction scheme and propose computational strategies that enable both efficient training and fully parameter-free inference. Numerical experiments show that our method outperforms traditional model-based approaches and performs comparably to Bregman Plug-and-Play methods, while mitigating their typical drawbacks, namely sensitivity to initialization and careful tuning of hyperparameters. The code is publicly available at https://github.com/christiandaniele/DEQ-MD.
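To give a rough feel for the kind of iteration the abstract describes, the sketch below runs a plain mirror-descent fixed-point loop on a tiny synthetic Poisson problem. Everything here is an illustrative assumption, not the authors' implementation: the mirror map is taken to be the Burg entropy (a common non-Euclidean choice for KL fidelities), the learned neural regularizer is replaced by a hand-picked quadratic prior, and the step size is hand-tuned with no convergence guarantee claimed.

```python
import numpy as np

# Hypothetical sketch: mirror descent for min_x KL(Ax, y) + 0.5*lam*||x||^2,
# assuming the Burg entropy phi(x) = -sum(log x_j) as mirror map, so that
# grad phi(x) = -1/x. The paper's learned regularizer is NOT reproduced here.

rng = np.random.default_rng(0)
n, m = 8, 12
A = rng.uniform(0.1, 1.0, size=(m, n))            # positive forward operator
x_true = rng.uniform(0.5, 2.0, size=n)
y = rng.poisson(A @ x_true).astype(float) + 1e-3  # Poisson counts, kept > 0

lam = 0.05   # weight of the surrogate quadratic prior (illustrative)
tau = 0.05   # hand-tuned step size (illustrative)

def grad_F(x):
    """Gradient of sum((Ax)_i - y_i*log((Ax)_i)) + 0.5*lam*||x||^2."""
    Ax = A @ x
    return A.T @ (1.0 - y / Ax) + lam * x

def T(x):
    # One mirror-descent step: grad phi(x+) = grad phi(x) - tau * grad F(x).
    # With grad phi(x) = -1/x this solves in closed form to the multiplicative
    # update below, which keeps the iterate positive while 1 + tau*x*g > 0.
    g = grad_F(x)
    return x / (1.0 + tau * x * g)

# Fixed-point iteration: the reconstruction is (approximately) x* = T(x*).
x = np.ones(n)
for _ in range(5000):
    x_new = T(x)
    if np.max(np.abs(x_new - x)) < 1e-9:
        break
    x = x_new

print("fixed-point residual:", float(np.max(np.abs(T(x) - x))))
```

In MD-DEQ the operator playing the role of `T` contains a trained network and the fixed point is computed (and differentiated) implicitly rather than by naive unrolling; this sketch only illustrates why the non-Euclidean update preserves positivity, which the Gaussian/Euclidean gradient step does not.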