🤖 AI Summary
This work addresses the lack of a unified theoretical foundation for learned iterative networks in computational imaging and inverse problems. We propose a continuous-domain reconstruction framework grounded in operator learning, which explicitly decouples *how to compute* (the reconstruction operator and its algorithmic architecture) from *what to compute* (the learning problem), thereby bridging the theoretical gap between classical optimization-based methods and data-driven models. Methodologically, we integrate variational unrolling, operator modeling in function spaces, deep neural network parameterization, and end-to-end training into a single coherent framework, yielding a learnable, interpretable, and generalizable reconstruction operator. Our approach unifies major classes of learned iterative methods under a common theoretical umbrella, and a short numerical study illustrates its effectiveness. The framework offers a principled basis for designing reconstruction networks that combine theoretical grounding with strong practical performance.
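To make the decoupling concrete, one reading is the following (the notation is an assumption of this summary, not taken from the chapter): a parametrised reconstruction operator $\mathcal{R}_\theta \colon Y \to X$, mapping the data space $Y$ to the image space $X$, fixes *how to compute*, while the learning problem

$$
\min_\theta \; \mathbb{E}_{(x,\,y)} \big[ \ell\big( \mathcal{R}_\theta(y),\, x \big) \big]
$$

with a loss $\ell$ over ground-truth/measurement pairs $(x, y)$, fixes *what to compute*.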
📝 Abstract
Learned image reconstruction has become a pillar of computational imaging and inverse problems. Among the most successful approaches are learned iterative networks, which are formulated by unrolling classical iterative optimisation algorithms for solving variational problems. While the underlying algorithm is usually formulated in a functional analytic setting, learned approaches are often viewed as purely discrete. In this chapter we present a unified operator view of learned iterative networks. Specifically, we formulate a learned reconstruction operator, which defines *how* to compute, and separately the learning problem, which defines *what* to compute. In this setting we present common approaches and show that many of them share a closely related core. We review both linear and nonlinear inverse problems in this framework and conclude with a short numerical study.
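As a concrete illustration of unrolling, the sketch below builds a small learned gradient descent network in PyTorch for a linear problem $y = Ax + \varepsilon$: each unrolled iteration applies the known data-fidelity gradient $A^{\top}(Ax - y)$ together with a learned correction, and the resulting operator is trained end-to-end on paired data. Everything here (the class name `UnrolledNet`, the fully connected correction blocks, the iteration and step counts) is an illustrative assumption, not the chapter's implementation.

```python
import torch
import torch.nn as nn

class UnrolledNet(nn.Module):
    """Learned gradient descent, a hedged sketch: each unrolled iteration
    combines the data-fidelity gradient A^T(A x - y) with a learned correction."""

    def __init__(self, A: torch.Tensor, n_iter: int = 10):
        super().__init__()
        self.register_buffer("A", A)  # forward operator as a plain matrix, shape (m, n)
        self.n_iter = n_iter
        # one learned step size per unrolled iteration
        self.tau = nn.Parameter(torch.full((n_iter,), 0.1))
        # one small learned correction network per unrolled iteration
        self.reg = nn.ModuleList(
            nn.Sequential(nn.Linear(A.shape[1], 64), nn.ReLU(),
                          nn.Linear(64, A.shape[1]))
            for _ in range(n_iter)
        )

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        x = y @ self.A  # adjoint-based initialisation, shape (batch, n)
        for k in range(self.n_iter):
            grad = (x @ self.A.T - y) @ self.A  # gradient of 0.5 * ||A x - y||^2
            x = x - self.tau[k] * grad + self.reg[k](x)
        return x

# The learning problem ("what to compute"): plain supervised training on
# synthetic ground-truth/measurement pairs, purely for illustration.
m, n = 32, 64
A = torch.randn(m, n) / m ** 0.5
net = UnrolledNet(A, n_iter=8)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x_true = torch.randn(128, n)                    # synthetic ground truth
y = x_true @ A.T + 0.01 * torch.randn(128, m)   # noisy measurements
for _ in range(5):                              # a few end-to-end training steps
    opt.zero_grad()
    loss = torch.mean((net(y) - x_true) ** 2)
    loss.backward()
    opt.step()
```

Replacing the correction blocks with proximal steps, or sharing their weights across iterations, recovers other common learned iterative variants within the same operator view.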