🤖 AI Summary
This work addresses the challenge of prohibitive memory costs associated with global forward operators in large-scale 3D imaging inverse problems, which hinder their integration into deep unrolling networks. To overcome this limitation, the authors propose a scalable domain partitioning strategy combined with a normal operator approximation, enabling, for the first time, end-to-end training of reconstruction networks that embed the full forward model on a single GPU. The approach efficiently embeds forward operators of arbitrary scale while substantially reducing both computational and memory overhead, and is demonstrated on 3D cone-beam CT and multi-coil accelerated MRI reconstruction. Experiments show that the proposed framework achieves state-of-the-art performance on both tasks, with training and inference feasible on a single GPU.
📄 Abstract
Deep learning-based methods have revolutionized the field of imaging inverse problems, yielding state-of-the-art performance across various imaging domains. The best-performing networks incorporate the imaging operator within the network architecture, typically in the form of deep unrolling. However, in large-scale problems such as 3D imaging, most existing methods fail to incorporate the operator into the architecture due to the prohibitive amount of memory required by global forward operators, which also hinder typical patching strategies. In this work, we present a domain partitioning strategy and normal operator approximations that enable the training of end-to-end reconstruction models that incorporate the forward operators of arbitrarily large problems into their architecture. The proposed method achieves state-of-the-art performance on 3D X-ray cone-beam tomography and 3D multi-coil accelerated MRI, while requiring only a single GPU for both training and inference.
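To make the general idea concrete, the sketch below shows a minimal, hypothetical form of one unrolled data-consistency step in which a per-block approximation of the normal operator A^T A is applied over a partitioned 3D volume, so that only one sub-volume's operator has to be evaluated at a time. This is not the authors' implementation: the names `block_normal_op` and `local_normal_op`, the z-axis partitioning, and the identity/zero placeholders are assumptions made purely for illustration.

```python
# Conceptual sketch (not the paper's code): one unrolled gradient step,
# x <- x - step * (A^T A x - A^T b) - R(x), with A^T A applied block-wise.
import torch

def block_normal_op(x, block_size, local_normal_op):
    """Apply a per-block approximation of the normal operator A^T A along the z-axis."""
    out = torch.empty_like(x)
    for z0 in range(0, x.shape[0], block_size):
        block = x[z0:z0 + block_size]                      # sub-volume for this partition
        out[z0:z0 + block_size] = local_normal_op(block)   # local operator approximation
    return out

def unrolled_step(x, atb, step, block_size, local_normal_op, regularizer):
    """One data-consistency + learned-regularization update of an unrolled network."""
    grad = block_normal_op(x, block_size, local_normal_op) - atb
    return x - step * grad - regularizer(x)

if __name__ == "__main__":
    vol = torch.zeros(64, 128, 128)              # toy 3D volume
    atb = torch.zeros_like(vol)                  # A^T b, precomputed from the measurements
    identity_op = lambda b: b                    # stand-in for a local normal-operator approximation
    no_reg = lambda v: torch.zeros_like(v)       # stand-in for a learned regularizer (e.g. a CNN)
    vol = unrolled_step(vol, atb, step=0.1, block_size=16,
                        local_normal_op=identity_op, regularizer=no_reg)
    print(vol.shape)
```

In an actual unrolled network the placeholder operators would be replaced by the problem-specific forward/normal operator for each partition and by a trainable regularization network; the point of the sketch is only that partitioning keeps the per-step memory footprint bounded by a single sub-volume.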