🤖 AI Summary
Training deep equilibrium models (DEQs) for unsupervised equivariant imaging (EI) is challenging: implicit differentiation through the fixed-point computation is unstable and computationally expensive under complex EI losses.
Method: We propose a modular backpropagation framework that decouples the symmetry constraints from the reconstruction optimization, substantially simplifying implicit differentiation for DEQs under EI losses. Analysis suggests that the learned model approximates the proximal operator of an invariant prior. The method is fully self-supervised: it requires no paired ground-truth labels and builds its training objective solely from group symmetries inherent in the observed data (a minimal sketch of such an objective follows).
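To make the self-supervised objective concrete, here is a minimal sketch of a standard equivariant-imaging loss in PyTorch. All names here are illustrative assumptions, not the paper's API: `f` stands for the reconstruction model (e.g. a DEQ run to its fixed point), `A` for the known forward measurement operator, and `transform` for a randomly sampled group action.

```python
import torch

def ei_loss(f, A, y, transform):
    """Hedged sketch of a self-supervised equivariant-imaging loss.

    f         -- reconstruction model (e.g. a DEQ run to its fixed point)
    A         -- forward measurement operator (callable)
    y         -- observed measurements; no ground-truth signals are used
    transform -- a randomly sampled group action T_g on the signal domain
    """
    x_hat = f(y)                                # reconstruct from measurements
    data_fit = torch.mean((A(x_hat) - y) ** 2)  # measurement-consistency term
    x_t = transform(x_hat)                      # apply the symmetry T_g
    x_tt = f(A(x_t))                            # reconstruct the re-measured transform
    equiv = torch.mean((x_tt - x_t) ** 2)       # equivariance-consistency term
    return data_fit + equiv
```

The first term enforces consistency with the measurement physics, while the second exploits the assumed invariance of the signal set under the group action, supplying supervision in directions that the operator `A` does not observe.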
Results: Experiments on diverse imaging inverse problems, including CT and MRI, demonstrate that our approach significantly outperforms Jacobian-free backpropagation and other baselines, achieving superior reconstruction accuracy and generalization.
📝 Abstract
Equivariant imaging (EI) enables training signal reconstruction models without requiring ground truth data by leveraging signal symmetries. Deep equilibrium models (DEQs) are a powerful class of neural networks where the output is a fixed point of a learned operator. However, training DEQs with complex EI losses requires implicit differentiation through fixed-point computations, whose implementation can be challenging. We show that backpropagation can be implemented modularly, simplifying training. Experiments demonstrate that DEQs trained with implicit differentiation outperform those trained with Jacobian-free backpropagation and other baseline methods. Additionally, we find evidence that EI-trained DEQs approximate the proximal map of an invariant prior.
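To illustrate what implicit differentiation through a fixed-point computation involves, below is a minimal PyTorch sketch of a DEQ layer in the well-known style of Bai et al.'s deep equilibrium models. It is an illustrative pattern under simplifying assumptions (e.g. that the latent iterate has the same shape as the input), not the paper's implementation.

```python
import torch

class DEQFixedPoint(torch.nn.Module):
    """Illustrative DEQ layer with implicit differentiation.

    Forward: iterate z <- f(z, y) to a fixed point z* without an autograd tape.
    Backward: solve the adjoint equation v = v J_f(z*) + grad by the same kind
    of fixed-point iteration, so memory cost stays constant in solver depth.
    """

    def __init__(self, f, max_iter=50, tol=1e-4):
        super().__init__()
        self.f, self.max_iter, self.tol = f, max_iter, tol

    def _iterate(self, g, z):
        # generic fixed-point iteration z <- g(z) until convergence
        for _ in range(self.max_iter):
            z_new = g(z)
            if torch.norm(z_new - z) < self.tol:
                return z_new
            z = z_new
        return z

    def forward(self, y):
        with torch.no_grad():  # forward solve: no graph is recorded
            z = self._iterate(lambda z: self.f(z, y), torch.zeros_like(y))
        z = self.f(z, y)       # one extra step re-attaches autograd at z*

        # set up the implicit (adjoint) backward pass via a gradient hook
        z0 = z.clone().detach().requires_grad_()
        f0 = self.f(z0, y)

        def backward_hook(grad):
            # solve v = v @ (df/dz)^T + grad by fixed-point iteration,
            # using vector-Jacobian products instead of the full Jacobian
            return self._iterate(
                lambda v: torch.autograd.grad(f0, z0, v, retain_graph=True)[0]
                + grad,
                grad,
            )

        z.register_hook(backward_hook)  # assumes training mode (z requires grad)
        return z
```

For contrast, Jacobian-free backpropagation, the baseline discussed above, amounts to skipping the adjoint solve and letting the hook return `grad` unchanged (i.e. treating the Jacobian term as zero), which is cheaper per step but ignores the fixed-point structure that implicit differentiation accounts for.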