🤖 AI Summary
This work addresses inverse problem reconstruction, such as image inpainting, accelerated MRI, and compressed sensing, under a challenging setting: measurements come from a single incomplete forward model (e.g., rank-deficient or highly underdetermined), and no ground-truth labels exist. We propose a novel self-supervised learning framework centered on an equivariant reconstruction network, whose output is theoretically guaranteed to transform consistently with transformations of the observed measurements. Leveraging this property, we design a self-supervised splitting loss that is an unbiased estimator of the ideal supervised loss. Crucially, our method requires neither clean labels nor auxiliary data assumptions; it relies solely on one degraded observation per image and the structural priors encoded in the forward model. Extensive experiments demonstrate that our approach significantly outperforms existing self-supervised and weakly supervised methods, achieving state-of-the-art reconstruction quality, especially in highly underdetermined regimes.
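As a rough illustration of the splitting idea, here is a minimal sketch of a measurement-splitting loss, assuming an inpainting-style forward model where a binary mask selects the observed pixels. The function name `splitting_loss` and the network interface `net(measurements, mask)` are hypothetical placeholders, not the authors' implementation.

```python
import torch

def splitting_loss(net, y, mask, p=0.5):
    """Self-supervised measurement-splitting loss (illustrative sketch).

    y    : observed measurements (float tensor), zero-filled outside the mask
    mask : binary {0, 1} tensor marking observed entries
    net  : reconstruction network taking (measurements, mask)
    p    : probability of keeping an observed entry as network input
    """
    # Randomly split the observed entries into an input subset and a
    # held-out target subset.
    keep = (torch.rand_like(y) < p).float() * mask
    held = mask - keep
    # Reconstruct from the input subset only.
    x_hat = net(keep * y, keep)
    # Penalize the reconstruction error only on the held-out entries.
    # The paper's claim is that, with an equivariant network, this kind
    # of loss is in expectation an unbiased surrogate for the supervised loss.
    return (held * (x_hat - y)).pow(2).sum() / held.sum().clamp(min=1.0)
```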
📝 Abstract
Self-supervised learning for inverse problems makes it possible to train a reconstruction network from noisy and/or incomplete data alone. These methods have the potential to enable learning-based solutions when obtaining ground-truth references for training is expensive or even impossible. In this paper, we propose a new self-supervised learning strategy devised for the challenging setting where measurements are observed via a single incomplete observation model. We introduce a new definition of equivariance in the context of reconstruction networks, and show that combining self-supervised splitting losses with equivariant reconstruction networks yields unbiased estimates of the supervised loss. Through a series of experiments on image inpainting, accelerated magnetic resonance imaging, and compressive sensing, we demonstrate that the proposed loss achieves state-of-the-art performance in settings with highly rank-deficient forward models.
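To make the equivariance property concrete: a reconstruction network is equivariant in this sense if transforming the measurements and then reconstructing gives the same result as reconstructing and then transforming. The sketch below, assuming 2D images, an inpainting-style setting where image and measurement spaces coincide, and a group of cyclic shifts, shows a generic group-averaging (symmetrization) construction that turns an arbitrary base network into a shift-equivariant one. This is a standard construction offered for intuition, not necessarily the architecture used in the paper.

```python
import torch

def symmetrize(net, y, mask, shifts):
    """Shift-equivariant reconstruction via group averaging (sketch).

    shifts : list of (dy, dx) cyclic shifts; averaging over the full shift
             group gives exact equivariance, a sampled subset approximates it
    """
    out = torch.zeros_like(y)
    for dy, dx in shifts:
        # Transform the measurements and mask, reconstruct, undo the transform.
        y_s = torch.roll(y, shifts=(dy, dx), dims=(-2, -1))
        m_s = torch.roll(mask, shifts=(dy, dx), dims=(-2, -1))
        x_s = net(y_s, m_s)
        out = out + torch.roll(x_s, shifts=(-dy, -dx), dims=(-2, -1))
    # Averaging over the group makes the overall map commute with shifts.
    return out / len(shifts)
```

Paired with a splitting loss over held-out measurements, such an equivariant network is what, per the abstract, makes the self-supervised objective an unbiased estimate of the supervised loss.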