Equivariant Splitting: Self-supervised learning from incomplete data

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses inverse problem reconstruction—such as image inpainting, accelerated MRI, and compressed sensing—under a challenging setting: only a single incomplete forward model (e.g., low-rank or highly underdetermined) is available, and no ground-truth labels exist. We propose a novel self-supervised learning framework centered on an equivariant reconstruction network, whose output is theoretically guaranteed to be equivariant to transformations of the observed measurement. Leveraging this property, we design a self-supervised split loss that provides an unbiased estimator—in expectation—of the ideal supervised loss. Crucially, our method requires neither clean labels nor auxiliary data assumptions; it relies solely on one degraded observation and structural priors encoded in the forward model. Extensive experiments demonstrate that our approach significantly outperforms existing self-supervised and weakly supervised methods, achieving state-of-the-art reconstruction quality—especially in highly underdetermined regimes.
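The self-supervised split loss described above can be sketched in code. The idea, hedged here as a minimal illustration rather than the paper's exact formulation: randomly partition the observed measurement entries into an input subset and a held-out subset, reconstruct from the input subset only, and penalize the reconstruction on the held-out entries. The function name `split_loss` and the inpainting-style (masking) forward model are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_loss(net, y, obs_mask, split_p=0.5):
    """Minimal sketch of a measurement-splitting loss (inpainting case).

    net      : reconstruction function (masked measurements, mask) -> estimate
    y        : observed measurements, zeros at unobserved entries
    obs_mask : binary mask of observed entries (the single incomplete operator)
    """
    # Randomly partition the observed entries into an input and a target subset.
    keep = (rng.random(y.shape) < split_p) * obs_mask
    held_out = obs_mask * (1 - keep)

    # Reconstruct using only the kept subset of measurements.
    x_hat = net(y * keep, keep)

    # For inpainting the forward model is the mask itself, so the predicted
    # measurements on the held-out entries are simply x_hat evaluated there.
    err = (x_hat - y) ** 2 * held_out
    return err.sum() / max(held_out.sum(), 1)
```

With an equivariant network, the paper shows this kind of loss is, in expectation over the random splits, an unbiased estimate of the supervised loss; the sketch above only conveys the splitting mechanism itself.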

📝 Abstract
Self-supervised learning for inverse problems makes it possible to train a reconstruction network from noisy and/or incomplete data alone. These methods have the potential to enable learning-based solutions when obtaining ground-truth references for training is expensive or even impossible. In this paper, we propose a new self-supervised learning strategy devised for the challenging setting where measurements are observed via a single incomplete observation model. We introduce a new definition of equivariance in the context of reconstruction networks, and show that the combination of self-supervised splitting losses and equivariant reconstruction networks results in unbiased estimates of the supervised loss. Through a series of experiments on image inpainting, accelerated magnetic resonance imaging, and compressive sensing, we demonstrate that the proposed loss achieves state-of-the-art performance in settings with highly rank-deficient forward models.
Problem

Research questions and friction points this paper is trying to address.

Self-supervised learning for inverse problems without ground-truth data
Handling single incomplete observation models in reconstruction networks
Achieving unbiased supervised loss via equivariance and splitting losses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised learning from incomplete data
Equivariant reconstruction networks for unbiased estimates
Combining splitting losses with equivariant networks
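The equivariance property at the heart of the method can be illustrated with a toy numerical check. This is not the paper's architecture: as a stand-in, a circular moving average is used, since a convolution with circular padding is equivariant to circular shifts; the helper names `shift_equivariant` and `moving_avg` are invented for this sketch.

```python
import numpy as np

def shift_equivariant(f, x, s=3):
    """Check f(shift(x)) == shift(f(x)) for a circular shift by s samples."""
    return np.allclose(f(np.roll(x, s)), np.roll(f(x), s))

def moving_avg(x, k=3):
    """Circular moving average: a simple shift-equivariant operator."""
    kernel = np.ones(k) / k
    n = len(x)
    # Each output sample averages k consecutive inputs with circular wrap-around.
    return np.array([np.dot(kernel, np.roll(x, -i)[:k]) for i in range(n)])
```

The paper's equivariant reconstruction networks satisfy an analogous property with respect to transformations acting on the measurements, which is what makes the splitting loss unbiased; the check above only demonstrates the equivariance concept on a one-dimensional toy operator.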
Victor Sechaud
LPENSL, CNRS, ENS de Lyon, France
Jérémy Scanvic
LPENSL, CNRS, ENS de Lyon, France; Prysm, Le Cannet, France
Quentin Barthélemy
ML researcher, PhD, Foxstream
Signal and image processing, Video analysis, Machine learning, Deep learning, Statistics
Patrice Abry
LPENSL, CNRS, ENS de Lyon, France
Julián Tachella
CNRS research scientist at ENS de Lyon
Signal Processing, Image Processing, Machine Learning