🤖 AI Summary
To address the challenge of insufficient clean training data in low-data inverse imaging reconstruction, this paper proposes UGoDIT: an unsupervised grouped deep image prior framework. UGoDIT requires only *M* undersampled measurements (no ground-truth labels or large-scale datasets) and jointly learns transferable priors via a shared encoder and decoupled multi-decoder architecture; at test time, the encoder is frozen while individual decoders are fine-tuned for rapid adaptive reconstruction. Its key innovation lies in the first integration of grouped deep image priors with transferable weight learning, balancing generalizability and computational efficiency. Extensive experiments on multi-coil MRI reconstruction, single-image super-resolution, and nonlinear deblurring demonstrate that UGoDIT significantly outperforms single-sample DIP, converging faster while achieving reconstruction quality on par with state-of-the-art supervised methods and diffusion-based models.
📝 Abstract
Recent advances in data-centric deep generative models have led to significant progress in solving inverse imaging problems. However, these models (e.g., diffusion models (DMs)) typically require large amounts of fully sampled (clean) training data, which is often impractical in medical and scientific settings such as dynamic imaging. On the other hand, training-data-free approaches like the Deep Image Prior (DIP) do not require clean ground-truth images but suffer from noise overfitting and can be computationally expensive as the network parameters need to be optimized for each measurement set independently. Moreover, DIP-based methods often overlook the potential of learning a prior using a small number of sub-sampled measurements (or degraded images) available during training. In this paper, we propose UGoDIT, an Unsupervised Group DIP via Transferable weights, designed for the low-data regime where only a very small number, M, of sub-sampled measurement vectors are available during training. Our method learns a set of transferable weights by optimizing a shared encoder and M disentangled decoders. At test time, we reconstruct the unseen degraded image using a DIP network, where part of the parameters are fixed to the learned weights, while the remaining are optimized to enforce measurement consistency. We evaluate UGoDIT on both medical (multi-coil MRI) and natural (super resolution and non-linear deblurring) image recovery tasks under various settings. Compared to recent standalone DIP methods, UGoDIT provides accelerated convergence and notable improvement in reconstruction quality. Furthermore, our method achieves performance competitive with SOTA DM-based and supervised approaches, despite not requiring large amounts of clean training data.
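The shared-encoder/multi-decoder training and the test-time adaptation described above can be sketched in PyTorch. This is a minimal illustration under assumed shapes and toy networks: the tiny convolutional encoder/decoders, the average-pooling degradation operator `forward_op`, and all tensor sizes are hypothetical stand-ins, not the paper's actual architectures or forward models.

```python
# Minimal sketch of the UGoDIT idea (hypothetical architectures and shapes).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
M = 3  # number of sub-sampled training measurement vectors

# Shared (transferable) encoder and M decoupled per-measurement decoders.
encoder = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
decoders = nn.ModuleList(nn.Conv2d(8, 1, 3, padding=1) for _ in range(M))

def forward_op(x):
    # Placeholder degradation A(x); here 2x downsampling as a stand-in
    # for undersampled MRI / super-resolution / blur operators.
    return F.avg_pool2d(x, 2)

# Fixed DIP input codes and the M available degraded measurements (toy data).
zs = [torch.randn(1, 1, 16, 16) for _ in range(M)]
ys = [forward_op(torch.rand(1, 1, 16, 16)) for _ in range(M)]

# --- Training: jointly optimize shared encoder + all M decoders ---
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoders.parameters()), lr=1e-2
)
for _ in range(50):
    opt.zero_grad()
    loss = sum(
        ((forward_op(dec(encoder(z))) - y) ** 2).mean()
        for dec, z, y in zip(decoders, zs, ys)
    )
    loss.backward()
    opt.step()

# --- Test time: freeze the learned encoder, fit a fresh decoder to enforce
# measurement consistency on an unseen degraded image ---
for p in encoder.parameters():
    p.requires_grad_(False)
test_dec = nn.Conv2d(8, 1, 3, padding=1)
z_test = torch.randn(1, 1, 16, 16)
y_test = forward_op(torch.rand(1, 1, 16, 16))
opt_t = torch.optim.Adam(test_dec.parameters(), lr=1e-2)
for _ in range(50):
    opt_t.zero_grad()
    loss_t = ((forward_op(test_dec(encoder(z_test))) - y_test) ** 2).mean()
    loss_t.backward()
    opt_t.step()

recon = test_dec(encoder(z_test))  # reconstructed image estimate
```

Because only the lightweight decoder is optimized at test time, adaptation to a new measurement set is much cheaper than re-running a full standalone DIP from scratch.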