Self-diffusion for Solving Inverse Problems

📅 2025-10-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional inverse problem solving relies heavily on pretrained generative models, which limits generalizability and requires extensive prior training. Method: This paper introduces "Self-Diffusion", a self-supervised diffusion-based optimization framework that eliminates the need for pretrained score functions or external denoisers. It employs a randomly initialized, untrained CNN as an *in situ* denoiser, jointly optimizing the network parameters and the current estimate during reverse diffusion. The method integrates data fidelity constraints, spectral-bias-guided noise scheduling, and iterative denoising to enable unsupervised reconstruction under arbitrary forward operators and noisy observations. Contribution/Results: The core innovation is a fully self-contained, prior-free diffusion optimization pipeline. Experiments demonstrate competitive or superior performance over supervised and pretrained baselines across diverse linear inverse problems, including CT reconstruction, single-image super-resolution, and compressive sensing, validating its universality, effectiveness, and practical applicability.

📝 Abstract
We propose self-diffusion, a novel framework for solving inverse problems without relying on pretrained generative models. Traditional diffusion-based approaches require training a model on a clean dataset to learn to reverse the forward noising process. This model is then used to sample clean solutions -- corresponding to posterior sampling from a Bayesian perspective -- that are consistent with the observed data under a specific task. In contrast, self-diffusion introduces a self-contained iterative process that alternates between noising and denoising steps to progressively refine its estimate of the solution. At each step of self-diffusion, noise is added to the current estimate, and a self-denoiser, a single untrained convolutional network randomly initialized from scratch, is trained for a number of iterations via a data fidelity loss to predict the solution from the noisy estimate. Essentially, self-diffusion exploits the spectral bias of neural networks and modulates it through a scheduled noise process. Without relying on pretrained score functions or external denoisers, this approach remains adaptive to arbitrary forward operators and noisy observations, making it highly flexible and broadly applicable. We demonstrate the effectiveness of our approach on a variety of linear inverse problems, showing that self-diffusion achieves competitive or superior performance compared to other methods.
Problem

Research questions and friction points this paper is trying to address.

Solving inverse problems without pretrained generative models
Introducing self-contained iterative noising-denoising process
Adapting to arbitrary forward operators and noisy observations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-diffusion uses iterative noising-denoising refinement process
Employs single untrained convolutional network as self-denoiser
Leverages neural network spectral bias via scheduled noise modulation
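The loop described above (noise the current estimate, then fit an in-situ model against a data fidelity loss on a decreasing noise schedule) can be sketched roughly as follows. This is a minimal toy illustration, not the paper's implementation: the names `A`, `y`, `sigmas`, and `inner_steps` are assumptions, and a plain parameter vector trained by gradient descent stands in for the untrained CNN denoiser.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem y = A x (compressive: fewer measurements than unknowns).
n = 32
x_true = np.sin(np.linspace(0, 3 * np.pi, n))        # toy ground-truth signal
A = rng.standard_normal((16, n)) / np.sqrt(n)        # assumed forward operator
y = A @ x_true                                       # noiseless observation

x_est = np.zeros(n)                                  # current solution estimate
sigmas = np.linspace(1.0, 0.01, 50)                  # decreasing noise schedule (assumed)
lr = 0.5                                             # inner-loop step size
inner_steps = 20                                     # denoiser-fitting iterations per level

for sigma in sigmas:
    # Noising step: perturb the current estimate at the scheduled level.
    x_noisy = x_est + sigma * rng.standard_normal(n)
    # Denoising step: optimize the in-situ model against the data
    # fidelity loss 0.5 * ||A theta - y||^2, starting from the noisy estimate.
    theta = x_noisy.copy()
    for _ in range(inner_steps):
        grad = A.T @ (A @ theta - y)                 # gradient of the fidelity loss
        theta -= lr * grad
    x_est = theta                                    # refined estimate for the next level

residual = float(np.linalg.norm(A @ x_est - y))
print(residual)
```

In the paper the stand-in parameter vector would be a randomly initialized CNN whose spectral bias favors low-frequency structure, so the shrinking noise schedule controls which frequencies are recovered at each stage; this sketch only shows the alternating noising/data-fidelity structure of the loop.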