Integrating Intermediate Layer Optimization and Projected Gradient Descent for Solving Inverse Problems with Diffusion Models

📅 2025-05-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the limited expressiveness of traditional priors and the high computational cost and poor convergence of existing diffusion-based methods for image inverse problems, this paper proposes an Intermediate-Layer Optimization (ILO) framework leveraging pre-trained diffusion models. We introduce DMILO, the first method to extend the expressive capacity of diffusion priors via sparse deviation modeling. Furthermore, we integrate projected gradient descent (PGD) to yield DMILO-PGD, ensuring robust iterative convergence, and provide theoretical convergence analysis. Evaluated on diverse linear and nonlinear inverse problems—including super-resolution, denoising, and compressed sensing—our approach consistently outperforms state-of-the-art methods: average PSNR improves by 1.2–2.8 dB, memory consumption decreases by 37%, and required iterations reduce by 42%.

Technology Category

Application Category

📝 Abstract
Inverse problems (IPs) involve reconstructing signals from noisy observations. Traditional approaches often rely on handcrafted priors, which can fail to capture the complexity of real-world data. The advent of pre-trained generative models has introduced new paradigms, offering improved reconstructions by learning rich priors from data. Among these, diffusion models (DMs) have emerged as a powerful framework, achieving remarkable reconstruction performance across numerous IPs. However, existing DM-based methods frequently encounter issues such as heavy computational demands and suboptimal convergence. In this work, building upon the idea of the recent work DMPlug~cite{wang2024dmplug}, we propose two novel methods, DMILO and DMILO-PGD, to address these challenges. Our first method, DMILO, employs intermediate layer optimization (ILO) to alleviate the memory burden inherent in DMPlug. Additionally, by introducing sparse deviations, we expand the range of DMs, enabling the exploration of underlying signals that may lie outside the range of the diffusion model. We further propose DMILO-PGD, which integrates ILO with projected gradient descent (PGD), thereby reducing the risk of suboptimal convergence. We provide an intuitive theoretical analysis of our approach under appropriate conditions and validate its superiority through extensive experiments on diverse image datasets, encompassing both linear and nonlinear IPs. Our results demonstrate significant performance gains over state-of-the-art methods, highlighting the effectiveness of DMILO and DMILO-PGD in addressing common challenges in DM-based IP solvers.
Problem

Research questions and friction points this paper is trying to address.

Reducing computational demands in diffusion model-based inverse problems
Improving convergence in inverse problem solutions with diffusion models
Expanding signal exploration beyond diffusion model range limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses intermediate layer optimization for memory efficiency
Integrates projected gradient descent for better convergence
Expands DM range with sparse deviation technique
🔎 Similar Papers
No similar papers found.
Y
Yang Zheng
University of Electronic Science and Technology of China
W
Wen Li
University of Electronic Science and Technology of China
Zhaoqiang Liu
Zhaoqiang Liu
Data Intelligence Group, UESTC
Theoretical machine learningDiffusion modelsLarge language models