Learning Deblurring Texture Prior from Unpaired Data with Diffusion Model

📅 2025-07-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses blind image deblurring without paired blurry-sharp image data. We propose the first unsupervised diffusion-based framework for this task. Methodologically, we design a Texture Prior Encoder (TPE) to model image texture distributions, introduce a Texture Transfer Transformer (TTformer) with Filter-Modulated Multi-Head Self-Attention (FM-MSA) for spatially adaptive deblurring, and incorporate an adversarial loss in the wavelet domain to enhance fine-detail reconstruction. Our key contributions are threefold: (i) the first application of diffusion models to unpaired blind deblurring; (ii) memory-augmented texture prior learning; and (iii) a frequency-aware self-attention mechanism. Extensive experiments demonstrate that our method significantly outperforms existing unsupervised approaches across multiple benchmarks, particularly excelling in complex real-world blur scenarios with superior texture recovery and strong generalization capability.

📝 Abstract
Since acquiring large amounts of realistic blurry-sharp image pairs is difficult and expensive, learning blind image deblurring from unpaired data is a more practical and promising solution. Unfortunately, dominant approaches rely heavily on adversarial learning to bridge the gap between the blurry and sharp domains, ignoring the complex and unpredictable nature of real-world blur patterns. In this paper, we propose a novel diffusion model (DM)-based framework, dubbed ours, that learns a spatially varying texture prior from unpaired data for image deblurring. In particular, ours employs a DM to generate prior knowledge that aids in recovering the textures of blurry images. To implement this, we propose a Texture Prior Encoder (TPE) that introduces a memory mechanism to represent image textures and provides supervision for DM training. To fully exploit the generated texture priors, we present the Texture Transfer Transformer layer (TTformer), in which a novel Filter-Modulated Multi-head Self-Attention (FM-MSA) efficiently removes spatially varying blur through adaptive filtering. Furthermore, we implement a wavelet-based adversarial loss to preserve high-frequency texture details. Extensive evaluations show that ours provides a promising unsupervised deblurring solution and outperforms SOTA methods on widely-used benchmarks.
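The paper's exact FM-MSA design is not detailed here; the sketch below illustrates one plausible reading of "adaptive filtering inside self-attention": a texture prior predicts a per-position, per-channel gate (the "filter") that modulates the attention values before aggregation. The class name `FMMSASketch` and the gating layout are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class FMMSASketch(nn.Module):
    """Hypothetical sketch of Filter-Modulated Multi-head Self-Attention.

    A prior-conditioned gate modulates the value tokens so that each
    spatial position is filtered adaptively before attention pooling.
    """

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        assert dim % heads == 0
        self.heads = heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Predicts a per-position, per-channel modulation from the texture prior
        # (one possible realization of the "filter" in FM-MSA).
        self.filter_gen = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
        # x, prior: (B, N, C) token sequences (N = H*W flattened pixels)
        B, N, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Modulate values with the prior-conditioned gate.
        v = v * torch.sigmoid(self.filter_gen(prior))

        def split(t):  # (B, N, C) -> (B, heads, N, C/heads)
            return t.view(B, N, self.heads, C // self.heads).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)
        attn = (q @ k.transpose(-2, -1)) / (C // self.heads) ** 0.5
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```

In this reading, the gate plays the role of a learned spatially varying filter: positions with strong texture evidence from the prior pass their value features through largely unchanged, while uncertain positions are attenuated before the attention-weighted sum.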
Problem

Research questions and friction points this paper is trying to address.

Learning blind image deblurring from unpaired data
Overcoming complex real-world blur patterns with diffusion models
Enhancing texture recovery using adaptive filtering and wavelet loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses diffusion model for unpaired image deblurring
Introduces Texture Prior Encoder with memory
Proposes Filter-Modulated Self-Attention in TTformer
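The wavelet-based adversarial loss can be sketched as follows, assuming a one-level Haar decomposition and a hinge-style generator objective applied only to the high-frequency subbands; the helper names (`haar_subbands`, `wavelet_adv_loss`) and the choice of Haar filters are assumptions for illustration, since the paper's exact loss formulation is not reproduced here.

```python
import torch


def haar_subbands(x: torch.Tensor):
    """One-level 2D Haar transform via 2x2 block sums/differences.

    x: (B, C, H, W) with even H and W.
    Returns (LL, LH, HL, HH), each of shape (B, C, H/2, W/2).
    """
    a = x[:, :, 0::2, 0::2]  # top-left of each 2x2 block
    b = x[:, :, 0::2, 1::2]  # top-right
    c = x[:, :, 1::2, 0::2]  # bottom-left
    d = x[:, :, 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2  # low-frequency approximation
    lh = (a - b + c - d) / 2  # horizontal detail
    hl = (a + b - c - d) / 2  # vertical detail
    hh = (a - b - c + d) / 2  # diagonal detail
    return ll, lh, hl, hh


def wavelet_adv_loss(disc, restored: torch.Tensor) -> torch.Tensor:
    """Generator-side adversarial loss on high-frequency subbands only,
    pushing the restorer toward realistic fine textures.

    `disc` is any discriminator mapping (B, 3C, H/2, W/2) -> scores.
    """
    _, lh, hl, hh = haar_subbands(restored)
    high = torch.cat([lh, hl, hh], dim=1)
    return -disc(high).mean()
```

Restricting the discriminator to the LH/HL/HH bands concentrates the adversarial signal on texture detail, where deblurring artifacts are most visible, while the LL band is left to the reconstruction objective.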