Disentangled Textual Priors for Diffusion-based Image Super-Resolution

📅 2026-03-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing diffusion-based image super-resolution methods, which struggle to precisely control structural and textural details as well as global and local semantics due to their reliance on entangled or coarse-grained textual priors. To overcome this, we propose DTPSR, a novel framework that decouples textual priors along two orthogonal dimensions—spatial hierarchy (global/local) and frequency semantics (low/high frequency)—to guide the diffusion model in generating high-fidelity, semantically consistent high-resolution images in a staged manner. We introduce the DisText-SR dataset and design a frequency-aware, multi-branch classifier-free guidance strategy to enhance controllability and semantic alignment. Extensive experiments demonstrate that our method achieves superior perceptual quality, fidelity, and cross-degradation generalization on both synthetic and real-world scenes, significantly mitigating semantic drift and hallucination artifacts.

📝 Abstract
Image Super-Resolution (SR) aims to reconstruct high-resolution images from degraded low-resolution inputs. While diffusion-based SR methods offer powerful generative capabilities, their performance heavily depends on how semantic priors are structured and integrated into the generation process. Existing approaches often rely on entangled or coarse-grained priors that mix global layout with local details, or conflate structural and textural cues, thereby limiting semantic controllability and interpretability. In this work, we propose DTPSR, a novel diffusion-based SR framework that introduces disentangled textual priors along two complementary dimensions: spatial hierarchy (global vs. local) and frequency semantics (low- vs. high-frequency). By explicitly separating these priors, DTPSR enables the model to simultaneously capture scene-level structure and object-specific details with frequency-aware semantic guidance. The corresponding embeddings are injected via specialized cross-attention modules, forming a progressive generation pipeline that reflects the semantic granularity of visual content, from global layout to fine-grained textures. To support this paradigm, we construct DisText-SR, a large-scale dataset containing approximately 95,000 image-text pairs with carefully disentangled global, low-frequency, and high-frequency descriptions. To further enhance controllability and consistency, we adopt a multi-branch classifier-free guidance strategy with frequency-aware negative prompts to suppress hallucinations and semantic drift. Extensive experiments on synthetic and real-world benchmarks show that DTPSR achieves high perceptual quality, competitive fidelity, and strong generalization across diverse degradation scenarios.
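The multi-branch classifier-free guidance the abstract describes can be sketched generically as below. This is a minimal illustration, not the paper's exact formulation: the function name `multi_branch_cfg`, the simple linear combination of per-branch guidance terms, and the per-branch weights are all assumptions, with each branch standing in for one disentangled prior (global, low-frequency, high-frequency) paired with its frequency-aware negative prompt.

```python
import numpy as np

def multi_branch_cfg(eps_uncond, eps_branches, eps_negatives, weights):
    """Combine denoiser noise predictions from several conditioning branches.

    Standard classifier-free guidance extrapolates from an unconditional
    prediction toward a conditional one. Here (hypothetically) each prior
    branch contributes its own guidance term, and the paired negative-prompt
    prediction replaces the unconditional anchor for that branch, pushing
    the sample away from undesired (e.g. hallucinated) semantics.
    """
    out = np.asarray(eps_uncond, dtype=float).copy()
    for eps_cond, eps_neg, w in zip(eps_branches, eps_negatives, weights):
        # Guide toward the branch's positive prompt, away from its negative prompt.
        out += w * (np.asarray(eps_cond) - np.asarray(eps_neg))
    return out
```

In practice the `eps_*` arrays would be the diffusion model's noise predictions under the respective text embeddings; with all branch weights set to zero this reduces to the unconditional prediction.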
Problem

Research questions and friction points this paper is trying to address.

Image Super-Resolution
Diffusion Models
Semantic Priors
Disentanglement
Controllability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disentangled Textual Priors
Diffusion-based Super-Resolution
Spatial-Frequency Disentanglement
Cross-Attention Guidance
Classifier-Free Guidance
Lei Jiang
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
Xin Liu
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
Xinze Tong
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
Zhiliang Li
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
Jie Liu
Nanjing University
Jie Tang
UW Madison
Gangshan Wu
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China