SRSR: Enhancing Semantic Accuracy in Real-World Image Super-Resolution with Spatially Re-Focused Text-Conditioning

📅 2025-10-26
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing diffusion-based super-resolution methods suffer from semantic misalignment and hallucinated details caused by inaccurate text conditioning and cross-attention drifting toward irrelevant pixels. To address this, the paper proposes SRSR, a plug-and-play spatially re-focused framework with two components: **Spatially Re-focused Cross-Attention (SRCA)**, which applies visually-grounded segmentation masks at inference time to suppress attention over regions lacking textual correspondence, and **Spatially Targeted Classifier-Free Guidance (STCFG)**, which selectively bypasses text influence on ungrounded pixels to prevent hallucinations, all without modifying the backbone architecture. Extensive experiments on synthetic and real-world datasets show that SRSR outperforms seven state-of-the-art baselines in fidelity metrics (PSNR and SSIM) across all datasets, and in perceptual quality metrics (LPIPS and DISTS) on two real-world benchmarks.

πŸ“ Abstract
Existing diffusion-based super-resolution approaches often exhibit semantic ambiguities due to inaccuracies and incompleteness in their text conditioning, coupled with the inherent tendency for cross-attention to divert towards irrelevant pixels. These limitations can lead to semantic misalignment and hallucinated details in the generated high-resolution outputs. To address these, we propose a novel, plug-and-play spatially re-focused super-resolution (SRSR) framework that consists of two core components: first, we introduce Spatially Re-focused Cross-Attention (SRCA), which refines text conditioning at inference time by applying visually-grounded segmentation masks to guide cross-attention. Second, we introduce a Spatially Targeted Classifier-Free Guidance (STCFG) mechanism that selectively bypasses text influences on ungrounded pixels to prevent hallucinations. Extensive experiments on both synthetic and real-world datasets demonstrate that SRSR consistently outperforms seven state-of-the-art baselines in standard fidelity metrics (PSNR and SSIM) across all datasets, and in perceptual quality measures (LPIPS and DISTS) on two real-world benchmarks, underscoring its effectiveness in achieving both high semantic fidelity and perceptual quality in super-resolution.
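The abstract describes two mechanisms: SRCA masks cross-attention using segmentation-derived grounding, and STCFG withholds text guidance from ungrounded pixels. The paper's own code is not shown here; the following NumPy sketch is only a hypothetical illustration of those two ideas (the function names `srca_attention` and `stcfg`, the mask layouts, and the guidance scale are assumptions, not the authors' implementation).

```python
import numpy as np

def srca_attention(q, k, v, mask, big_neg=-1e9):
    """Cross-attention where each image token attends only to text tokens
    it is grounded to (sketch of the SRCA idea, not the paper's code).

    q: (Nq, d) image-token queries; k, v: (Nt, d) text keys/values.
    mask: (Nq, Nt) binary grounding from segmentation; 1 = keep the pair.
    Assumes every row of `mask` has at least one grounded token.
    """
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # Suppress attention toward text tokens with no spatial correspondence.
    scores = np.where(mask.astype(bool), scores, big_neg)
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def stcfg(eps_uncond, eps_cond, grounded, scale=7.5):
    """Spatially targeted CFG (sketch): grounded pixels receive the usual
    guided noise estimate; ungrounded pixels keep the unconditional one,
    so text influence cannot hallucinate detail where nothing is grounded.

    eps_uncond, eps_cond: (Nq, d) denoiser outputs; grounded: (Nq, 1) binary.
    """
    guided = eps_uncond + scale * (eps_cond - eps_uncond)
    return np.where(grounded.astype(bool), guided, eps_uncond)
```

In this reading, SRCA acts inside the attention layers while STCFG acts on the final noise estimates, which is consistent with the claim that neither requires modifying the backbone.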
Problem

Research questions and friction points this paper is trying to address.

Reduces semantic ambiguities in image super-resolution
Prevents hallucinated details in high-resolution outputs
Improves text-conditioning accuracy for better semantic alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spatially Re-focused Cross-Attention refines text conditioning
Spatially Targeted Classifier-Free Guidance prevents hallucinations
Plug-and-play framework enhances semantic accuracy in super-resolution
Chen Chen
Amazon, The University of Sydney
Majid Abdolshah
Amazon
Violetta Shevchenko
Pluralis Research
Hongdong Li
Amazon, Australian National University
Chang Xu
The University of Sydney
Pulak Purkait
Amazon
Computer Vision · Machine Learning · Image Processing