🤖 AI Summary
Existing diffusion-based super-resolution methods suffer from semantic misalignment and hallucinated details caused by inaccurate text conditioning and by cross-attention drifting toward irrelevant pixels. To address this, we propose a plug-and-play spatially re-focused super-resolution (SRSR) framework built on two novel components: **Spatially Re-focused Cross-Attention (SRCA)** and **Spatially Targeted Classifier-Free Guidance (STCFG)**. Guided by visually grounded segmentation masks, SRCA suppresses attention over regions lacking textual correspondence, while STCFG enforces fine-grained text–image spatial alignment without modifying the backbone architecture. Our approach significantly improves semantic accuracy and visual fidelity. Extensive experiments on synthetic and real-world datasets demonstrate consistent superiority over seven state-of-the-art methods: it achieves the best PSNR and SSIM scores across all datasets and the best perceptual quality (LPIPS and DISTS) on two real-world benchmarks.
📄 Abstract
Existing diffusion-based super-resolution approaches often exhibit semantic ambiguities due to inaccurate or incomplete text conditioning, coupled with the inherent tendency of cross-attention to drift toward irrelevant pixels. These limitations can lead to semantic misalignment and hallucinated details in the generated high-resolution outputs. To address them, we propose a novel, plug-and-play spatially re-focused super-resolution (SRSR) framework that consists of two core components. First, we introduce Spatially Re-focused Cross-Attention (SRCA), which refines text conditioning at inference time by applying visually grounded segmentation masks to guide cross-attention. Second, we introduce a Spatially Targeted Classifier-Free Guidance (STCFG) mechanism that selectively bypasses text influence on ungrounded pixels to prevent hallucinations. Extensive experiments on both synthetic and real-world datasets demonstrate that SRSR consistently outperforms seven state-of-the-art baselines in standard fidelity metrics (PSNR and SSIM) across all datasets, and in perceptual quality measures (LPIPS and DISTS) on two real-world benchmarks, underscoring its effectiveness in achieving both high semantic fidelity and perceptual quality.
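The two mechanisms described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the array shapes, the binary grounding mask, and the guidance scale are illustrative assumptions; the sketch only shows the core idea of (1) masking cross-attention logits where a pixel has no visual grounding for a text token, and (2) applying classifier-free guidance only on grounded pixels.

```python
import numpy as np

def srca_masked_attention(attn_scores, grounding_mask, suppress_value=-1e9):
    """Sketch of Spatially Re-focused Cross-Attention (SRCA).

    attn_scores:    (num_pixels, num_tokens) raw cross-attention logits
    grounding_mask: (num_pixels, num_tokens) 1 where a pixel is visually
                    grounded to a token (per segmentation), 0 otherwise.
                    Shapes are hypothetical, for illustration only.
    """
    # Suppress attention toward tokens with no visual correspondence
    masked = np.where(grounding_mask > 0, attn_scores, suppress_value)
    # Softmax over text tokens, per pixel
    exp = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)

def stcfg(eps_uncond, eps_text, pixel_grounded, guidance_scale=7.5):
    """Sketch of Spatially Targeted Classifier-Free Guidance (STCFG).

    eps_uncond, eps_text: (H, W) noise predictions (channels omitted)
    pixel_grounded:       (H, W) boolean map of text-grounded pixels
    """
    # Standard CFG combination of unconditional and text-conditioned outputs
    guided = eps_uncond + guidance_scale * (eps_text - eps_uncond)
    # Ungrounded pixels bypass text influence entirely, falling back to
    # the unconditional prediction to avoid text-driven hallucination
    return np.where(pixel_grounded, guided, eps_uncond)
```

In this sketch, a pixel whose mask row is all zeros still receives a valid (uniform) attention distribution, and every pixel outside the grounded region denoises exactly as it would without any text prompt.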