🤖 AI Summary
Text-aware image restoration (TAIR) faces a critical challenge: diffusion models, lacking explicit linguistic knowledge, often generate hallucinated text. To address this, we propose the first three-module iterative coupling framework that dynamically integrates an OCR-driven text spotting module (TSM), a vision-language model (VLM), and a Diffusion Transformer (DiT) in a closed-loop architecture—enabling OCR-predicted text to semantically guide the diffusion denoising process for the first time. Our method employs multi-stage semantic alignment and language prior injection to substantially suppress text hallucination. Evaluated on SA-Text and Real-Text benchmarks, our end-to-end approach achieves state-of-the-art F1 scores, significantly improves text structure reconstruction accuracy, and reduces hallucination rate by 42%.
📝 Abstract
Text-Aware Image Restoration (TAIR) aims to recover high-quality images from low-quality inputs containing degraded textual content. While diffusion models provide strong generative priors for general image restoration, they often produce text hallucinations in text-centric tasks due to the absence of explicit linguistic knowledge. To address this, we propose UniT, a unified text restoration framework that integrates a Diffusion Transformer (DiT), a Vision-Language Model (VLM), and a Text Spotting Module (TSM) in an iterative fashion for high-fidelity text restoration. In UniT, the VLM extracts textual content from degraded images to provide explicit textual guidance. Simultaneously, the TSM, trained on diffusion features, generates intermediate OCR predictions at each denoising step, enabling the VLM to iteratively refine its guidance during the denoising process. Finally, the DiT backbone, leveraging its strong representational power, exploit these cues to recover fine-grained textual content while effectively suppressing text hallucinations. Experiments on the SA-Text and Real-Text benchmarks demonstrate that UniT faithfully reconstructs degraded text, substantially reduces hallucinations, and achieves state-of-the-art end-to-end F1-score performance in TAIR task.