Unified Diffusion Transformer for High-fidelity Text-Aware Image Restoration

📅 2025-12-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Text-Aware Image Restoration (TAIR) faces a critical challenge: diffusion models, lacking explicit linguistic knowledge, often generate hallucinated text. To address this, we propose an iterative coupling framework that integrates an OCR-driven Text Spotting Module (TSM), a Vision-Language Model (VLM), and a Diffusion Transformer (DiT) in a closed-loop architecture, enabling OCR-predicted text to semantically guide the diffusion denoising process. Our method employs multi-stage semantic alignment and language-prior injection to substantially suppress text hallucination. Evaluated on the SA-Text and Real-Text benchmarks, our end-to-end approach achieves state-of-the-art F1 scores, significantly improves text-structure reconstruction accuracy, and reduces the hallucination rate by 42%.

📝 Abstract
Text-Aware Image Restoration (TAIR) aims to recover high-quality images from low-quality inputs containing degraded textual content. While diffusion models provide strong generative priors for general image restoration, they often produce text hallucinations in text-centric tasks due to the absence of explicit linguistic knowledge. To address this, we propose UniT, a unified text restoration framework that integrates a Diffusion Transformer (DiT), a Vision-Language Model (VLM), and a Text Spotting Module (TSM) in an iterative fashion for high-fidelity text restoration. In UniT, the VLM extracts textual content from degraded images to provide explicit textual guidance. Simultaneously, the TSM, trained on diffusion features, generates intermediate OCR predictions at each denoising step, enabling the VLM to iteratively refine its guidance during the denoising process. Finally, the DiT backbone, leveraging its strong representational power, exploits these cues to recover fine-grained textual content while effectively suppressing text hallucinations. Experiments on the SA-Text and Real-Text benchmarks demonstrate that UniT faithfully reconstructs degraded text, substantially reduces hallucinations, and achieves state-of-the-art end-to-end F1-score performance on the TAIR task.
Problem

Research questions and friction points this paper is trying to address.

Recover high-quality images with degraded text content
Reduce text hallucinations in diffusion-based restoration models
Integrate linguistic guidance for faithful text reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified framework integrates Diffusion Transformer, Vision-Language Model, Text Spotting Module
Vision-Language Model extracts text guidance from degraded images iteratively
Diffusion Transformer uses cues to recover text and suppress hallucinations
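The closed loop described above can be sketched as a minimal denoising routine. This is a hypothetical simplification based only on the abstract: `vlm`, `tsm`, and `dit` are stand-in callables for the paper's actual components, and the latent/feature handling is schematic, not the published implementation.

```python
def restore(lq_image, vlm, tsm, dit, num_steps):
    """Hypothetical sketch of UniT's iterative coupling (names assumed).

    vlm(image, ocr_hint)  -> textual guidance derived from the degraded input
    dit(latent, guidance) -> (denoised latent, intermediate diffusion features)
    tsm(features)         -> intermediate OCR prediction from diffusion features
    """
    # Initial textual guidance extracted directly from the degraded image.
    guidance = vlm(lq_image, ocr_hint=None)
    latent = lq_image  # stand-in for the diffusion latent state

    for _ in range(num_steps):
        # One denoising step, conditioned on the current text guidance.
        latent, features = dit(latent, guidance)
        # TSM reads text from diffusion features mid-denoising.
        ocr_pred = tsm(features)
        # VLM refines its guidance using the OCR feedback, closing the loop.
        guidance = vlm(lq_image, ocr_hint=ocr_pred)

    return latent
```

The key design point this sketch illustrates is the feedback direction: OCR predictions computed *inside* the denoising trajectory flow back into the language model, so the textual guidance is corrected step by step rather than fixed once from the degraded input.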
Authors

Jin Hyeon Kim (KAIST AI)
Paul Hyunbin Cho (KAIST AI)
Claire Kim (KAIST AI)
Jaewon Min (KAIST AI)
Jaeeun Lee (KAIST AI)
Jihye Park (Samsung Electronics)
Yeji Choi (DI Lab Inc.)
Seungryong Kim (KAIST)