DualTSR: Unified Dual-Diffusion Transformer for Scene Text Image Super-Resolution

📅 2026-03-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work proposes the first end-to-end unified framework for scene text image super-resolution, addressing the reliance on external OCR systems or complex multi-component architectures. By integrating continuous image and discrete text diffusion objectives within a single multimodal Transformer, the model jointly models conditional flow matching and discrete diffusion to enable deep fusion of visual and textual information at every layer. This design allows the network to intrinsically infer textual priors without external OCR supervision. Evaluated on both synthetic Chinese datasets and real-world scene text benchmarks, the approach achieves state-of-the-art performance in perceptual quality and text fidelity, while simultaneously enhancing training stability and architectural simplicity.

Technology Category

Application Category

📝 Abstract
Scene Text Image Super-Resolution (STISR) aims to restore high-resolution details in low-resolution text images, which is crucial for both human readability and machine recognition. Existing methods, however, often depend on external Optical Character Recognition (OCR) models for textual priors or rely on complex multi-component architectures that are difficult to train and reproduce. In this paper, we introduce DualTSR, a unified end-to-end framework that addresses both issues. DualTSR employs a single multimodal transformer backbone trained with a dual diffusion objective. It simultaneously models the continuous distribution of high-resolution images via Conditional Flow Matching and the discrete distribution of textual content via discrete diffusion. This shared design enables visual and textual information to interact at every layer, allowing the model to infer text priors internally instead of relying on an external OCR module. Compared with prior multi-branch diffusion systems, DualTSR offers a simpler end-to-end formulation with fewer hand-crafted components. Experiments on synthetic Chinese benchmarks and a curated real-world evaluation protocol show that DualTSR achieves strong perceptual quality and text fidelity.
Problem

Research questions and friction points this paper is trying to address.

Scene Text Image Super-Resolution
OCR dependency
multi-component architecture
text priors
model reproducibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual Diffusion
Multimodal Transformer
Scene Text Image Super-Resolution
Conditional Flow Matching
Text Prior Learning
🔎 Similar Papers
No similar papers found.