Type-R: Automatically Retouching Typos for Text-to-Image Generation

📅 2024-11-27
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Text-to-image generation models frequently suffer from inaccurate text rendering, resulting in typographical errors or missing characters. To address this, we propose Type-R, an end-to-end post-processing framework that, without modifying the original generator, unifies three key components for font- and layout-aware text correction: OCR-driven erroneous text detection, diffusion-based text region inpainting, and CLIP-OCR joint-constrained character-level correction. Type-R automatically localizes erroneous text regions, precisely erases them, and reconstructs semantically consistent and visually harmonious text. Evaluated across multiple benchmarks, Type-R achieves up to a 42.6% improvement in text accuracy while maintaining high image fidelity—introducing negligible degradation (<0.02) in LPIPS score. It significantly outperforms existing text-focused generative approaches, establishing a new state-of-the-art in post-hoc text correction for diffusion-based image synthesis.

📝 Abstract
While recent text-to-image models can generate photorealistic images from text prompts that reflect detailed instructions, they still face significant challenges in accurately rendering words in the image. In this paper, we propose to retouch erroneous text renderings in the post-processing pipeline. Our approach, called Type-R, identifies typographical errors in the generated image, erases the erroneous text, regenerates text boxes for missing words, and finally corrects typos in the rendered words. Through extensive experiments, we show that Type-R, in combination with the latest text-to-image models such as Stable Diffusion or Flux, achieves the highest text rendering accuracy while maintaining image quality and also outperforms text-focused generation baselines in terms of balancing text accuracy and image quality.
Problem

Research questions and friction points this paper is trying to address.

Correcting typographical errors in text-to-image outputs
Enhancing text rendering accuracy without compromising image quality
Automating post-processing to fix typos in generated images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatically retouches typographical errors post-generation
Identifies and erases erroneous text in images
Regenerates text boxes for missing words and corrects residual typos in rendered text
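The detect-erase-regenerate loop above can be sketched in a few lines. This is a hypothetical illustration only: `TextRegion`, `detect_typos`, `erase`, and `rerender` are placeholder names, and the string operations stand in for the paper's actual OCR, diffusion-based inpainting, and text rendering models.

```python
# Illustrative sketch of Type-R's post-processing stages; all names
# and data structures here are assumptions, not the paper's API.
from dataclasses import dataclass

@dataclass
class TextRegion:
    bbox: tuple          # (x, y, w, h) of the rendered word in the image
    rendered: str        # what OCR reads from the generated image
    intended: str        # the word requested in the prompt

def detect_typos(regions):
    """Stage 1: flag regions where the OCR reading differs from the prompt."""
    return [r for r in regions if r.rendered != r.intended]

def erase(region):
    """Stage 2 stand-in: an inpainting model would erase the erroneous text."""
    region.rendered = ""
    return region

def rerender(region):
    """Stages 3-4 stand-in: regenerate the text box and render the correct word."""
    region.rendered = region.intended
    return region

def type_r(regions):
    """Run the full detect -> erase -> regenerate loop over one image."""
    for r in detect_typos(regions):
        rerender(erase(r))
    return regions

regions = [
    TextRegion((10, 10, 80, 20), "HELO", "HELLO"),
    TextRegion((10, 40, 80, 20), "WORLD", "WORLD"),
]
fixed = type_r(regions)
print([r.rendered for r in fixed])  # ['HELLO', 'WORLD']
```

Note that the second region is left untouched: only regions flagged as erroneous are erased and re-rendered, which is how the real pipeline preserves image quality outside the corrected text boxes.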
Wataru Shimoda
CyberAgent
Naoto Inoue
Apple
Computer Vision · Computer Graphics · Machine Learning
Daichi Haraguchi
CyberAgent
Hayato Mitani
Kyushu University
Seiichi Uchida
Kyushu University
Kota Yamaguchi
CyberAgent
Computer Vision · Machine Learning