🤖 AI Summary
Text-to-image models frequently render text inaccurately, producing typographical errors or missing words. Type-R is a post-processing pipeline that corrects these errors without modifying the underlying generator. It detects erroneous text in the generated image, erases it via inpainting, regenerates text boxes for words missing from the layout, and finally corrects typos in the rendered words. Evaluated across multiple benchmarks in combination with the latest text-to-image models such as Stable Diffusion and Flux, Type-R reports up to a 42.6% improvement in text accuracy with negligible degradation in image fidelity (LPIPS increase below 0.02), outperforming text-focused generation baselines on the trade-off between text accuracy and image quality.
📝 Abstract
While recent text-to-image models can generate photorealistic images from text prompts that reflect detailed instructions, they still face significant challenges in accurately rendering words in the image. In this paper, we propose to retouch erroneous text renderings in the post-processing pipeline. Our approach, called Type-R, identifies typographical errors in the generated image, erases the erroneous text, regenerates text boxes for missing words, and finally corrects typos in the rendered words. Through extensive experiments, we show that Type-R, in combination with the latest text-to-image models such as Stable Diffusion or Flux, achieves the highest text rendering accuracy while maintaining image quality and also outperforms text-focused generation baselines in terms of balancing text accuracy and image quality.
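The first stage of the pipeline, identifying typographical errors, amounts to comparing the words the prompt asked for against the words an OCR engine actually reads off the image. The paper does not publish this matching logic here, so the sketch below is only an illustrative assumption: a greedy string-similarity matcher (using Python's standard-library `difflib`) that pairs each OCR detection with its closest prompt word, flagging near-matches as typos to correct and unmatched prompt words as missing text to regenerate. The function name `match_words` and the similarity threshold are hypothetical, not from the paper.

```python
from difflib import SequenceMatcher

def match_words(prompt_words, ocr_words, threshold=0.5):
    """Greedily pair each OCR-detected word with its closest prompt word.

    Returns (typos, missing):
      typos   -- list of (rendered, intended) pairs that differ,
                 i.e. candidates for typo correction
      missing -- prompt words with no detected counterpart,
                 i.e. candidates for text-box regeneration
    Note: illustrative sketch only; Type-R's actual matching
    procedure may differ.
    """
    remaining = list(prompt_words)
    typos = []
    for word in ocr_words:
        if not remaining:
            break
        # Rank the still-unmatched prompt words by string similarity.
        def sim(p):
            return SequenceMatcher(None, word.lower(), p.lower()).ratio()
        best = max(remaining, key=sim)
        if sim(best) >= threshold:
            remaining.remove(best)
            if word.lower() != best.lower():
                # Rendered close to the intended word but not exactly: a typo.
                typos.append((word, best))
    return typos, remaining

# Example: "Happy" was rendered as "Hapy"; "Alice" never appeared.
typos, missing = match_words(["Happy", "Birthday", "Alice"],
                             ["Hapy", "Birthday"])
# typos   -> [("Hapy", "Happy")]
# missing -> ["Alice"]
```

Downstream, the typo pairs feed the erase-and-correct stages, while the missing words drive layout regeneration.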