AI Summary
Diffusion models commonly suffer from text omission in text-to-image generation, particularly exhibiting insufficient rendering completeness for Chinese text. To address this, we propose a training-free attention-alignment guidance mechanism. First, we uncover the distribution pattern of text-related tokens within the self-attention layers of MM-DiT. Second, we design a dual-loss latent-space guidance paradigm for early denoising stages, explicitly modeling correspondences between textual tokens and image text regions to achieve end-to-end, zero-training rendering correction. Our method integrates OCR-driven evaluation with a text–image region alignment loss. Experiments demonstrate state-of-the-art performance across text recall rate, OCR accuracy, and CLIP-Score, significantly improving both textual rendering completeness and fidelity.
Abstract
Despite recent advances, diffusion-based text-to-image models still struggle with accurate text rendering. Several studies have proposed fine-tuning or training-free refinement methods to improve rendering accuracy. However, the critical issue of text omission, where the desired text is partially or entirely missing from the image, remains largely overlooked. In this work, we propose TextGuider, a novel training-free method that encourages accurate and complete text appearance by aligning textual content tokens with text regions in the image. Specifically, we analyze attention patterns in MM-DiT models, particularly for text-related tokens intended to be rendered in the image. Leveraging this observation, we apply latent guidance during the early denoising steps based on two loss functions that we introduce. Our method achieves state-of-the-art performance in test-time text rendering, with significant gains in recall and strong results in OCR accuracy and CLIP score.
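The core mechanism described above, steering the latent down the gradient of an attention-alignment loss during early denoising, can be illustrated with a minimal, dependency-free sketch. Everything here is an illustrative assumption rather than the paper's implementation: `attention_mass`, `guidance_loss`, and `guided_step` are hypothetical names, the softmax similarity is a stand-in for real MM-DiT attention maps, and finite differences stand in for autograd through the model.

```python
import numpy as np

def attention_mass(z, tokens, region, target):
    """Toy 'attention': row-wise softmax of similarity between latent patches
    and prompt tokens, then the mean mass that patches in the intended text
    region place on the text-content tokens (the words to be rendered).
    A stand-in for real MM-DiT attention maps (assumption)."""
    sim = z @ tokens.T                                # (patches, tokens)
    a = np.exp(sim - sim.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)                 # softmax over tokens
    return a[np.ix_(region, target)].sum(axis=1).mean()

def guidance_loss(z, tokens, region, target):
    # Low when the text region's attention concentrates on the text tokens.
    return 1.0 - attention_mass(z, tokens, region, target)

def guided_step(z, tokens, region, target, lr=0.05, eps=1e-4):
    """One latent-guidance update: nudge z against the loss gradient.
    Finite differences replace autograd to keep the sketch self-contained."""
    base = guidance_loss(z, tokens, region, target)
    grad = np.zeros_like(z)
    for idx in np.ndindex(*z.shape):
        zp = z.copy()
        zp[idx] += eps
        grad[idx] = (guidance_loss(zp, tokens, region, target) - base) / eps
    return z - lr * grad

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 4))        # toy latent: 8 patches, 4 channels
tokens = rng.normal(size=(5, 4))   # toy prompt embedding: 5 tokens
region = np.array([0, 1, 2])       # patches where the text should appear
target = np.array([1, 2])          # the text-content tokens
before = guidance_loss(z, tokens, region, target)
after = guidance_loss(guided_step(z, tokens, region, target),
                      tokens, region, target)
```

In the actual method this update would be applied only during the early denoising steps, where the layout of the generated text is still malleable, and the gradient would come from backpropagation through the attention layers rather than finite differences.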