Abstract
Text-to-image (T2I) generation has made remarkable progress in producing high-quality images, but a fundamental challenge remains: creating backgrounds that naturally accommodate text placement without compromising image quality. This capability is non-trivial for real-world applications like graphic design, where a clear visual hierarchy between content and text is essential. Prior work has primarily focused on arranging layouts within existing static images, leaving unexplored the potential of T2I models for generating text-friendly backgrounds. We present TextCenGen, a training-free method for dynamic background adaptation in blank regions, enabling text-friendly image generation. Instead of directly reducing attention in text areas, which degrades image quality, we relocate conflicting objects before background optimization. Our method analyzes cross-attention maps to identify conflicting objects overlapping with text regions and uses a force-directed graph approach to guide their relocation, followed by attention-exclusion constraints that ensure smooth backgrounds. Our method is plug-and-play, requiring no additional training while balancing both semantic fidelity and visual quality. Evaluated on our proposed text-friendly T2I benchmark of 27,000 images across four seed datasets, TextCenGen outperforms existing methods, achieving 23% lower saliency overlap in text regions while maintaining 98% of the semantic fidelity as measured by CLIP score and our proposed Visual-Textual Concordance Metric (VTCM).
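The two core steps described above, detecting objects whose cross-attention mass overlaps the reserved text region, then pushing them away with a repulsive force, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the overlap threshold, and the inverse-square force law are assumptions for the sketch.

```python
import numpy as np

def conflict_objects(attn_maps, text_box, overlap_thresh=0.2):
    """Return tokens whose cross-attention mass inside the text region
    exceeds a fraction of their total mass (hypothetical criterion).

    attn_maps: dict mapping token -> (H, W) non-negative attention map.
    text_box: (x0, y0, x1, y1) pixel bounds of the reserved text region.
    """
    x0, y0, x1, y1 = text_box
    conflicts = []
    for tok, amap in attn_maps.items():
        inside = amap[y0:y1, x0:x1].sum()          # attention mass in the box
        if inside / (amap.sum() + 1e-8) > overlap_thresh:
            conflicts.append(tok)
    return conflicts

def repulsion_step(obj_center, text_center, k=1.0, step=0.1):
    """One force-directed update: move an object's attention centroid away
    from the text-region center along an inverse-square repulsive force."""
    d = obj_center - text_center
    dist = np.linalg.norm(d) + 1e-8
    force = k * d / dist**3                        # magnitude ~ 1/dist^2, along d
    return obj_center + step * force
```

In a full pipeline, the relocated centroids would serve as targets for guiding the cross-attention maps during denoising, with the attention-exclusion constraint suppressing residual attention inside the text box.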