TextCrafter: Accurately Rendering Multiple Texts in Complex Visual Scenes

πŸ“… 2025-03-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address text distortion, blurriness, and omission in multi-text generation for complex visual scenes, this paper proposes a staged decoupling framework with strong text-image alignment for rendering. Methodologically, it introduces a progressive multi-text decoupling strategy and a token-level focus enhancement mechanism, integrated with diffusion-model-driven multi-stage rendering, cross-modal alignment constraints, localized token attention reinforcement, and controllable text layout modeling. Key contributions include: (1) the construction of CVTG-2K, the first dedicated benchmark for Complex Visual Text Generation (CVTG); and (2) state-of-the-art performance on CVTG-2K, achieving 23.6% and 31.4% improvements in text completeness and clarity, respectively, while significantly mitigating confusion and omission.

πŸ“ Abstract
This paper explores the task of Complex Visual Text Generation (CVTG), which centers on generating intricate textual content distributed across diverse regions within visual images. In CVTG, image generation models often render distorted or blurred visual text, or omit some of it entirely. To tackle these challenges, we propose TextCrafter, a novel multi-visual text rendering method. TextCrafter employs a progressive strategy to decompose complex visual text into distinct components while ensuring robust alignment between textual content and its visual carrier. Additionally, it incorporates a token focus enhancement mechanism to amplify the prominence of visual text during the generation process. TextCrafter effectively addresses key challenges in CVTG tasks, such as text confusion, omissions, and blurriness. Moreover, we present a new benchmark dataset, CVTG-2K, tailored to rigorously evaluate the performance of generative models on CVTG tasks. Extensive experiments demonstrate that our method surpasses state-of-the-art approaches.
Problem

Research questions and friction points this paper is trying to address.

Generating clear visual text in complex images
Preventing text distortion and omissions in CVTG
Enhancing text-visual alignment in multi-text scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive decomposition of complex visual text
Token focus enhancement for text prominence
Robust alignment between text and visual carrier
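The "token focus enhancement" idea can be illustrated as attention reweighting: boosting the unnormalized attention given to the tokens that spell out the visual text, so they claim a larger share of attention after softmax. The paper does not publish this exact formulation here; the function below is a minimal sketch under that assumption, with hypothetical names (`cross_attention_with_token_focus`, `boost`), not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_with_token_focus(q, k, v, text_token_idx, boost=1.0):
    """Cross-attention in which the logits of designated visual-text
    tokens are shifted by log(boost) before softmax, which multiplies
    their unnormalized attention weight by `boost`.

    q: (num_queries, d)   image-patch queries
    k, v: (num_tokens, d) prompt-token keys/values
    text_token_idx: indices of the tokens carrying the visual text
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)              # (num_queries, num_tokens)
    logits[:, text_token_idx] += np.log(boost)  # amplify text-token prominence
    weights = softmax(logits, axis=-1)
    return weights @ v, weights
```

With `boost > 1`, the attention mass on the selected tokens strictly increases relative to the unmodified attention, while rows still sum to one; in a diffusion model this reweighting would be applied inside the cross-attention layers during denoising.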
πŸ”Ž Similar Papers
No similar papers found.
Nikai Du (Nanjing University)
Zhennan Chen (Nanjing University)
Zhizhou Chen (Nanjing University)
Shan Gao (China Mobile)
Xi Chen (China Mobile)
Zhengkai Jiang (Tencent Hunyuan)
Jian Yang (Nanjing University)
Ying Tai (Nanjing University)