WordCon: Word-level Typography Control in Scene Text Rendering

📅 2025-06-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of word-level typographic control in generated images, this paper introduces WordCon, a hybrid parameter-efficient fine-tuning method for word-level controllable scene text generation. Methodologically: (1) we construct the first large-scale, word-level controllable scene text dataset; (2) we propose the Text-Image Alignment (TIA) cross-modal framework, which integrates grounding-based localization, a masked latent-space loss, and joint-attention supervision to disentangle different words and focus learning on text regions; and (3) we design WordCon to reparameterize selective key parameters, improving training efficiency and portability. Experiments demonstrate that our method consistently outperforms state-of-the-art approaches on key metrics, including typographic precision and text fidelity, while supporting diverse applications such as artistic font generation and image-conditioned text editing. The approach delivers strong controllability, high computational efficiency, and robust transferability across domains.

📝 Abstract
Achieving precise word-level typography control within generated images remains a persistent challenge. To address it, we newly construct a word-level controlled scene text dataset and introduce the Text-Image Alignment (TIA) framework. This framework leverages cross-modal correspondence between text and local image regions provided by grounding models to enhance the Text-to-Image (T2I) model training. Furthermore, we propose WordCon, a hybrid parameter-efficient fine-tuning (PEFT) method. WordCon reparameterizes selective key parameters, improving both efficiency and portability. This allows seamless integration into diverse pipelines, including artistic text rendering, text editing, and image-conditioned text rendering. To further enhance controllability, the masked loss at the latent level is applied to guide the model to concentrate on learning the text region in the image, and the joint-attention loss provides feature-level supervision to promote disentanglement between different words. Both qualitative and quantitative results demonstrate the superiority of our method to the state of the art. The datasets and source code will be available for academic use.
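The masked latent-level loss described in the abstract can be sketched as follows. This is an illustrative PyTorch approximation written under our own assumptions, not the paper's released code: the function name `masked_latent_loss`, the MSE denoising objective, and the area normalization are all our choices; the text-region mask would come from a grounding model's word boxes, downsampled to latent resolution.

```python
import torch
import torch.nn.functional as F

def masked_latent_loss(pred_noise: torch.Tensor,
                       target_noise: torch.Tensor,
                       text_mask: torch.Tensor) -> torch.Tensor:
    """Denoising loss restricted to text regions of the latent.

    pred_noise, target_noise: (B, C, H, W) diffusion-model latents.
    text_mask: (B, 1, H, W) binary mask of text regions at latent
    resolution (e.g. derived from grounding-model word boxes).
    """
    # Per-position squared error, kept unreduced so the mask can weight it.
    per_pixel = F.mse_loss(pred_noise, target_noise, reduction="none")
    masked = per_pixel * text_mask  # mask broadcasts across channels
    # Normalize by masked area (times channels) so the loss scale does
    # not depend on how large the text region is.
    denom = (text_mask.sum() * pred_noise.size(1)).clamp(min=1.0)
    return masked.sum() / denom
```

With this weighting, gradients outside the text region vanish, which is one simple way to make the model "concentrate on learning the text region in the image" as the abstract puts it.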
Problem

Research questions and friction points this paper is trying to address.

Achieving precise word-level typography control in generated images
Enhancing Text-to-Image model training with cross-modal alignment
Improving efficiency and portability in text rendering pipelines
Innovation

Methods, ideas, or system contributions that make the work stand out.

Word-level controlled dataset with TIA framework
Hybrid PEFT method WordCon for efficiency
Masked and joint-attention losses enhance controllability
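The hybrid PEFT idea above, reparameterizing selective key parameters while keeping the pretrained weights frozen, can be sketched with a standard LoRA-style low-rank adapter. This is a generic illustration, not the paper's method: which layers to wrap, the rank, and the scaling are assumptions of ours.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Low-rank reparameterization of a frozen linear layer.

    The base layer stays frozen; only the small down/up projections
    train, so the adapter is cheap and portable across pipelines.
    """
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep pretrained weights intact
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.up(self.down(x))
```

Because the up-projection is zero-initialized, the wrapped layer initially reproduces the base model exactly; fine-tuning then only learns the low-rank residual, which is what makes such adapters easy to merge into or detach from diverse rendering and editing pipelines.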
Wenda Shi
The Hong Kong Polytechnic University, China

Yiren Song
Ph.D. student, National University of Singapore
Generative AI · Diffusion · Unified model

Zihan Rao
Chongqing University, China

Dengming Zhang
Zhejiang University, China

Jiaming Liu
Tiamat AI, China

Xingxing Zou
School of Fashion and Textiles, The Hong Kong Polytechnic University
AI art