ControlText: Unlocking Controllable Fonts in Multilingual Text Rendering without Font Annotations

šŸ“… 2025-02-16
šŸ“ˆ Citations: 0
✨ Influential: 0
šŸ¤– AI Summary
To remove the dependency on font labels in multilingual text rendering, this paper proposes a label-free, self-supervised method for controllable font generation. Methodologically, it integrates a conditional diffusion model with a text-region segmentation model, using pixel-level segmentation masks to learn cross-lingual font representations implicitly, so explicit font-category annotations are never needed. A multilingual text layout adaptation mechanism further enables zero-shot font substitution and editing across arbitrary languages (e.g., Chinese, English, Japanese, Korean). Key contributions: (1) the first self-supervised framework for learning font representations without font labels; (2) zero-shot controllable text generation across both fonts and languages; and (3) qualitative and quantitative evaluation demonstrating high font fidelity, text readability, and layout robustness.
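The core conditioning idea from the summary, using a segmentation mask as a pixel-space font representation fed to a conditional diffusion model, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the toy single-channel glyph, and the channel-concatenation scheme are assumptions about one common way spatial conditions enter a diffusion UNet.

```python
import numpy as np

def glyph_mask(image, threshold=0.5):
    """Binarize a grayscale text image into a pixel-level segmentation mask.

    The mask captures glyph shape (and hence font style) directly in pixel
    space, so no font-category label is required.
    """
    return (image > threshold).astype(np.float32)

def conditioned_input(noisy_latent, mask):
    """Attach the mask as an extra conditioning channel, a common way a
    conditional diffusion UNet receives spatial guidance.
    """
    return np.concatenate([noisy_latent, mask[None, ...]], axis=0)

# Toy 8x8 "glyph": a vertical stroke.
img = np.zeros((8, 8), dtype=np.float32)
img[1:7, 3:5] = 1.0

mask = glyph_mask(img)
latent = np.random.randn(4, 8, 8).astype(np.float32)  # 4-channel noisy latent
x = conditioned_input(latent, mask)
print(x.shape)  # (5, 8, 8): 4 latent channels + 1 mask channel
```

At inference, a user could render text in any font, segment it the same way, and pass the resulting mask as the condition, which is what makes the control label-free.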

šŸ“ Abstract
This work demonstrates that diffusion models can achieve font-controllable multilingual text rendering using just raw images without font label annotations. Visual text rendering remains a significant challenge. While recent methods condition diffusion on glyphs, it is impossible to retrieve exact font annotations from large-scale, real-world datasets, which prevents user-specified font control. To address this, we propose a data-driven solution that integrates the conditional diffusion model with a text segmentation model, utilizing segmentation masks to capture and represent fonts in pixel space in a self-supervised manner, thereby eliminating the need for any ground-truth labels and enabling users to customize text rendering with any multilingual font of their choice. The experiment provides a proof of concept of our algorithm in zero-shot text and font editing across diverse fonts and languages, providing valuable insights for the community and industry toward achieving generalized visual text rendering.
Problem

Research questions and friction points this paper is trying to address.

Exact font annotations cannot be recovered from large-scale, real-world text-image datasets.
Glyph-conditioned diffusion methods therefore offer no user-specified font control.
Font-controllable multilingual text rendering without label supervision remains open.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conditional diffusion model integrated with a text segmentation model for font control
Self-supervised font representation learned from pixel-level segmentation masks
Zero-shot multilingual text and font editing without font labels