Zero-Shot Styled Text Image Generation, but Make It Autoregressive

📅 2025-03-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses key limitations of existing stylized handwritten text generation (HTG) methods—poor generalization to unseen styles, constrained output length, and low training efficiency—by proposing the first autoregressive text-to-image generation framework for zero-shot style transfer. Our method employs a VAE to extract compact, disentangled representations of text images and leverages an autoregressive Transformer to model character-level sequential generation. Given only a few style exemplars (e.g., a single font or handwriting sample) and arbitrary-length input text, it synthesizes high-fidelity, background-artifact-free styled text images. The model is trained exclusively on large-scale synthetic data (>100K fonts). Experiments demonstrate substantial improvements over state-of-the-art GAN- and diffusion-based approaches on both printed and authentic handwritten text generation. Notably, our framework achieves the first zero-shot transfer to unseen fonts and user-specific handwriting styles, exhibiting strong cross-style generalization and adaptability to downstream tasks.

Technology Category

Application Category

📝 Abstract
Styled Handwritten Text Generation (HTG) has recently received attention from the computer vision and document analysis communities, which have developed several solutions, either GAN- or diffusion-based, that achieved promising results. Nonetheless, these strategies fail to generalize to novel styles and have technical constraints, particularly in terms of maximum output length and training efficiency. To overcome these limitations, in this work, we propose a novel framework for text image generation, dubbed Emuru. Our approach leverages a powerful text image representation model (a variational autoencoder) combined with an autoregressive Transformer. Our approach enables the generation of styled text images conditioned on textual content and style examples, such as specific fonts or handwriting styles. We train our model solely on a diverse, synthetic dataset of English text rendered in over 100,000 typewritten and calligraphy fonts, which gives it the capability to reproduce unseen styles (both fonts and users' handwriting) in zero-shot. To the best of our knowledge, Emuru is the first autoregressive model for HTG, and the first designed specifically for generalization to novel styles. Moreover, our model generates images without background artifacts, which are easier to use for downstream applications. Extensive evaluation on both typewritten and handwritten, any-length text image generation scenarios demonstrates the effectiveness of our approach.
Problem

Research questions and friction points this paper is trying to address.

Overcoming limitations in styled text image generation generalization
Enabling zero-shot styled text generation with autoregressive models
Eliminating background artifacts for better downstream application use
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autoregressive Transformer for text image generation
Variational autoencoder for text image representation
Zero-shot styled text generation from synthetic data