🤖 AI Summary
Existing GAN/VAE-based approaches for Korean font generation—especially for handwritten and printed styles—suffer from training instability, mode collapse, loss of fine-grained details, and poor generalization to unseen characters. To address these challenges, this paper proposes the first diffusion-based single-shot Korean font generation framework. Our key contributions are: (1) a novel phoneme-level text encoder that enables accurate semantic modeling of out-of-vocabulary Korean characters; (2) a coupled architecture integrating a pretrained DG-Font style encoder with an LPIPS-based perceptual loss to ensure both global style consistency and local stroke fidelity; and (3) a progressive denoising mechanism enabling high-fidelity generation of over 2,000 Hangul characters from only one reference image. Extensive experiments demonstrate that our method significantly outperforms GAN/VAE baselines in structural accuracy, texture detail preservation, and cross-character style consistency, while supporting practical, multi-style, one-click font generation in real-world scenarios.
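The phoneme-level encoding builds on the fact that every precomposed Hangul syllable decomposes deterministically into an initial consonant (choseong), a vowel (jungseong), and an optional final consonant (jongseong) via standard Unicode arithmetic — which is why an encoder over phonemes can cover all 11,172 syllables, including ones never seen in training. A minimal sketch of that decomposition (the function name and jamo tables are illustrative, not taken from the paper):

```python
# Decompose a precomposed Hangul syllable (U+AC00..U+D7A3) into phonemes
# using the standard Unicode formula:
#   code = 0xAC00 + (cho * 21 + jung) * 28 + jong
CHOSEONG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")            # 19 initials
JUNGSEONG = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")      # 21 vowels
JONGSEONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # empty + 27 finals

def decompose(syllable: str) -> tuple:
    """Return (initial, vowel, final) jamo for one Hangul syllable."""
    offset = ord(syllable) - 0xAC00
    if not 0 <= offset < 19 * 21 * 28:
        raise ValueError("not a precomposed Hangul syllable: %r" % syllable)
    jong = offset % 28
    jung = (offset // 28) % 21
    cho = offset // (28 * 21)
    return CHOSEONG[cho], JUNGSEONG[jung], JONGSEONG[jong]

print(decompose("한"))  # ('ㅎ', 'ㅏ', 'ㄴ')
print(decompose("가"))  # ('ㄱ', 'ㅏ', '')
```

The 19 × 21 × 28 = 11,172 possible combinations are exactly the "over 2,000 Hangul characters" coverage problem: a glyph-level vocabulary would need one embedding per syllable, while a phoneme-level one needs only 19 + 21 + 28 entries.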
📝 Abstract
Automatic font generation (AFG) is the task of creating a new font from only a few example images of the target style. Generating fonts for complex scripts such as Korean and Chinese, particularly in handwritten styles, remains challenging. Traditional AFG methods based on generative adversarial networks (GANs) and variational auto-encoders (VAEs) are often unstable during training, prone to mode collapse, and struggle to capture fine details within font images. To address these problems, we present a diffusion-based AFG method that generates high-quality, diverse Korean font images from only a single reference image, focusing on handwritten and printed styles. Our approach refines noisy images incrementally, ensuring stable training and visually appealing results. A key innovation is our text encoder, which processes phonetic representations to generate accurate and contextually correct characters, even for characters unseen during training. We adopt a pre-trained style encoder from DG-Font to encode the style images effectively and accurately. To further enhance generation quality, we use a perceptual loss that guides the model to preserve the global style of the generated images. Experimental results on over 2,000 Korean characters demonstrate that our model consistently generates accurate, detailed font images and outperforms benchmark methods, making it a reliable tool for producing authentic Korean fonts across different styles.
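The incremental refinement described above follows the standard denoising-diffusion setup: a forward process gradually corrupts a clean glyph image with Gaussian noise, and a network is trained to reverse that corruption one step at a time. A minimal NumPy sketch of the forward (noising) process under a linear schedule — the schedule values and image size here are illustrative assumptions, not the paper's hyperparameters:

```python
import numpy as np

# Linear beta schedule, as in the original DDPM formulation.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)  # cumulative product: alpha_bar_t

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
glyph = rng.standard_normal((64, 64))       # stand-in for a normalized glyph image
x_early = q_sample(glyph, t=10, rng=rng)    # still close to the clean glyph
x_late = q_sample(glyph, t=T - 1, rng=rng)  # nearly pure Gaussian noise
```

Training minimizes the mean-squared error between the injected noise `eps` and the network's prediction of it; sampling then runs the learned reverse step from t = T−1 down to 0, which is the progressive, stable refinement the abstract contrasts with single-pass GAN/VAE generation. The perceptual and style losses would be applied on top of this objective to the decoded images.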