Legacy Learning Using Few-Shot Font Generation Models for Automatic Text Design in Metaverse Content: Cases Studies in Korean and Chinese

📅 2024-08-29
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Font generation for large-character-set languages (e.g., Korean with 11,172 characters and Chinese with over 60,000) in the metaverse faces high design costs and structural instability in generated glyphs. Method: We propose a few-shot font generation framework introducing “Legacy Learning”—a novel paradigm that reconstructs existing fonts and injects controllable stylistic variations while strictly preserving semantic glyph structure. It integrates font-structure constraint modeling, stroke-level topological alignment, and style-disentangled representation learning. Contribution/Results: Experiments demonstrate significant improvement in structural accuracy of generated glyphs. A SUS usability evaluation with metaverse designers yields a score of 95.8/100, confirming industrial viability. To our knowledge, this is the first work achieving few-shot, high-fidelity, and highly controllable font generation for large character sets—simultaneously ensuring strong stylistic diversity (>30%) and visual fidelity.

Technology Category

Application Category

📝 Abstract
Generally, the components constituting a metaverse are classified into hardware, software, and content categories. As a content component, text design is known to positively affect user immersion and usability. Unlike English, where designing texts involves only 26 letters, designing texts in Korean and Chinese requires creating 11,172 and over 60,000 individual glyphs, respectively, owing to the nature of the languages. Consequently, applying new text designs to enhance user immersion within the metaverse can be tedious and expensive, particularly for certain languages. Recently, efforts have been devoted toward addressing this issue using generative artificial intelligence (AI). However, challenges remain in creating new text designs for the metaverse owing to inaccurate character structures. This study proposes a new AI learning method known as Legacy Learning, which enables high-quality text design at a lower cost. Legacy Learning involves recombining existing text designs and intentionally introducing variations to produce fonts that are distinct from the originals while maintaining high quality. To demonstrate the effectiveness of the proposed method in generating text designs for the metaverse, we performed evaluations from the following three aspects: 1) Quantitative performance evaluation 2) Qualitative evaluationand 3) User usability evaluation. The quantitative and qualitative performance results indicated that the generated text designs differed from the existing ones by an average of over 30% while still maintaining high visual quality. Additionally, the SUS test performed with metaverse content designers achieved a score of 95.8, indicating high usability.
Problem

Research questions and friction points this paper is trying to address.

Reducing high costs of creating new text designs
Generating thousands of unique glyphs for Korean and Chinese
Improving quality without manual design processes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Legacy Learning strategy recombines existing text models
Improves font generation quality without manual design
Enhances structural consistency and recognition accuracy iteratively
🔎 Similar Papers
No similar papers found.
Y
Younghwi Kim
Safe & Clean Supply Chain Research Center, Pusan National University, 30-Jan-jeon Dong, Geum-Jeong Gu, 46241, Busan, South Korea
S
Seok Chan Jeong
Department of e-Business & AI Grand ICT Research Center, Dong-eui University, 176 Eomgwang No, Gaya Dong 24, Busanjin Gu, 47340 Busan, South Korea
Sunghyun Sim
Sunghyun Sim
Changwon National University
Industrial EngineeringData ScienceIndustrial AIEfficient AI