DA-Font: Few-Shot Font Generation via Dual-Attention Hybrid Integration

๐Ÿ“… 2025-09-20
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Few-shot font generation faces critical challenges, including character structural distortion, stroke inaccuracies, and texture blurriness. To address these, we propose a dual-attention hybrid network that jointly models local glyph structures and inter-component geometric relationships via component-level and relational attention mechanisms. We introduce a corner consistency loss to enforce alignment of key structural points and an elastic mesh feature loss to enhance local texture fidelity. The framework further integrates content-style feature fusion, adversarial training, and geometric consistency constraints. Extensive experiments on multi-font, multi-character benchmarks demonstrate that our method consistently outperforms state-of-the-art approaches, achieving significant improvements in structural integrity, stroke sharpness, and detail realism, thereby advancing the practical applicability of few-shot font generation.
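The component-level attention described above can be illustrated as plain cross-attention: content-feature patches act as queries over component-wise style embeddings. This is a minimal numpy sketch of the general mechanism, not the authors' implementation; the shapes, dimensions, and function names are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def component_attention(content, components):
    """Scaled dot-product cross-attention: content patches (queries)
    attend over component-wise style embeddings (keys/values),
    injecting style into each local glyph region."""
    d = content.shape[-1]
    scores = content @ components.T / np.sqrt(d)  # (N, M) content-component affinity
    weights = softmax(scores, axis=-1)            # each row sums to 1
    return weights @ components                   # (N, d) stylized content features

rng = np.random.default_rng(0)
content = rng.normal(size=(16, 32))     # e.g. 16 spatial patches, 32-dim features
components = rng.normal(size=(6, 32))   # e.g. 6 component-wise style embeddings
stylized = component_attention(content, components)
```

Because each output row is a convex combination of the component embeddings, every stylized feature stays within the per-dimension range spanned by the components.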

๐Ÿ“ Abstract
Few-shot font generation aims to create new fonts from a limited number of glyph references, significantly reducing the labor cost of manual font design. However, due to the variety and complexity of font styles, results generated by existing methods often suffer from visible defects such as stroke errors, artifacts, and blurriness. To address these issues, we propose DA-Font, a novel framework that integrates a Dual-Attention Hybrid Module (DAHM). Specifically, we introduce two synergistic attention blocks: a component attention block that leverages component information from content images to guide the style-transfer process, and a relation attention block that further refines spatial relationships by interacting the content feature with both the original and stylized component-wise representations. The two blocks collaborate to preserve accurate character shapes and stylistic textures. Moreover, we design a corner consistency loss and an elastic mesh feature loss to further improve geometric alignment. Extensive experiments show that DA-Font outperforms state-of-the-art methods across diverse font styles and characters, demonstrating its effectiveness in enhancing structural integrity and local fidelity. The source code is available at https://github.com/wrchen2001/DA-Font.
Problem

Research questions and friction points this paper is trying to address.

Generating new fonts with few glyph references while reducing manual design costs
Addressing visible defects like stroke errors, artifacts and blurriness in generated fonts
Enhancing structural integrity and local fidelity across diverse font styles
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-Attention Hybrid Module integrates synergistic attention blocks
Component attention block leverages component information for style transfer
Relation attention block refines spatial relationships between content features and component-wise representations
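Beyond the attention blocks, the paper's two geometric losses can be sketched roughly as follows. The exact formulations are in the paper; the corner-response inputs, the grid size `g`, and the per-cell mean statistic here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def corner_consistency_loss(gen_corners, ref_corners):
    """L1 gap between corner-response maps of generated and reference
    glyphs, pushing key structural points to coincide."""
    return float(np.abs(gen_corners - ref_corners).mean())

def mesh_feature_loss(gen_feat, ref_feat, g=4):
    """Compare per-cell mean features on a g-by-g mesh, so local
    texture statistics (not just global ones) must match."""
    H, W = gen_feat.shape[:2]
    loss = 0.0
    for i in range(g):
        for j in range(g):
            cell = (slice(i * H // g, (i + 1) * H // g),
                    slice(j * W // g, (j + 1) * W // g))
            loss += abs(gen_feat[cell].mean() - ref_feat[cell].mean())
    return loss / (g * g)

# toy corner maps: identical maps give zero loss, a shifted corner does not
a = np.zeros((32, 32))
b = np.zeros((32, 32)); b[8, 8] = 1.0
```

Both terms are differentiable in a deep-learning framework when the maps are produced by the network, which is presumably how they enter training alongside the adversarial objective.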
๐Ÿ”Ž Similar Papers
No similar papers found.
Weiran Chen
School of Computer Science and Technology, Soochow University
Document Analysis and Recognition · Font Generation · Image Quality Assessment
Guiqian Zhu
School of Computer Science and Technology, Soochow University, Suzhou, China
Ying Li
School of Computer Science and Technology, Soochow University, Suzhou, China
Yi Ji
Research Statistician Developer, JMP Statistical Discovery LLC
Computer Experiments · Uncertainty Quantification · Design of Experiments
Chunping Liu
School of Computer Science and Technology, Soochow University, Suzhou, China