🤖 AI Summary
This work addresses the challenge of content distortion and insufficient style fidelity in few-shot Chinese font generation, which arises from the difficulty of disentangling content and style. To this end, the authors propose a structure-level disentangled diffusion model that processes content templates and style features separately. Style semantics are extracted with CLIP and precisely fused with content via a cross-attention mechanism. A background noise removal module is further introduced to enhance generation quality in regions with complex strokes. Departing from conventional feature-level disentanglement, the method adopts structure-level separation and employs a parameter-efficient fine-tuning strategy that updates only style-related components, thereby mitigating overfitting and improving adaptability to novel styles. Experiments demonstrate that the approach significantly enhances style fidelity while preserving content accuracy, with evaluation supported by the newly introduced Grey and OCR metrics for content quality assessment.
📝 Abstract
Few-shot Chinese font generation aims to synthesize new characters in a target style using only a handful of reference images. Achieving accurate content rendering and faithful style transfer requires effective disentanglement of content and style. However, existing approaches achieve only feature-level disentanglement, allowing the generator to re-entangle these features, which leads to content distortion and degraded style fidelity. We propose the Structure-Level Disentangled Diffusion Model (SLD-Font), which receives content and style information through two separate channels. SimSun-style images serve as content templates and are concatenated with the noisy latent features as input. Style features extracted by a CLIP model from target-style images are integrated via cross-attention. Additionally, we train a Background Noise Removal module in pixel space to remove background noise in complex stroke regions. Building on a theoretical validation of the disentanglement's effectiveness, we introduce a parameter-efficient fine-tuning strategy that updates only the style-related modules. This allows the model to better adapt to new styles while avoiding overfitting to the reference images' content. We further introduce the Grey and OCR metrics to evaluate the content quality of generated characters. Experimental results show that SLD-Font achieves significantly higher style fidelity while maintaining content accuracy comparable to existing state-of-the-art methods.