🤖 AI Summary
Current English-dominant text-to-image models (e.g., Flux) exhibit cultural semantic misalignment when processing Chinese prompts due to their English-centric training data, producing visually distorted or semantically inaccurate outputs. To address this, we propose a lightweight adaptation method, built upon the MultiModal Diffusion Transformer (MMDiT), that avoids full-parameter finetuning and introduces a direct semantic control mechanism to inject precise Chinese semantics into Flux's text-conditioning pathway. Our approach integrates seamlessly with mainstream plugins, including LoRA, IP-Adapter, and ControlNet, while significantly improving visual fidelity and semantic accuracy for culturally specific Chinese elements (e.g., traditional attire, festive scenes, calligraphic motifs). Experiments demonstrate that our method preserves high generation quality under both Chinese and English prompts, and notably outperforms translation-based baselines and bilingual finetuning in cultural fidelity and perceptual realism on Chinese prompts.
📝 Abstract
We propose Chinese Text Adapter-Flux (CTA-Flux), an adaptation method that fits Chinese text inputs to Flux, a powerful text-to-image (TTI) generative model originally trained on an English corpus. Despite its notable image generation ability when conditioned on English text, Flux performs poorly on non-English prompts, largely due to the linguistic and cultural biases inherent in predominantly English-centric training datasets. Existing approaches, such as translating non-English prompts into English or finetuning models for bilingual mappings, inadequately capture culturally specific semantics, compromising image authenticity and quality. To address this issue, we introduce a novel method that bridges Chinese semantic understanding with the English-centric TTI model ecosystem. Prior approaches relying on ControlNet-like architectures typically require a massive parameter scale and lack direct control over Chinese semantics. In contrast, CTA-Flux leverages the MultiModal Diffusion Transformer (MMDiT) to control the Flux backbone directly, significantly reducing the number of added parameters while enhancing the model's understanding of Chinese semantics. This integration substantially improves generation quality and cultural authenticity without extensive retraining of the entire model, thereby maintaining compatibility with existing text-to-image plugins such as LoRA, IP-Adapter, and ControlNet. Empirical evaluations demonstrate that CTA-Flux supports both Chinese and English prompts and achieves superior image generation quality, visual realism, and faithful depiction of Chinese semantics.