LDGen: Enhancing Text-to-Image Synthesis via Large Language Model-Driven Language Representation

📅 2025-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional text encoders (e.g., CLIP, T5) suffer from limited multilingual understanding and coarse-grained semantic representations, constraining text-to-image generation performance. To address this, we propose LDGen—a lightweight large language model (LLM) integration framework. Methodologically, LDGen introduces a novel hierarchical caption optimization strategy guided by human instructions to enhance linguistic representation; it further incorporates a lightweight adapter and a cross-modal refiner to efficiently align LLM-derived language features with diffusion-based image representations. Experiments demonstrate that LDGen consistently outperforms CLIP/T5 baselines in multilingual prompt adherence and image aesthetic quality, enabling zero-shot generation across Chinese, English, Japanese, French, and other languages, while significantly improving training efficiency. Our core contribution is the first deep integration of a lightweight LLM into the text-to-image diffusion pipeline—achieving strong semantic modeling without compromising practical deployability.

📝 Abstract
In this paper, we introduce LDGen, a novel method for integrating large language models (LLMs) into existing text-to-image diffusion models while minimizing computational demands. Traditional text encoders, such as CLIP and T5, exhibit limitations in multilingual processing, hindering image generation across diverse languages. We address these challenges by leveraging the advanced capabilities of LLMs. Our approach employs a language representation strategy that applies hierarchical caption optimization and human instruction techniques to derive precise semantic information. Subsequently, we incorporate a lightweight adapter and a cross-modal refiner to facilitate efficient feature alignment and interaction between LLM and image features. LDGen reduces training time and enables zero-shot multilingual image generation. Experimental results indicate that our method surpasses baseline models in both prompt adherence and image aesthetic quality, while seamlessly supporting multiple languages. Project page: https://zrealli.github.io/LDGen.
Problem

Research questions and friction points this paper is trying to address.

Enhance text-to-image synthesis
Overcome multilingual processing limitations
Reduce computational demands of LLM integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates LLMs into diffusion models
Uses hierarchical caption optimization
Employs lightweight adapter and cross-modal refiner
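To make the adapter/refiner idea concrete, here is a minimal sketch of the two components named above. This is an illustrative assumption, not LDGen's actual implementation: all names, dimensions, and the single-head attention design are hypothetical. It shows (1) a lightweight linear adapter projecting LLM token features into the diffusion model's text-embedding space, and (2) a cross-modal refiner realized as cross-attention where adapted text features attend over image features.

```python
import math
import random

random.seed(0)

def linear(x, w):
    """Apply a weight matrix w (in_dim x out_dim) to each row of x."""
    return [[sum(xi[k] * w[k][j] for k in range(len(w)))
             for j in range(len(w[0]))] for xi in x]

def softmax(row):
    m = max(row)
    e = [math.exp(v - m) for v in row]
    s = sum(e)
    return [v / s for v in e]

def cross_attention(queries, keys, values):
    """Single-head cross-attention: each query attends over keys/values."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = softmax([sum(qk * kk for qk, kk in zip(q, k)) / math.sqrt(d)
                          for k in keys])
        out.append([sum(a * v[j] for a, v in zip(scores, values))
                    for j in range(len(values[0]))])
    return out

# Toy dimensions and random features (assumptions, not the paper's values).
llm_dim, diff_dim = 8, 4
llm_feats = [[random.uniform(-1, 1) for _ in range(llm_dim)] for _ in range(3)]
img_feats = [[random.uniform(-1, 1) for _ in range(diff_dim)] for _ in range(5)]

# (1) Lightweight adapter: project LLM features into the diffusion space.
adapter_w = [[random.uniform(-0.1, 0.1) for _ in range(diff_dim)]
             for _ in range(llm_dim)]
text_feats = linear(llm_feats, adapter_w)

# (2) Cross-modal refiner: text queries attend over image keys/values.
refined = cross_attention(text_feats, img_feats, img_feats)
print(len(refined), len(refined[0]))  # 3 text tokens, each with diff_dim features
```

The design point this sketch captures is efficiency: only the small adapter and refiner are trained to bridge the LLM and the frozen diffusion backbone, which is what lets the paper claim reduced training cost relative to retraining a text encoder.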