AlignGen: Boosting Personalized Image Generation with Cross-Modality Prior Alignment

📅 2025-05-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In personalized image generation, diffusion models often over-rely on the text prompt when it conflicts with the reference-image prior, severely degrading the fidelity of the reference content. To address this, we propose a cross-modal prior alignment mechanism within the diffusion Transformer framework that explicitly co-models the textual and visual priors. Specifically, we introduce learnable bridging tokens to fuse the dual-modal priors, design a robust alignment training strategy to mitigate misalignment-induced interference, and construct selective cross-modal attention masks to dynamically suppress irrelevant modality responses. Evaluated in zero-shot settings, our method significantly outperforms existing optimization-free approaches and matches or even surpasses mainstream test-time fine-tuning methods in reference-content preservation and generation consistency. To our knowledge, this is the first work to achieve efficient and stable multi-modal prior alignment in diffusion Transformers.

📝 Abstract
Personalized image generation aims to integrate user-provided concepts into text-to-image models, enabling the generation of customized content based on a given prompt. Recent zero-shot approaches, particularly those leveraging diffusion transformers, incorporate reference image information through multi-modal attention mechanisms. This integration allows the generated output to be influenced by both the textual prior from the prompt and the visual prior from the reference image. However, we observe that when the prompt and reference image are misaligned, the generated results exhibit a stronger bias toward the textual prior, leading to a significant loss of reference content. To address this issue, we propose AlignGen, a Cross-Modality Prior Alignment mechanism that enhances personalized image generation by: 1) introducing a learnable token to bridge the gap between the textual and visual priors, 2) incorporating a robust training strategy to ensure proper prior alignment, and 3) employing a selective cross-modal attention mask within the multi-modal attention mechanism to further align the priors. Experimental results demonstrate that AlignGen outperforms existing zero-shot methods and even surpasses popular test-time optimization approaches.
Problem

Research questions and friction points this paper is trying to address.

Misalignment between text prompts and reference images in personalized generation
Strong bias toward textual prior causing loss of reference content
Need for cross-modality alignment to improve personalized image generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learnable token bridges textual and visual priors
Robust training strategy ensures prior alignment
Selective cross-modal attention mask aligns priors
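The exact construction of the selective cross-modal attention mask is not given in this listing. A minimal sketch of one plausible version, assuming a token layout of [text | bridge | image] in which the learnable bridge tokens mediate all cross-modal flow (both the layout and the routing rule are assumptions, not the paper's stated design):

```python
# Hypothetical selective cross-modal attention mask for multi-modal attention.
# Assumed token order: [text tokens | learnable bridge tokens | image tokens].
# Rule sketched here: text and image tokens may not attend to each other
# directly; both interact only through the bridge tokens, which see everything.
import numpy as np

def selective_cross_modal_mask(n_text: int, n_bridge: int, n_image: int) -> np.ndarray:
    """Boolean mask of shape (n, n); True = attention allowed, False = suppressed."""
    n = n_text + n_bridge + n_image
    mask = np.zeros((n, n), dtype=bool)
    t = slice(0, n_text)                     # text span
    b = slice(n_text, n_text + n_bridge)     # bridge span
    i = slice(n_text + n_bridge, n)          # image span
    mask[t, t] = True    # text attends within its own modality
    mask[i, i] = True    # image attends within its own modality
    mask[b, :] = True    # bridge tokens attend to all tokens
    mask[:, b] = True    # all tokens can attend to the bridge tokens
    return mask

mask = selective_cross_modal_mask(n_text=3, n_bridge=2, n_image=4)
assert not mask[0, 5]              # text token 0 cannot reach image token 5 directly
assert mask[0, 3] and mask[5, 3]   # both modalities reach bridge token 3
```

Such a boolean mask would typically be converted to additive form (0 where allowed, a large negative value where suppressed) before being added to the attention logits in a Transformer block.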
👥 Authors
Yiheng Lin, California Institute of Technology (Online Algorithms, Control)
Shifang Zhao, Institute of Information Science, Beijing Jiaotong University, China
Ting Liu, MT Lab, Meitu Inc., China
Xiaochao Qu, MT Lab, Meitu Inc., China
Luoqi Liu, Director of MT Lab, Meitu (Computer Vision)
Yao Zhao, Institute of Information Science, Beijing Jiaotong University, China
Yunchao Wei, Professor, Beijing Jiaotong University; UTS, UIUC, NUS (Computer Vision, Machine Learning)