🤖 AI Summary
Existing personalized text-to-image (T2I) methods struggle to customize multiple concepts simultaneously—especially abstract ones such as pose and lighting—and rely on test-time fine-tuning, which causes overfitting and low efficiency. This paper proposes the first test-time-fine-tuning-free framework for unified multi-concept customization. Built upon a pre-trained diffusion Transformer (DiT), it introduces a lightweight Mod-Adapter module that combines cross-modal attention with a Mixture-of-Experts (MoE) architecture to map textual concepts into the model's modulation space. To bridge the modality gap, the paper further proposes a vision-language model (VLM)-guided pre-training strategy. Evaluated on an extended multi-concept benchmark, the method surpasses state-of-the-art approaches in quantitative metrics, qualitative generation quality, and human evaluation. Crucially, a single deployed model adapts zero-shot to arbitrary new concepts without any fine-tuning.
📝 Abstract
Personalized text-to-image generation aims to synthesize images of user-provided concepts in diverse contexts. Despite recent progress in multi-concept personalization, most methods are limited to object concepts and struggle to customize abstract concepts (e.g., pose, lighting). Some methods have begun exploring multi-concept personalization that supports abstract concepts, but they require test-time fine-tuning for each new concept, which is time-consuming and prone to overfitting on limited training images. In this work, we propose a novel tuning-free method for multi-concept personalization that can effectively customize both object and abstract concepts without test-time fine-tuning. Our method builds upon the modulation mechanism in pretrained Diffusion Transformer (DiT) models, leveraging the localized and semantically meaningful properties of the modulation space. Specifically, we propose a novel module, Mod-Adapter, to predict concept-specific modulation directions for the modulation process of concept-related text tokens. It incorporates vision-language cross-attention for extracting concept visual features, and Mixture-of-Experts (MoE) layers that adaptively map the concept features into the modulation space. Furthermore, to mitigate the training difficulty caused by the large gap between the concept image space and the modulation space, we introduce a VLM-guided pretraining strategy that leverages the strong image understanding capabilities of vision-language models to provide semantic supervision signals. For a comprehensive comparison, we extend a standard benchmark by incorporating abstract concepts. Our method achieves state-of-the-art performance in multi-concept personalization, supported by quantitative, qualitative, and human evaluations.
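To make the Mod-Adapter pipeline concrete, the following is a minimal NumPy sketch of the two stages the abstract describes: a vision-language cross-attention step that pools concept visual features using a concept text token as the query, followed by a small Mixture-of-Experts that maps the pooled feature into a modulation direction. All layer sizes, weight shapes, and function names here are illustrative assumptions, not the paper's actual implementation; trained parameters are stood in for by random matrices.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d = 64          # shared feature width (illustrative choice)
n_patches = 16  # number of image patch tokens
n_experts = 4   # number of MoE experts

# Hypothetical learned weights: random stand-ins for trained parameters.
Wq = rng.standard_normal((d, d)) / np.sqrt(d)
Wk = rng.standard_normal((d, d)) / np.sqrt(d)
Wv = rng.standard_normal((d, d)) / np.sqrt(d)
gate_W = rng.standard_normal((d, n_experts)) / np.sqrt(d)
experts = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(n_experts)]

def mod_adapter(text_tok, img_feats):
    """Sketch: concept text token + concept image features -> modulation direction."""
    # Stage 1: vision-language cross-attention.
    # The concept text token queries the image patch features.
    q = text_tok @ Wq                       # (d,)
    k = img_feats @ Wk                      # (n_patches, d)
    v = img_feats @ Wv                      # (n_patches, d)
    attn = softmax((k @ q) / np.sqrt(d))    # (n_patches,) attention over patches
    concept_feat = attn @ v                 # (d,) pooled concept visual feature

    # Stage 2: Mixture-of-Experts mapping into the modulation space.
    # A gate weighs the experts; each expert is a linear map here.
    gates = softmax(concept_feat @ gate_W)  # (n_experts,) gating weights
    direction = sum(g * (concept_feat @ W) for g, W in zip(gates, experts))
    return direction                        # (d,) concept-specific modulation direction

text_tok = rng.standard_normal(d)
img_feats = rng.standard_normal((n_patches, d))
delta = mod_adapter(text_tok, img_feats)
```

At generation time, the predicted direction would shift the DiT's modulation parameters (e.g., scale/shift) only for the concept-related text tokens, which is what makes the edit localized rather than global.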