Mod-Adapter: Tuning-Free and Versatile Multi-concept Personalization via Modulation Adapter

📅 2025-05-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing personalized text-to-image (T2I) methods struggle to simultaneously customize multiple concepts—especially abstract ones such as pose and lighting—and rely on test-time fine-tuning, leading to overfitting and low efficiency. This paper proposes the first test-time-fine-tuning-free framework for unified multi-concept customization. Built upon a pre-trained diffusion Transformer (DiT), it introduces a lightweight Mod-Adapter module that integrates cross-modal attention with a Mixture-of-Experts (MoE) architecture to enable semantic mapping from textual concepts to modulation spaces. To bridge the modality gap, we further propose a vision-language model (VLM)-guided pre-training strategy. Evaluated on an extended multi-concept benchmark, our method comprehensively surpasses state-of-the-art approaches in quantitative metrics, qualitative generation quality, and human evaluation. Crucially, it supports single-deployment, zero-shot adaptation to arbitrary new concepts without any fine-tuning.

📝 Abstract
Personalized text-to-image generation aims to synthesize images of user-provided concepts in diverse contexts. Despite recent progress in multi-concept personalization, most methods are limited to object concepts and struggle to customize abstract concepts (e.g., pose, lighting). Some methods have begun exploring multi-concept personalization that supports abstract concepts, but they require test-time fine-tuning for each new concept, which is time-consuming and prone to overfitting on the limited training images. In this work, we propose a novel tuning-free method for multi-concept personalization that can effectively customize both object and abstract concepts without test-time fine-tuning. Our method builds upon the modulation mechanism in pretrained Diffusion Transformer (DiT) models, leveraging the localized and semantically meaningful properties of the modulation space. Specifically, we propose a novel module, Mod-Adapter, to predict concept-specific modulation directions for the modulation process of concept-related text tokens. It incorporates vision-language cross-attention for extracting concept visual features, and Mixture-of-Experts (MoE) layers that adaptively map the concept features into the modulation space. Furthermore, to mitigate the training difficulty caused by the large gap between the concept image space and the modulation space, we introduce a VLM-guided pretraining strategy that leverages the strong image-understanding capabilities of vision-language models to provide semantic supervision signals. For a comprehensive comparison, we extend a standard benchmark by incorporating abstract concepts. Our method achieves state-of-the-art performance in multi-concept personalization, supported by quantitative, qualitative, and human evaluations.
Problem

Research questions and friction points this paper is trying to address.

Tuning-free multi-concept personalization for objects and abstract concepts
Overcoming test-time fine-tuning limitations in concept customization
Enhancing modulation space mapping with vision-language cross-attention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mod-Adapter predicts concept-specific modulation directions
Uses Mixture-of-Experts layers for adaptive mapping
VLM-guided pretraining provides semantic supervision signals
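The two innovations above describe a concrete pipeline: a concept-related text token attends over concept image features, and an MoE layer maps the result into a modulation direction for the DiT. The following is a minimal illustrative sketch of that data flow, not the authors' implementation; all dimensions, the expert count, and the weight initializations are hypothetical.

```python
# Illustrative sketch (NOT the paper's code): a Mod-Adapter-style module that
# maps concept image features to a concept-specific modulation direction via
# vision-language cross-attention followed by a Mixture-of-Experts layer.
# Dimensions, expert count, and random weights are all assumptions.
import numpy as np

rng = np.random.default_rng(0)
d = 16          # shared embedding dimension (assumed)
n_experts = 4   # number of MoE experts (assumed)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(text_tok, img_feats, Wq, Wk, Wv):
    """Single-head attention: the concept text token queries image features."""
    q = text_tok @ Wq                     # (d,)
    k = img_feats @ Wk                    # (n_patches, d)
    v = img_feats @ Wv                    # (n_patches, d)
    attn = softmax(k @ q / np.sqrt(d))    # (n_patches,) attention weights
    return attn @ v                       # (d,) pooled concept visual feature

def moe_map(feat, gate_W, expert_Ws):
    """Softmax gate over expert projections: experts adaptively map the
    concept feature into the modulation space."""
    gates = softmax(feat @ gate_W)                        # (n_experts,)
    expert_out = np.stack([feat @ W for W in expert_Ws])  # (n_experts, d)
    return gates @ expert_out                             # (d,) direction

# Hypothetical inputs: one concept-related text token, 8 image patch features.
text_tok = rng.standard_normal(d)
img_feats = rng.standard_normal((8, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
gate_W = rng.standard_normal((d, n_experts)) * 0.1
expert_Ws = [rng.standard_normal((d, d)) * 0.1 for _ in range(n_experts)]

feat = cross_attention(text_tok, img_feats, Wq, Wk, Wv)
delta = moe_map(feat, gate_W, expert_Ws)

# In a DiT, a direction like `delta` would perturb the modulation (scale/shift)
# parameters applied to the concept's text tokens, e.g. gamma' = gamma + delta.
print(delta.shape)  # → (16,)
```

The sketch shows only the shape of the mapping, text token plus image patches in, one modulation-space vector out; the paper's actual module is trained end-to-end with VLM-guided pretraining to bridge the image-to-modulation gap.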
👥 Authors
Weizhi Zhong, University of Hong Kong (Text-to-Image Generation)
Huan Yang, Kuaishou Technology
Zheng Liu, Sun Yat-sen University
Huiguo He, South China University of Technology
Zijian He, Sun Yat-sen University
Xuesong Niu, Institute of Computing Technology; Kuaishou Technology (Affective Computing, Computer Vision)
Di Zhang, Kuaishou Technology
Guanbin Li, Sun Yat-sen University