AI Summary
To address subject detail loss, compositional incoherence, and text-image misalignment in multi-subject text-to-image generation, this paper proposes a zero-shot multi-subject personalization framework. Methodologically, it introduces a layout-guided multi-subject cross-attention mechanism, coupled with grounding-token-driven local feature resampling, enabling disentangled control over each subject's spatial placement, appearance, and semantic attributes. The framework requires no fine-tuning or additional training, supporting plug-and-play personalization. Extensive evaluation demonstrates state-of-the-art performance across three critical dimensions: multi-subject fidelity, text-image alignment, and compositional plausibility. On multiple benchmarks, the approach achieves significant improvements in subject consistency and layout controllability, outperforming prior methods without architectural or training overhead.
Abstract
Recent advances in text-to-image generation models have dramatically enhanced the generation of photorealistic images from textual prompts, leading to increased interest in personalized text-to-image applications, particularly in multi-subject scenarios. However, these advances face two main challenges: first, the need to accurately maintain the details of each referenced subject in accordance with the textual descriptions; and second, the difficulty of achieving a cohesive representation of multiple subjects in a single image without introducing inconsistencies. To address these concerns, our research introduces the MS-Diffusion framework for layout-guided zero-shot image personalization with multiple subjects. This approach integrates grounding tokens with the feature resampler to maintain detail fidelity among subjects. With layout guidance, MS-Diffusion further adapts the cross-attention to multi-subject inputs, ensuring that each subject condition acts on specific areas. The proposed multi-subject cross-attention orchestrates harmonious inter-subject compositions while preserving the control of texts. Comprehensive quantitative and qualitative experiments affirm that this method surpasses existing models in both image and text fidelity, promoting the development of personalized text-to-image generation. The project page is https://MS-Diffusion.github.io.
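The abstract's core idea that "each subject condition acts on specific areas" can be illustrated with a minimal NumPy sketch of layout-masked cross-attention. This is not the paper's implementation: the function name, shapes, and masking scheme are assumptions for illustration only. Each subject contributes key/value tokens, and its attention output is zeroed at image locations outside its layout region.

```python
import numpy as np

def layout_masked_cross_attention(q, subject_keys, subject_values, region_masks):
    """Toy layout-guided multi-subject cross-attention (illustrative sketch).

    q:              (N, d) query features, one row per image location
    subject_keys:   list of (T_s, d) key matrices, one per subject
    subject_values: list of (T_s, d) value matrices, one per subject
    region_masks:   list of (N,) boolean masks; True where subject s may act
    """
    d = q.shape[1]
    out = np.zeros_like(q)
    for keys, values, mask in zip(subject_keys, subject_values, region_masks):
        scores = (q @ keys.T) / np.sqrt(d)                  # (N, T_s)
        # numerically stable softmax over each subject's tokens
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        # zero out query locations outside this subject's layout region
        weights = weights * mask[:, None]
        denom = weights.sum(axis=1, keepdims=True)
        attn = np.divide(weights, denom,
                         out=np.zeros_like(weights), where=denom > 0)
        out += attn @ values                                # (N, d)
    return out
```

Because each subject's contribution is confined to its own region, subjects do not blend into one another, while regions with no subject receive no injected features and remain under text control.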