MUSE: Multi-Subject Unified Synthesis via Explicit Layout Semantic Expansion

📅 2025-08-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of spatial layout control for multi-subject composition in text-to-image generation. We propose a unified diffusion-based generation framework that enables precise spatial localization and subject identity preservation. Our method introduces three key innovations: (1) an explicit semantic space expansion mechanism that encodes layout constraints as learnable semantic anchors; (2) concatenated cross-attention (CCA), which jointly aligns layout guidance and text conditions in both directions; and (3) a two-stage progressive training strategy that decouples subject reconstruction from spatial composition. The model operates end-to-end under zero-shot settings without requiring fine-tuning or additional supervision. Extensive experiments demonstrate significant improvements over state-of-the-art methods in both spatial accuracy, measured by layout fidelity, and identity consistency, assessed via subject preservation metrics. Our approach establishes a new paradigm for controllable multi-subject image synthesis, offering enhanced compositional expressivity while maintaining semantic coherence.
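The core idea behind CCA, as described in the summary, is to expand the semantic conditioning space by concatenating learnable layout-anchor tokens with the text tokens, so a single cross-attention pass attends to both conditions jointly. The paper does not publish its exact layer design here, so the following is a minimal illustrative sketch under assumed token shapes; the class name, dimensions, and the use of `nn.MultiheadAttention` are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConcatenatedCrossAttention(nn.Module):
    """Illustrative sketch of concatenated cross-attention (CCA):
    layout-anchor tokens are concatenated with text tokens along the
    sequence axis, expanding the semantic space that image tokens
    attend to in one pass. Names and shapes are hypothetical."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, image_tokens, text_tokens, layout_anchors):
        # Expanded semantic space: text condition + learnable layout anchors
        context = torch.cat([text_tokens, layout_anchors], dim=1)
        # Image tokens query the joint context (keys and values)
        out, _ = self.attn(image_tokens, context, context)
        return out

# Toy shapes: batch 2, 64 image tokens, 77 text tokens, 4 layout anchors
cca = ConcatenatedCrossAttention(dim=128)
img = torch.randn(2, 64, 128)
txt = torch.randn(2, 77, 128)
anchors = torch.randn(2, 4, 128)
print(cca(img, txt, anchors).shape)  # torch.Size([2, 64, 128])
```

Because the anchors sit in the same key/value sequence as the text tokens, spatial and textual guidance share one attention distribution rather than competing through separate attention branches, which is one plausible reading of the claimed interference-free bidirectional alignment.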

📝 Abstract
Existing text-to-image diffusion models have demonstrated remarkable capabilities in generating high-quality images guided by textual prompts. However, achieving multi-subject compositional synthesis with precise spatial control remains a significant challenge. In this work, we address the task of layout-controllable multi-subject synthesis (LMS), which requires both faithful reconstruction of reference subjects and their accurate placement in specified regions within a unified image. While recent advancements have separately improved layout control and subject synthesis, existing approaches struggle to simultaneously satisfy the dual requirements of spatial precision and identity preservation in this composite task. To bridge this gap, we propose MUSE, a unified synthesis framework that employs concatenated cross-attention (CCA) to seamlessly integrate layout specifications with textual guidance through explicit semantic space expansion. The proposed CCA mechanism enables bidirectional modality alignment between spatial constraints and textual descriptions without interference. Furthermore, we design a progressive two-stage training strategy that decomposes the LMS task into learnable sub-objectives for effective optimization. Extensive experiments demonstrate that MUSE achieves zero-shot end-to-end generation with superior spatial accuracy and identity consistency compared to existing solutions, advancing the frontier of controllable image synthesis. Our code and model are available at https://github.com/pf0607/MUSE.
Problem

Research questions and friction points this paper is trying to address.

Achieving multi-subject synthesis with precise spatial layout control
Simultaneously satisfying spatial precision and identity preservation requirements
Bridging the gap between layout control and subject fidelity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explicit semantic space expansion to integrate layout constraints with textual guidance
Concatenated cross-attention enables bidirectional modality alignment
Progressive two-stage training strategy for optimization
Fei Peng
Beijing University of Posts and Telecommunications, China
Junqiang Wu
Kuaishou Technology
Yan Li
Kuaishou Technology
Tingting Gao
Kuaishou Technology
Di Zhang
Kuaishou Technology
Huiyuan Fu
Beijing University of Posts and Telecommunications, China