EMMA: Efficient Multimodal Understanding, Generation, and Editing with a Unified Architecture

📅 2025-12-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing unified multimodal models suffer from inefficiency and limited cross-task synergy. Method: We propose EMMA-4B, a unified architecture featuring a 32× compression autoencoder for efficient visual tokenization, channel-wise visual token concatenation, and a shared-and-decoupled network that jointly supports understanding, generation, and editing; additionally, a Mixture-of-Experts (MoE) mechanism in the visual understanding encoder enables task-adaptive sparse activation. Contribution/Results: EMMA-4B achieves joint multi-task optimization within a single token-processing pipeline, substantially reducing computational overhead. Experiments demonstrate that EMMA-4B surpasses unified models such as BAGEL-7B in efficiency while matching the understanding and generation performance of specialized models such as Qwen3-VL. To our knowledge, EMMA-4B is the first unified architecture to simultaneously achieve high efficiency and strong generalization across diverse multimodal tasks.

📝 Abstract
We propose EMMA, an efficient and unified architecture for multimodal understanding, generation, and editing. Specifically, EMMA primarily consists of 1) an efficient autoencoder with a 32x compression ratio, which significantly reduces the number of tokens required for generation; applying the same compression ratio to images also keeps training balanced between understanding and generation tasks; 2) channel-wise concatenation, instead of token-wise concatenation, of visual understanding and generation tokens, which further reduces the number of visual tokens in unified architectures; 3) a shared-and-decoupled network that enables mutual improvements across tasks while meeting task-specific modeling requirements; and 4) a mixture-of-experts mechanism adopted in the visual understanding encoder, which substantially improves perceptual capabilities with only a small increase in parameters. Extensive experiments show that EMMA-4B significantly outperforms state-of-the-art unified multimodal approaches (e.g., BAGEL-7B) in both efficiency and performance, while also achieving competitive results compared to recent multimodal understanding and generation experts (e.g., Qwen3-VL and Qwen-Image). We believe that EMMA lays a solid foundation for the future development of unified multimodal architectures.
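The token-wise vs. channel-wise distinction from point 2) of the abstract can be sketched in a few lines. This is an illustrative shape exercise only, not EMMA's implementation; the token count `N` and channel width `C` are hypothetical values chosen for the example.

```python
# Hypothetical token counts and channel widths, not taken from the paper.
N, C = 256, 1024
und_tokens = [[0.0] * C for _ in range(N)]  # visual understanding stream
gen_tokens = [[0.0] * C for _ in range(N)]  # visual generation stream

# Token-wise concatenation: the sequence the backbone processes doubles
# in length, so attention cost grows with the combined token count.
token_wise = und_tokens + gen_tokens
assert (len(token_wise), len(token_wise[0])) == (2 * N, C)

# Channel-wise concatenation (the abstract's choice): sequence length is
# unchanged; the two streams are fused along the channel dimension instead.
channel_wise = [u + g for u, g in zip(und_tokens, gen_tokens)]
assert (len(channel_wise), len(channel_wise[0])) == (N, 2 * C)
```

Since self-attention cost scales with sequence length, keeping the sequence at `N` tokens rather than `2N` is where the claimed efficiency gain of this design would come from.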
Problem

Research questions and friction points this paper is trying to address.

Efficient multimodal understanding, generation, and editing with a unified architecture
Reduces visual tokens via compression and channel-wise concatenation
Enables mutual task improvements while meeting specific modeling needs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Efficient autoencoder with 32x compression reduces tokens
Channel-wise concatenation minimizes visual tokens in architecture
Shared-and-decoupled network balances task-specific and mutual improvements
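The token savings from the 32x autoencoder can be made concrete with simple arithmetic. The sketch below assumes a square image and pure spatial downsampling (any patchification inside the backbone is ignored); the 1024-pixel image size and the 8x baseline ratio are illustrative assumptions, not figures from the paper.

```python
def latent_token_count(image_size: int, compression: int) -> int:
    # Spatial compression shrinks each side of the image by `compression`,
    # so a square image yields (size / c) * (size / c) latent tokens.
    side = image_size // compression
    return side * side

# Illustrative comparison: a 1024x1024 image under a common 8x autoencoder
# versus a 32x autoencoder like the one the abstract describes.
baseline = latent_token_count(1024, 8)    # 16384 tokens
emma_like = latent_token_count(1024, 32)  # 1024 tokens
print(baseline, emma_like, baseline // emma_like)  # 16384 1024 16
```

Under these assumptions, quadrupling the per-side compression ratio cuts the generation token count by 16x, which is the lever behind the efficiency claims.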
Authors
Xin He (Huawei Inc.)
Longhui Wei (Senior Researcher, Huawei; multimodal & visual pre-training, VLM, multimodal generation)
Jianbo Ouyang (Huawei Inc.)
Lingxi Xie (Huawei Inc.)
Qi Tian (Huawei Inc.)