MMGen: Unified Multi-modal Image Generation and Understanding in One Go

📅 2025-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the fragmentation between multi-modal image generation and understanding tasks by proposing MMGen, a unified diffusion framework. Methodologically, MMGen introduces a diffusion transformer that flexibly generates diverse modalities (RGB, depth, surface normals, and semantic segmentation) and combines a modality-decoupled encoding scheme with joint multi-task training. It is presented as the first model to support category-conditioned or cross-modal generation together with multi-task visual understanding within a single inference process. Experiments show that MMGen surpasses task-specific baselines in both generation quality (FID, LPIPS) and understanding accuracy (mIoU for segmentation, RMSE for depth estimation), while also improving cross-modal consistency, controllability, and generalization. MMGen thus establishes a new paradigm for multi-modal vision foundation models.
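
The paper's reference implementation is not reproduced in this card. As a rough, hypothetical sketch of the idea described above (one diffusion transformer whose joint token stream is tagged with learned per-modality embeddings, conditioned on class and timestep), the PyTorch snippet below shows one plausible realization. All names and design choices here (MMDiTSketch, the modality-embedding scheme, the single conditioning token, positional embeddings omitted) are illustrative assumptions, not MMGen's actual architecture.

```python
import torch
import torch.nn as nn

MODALITIES = ["rgb", "depth", "normal", "segmentation"]

class MMDiTSketch(nn.Module):
    """Toy multi-modal diffusion transformer: one joint token sequence,
    with learned modality embeddings decoupling the four output streams.
    (Per-token positional embeddings are omitted for brevity.)"""
    def __init__(self, latent_dim=4, hidden=512, depth=6, heads=8, num_classes=1000):
        super().__init__()
        self.modality_emb = nn.Embedding(len(MODALITIES), hidden)  # per-modality tag
        self.patch_in = nn.Linear(latent_dim, hidden)
        self.class_emb = nn.Embedding(num_classes, hidden)         # category condition
        self.time_emb = nn.Sequential(
            nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, hidden)
        )
        layer = nn.TransformerEncoderLayer(hidden, heads, 4 * hidden, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, depth)
        self.patch_out = nn.Linear(hidden, latent_dim)

    def forward(self, latents, t, class_id):
        # latents: (B, M, N, latent_dim) -- noisy latent tokens for M modalities.
        B, M, N, D = latents.shape
        tok = self.patch_in(latents) + self.modality_emb.weight[:M][None, :, None, :]
        tok = tok.reshape(B, M * N, -1)            # modalities attend to each other
        cond = self.time_emb(t.float()[:, None]) + self.class_emb(class_id)
        tok = torch.cat([cond[:, None, :], tok], dim=1)
        out = self.backbone(tok)[:, 1:, :]         # drop the conditioning token
        return self.patch_out(out).reshape(B, M, N, D)  # predicted noise per modality

# Minimal smoke test: two samples, four modalities, 64 latent tokens each.
model = MMDiTSketch()
x = torch.randn(2, 4, 64, 4)
eps = model(x, t=torch.randint(0, 1000, (2,)), class_id=torch.randint(0, 1000, (2,)))
```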

📝 Abstract
A unified diffusion framework for multi-modal generation and understanding has the transformative potential to achieve seamless and controllable image diffusion and other cross-modal tasks. In this paper, we introduce MMGen, a unified framework that integrates multiple generative tasks into a single diffusion model. This includes: (1) multi-modal category-conditioned generation, where multi-modal outputs are generated simultaneously through a single inference process, given category information; (2) multi-modal visual understanding, which accurately predicts depth, surface normals, and segmentation maps from RGB images; and (3) multi-modal conditioned generation, which produces corresponding RGB images based on specific modality conditions and other aligned modalities. Our approach develops a novel diffusion transformer that flexibly supports multi-modal output, along with a simple modality-decoupling strategy to unify various tasks. Extensive experiments and applications demonstrate the effectiveness and superiority of MMGen across diverse tasks and conditions, highlighting its potential for applications that require simultaneous generation and understanding.
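
The abstract enumerates three task modes served by one model. A common way to fold such modes into a single diffusion model is to clamp the latents of observed modalities to their clean values and denoise only the targets. Whether MMGen uses exactly this masking scheme is not stated in this card, so the sketch below (reusing MODALITIES and the model sketch above) is an assumed mechanism, not the paper's.

```python
import torch

def make_mask(targets):
    """1.0 where a modality is a denoising target, 0.0 where it is observed."""
    return torch.tensor([1.0 if m in targets else 0.0 for m in MODALITIES])

# (1) Category-conditioned generation: all four modalities are targets.
gen_mask  = make_mask(["rgb", "depth", "normal", "segmentation"])
# (2) Visual understanding: RGB observed; predict the other three maps.
und_mask  = make_mask(["depth", "normal", "segmentation"])
# (3) Modality-conditioned generation: e.g. depth observed, synthesize the rest.
cond_mask = make_mask(["rgb", "normal", "segmentation"])

def denoise_step(model, x_t, clean, mask, t, class_id):
    m = mask.view(1, -1, 1, 1)
    # Clamp observed modalities to their clean latents so the transformer
    # conditions on them while predicting noise for the targets only.
    x_t = m * x_t + (1 - m) * clean
    eps = model(x_t, t, class_id)
    return eps * m   # update only the masked (target) modalities
```
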
Problem

Research questions and friction points this paper is trying to address.

Generation and visual understanding are fragmented across task-specific models, with no unified framework covering both
Existing pipelines cannot produce multiple aligned modality outputs in a single inference process
Generative models rarely deliver accurate depth, surface-normal, and segmentation prediction from RGB images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified diffusion framework for multi-modal tasks
Novel diffusion transformer supports multi-modal output
Modality-decoupling strategy unifies diverse tasks under one training objective (see the sketch after this list)
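
To make the last item concrete, here is a hypothetical joint training step that mixes the three task modes under one objective: a task is sampled per batch, only target modalities are noised, and the loss is restricted to them. The task mix, the crude noise schedule, and the loss weighting are invented for illustration (reusing make_mask and the model sketch above); the paper's actual recipe may differ.

```python
import random
import torch
import torch.nn.functional as F

TASKS = {
    "generation":    ["rgb", "depth", "normal", "segmentation"],
    "understanding": ["depth", "normal", "segmentation"],
    "conditioned":   ["rgb", "normal", "segmentation"],   # e.g. depth given
}

def training_step(model, clean, class_id, optimizer, num_steps=1000):
    B, M, N, D = clean.shape
    mask = make_mask(TASKS[random.choice(list(TASKS))]).view(1, M, 1, 1)
    t = torch.randint(0, num_steps, (B,))
    noise = torch.randn_like(clean)
    # Crude linear schedule as a stand-in for a real DDPM alpha-bar schedule.
    a = (1.0 - t.float() / num_steps).view(B, 1, 1, 1).clamp(min=1e-3)
    noisy = a.sqrt() * clean + (1 - a).sqrt() * noise
    x_t = mask * noisy + (1 - mask) * clean       # observed modalities stay clean
    pred = model(x_t, t, class_id)
    loss = F.mse_loss(pred * mask, noise * mask)  # supervise target modalities only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```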