Mogao: An Omni Foundation Model for Interleaved Multi-Modal Generation

📅 2025-05-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing unified multimodal models are largely restricted to unidirectional, single-modality generation and fail to support sequence-level interleaved co-generation of text and images. To address this, we propose Mogao, the first unified foundation model capable of generating arbitrarily long, interleaved text-image sequences. Its core innovations include: (i) interleaved rotary position embeddings, (ii) a dual-vision-encoder architecture, (iii) a multi-modal classifier-free guidance mechanism, and (iv) a joint training paradigm integrating causal modeling with diffusion priors. Mogao achieves zero-shot image editing and compositional generation, emergent capabilities previously unattainable in unified models. It establishes new state-of-the-art performance on multimodal understanding and text-to-image synthesis. Moreover, it generates high-fidelity, semantically coherent interleaved sequences and significantly improves the quality of complex edits and compositional generation.
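
The summary names interleaved rotary position embeddings as innovation (i) but does not spell out the scheme. Below is a minimal, hypothetical sketch of one way such positions could be assigned: text tokens advance a shared 1D counter, while every token of an image shares one base position plus its own 2D grid coordinates, so a whole image occupies a single slot in the 1D ordering. The function name and the single-slot convention are illustrative assumptions, not Mogao's published formulation.

```python
import numpy as np

def interleaved_positions(segments):
    """Assign (pos, row, col) triples to an interleaved text/image sequence.

    `segments` is a list of ("text", n_tokens) or ("image", height, width)
    entries. Hypothetical scheme for illustration only.
    """
    positions, cursor = [], 0
    for seg in segments:
        if seg[0] == "text":
            for _ in range(seg[1]):
                positions.append((cursor, 0, 0))  # text: 1D position, no grid
                cursor += 1
        else:  # ("image", height, width)
            _, h, w = seg
            for r in range(h):
                for c in range(w):
                    positions.append((cursor, r, c))  # shared base + 2D coords
            cursor += 1  # the whole image advances the 1D counter by one
    return np.array(positions)

# "caption" (5 tokens), a 2x2 image, then 3 more text tokens.
print(interleaved_positions([("text", 5), ("image", 2, 2), ("text", 3)]))
```

The resulting triples would then drive a rotary embedding that rotates separate channel groups by the 1D position and by the row/column coordinates, in the spirit of multi-axis RoPE variants.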

📝 Abstract
Recent progress in unified models for image understanding and generation has been impressive, yet most approaches remain limited to single-modal generation conditioned on multiple modalities. In this paper, we present Mogao, a unified framework that advances this paradigm by enabling interleaved multi-modal generation through a causal approach. Mogao integrates a set of key technical improvements in architecture design, including a deep-fusion design, dual vision encoders, interleaved rotary position embeddings, and multi-modal classifier-free guidance, which allow it to harness the strengths of both autoregressive models for text generation and diffusion models for high-quality image synthesis. These practical improvements also make Mogao particularly effective at processing arbitrarily interleaved sequences of text and images. To further unlock the potential of unified models, we introduce an efficient training strategy on a large-scale, in-house dataset specifically curated for joint text and image generation. Extensive experiments show that Mogao not only achieves state-of-the-art performance in multi-modal understanding and text-to-image generation, but also excels in producing high-quality, coherent interleaved outputs. Its emergent capabilities in zero-shot image editing and compositional generation highlight Mogao as a practical omni-modal foundation model, paving the way for the future development and scaling of unified multi-modal systems.
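
The abstract lists multi-modal classifier-free guidance among the architectural improvements without giving the combination rule. A common multi-condition decomposition (used, for example, in InstructPix2Pix-style editing) starts from the unconditional prediction and adds separate guidance terms for the text condition and the image context; the sketch below shows that pattern with illustrative weights. This is a plausible sketch under those assumptions, not Mogao's actual formula.

```python
import torch

def multimodal_cfg(eps_uncond, eps_text, eps_text_image,
                   w_text=5.0, w_image=1.5):
    """Combine diffusion predictions under nested conditions.

    eps_uncond:     prediction with all conditions dropped
    eps_text:       prediction conditioned on text only
    eps_text_image: prediction conditioned on text and image context
    Guidance weights are illustrative, not Mogao's settings.
    """
    return (eps_uncond
            + w_text * (eps_text - eps_uncond)
            + w_image * (eps_text_image - eps_text))

# Toy usage with random stand-ins for model outputs (batch, channels, H, W).
e_u, e_t, e_ti = (torch.randn(1, 4, 32, 32) for _ in range(3))
print(multimodal_cfg(e_u, e_t, e_ti).shape)  # torch.Size([1, 4, 32, 32])
```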
Problem

Research questions and friction points this paper is trying to address.

Enabling interleaved multi-modal generation via a causal approach
Integrating autoregressive and diffusion models for text-image synthesis (a toy joint objective is sketched after this list)
Advancing unified models with a large-scale joint training strategy
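
On the second point, a unified model must be trained under two objectives at once: next-token prediction for text and a denoising objective for images. A toy version of such a joint loss is sketched below; the `lam` weighting, the noise-prediction parameterization, and the function names are assumptions for illustration, not the paper's recipe.

```python
import torch
import torch.nn.functional as F

def joint_loss(text_logits, text_targets, eps_pred, eps_true, lam=1.0):
    """Toy joint objective: autoregressive cross-entropy on text tokens
    plus a diffusion noise-prediction MSE on image latents."""
    ce = F.cross_entropy(text_logits.flatten(0, 1), text_targets.flatten())
    mse = F.mse_loss(eps_pred, eps_true)
    return ce + lam * mse

# Toy shapes: 2 sequences of 8 text tokens over a 100-word vocabulary,
# plus predicted/true noise for 2 latent images.
logits = torch.randn(2, 8, 100)
targets = torch.randint(0, 100, (2, 8))
eps_pred, eps_true = torch.randn(2, 4, 16, 16), torch.randn(2, 4, 16, 16)
print(joint_loss(logits, targets, eps_pred, eps_true))
```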
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep-fusion design with dual vision encoders (a routing stub is sketched after this list)
Interleaved rotary position embeddings technique
Multi-modal classifier-free guidance approach
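
The dual-vision-encoder idea pairs a semantic encoder (for understanding) with a generative latent encoder (for the diffusion side), with both feature streams projected into the backbone's hidden size. The stub below illustrates that routing; every module and shape is a placeholder chosen for this sketch, not Mogao's architecture.

```python
import torch
import torch.nn as nn

class DualVisionStub(nn.Module):
    """Toy dual-encoder front end: a ViT-like patchifier for semantic
    tokens and a VAE-like downsampler for generation latents, each
    projected to the language backbone's hidden size."""
    def __init__(self, d_model=1024, vit_dim=768, vae_dim=16):
        super().__init__()
        self.vit = nn.Conv2d(3, vit_dim, kernel_size=16, stride=16)  # patchify stub
        self.vae = nn.Conv2d(3, vae_dim, kernel_size=8, stride=8)    # latent stub
        self.proj_sem = nn.Linear(vit_dim, d_model)
        self.proj_gen = nn.Linear(vae_dim, d_model)

    def forward(self, image):
        sem = self.vit(image).flatten(2).transpose(1, 2)  # (B, N_sem, vit_dim)
        lat = self.vae(image).flatten(2).transpose(1, 2)  # (B, N_gen, vae_dim)
        return self.proj_sem(sem), self.proj_gen(lat)

sem_tokens, gen_tokens = DualVisionStub()(torch.randn(1, 3, 256, 256))
print(sem_tokens.shape, gen_tokens.shape)  # (1, 256, 1024) (1, 1024, 1024)
```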
👥 Authors
Chao Liao (ByteDance Seed)
Liyang Liu (ByteDance Seed)
Xun Wang (ByteDance Seed)
Zhengxiong Luo (ByteDance Seed)
Xinyu Zhang (ByteDance Seed)
Wenliang Zhao (Tsinghua University)
Jie Wu (ByteDance Seed)
Liang Li (ByteDance Seed)
Zhi Tian (ByteDance Seed)
Weilin Huang (ByteDance Seed)