DuoGen: Towards General Purpose Interleaved Multimodal Generation

📅 2026-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing interleaved multimodal generative models struggle to achieve high-quality text-image interleaved generation due to scarce training data and limited capabilities of underlying foundation models. This work proposes DuoGen, a framework that leverages a large-scale, high-quality instruction-tuning dataset and integrates a multimodal large language model (MLLM) with a video-pretrained diffusion Transformer (DiT). DuoGen employs a two-stage decoupled strategy to jointly optimize comprehension and generation capabilities, eliminating the need for costly unimodal pretraining and enabling flexible selection of foundation models. Furthermore, it establishes the first comprehensive evaluation benchmark tailored for interleaved generation. Experiments demonstrate that DuoGen significantly outperforms existing open-source models in text quality, image fidelity, and text-image alignment, achieving state-of-the-art performance in both text-to-image generation and image editing within a unified architecture.

📝 Abstract
Interleaved multimodal generation enables capabilities beyond unimodal generation models, such as step-by-step instructional guides, visual planning, and generating visual drafts for reasoning. However, the quality of existing interleaved generation models under general instructions remains limited by insufficient training data and base model capacity. We present DuoGen, a general-purpose interleaved generation framework that systematically addresses data curation, architecture design, and evaluation. On the data side, we build a large-scale, high-quality instruction-tuning dataset by combining multimodal conversations rewritten from curated raw web pages with diverse synthetic examples covering everyday scenarios. Architecturally, DuoGen leverages the strong visual understanding of a pretrained multimodal LLM and the visual generation capabilities of a diffusion transformer (DiT) pretrained on video generation, avoiding costly unimodal pretraining and enabling flexible base model selection. A two-stage decoupled strategy first instruction-tunes the MLLM, then aligns the DiT with it using curated interleaved image-text sequences. Across public and newly proposed benchmarks, DuoGen outperforms prior open-source models in text quality, image fidelity, and image-context alignment, and also achieves state-of-the-art performance on text-to-image generation and image editing among unified generation models. Data and code will be released at https://research.nvidia.com/labs/dir/duogen/.
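The two-stage decoupled strategy in the abstract can be sketched as follows. This is a minimal illustrative sketch only: every class, method, and argument name here is hypothetical, standing in for the paper's actual training code, which has not been released at the time of writing.

```python
# Hypothetical sketch of DuoGen's two-stage decoupled training flow.
# Stage 1 instruction-tunes the MLLM; stage 2 aligns the video-pretrained
# DiT to the (now frozen) MLLM using interleaved image-text sequences.
# All names are illustrative assumptions, not the authors' API.

class MLLM:
    """Stand-in for a pretrained multimodal LLM (visual understanding)."""
    def __init__(self):
        self.instruction_tuned = False

    def instruction_tune(self, instruction_data):
        # Stage 1: tune on the interleaved instruction-tuning dataset.
        self.instruction_tuned = True


class DiT:
    """Stand-in for a diffusion transformer pretrained on video generation."""
    def __init__(self):
        self.aligned_to = None

    def align_with(self, mllm, interleaved_sequences):
        # Stage 2: align the DiT's conditioning to the tuned MLLM's
        # outputs; the MLLM itself is treated as fixed in this stage.
        assert mllm.instruction_tuned, "stage 1 must precede stage 2"
        self.aligned_to = mllm


def train_duogen(instruction_data, interleaved_sequences):
    mllm = MLLM()
    mllm.instruction_tune(instruction_data)       # stage 1
    dit = DiT()
    dit.align_with(mllm, interleaved_sequences)   # stage 2
    return mllm, dit
```

Decoupling the stages this way is what lets either base model be swapped out independently: stage 1 never touches the DiT, and stage 2 never updates the MLLM.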
Problem

Research questions and friction points this paper is trying to address.

interleaved multimodal generation
training data scarcity
base model capacity
image-text alignment
general-purpose generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

interleaved multimodal generation
instruction tuning
diffusion transformer
multimodal LLM
decoupled training strategy