Wan-Weaver: Interleaved Multi-modal Generation via Decoupled Training

📅 2026-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal large language models struggle to generate interleaved text-and-image content, primarily due to the scarcity of real-world interleaved data and the difficulty of modeling long-range cross-modal dependencies. This work proposes a decoupled training framework that decomposes interleaved generation into two stages: textual planning and visual consistency modeling. A planner first generates dense textual descriptions, which a visualizer then uses to synthesize corresponding images. By leveraging large-scale text-based proxy interleaved data and reference-guided image synthesis for training, the model achieves strong long-range textual coherence and visual consistency without requiring real interleaved samples. The approach enables unified multi-task inference and significantly outperforms existing methods on a newly constructed multidimensional evaluation benchmark, demonstrating superior capabilities in interleaved generation, task reasoning, and cross-modal alignment.

📝 Abstract
Recent unified models have made unprecedented progress in both understanding and generation. However, while most of them accept multi-modal inputs, they typically produce only single-modality outputs. This challenge of producing interleaved content is mainly due to training data scarcity and the difficulty of modeling long-range cross-modal context. To address this issue, we decompose interleaved generation into textual planning and visual consistency modeling, and introduce a framework consisting of a planner and a visualizer. The planner produces dense textual descriptions for visual content, while the visualizer synthesizes images accordingly. Under this guidance, we construct large-scale textual-proxy interleaved data (where visual content is represented in text) to train the planner, and curate reference-guided image data to train the visualizer. These designs give rise to Wan-Weaver, which exhibits emergent interleaved generation ability with long-range textual coherence and visual consistency. Meanwhile, the integration of diverse understanding and generation data into planner training enables Wan-Weaver to achieve robust task reasoning and generation proficiency. To assess the model's capability in interleaved generation, we further construct a benchmark that spans a wide range of use cases across multiple dimensions. Extensive experiments demonstrate that, even without access to any real interleaved data, Wan-Weaver achieves superior performance over existing methods.
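The decoupled pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`plan`, `visualize`, `weave`) and the stub outputs are hypothetical stand-ins for the planner LLM and the reference-guided visualizer.

```python
def plan(prompt):
    """Planner: emit an interleaved sequence of text segments and dense
    textual descriptions of images (the 'textual proxies' for visual content)."""
    # Stub output; in the paper this role is played by a model trained on
    # large-scale textual-proxy interleaved data.
    return [
        ("text", "Step 1: whisk the eggs."),
        ("image_desc", "A bowl of whisked eggs on a wooden counter."),
        ("text", "Step 2: pour into the pan."),
        ("image_desc", "Eggs being poured into a hot skillet."),
    ]

def visualize(description, reference=None):
    """Visualizer: synthesize an image from a dense description, optionally
    conditioned on an earlier image to maintain visual consistency."""
    # Stub placeholder string; the paper trains a reference-guided image
    # synthesis model for this stage.
    return f"<image: {description} | ref={reference is not None}>"

def weave(prompt):
    """Full pipeline: plan first, then render each description, threading
    the most recent image through as a consistency reference."""
    output, last_image = [], None
    for kind, content in plan(prompt):
        if kind == "text":
            output.append(content)
        else:
            last_image = visualize(content, reference=last_image)
            output.append(last_image)
    return output

result = weave("How do I make scrambled eggs?")
```

Note how the visualizer only ever sees a dense text description plus an optional reference image, which is what lets the two stages be trained separately, without real interleaved samples.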
Problem

Research questions and friction points this paper is trying to address.

interleaved generation
multi-modal generation
cross-modal context
training data scarcity
visual consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

interleaved multi-modal generation
decoupled training
textual planning
visual consistency modeling
textual-proxy data