WeGen: A Unified Model for Interactive Multimodal Generation as We Chat

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current multimodal generative models face two key bottlenecks as design assistants: limited ability to interpret sparse or ambiguous instructions, and difficulty maintaining content consistency with user references while remaining creative. To address these, the authors propose WeGen, a unified model that couples multimodal generation and understanding so the two capabilities reinforce each other during interactive, iterative creation. Built on multimodal sequence modeling, WeGen is trained on a large-scale video dataset whose object dynamics are auto-annotated by foundation models; the dynamics descriptions and visual content are interleaved into a single sequence, enabling consistency-aware generation that realizes the specified changes while preserving content the user is already satisfied with. A prompt self-rewriting mechanism further increases output diversity. Experiments show state-of-the-art results on visual generation benchmarks, with gains in creativity, reference fidelity, and user controllability, supporting WeGen's role as an efficient, intuitive design copilot.

📝 Abstract
Existing multimodal generative models fall short as qualified design copilots: they often struggle to generate imaginative outputs when instructions are less detailed, or lack the ability to maintain consistency with the provided references. In this work, we introduce WeGen, a model that unifies multimodal generation and understanding and promotes their interplay in iterative generation. It can generate diverse results with high creativity from less detailed instructions, and it can progressively refine prior generation results or integrate specific content from references by following the instructions in its chat with users. During this process, it preserves consistency in the parts that the user is already satisfied with. To this end, we curate a large-scale dataset extracted from Internet videos, containing rich object dynamics with dynamics descriptions auto-labeled by advanced foundation models. These two kinds of information are interleaved into a single sequence, enabling WeGen to learn consistency-aware generation in which the specified dynamics are generated while the consistency of unspecified content is preserved, in line with the instructions. In addition, we introduce a prompt self-rewriting mechanism to enhance generation diversity. Extensive experiments demonstrate the effectiveness of unifying multimodal understanding and generation in WeGen and show that it achieves state-of-the-art performance across various visual generation benchmarks. They also demonstrate WeGen's potential as a user-friendly design copilot. The code and models will be available at https://github.com/hzphzp/WeGen.
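The abstract describes interleaving auto-labeled dynamics descriptions with the corresponding visual content in a single sequence, so that specified dynamics are generated while unspecified content stays consistent. As a rough illustration only, here is a minimal sketch of how such an interleaved training sample might be assembled; the tokenizer interfaces, special token ids, and loss-masking scheme are assumptions for illustration, not the paper's published implementation.

```python
# Minimal sketch (not the authors' code): assembling one interleaved training
# sequence from a reference frame, its auto-labeled dynamics caption, and the
# target frame. `text_tokenizer`, `visual_tokenizer`, and the special token
# ids are hypothetical placeholders, not WeGen's published API.
from dataclasses import dataclass
from typing import List

BOI, EOI = 32000, 32001  # assumed "begin/end of image" token ids

@dataclass
class InterleavedSample:
    token_ids: List[int]   # unified sequence fed to the model
    loss_mask: List[bool]  # supervise only the target-frame tokens

def build_sample(ref_frame, target_frame, dynamics_caption,
                 text_tokenizer, visual_tokenizer) -> InterleavedSample:
    ref_tokens = visual_tokenizer.encode(ref_frame)        # content to keep
    text_tokens = text_tokenizer.encode(dynamics_caption)  # what changes
    tgt_tokens = visual_tokenizer.encode(target_frame)     # desired result

    token_ids = (
        [BOI] + ref_tokens + [EOI]    # reference: unspecified content
        + text_tokens                 # instruction: specified dynamics
        + [BOI] + tgt_tokens + [EOI]  # target: supervised generation
    )
    # Mask the loss so gradients come only from the target-image span: the
    # model must realize the described dynamics while carrying over content
    # the instruction leaves untouched.
    loss_mask = (
        [False] * (len(ref_tokens) + 2 + len(text_tokens))
        + [True] * (len(tgt_tokens) + 2)
    )
    return InterleavedSample(token_ids, loss_mask)
```

Restricting supervision to the target span is one plausible way to encourage copying of unspecified content from the reference; the abstract does not specify these training details.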
Problem

Research questions and friction points this paper is trying to address.

Generating imaginative, diverse outputs when instructions are less detailed
Maintaining consistency with user-provided references during iterative generation
Improving creativity and diversity in visual content generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unifies multimodal generation and understanding
Curates a large-scale video dataset with auto-labeled object dynamics
Introduces a prompt self-rewriting mechanism for diverse outputs (see the sketch after this list)
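Prompt self-rewriting is only named above; the following is a minimal sketch of the general idea, assuming a hypothetical model interface (rewrite_prompt, generate) rather than WeGen's actual API.

```python
# Minimal sketch (assumed interface, not WeGen's published API): expand a
# terse user instruction into several richer prompts before generation, so a
# vague request yields diverse candidate images instead of near-duplicates.
def generate_diverse(model, user_prompt: str, n_candidates: int = 4):
    images = []
    for _ in range(n_candidates):
        # The model rewrites its own input prompt, filling in plausible
        # details (style, layout, lighting) the user left unspecified.
        detailed_prompt = model.rewrite_prompt(user_prompt, temperature=1.0)
        images.append(model.generate(detailed_prompt))
    return images
```

The intent is that each rewrite fills in different unspecified details, so the candidates returned for a sparse instruction vary meaningfully.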
👥 Authors
Zhipeng Huang
Microsoft Research Asia and University of Science and Technology of China
Multi-Modality, Computer Vision
Shaobin Zhuang
Shanghai Jiaotong University
Video Generation, Computer Vision
Canmiao Fu
WeChat, Tencent Inc
Binxin Yang
WeChat, Tencent Inc
Ying Zhang
WeChat, Tencent Inc
Chong Sun
WeChat, Tencent Inc
Computer Vision
Zhizheng Zhang
Galbot
Yali Wang
Chinese Academy of Sciences
Chen Li
WeChat, Tencent Inc
Zheng-Jun Zha
University of Science and Technology of China