InternVL-U: Democratizing Unified Multimodal Models for Understanding, Reasoning, Generation and Editing

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inherent trade-off in existing unified multimodal models, which struggle to balance semantic comprehension, reasoning, generation, and editing efficiently. To overcome this limitation, the authors propose InternVL-U, a lightweight unified model with only 4 billion parameters. InternVL-U pairs modality-specific modules with decoupled visual representations, employs unified context modeling, and integrates an MMDiT-based visual generation head trained on chain-of-thought (CoT)-guided synthetic data of high semantic density. Despite its compact size, the model achieves state-of-the-art performance across diverse generation and editing tasks, outperforming baselines with over three times as many parameters, such as the 14B-parameter BAGEL, while maintaining strong multimodal understanding and reasoning capabilities.

📝 Abstract
Unified multimodal models (UMMs) that integrate understanding, reasoning, generation, and editing face inherent trade-offs between maintaining strong semantic comprehension and acquiring powerful generation capabilities. In this report, we present InternVL-U, a lightweight 4B-parameter UMM that democratizes these capabilities within a unified framework. Guided by the principles of unified contextual modeling and modality-specific modular design with decoupled visual representations, InternVL-U integrates a state-of-the-art Multimodal Large Language Model (MLLM) with a specialized MMDiT-based visual generation head. To further bridge the gap between aesthetic generation and high-level intelligence, we construct a comprehensive data synthesis pipeline targeting high-semantic-density tasks, such as text rendering and scientific reasoning, under a reasoning-centric paradigm that leverages Chain-of-Thought (CoT) to better align abstract user intent with fine-grained visual generation details. Extensive experiments demonstrate that InternVL-U achieves a superior performance-efficiency balance. Despite using only 4B parameters, it consistently outperforms unified baseline models over 3x its scale, such as BAGEL (14B), on various generation and editing tasks, while retaining strong multimodal understanding and reasoning capabilities.
Problem

Research questions and friction points this paper is trying to address.

Unified Multimodal Models
Semantic Comprehension
Generation Capabilities
Multimodal Understanding
Reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified Multimodal Model
Modality-Specific Modular Design
MMDiT-based Visual Generation
Chain-of-Thought Reasoning
High-Semantic-Density Data Synthesis
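The design the summary and innovation list describe (a unified context shared by an MLLM backbone, decoupled visual representations per modality-specific branch, an MMDiT-style generation head, and a CoT step before generation) can be sketched roughly as follows. This is an illustrative outline only, not the authors' implementation; every class and method name here is hypothetical.

```python
# Hypothetical sketch of the high-level InternVL-U-style design described
# in the abstract: decoupled visual representations feeding one unified
# context, with a separate generation head. Names are illustrative.

class UnderstandingEncoder:
    """Produces semantic visual tokens for comprehension/reasoning."""
    def encode(self, image):
        return {"kind": "semantic", "source": image}

class GenerationEncoder:
    """Produces low-level visual latents for the generation head."""
    def encode(self, image):
        return {"kind": "latent", "source": image}

class MMDiTHead:
    """Stand-in for an MMDiT-style diffusion generation head."""
    def generate(self, context):
        return f"image<conditioned on {len(context)} context tokens>"

class UnifiedModel:
    """Unified context modeling: one sequence carries text tokens and
    both kinds of visual tokens; the modality-specific encoders and the
    generation head remain decoupled modules."""
    def __init__(self):
        self.und_enc = UnderstandingEncoder()
        self.gen_enc = GenerationEncoder()
        self.head = MMDiTHead()

    def build_context(self, text_tokens, images):
        context = list(text_tokens)
        for img in images:
            # Decoupled representations: each image contributes one
            # semantic token and one generation latent to the context.
            context.append(self.und_enc.encode(img))
            context.append(self.gen_enc.encode(img))
        return context

    def edit(self, instruction, image):
        # A CoT-style planning step precedes generation, standing in for
        # the paper's reasoning-centric paradigm.
        cot = f"plan: {instruction}"
        context = self.build_context([instruction, cot], [image])
        return self.head.generate(context)

model = UnifiedModel()
print(model.edit("add falling snow", "photo.png"))
# image<conditioned on 4 context tokens>
```

The point of the sketch is the data flow, not the modules themselves: understanding and generation never share a visual representation, yet both condition the same context that the generation head reads.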
Authors

Changyao Tian
MMLab, CUHK
Computer Vision, Deep Learning
Danni Yang
Xiamen University
Multimodal Learning, Video Editing
Guanzhou Chen
Shanghai Jiao Tong University; Shanghai AI Laboratory
Erfei Cui
Shanghai AI Laboratory; Shanghai Jiao Tong University
Computer Vision
Zhaokai Wang
Shanghai Jiao Tong University; Shanghai AI Laboratory
Computer Vision, AI Music, MLLMs
Yuchen Duan
Shanghai AI Laboratory
Penghao Yin
Shanghai AI Laboratory
Sitao Chen
Shanghai AI Laboratory
Ganlin Yang
University of Science and Technology of China; Shanghai AI Laboratory
Computer Vision, 3D Vision, Multimodal Models
Mingxin Liu
Shanghai Jiao Tong University
Zirun Zhu
Shanghai Jiao Tong University
Ziqian Fan
South China University of Technology
Leyao Gu
Shanghai Jiao Tong University
Haomin Wang
Shanghai AI Laboratory; Shanghai Jiao Tong University
Computer Vision, Multimodal Large Language Models
Qi Wei
Associate Professor of Bioengineering, George Mason University
Biomechanics, Modeling and Simulation, Biomedical Imaging
Jinhui Yin
Shanghai AI Laboratory
Xue Yang
Shanghai Jiao Tong University
Zhihang Zhong
Researcher, Shanghai AI Laboratory
Computer Vision, Deep Learning
Qi Qin
Shanghai AI Laboratory
Yi Xin
California Institute of Technology
Industrial Organization, Econometrics
Bin Fu
Shanghai AI Laboratory
Yihao Liu
Shanghai Artificial Intelligence Laboratory
Computer Vision, Multimodal Generation, Image Restoration
Jiaye Ge
Shanghai AI Laboratory
Qipeng Guo
Fudan University
Gen Luo
Shanghai AI Laboratory
Computer Vision, Vision and Language