MiniCPM-V 4.5: Cooking Efficient MLLMs via Architecture, Data, and Training Recipe

📅 2025-09-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) suffer from low training and inference efficiency, particularly at scale. To address this, we propose an efficient MLLM framework targeting the 8B-parameter regime. Our method introduces a unified 3D-Resampler architecture for compact joint encoding of images and videos; a lightweight, unified multi-task learning paradigm for document understanding and text recognition that eliminates complex data engineering; and a hybrid reinforcement learning strategy that jointly optimizes short and long reasoning modes. Trained on high-quality image, text, and video data, our approach achieves state-of-the-art performance among models under 30B parameters on VideoMME, using only 46.7% of the GPU memory and 8.7% of the inference time of Qwen2.5-VL 7B, while surpassing GPT-4o-latest and Qwen2.5-VL 72B on OpenCompass.
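The 3D-Resampler is only described at a high level here. As a rough illustration, the sketch below shows a perceiver-style resampler in PyTorch: a small set of learned query tokens cross-attends over the flattened (time × height × width) grid of vision tokens, so a multi-frame clip compresses into one fixed token budget. The module names, dimensions, and single-layer design are assumptions for illustration, not the released architecture.

```python
# Hedged sketch of a 3D resampler: learned queries cross-attend over the
# joint (time x height x width) vision-token grid, compressing a whole
# clip into a fixed number of LLM tokens. Illustrative, not MiniCPM-V's
# actual implementation.
import torch
import torch.nn as nn

class Resampler3D(nn.Module):
    def __init__(self, num_queries=64, dim=1024, num_heads=8):
        super().__init__()
        # Fixed-size learned query set: the output token budget.
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)

    def forward(self, vision_tokens):
        # vision_tokens: (batch, frames, height*width, dim) from a vision encoder.
        b, t, n, d = vision_tokens.shape
        # Flatten the 3D grid so attention sees all frames jointly.
        kv = self.norm_kv(vision_tokens.reshape(b, t * n, d))
        q = self.norm_q(self.queries).expand(b, -1, -1)
        out, _ = self.attn(q, kv, kv)  # (batch, num_queries, dim)
        return out

# A 6-frame clip of 14x14 patch tokens collapses to 64 tokens per clip.
resampler = Resampler3D()
tokens = torch.randn(2, 6, 196, 1024)
print(resampler(tokens).shape)  # torch.Size([2, 64, 1024])
```

The efficiency intuition is that the language model sees a constant 64 tokens per clip regardless of frame count, which is how compact joint video encoding can cut both memory and latency.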

📝 Abstract
Multimodal Large Language Models (MLLMs) are undergoing rapid progress and represent the frontier of AI development. However, their training and inference efficiency has emerged as a core bottleneck in making MLLMs more accessible and scalable. To address these challenges, we present MiniCPM-V 4.5, an 8B-parameter model designed for high efficiency and strong performance. We introduce three core improvements in model architecture, data strategy, and training method: a unified 3D-Resampler model architecture for highly compact encoding of images and videos, a unified learning paradigm for document knowledge and text recognition without heavy data engineering, and a hybrid reinforcement learning strategy for proficiency in both short and long reasoning modes. Comprehensive experimental results on the OpenCompass evaluation show that MiniCPM-V 4.5 surpasses widely used proprietary models such as GPT-4o-latest, as well as significantly larger open-source models such as Qwen2.5-VL 72B. Notably, this strong performance is achieved with remarkable efficiency: on the widely adopted VideoMME benchmark, MiniCPM-V 4.5 achieves state-of-the-art performance among models under 30B parameters, using just 46.7% of the GPU memory and 8.7% of the inference time of Qwen2.5-VL 7B.
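The unified learning paradigm for document knowledge and text recognition suggests folding OCR-style transcription and document understanding into a single training stream instead of separately engineered pipelines. Below is a minimal sketch of that idea; the task names, prompt templates, and record fields are hypothetical assumptions, not the paper's data format.

```python
# Hedged sketch of a unified multi-task stream: text-recognition and
# document-understanding samples are mapped to one chat-style format and
# interleaved, so both skills are learned jointly. Illustrative only.
import random

TEMPLATES = {
    "text_recognition": "Transcribe all text in the image.",
    "document_qa": "Answer the question about this document: {question}",
}

def to_training_sample(record):
    """Map a raw record to one (image, prompt, target) training pair."""
    if record["task"] == "text_recognition":
        prompt = TEMPLATES["text_recognition"]
        target = record["ocr_text"]
    else:
        prompt = TEMPLATES["document_qa"].format(question=record["question"])
        target = record["answer"]
    return {"image": record["image"], "prompt": prompt, "target": target}

def unified_stream(ocr_records, doc_records, seed=0):
    """One shuffled stream, avoiding separate per-task pipelines."""
    records = ([dict(r, task="text_recognition") for r in ocr_records]
               + [dict(r, task="document_qa") for r in doc_records])
    random.Random(seed).shuffle(records)
    return [to_training_sample(r) for r in records]

stream = unified_stream(
    [{"image": "receipt.png", "ocr_text": "TOTAL $12.50"}],
    [{"image": "paper.png", "question": "Who is the first author?",
      "answer": "Tianyu Yu"}],
)
print(stream[0]["prompt"])
```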
Problem

Research questions and friction points this paper is trying to address.

Improving training and inference efficiency of multimodal large language models
Addressing scalability and accessibility challenges in MLLM development
Achieving strong performance with a compact 8B-parameter model
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D-Resampler architecture for compact joint image and video encoding
Unified learning paradigm for document knowledge and text recognition
Hybrid reinforcement learning for both short and long reasoning modes (sketched below)
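As a sketch of what a hybrid short/long reasoning RL setup could look like: one policy is trained on a mix of short-answer and long-reasoning rollouts with a shared correctness reward plus a mode-dependent length term. The mode tags, reward weights, and GRPO-style group baseline below are illustrative assumptions, not the paper's recipe.

```python
# Hedged sketch of hybrid-mode reward shaping for RL over short and long
# reasoning rollouts. Weights and helpers are placeholders.
import statistics

def mode_reward(mode, answer, reference):
    correct = float(answer.strip() == reference.strip())
    if mode == "short":
        return correct - 0.001 * len(answer)  # discourage rambling answers
    return correct                            # long mode: let it think

def group_advantages(rewards):
    """Group-normalized advantages (a GRPO-style baseline over rollouts)."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mean) / std for r in rewards]

# Rollouts for one prompt, sampled in both modes, scored by one reward.
rollouts = [("short", "4"), ("long", "4"), ("short", "5")]
rewards = [mode_reward(mode, ans, "4") for mode, ans in rollouts]
print(group_advantages(rewards))
```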
👥 Authors
Tianyu Yu (Tsinghua University)
Zefan Wang (Tsinghua University)
Chongyi Wang (MiniCPM-V Team, OpenBMB)
Fuwei Huang (MiniCPM-V Team, OpenBMB)
Wenshuo Ma (MiniCPM-V Team, OpenBMB)
Zhihui He (MiniCPM-V Team, OpenBMB)
Tianchi Cai (LLM Alignment, Minimax)
Weize Chen (Tsinghua University)
Yuxiang Huang (Tsinghua University)
Yuanqian Zhao (MiniCPM-V Team, OpenBMB)
Bokai Xu (MiniCPM-V Team, OpenBMB)
Junbo Cui (Tsinghua University)
Yingjing Xu (MiniCPM-V Team, OpenBMB)
Liqing Ruan (MiniCPM-V Team, OpenBMB)
Luoyuan Zhang (MiniCPM-V Team, OpenBMB)
Hanyu Liu (Key Laboratory of Material Simulation Methods and Software of MOE, Jilin University)
Jingkun Tang (MiniCPM-V Team, OpenBMB)
Hongyuan Liu (Stevens Institute of Technology)
Qining Guo (MiniCPM-V Team, OpenBMB)
Wenhao Hu (MiniCPM-V Team, OpenBMB)
Bingxiang He (Tsinghua University)
Jie Zhou (MiniCPM-V Team, OpenBMB)
Jie Cai (MiniCPM-V Team, OpenBMB)
Ji Qi (MiniCPM-V Team, OpenBMB)
Zonghao Guo (University of Chinese Academy of Sciences)