🤖 AI Summary
Existing monolithic multimodal large language models (MLLMs), which fold visual encoding and language decoding into a single network, often suffer from unstable optimization and catastrophic forgetting during pre-training. The paper addresses this by embedding a new visual parameter space into a pre-trained LLM so that visual knowledge can be learned stably from noisy data via delta tuning. Concretely, it introduces Mono-InternVL, a monolithic MLLM that adds a set of visual experts through a multimodal mixture-of-experts (MoE) architecture and is trained with a progressive Endogenous Visual Pre-training (EViP) strategy. A follow-up model, Mono-InternVL-1.5, uses an improved recipe, EViP++, which adds visual attention experts, reorganizes pre-training for data efficiency, and employs a fused CUDA kernel to accelerate MoE inference, cutting both training and inference costs. Across 15 benchmarks, Mono-InternVL outperforms existing monolithic MLLMs on 12, including a 114-point gain over Emu3 on OCRBench, while Mono-InternVL-1.5 matches its modular counterpart InternVL-1.5 and reduces first-token latency by up to 69%, showing that monolithic MLLMs can be competitive in both accuracy and efficiency.
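To make the routing idea in the summary concrete, below is a minimal, hedged sketch of a modality-routed mixture-of-experts feed-forward layer: text tokens pass through the frozen FFN of the pre-trained LLM, while image tokens are routed to a newly added, trainable visual expert, so only the new parameters are updated (the delta-tuning behaviour that keeps optimization stable). This is an illustrative reading of the design, not the released implementation; names such as `ModalityRoutedFFN` and `is_visual` are made up for the example.

```python
import copy

import torch
import torch.nn as nn


class ModalityRoutedFFN(nn.Module):
    """Feed-forward MoE layer with hard routing by token modality (sketch)."""

    def __init__(self, text_ffn: nn.Module):
        super().__init__()
        # Frozen FFN taken from the pre-trained language model (text path).
        self.text_ffn = text_ffn
        self.text_ffn.requires_grad_(False)
        # Trainable visual expert; here simply a copy of the text FFN so the
        # two experts share their initialization (an assumption of this sketch).
        self.visual_ffn = copy.deepcopy(text_ffn)
        self.visual_ffn.requires_grad_(True)

    def forward(self, hidden_states: torch.Tensor, is_visual: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden); is_visual: (batch, seq_len) bool
        out = torch.empty_like(hidden_states)
        # Hard routing: the token type decides the expert, no learned router.
        out[is_visual] = self.visual_ffn(hidden_states[is_visual])
        out[~is_visual] = self.text_ffn(hidden_states[~is_visual])
        return out


# Tiny usage example with a toy FFN standing in for the LLM's MLP block.
hidden = 32
toy_ffn = nn.Sequential(nn.Linear(hidden, 4 * hidden), nn.GELU(), nn.Linear(4 * hidden, hidden))
layer = ModalityRoutedFFN(toy_ffn)
x = torch.randn(2, 10, hidden)
is_visual = torch.zeros(2, 10, dtype=torch.bool)
is_visual[:, :4] = True          # pretend the first 4 tokens are image patches
y = layer(x, is_visual)          # shape (2, 10, 32)
```

Because only `visual_ffn` receives gradients, the original language weights stay untouched, which is how the monolithic model gains an "endogenous" visual pathway without forgetting its language ability.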
📝 Abstract
This paper focuses on monolithic Multimodal Large Language Models (MLLMs), which integrate visual encoding and language decoding into a single model. Existing structures and pre-training strategies for monolithic MLLMs often suffer from unstable optimization and catastrophic forgetting. To address these challenges, our key idea is to embed a new visual parameter space into a pre-trained LLM, enabling stable learning of visual knowledge from noisy data via delta tuning. Based on this principle, we first introduce Mono-InternVL, an advanced monolithic MLLM that incorporates a set of visual experts through a multimodal mixture-of-experts architecture. In addition, we design Endogenous Visual Pre-training (EViP) for Mono-InternVL to maximize its visual capabilities via progressive learning. Mono-InternVL achieves competitive performance against existing MLLMs, but its pre-training comes at a relatively high data cost. We therefore further present Mono-InternVL-1.5, a cheaper and stronger monolithic MLLM equipped with an improved EViP (EViP++). EViP++ introduces additional visual attention experts into Mono-InternVL-1.5 and re-organizes the pre-training process in a more efficient manner. During inference, Mono-InternVL-1.5 uses a fused CUDA kernel to speed up its MoE operations. With these designs, Mono-InternVL-1.5 significantly reduces training and inference costs while maintaining competitive performance with Mono-InternVL. To evaluate our approach, we conduct extensive experiments across 15 benchmarks. Results demonstrate that Mono-InternVL outperforms existing monolithic MLLMs on 12 out of 15 benchmarks, e.g., a 114-point improvement over Emu3 on OCRBench. Compared to its modular counterpart, InternVL-1.5, Mono-InternVL-1.5 achieves similar multimodal performance while reducing first-token latency by up to 69%. Code and models are released at https://github.com/OpenGVLab/Mono-InternVL.
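The abstract also mentions that EViP++ adds visual attention experts and that a fused CUDA kernel accelerates the MoE operations at inference time. The sketch below is an assumed reading of that design rather than the released code: the same modality routing is extended to the attention projections, so visual tokens use new trainable QKV weights while text tokens keep the frozen ones, and attention is still computed jointly over the mixed sequence. Names such as `ModalityRoutedAttention` are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityRoutedAttention(nn.Module):
    """Self-attention with separate QKV projections per modality (sketch)."""

    def __init__(self, hidden_size: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = hidden_size // num_heads
        # Frozen projections inherited from the pre-trained LLM (text path).
        self.qkv_text = nn.Linear(hidden_size, 3 * hidden_size)
        self.qkv_text.requires_grad_(False)
        # Newly added, trainable visual attention expert.
        self.qkv_visual = nn.Linear(hidden_size, 3 * hidden_size)
        self.out_proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, x: torch.Tensor, is_visual: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden); is_visual: (batch, seq_len) bool mask
        b, s, d = x.shape
        # Per-token choice of projection weights by modality. For clarity both
        # branches are evaluated densely here; a real implementation would
        # dispatch each token to one branch only.
        qkv = torch.where(is_visual.unsqueeze(-1), self.qkv_visual(x), self.qkv_text(x))
        q, k, v = qkv.chunk(3, dim=-1)
        # The attention itself is shared: one causal pass over the mixed sequence.
        q = q.view(b, s, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, s, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, s, self.num_heads, self.head_dim).transpose(1, 2)
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.out_proj(attn.transpose(1, 2).reshape(b, s, d))
```

Note that this naive routing either evaluates both projection branches on every token or requires per-modality gather/scatter; removing that overhead in a single pass is exactly the kind of saving the fused MoE kernel mentioned in the abstract targets.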