🤖 AI Summary
Existing monolithic vision-language models (VLMs) lack a unified embedding module for vision and language inputs, making it difficult to gain strong multimodal capabilities without degrading the native linguistic performance of the underlying large language model (LLM). To address this, we propose HoVLE, a high-performance monolithic VLM that preserves the original LLM architecture while enabling native multimodal understanding. Its core innovation is a holistic embedding module that maps visual and textual inputs into a shared semantic space. HoVLE employs a three-stage training paradigm: (1) distillation of visual features from a pre-trained vision encoder and text embeddings from the LLM, (2) next-token prediction on multi-modal data to align the embeddings, and (3) instruction fine-tuning. On benchmarks including MMBench and OCRBench, HoVLE approaches the performance of leading compositional VLMs (e.g., Qwen-VL, LLaVA) and significantly outperforms prior monolithic models (e.g., KOSMOS-2, mPLUG-Owl2). The model is publicly released on Hugging Face.
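The core idea is easiest to see in code. Below is a minimal, illustrative sketch of a holistic embedding module: a small shared transformer that projects image patches and embeds text tokens, then processes both jointly so the downstream LLM receives one homogeneous token sequence. All layer sizes, the attention pattern, and class/parameter names here are assumptions for illustration, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class HolisticEmbedding(nn.Module):
    """Toy sketch: map image patches and text tokens into one shared
    embedding space, so an unmodified LLM can consume the result."""

    def __init__(self, vocab_size=32000, patch_dim=3 * 14 * 14,
                 hidden_dim=2048, num_layers=4, num_heads=16):
        super().__init__()
        # Project flattened image patches to the LLM's token width.
        self.patch_proj = nn.Linear(patch_dim, hidden_dim)
        # Embed text tokens into the same space.
        self.token_embed = nn.Embedding(vocab_size, hidden_dim)
        # Shared layers refine both modalities jointly.
        layer = nn.TransformerEncoderLayer(hidden_dim, num_heads,
                                           batch_first=True)
        self.shared_layers = nn.TransformerEncoder(layer, num_layers)

    def forward(self, patches, token_ids):
        # patches: (B, N_img, patch_dim); token_ids: (B, N_txt)
        img = self.patch_proj(patches)
        txt = self.token_embed(token_ids)
        seq = torch.cat([img, txt], dim=1)  # one interleaved sequence
        return self.shared_layers(seq)      # embeddings the LLM reads as-is
```

The design point is that everything after this module is a stock LLM: images become "just more tokens," which is what lets the language backbone stay untouched.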
📄 Abstract
The rapid advance of Large Language Models (LLMs) has catalyzed the development of Vision-Language Models (VLMs). Monolithic VLMs, which avoid modality-specific encoders, offer a promising alternative to the compositional ones but face the challenge of inferior performance. Most existing monolithic VLMs require tuning pre-trained LLMs to acquire vision abilities, which may degrade their language capabilities. To address this dilemma, this paper presents a novel high-performance monolithic VLM named HoVLE. We note that LLMs have been shown capable of interpreting images when image embeddings are aligned with text embeddings. The challenge for current monolithic VLMs actually lies in the lack of a holistic embedding module for both vision and language inputs. Therefore, HoVLE introduces a holistic embedding module that converts visual and textual inputs into a shared space, allowing LLMs to process images in the same way as texts. Furthermore, a multi-stage training strategy is carefully designed to empower the holistic embedding module. It is first trained to distill visual features from a pre-trained vision encoder and text embeddings from the LLM, enabling large-scale training with unpaired random images and text tokens. The whole model further undergoes next-token prediction on multi-modal data to align the embeddings. Finally, an instruction-tuning stage is incorporated. Our experiments show that HoVLE achieves performance close to leading compositional models on various benchmarks, outperforming previous monolithic models by a large margin. Model available at https://huggingface.co/OpenGVLab/HoVLE.
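The first training stage can be pictured as two regression targets applied to the module's two halves. The sketch below assumes an MSE objective and frozen teachers; the function names (`holistic`, `vision_teacher`, `llm_embed`) and the exact loss are illustrative stand-ins, not the paper's verified recipe. Because the two targets are independent, the images and text tokens need not be paired, which is what permits large-scale training on random data.

```python
import torch
import torch.nn.functional as F

def distillation_step(holistic, vision_teacher, llm_embed,
                      image_patches, token_ids):
    """One illustrative stage-1 step on *unpaired* images and text:
    push the holistic module's visual outputs toward a frozen vision
    encoder's features, and its textual outputs toward the frozen
    LLM's own token embeddings."""
    out = holistic(image_patches, token_ids)
    n_img = image_patches.shape[1]
    img_out, txt_out = out[:, :n_img], out[:, n_img:]

    with torch.no_grad():  # teachers stay frozen
        img_target = vision_teacher(image_patches)  # pre-trained visual features
        txt_target = llm_embed(token_ids)           # LLM token embeddings

    return F.mse_loss(img_out, img_target) + F.mse_loss(txt_out, txt_target)
```

After this distillation stage, the later stages (multi-modal next-token prediction, then instruction tuning) train with the standard autoregressive language-modeling loss, so no new objective is needed there.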
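Since the checkpoint is on Hugging Face, loading it presumably follows the standard `transformers` remote-code pattern used by other OpenGVLab releases. This is a hypothetical sketch: the dtype and device choices are assumptions, and the exact inference interface should be taken from the model card itself.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical loading sketch; see the model card at
# https://huggingface.co/OpenGVLab/HoVLE for the documented interface.
path = "OpenGVLab/HoVLE"
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,   # assumed; halves memory vs. fp32
    trust_remote_code=True,       # monolithic architectures ship custom code
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
```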