EVEv2: Improved Baselines for Encoder-Free Vision-Language Models

📅 2025-02-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
While encoder-free vision-language models (VLMs) have rapidly approached the performance of encoder-based counterparts, their underlying mechanisms and optimization principles remain poorly understood. Method: We propose the first end-to-end, decoder-only VLM framework—eliminating visual encoders entirely—and introduce hierarchical visual tokenization, intra-modal decomposition, and cross-level alignment. We further design a joint training strategy tailored to the encoder-free paradigm, integrating contrastive learning with instruction tuning. Contribution/Results: This work provides the first systematic empirical validation of strong generalization capability in pure decoder architectures for multimodal understanding. Experiments demonstrate competitive performance against state-of-the-art encoder-based methods across multiple visual reasoning and multimodal understanding benchmarks, while achieving a 40% improvement in training data efficiency. The code and model are publicly released.

Technology Category

Application Category

📝 Abstract
Existing encoder-free vision-language models (VLMs) are rapidly narrowing the performance gap with their encoder-based counterparts, highlighting the promising potential for unified multimodal systems with structural simplicity and efficient deployment. We systematically clarify the performance gap between VLMs using pre-trained vision encoders, discrete tokenizers, and minimalist visual layers from scratch, deeply excavating the under-examined characteristics of encoder-free VLMs. We develop efficient strategies for encoder-free VLMs that rival mainstream encoder-based ones. After an in-depth investigation, we launch EVEv2.0, a new and improved family of encoder-free VLMs. We show that: (i) Properly decomposing and hierarchically associating vision and language within a unified model reduces interference between modalities. (ii) A well-designed training strategy enables effective optimization for encoder-free VLMs. Through extensive evaluation, our EVEv2.0 represents a thorough study for developing a decoder-only architecture across modalities, demonstrating superior data efficiency and strong vision-reasoning capability. Code is publicly available at: https://github.com/baaivision/EVE.
Problem

Research questions and friction points this paper is trying to address.

Improving encoder-free vision-language models
Reducing interference between vision and language
Enhancing data efficiency and vision-reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Encoder-free vision-language models
Hierarchical modality association
Efficient decoder-only architecture
🔎 Similar Papers