🤖 AI Summary
Mainstream multimodal models (e.g., LLaVA) suffer from an inherent modality mismatch between text and vision, leading to inefficient visual feature integration. To address this, we propose LLaViT, a framework that empowers the large language model (LLM) itself to serve as the visual encoder, achieving endogenous modality unification. Our key contributions are: (1) dedicated QKV projections for visual tokens; (2) a cross-modal bidirectional attention mechanism; and (3) fusion of global and local, multi-granularity visual representations. By abandoning the conventional dual-encoder paradigm, LLaViT resolves modality fragmentation at the architectural level. Extensive experiments demonstrate that LLaViT significantly outperforms LLaVA on benchmarks including MMBench and OCRBench, surpassing even baselines with twice its parameter count and validating its effectiveness, generalizability, and scalability.
📝 Abstract
Despite the remarkable success of the LLaVA architecture for vision-language tasks, its design struggles to integrate visual features effectively due to the inherent mismatch between the text and vision modalities. We tackle this issue from a novel perspective in which the LLM serves not only as a language model but also as a powerful vision encoder. To this end, we present LLaViT - Large Language Models as extended Vision Transformers - which enables the LLM to simultaneously function as a vision encoder through three key modifications: (1) learning separate QKV projections for the vision modality, (2) enabling bidirectional attention on visual tokens, and (3) incorporating both global and local visual representations. Through extensive controlled experiments on a wide range of LLMs, we demonstrate that LLaViT significantly outperforms the baseline LLaVA method on a multitude of benchmarks, even surpassing models with double its parameter count, establishing a more effective approach to vision-language modeling.
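The first two modifications can be made concrete with a small sketch. The snippet below is a minimal NumPy illustration, not the authors' implementation; all function and weight names are hypothetical. It shows (a) an attention mask that stays causal for text tokens but is bidirectional among visual tokens, and (b) routing each token through modality-specific QKV projection weights:

```python
import numpy as np

def build_hybrid_mask(is_visual):
    """Build a boolean attention mask: True at (i, j) means token i
    may attend to token j. Text follows standard causal masking;
    visual tokens additionally attend to each other in both directions."""
    vis = np.asarray(is_visual, dtype=bool)
    n = len(vis)
    causal = np.tril(np.ones((n, n), dtype=bool))   # standard LLM masking
    bidirectional = np.outer(vis, vis)              # visual <-> visual, any direction
    return causal | bidirectional

def modality_qkv(h, text_proj, vision_proj, is_visual):
    """Project hidden states h (n, d) to Q, K, V, selecting per token:
    text tokens use the LLM's original weights, visual tokens use
    separately learned ones. Each *_proj is a dict with 'q', 'k', 'v'
    weight matrices of shape (d, d)."""
    vis = np.asarray(is_visual, dtype=bool)[:, None]
    return {
        name: np.where(vis, h @ vision_proj[name], h @ text_proj[name])
        for name in ("q", "k", "v")
    }
```

For a sequence of two visual tokens followed by two text tokens, `build_hybrid_mask([True, True, False, False])` allows the first visual token to attend to the second (a future position), while the first text token still cannot attend to the later one.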