🤖 AI Summary
This work addresses a limitation of existing vision-language models, which typically decouple visual encoders from large language models and struggle to integrate hierarchical visual features effectively. To overcome this, the authors propose the HIVE framework, which introduces, for the first time, a hierarchical cross-attention mechanism between the two components, enabling structured multi-layer feature interaction. A three-stage progressive pretraining strategy is further designed to enhance multimodal alignment and gradient flow. By moving beyond conventional flattened embeddings, HIVE achieves state-of-the-art performance across multiple benchmarks, significantly outperforming current self-attention-based approaches on image classification as well as on vision-language tasks including MME, GQA, OK-VQA, and ScienceQA.
📝 Abstract
The field of computer vision has experienced significant advancements through scalable vision encoders and multimodal pre-training frameworks. However, existing approaches often treat vision encoders and large language models (LLMs) as independent modules, limiting the integration of hierarchical visual features. In this work, we propose HIVE (Hierarchical Pre-Training of Vision Encoders), a novel framework that enhances vision-language alignment by introducing hierarchical cross-attention between the vision encoder and the LLM. Unlike conventional methods that flatten image embeddings into a single token sequence, HIVE enables structured feature fusion across multiple encoder layers, improving gradient flow and representation learning. To optimize this interaction, we introduce a three-stage training strategy that progressively aligns the vision encoder with the LLM, ensuring stable optimization and effective multimodal fusion. Empirical evaluations demonstrate that HIVE achieves superior performance not only on image classification but also on various vision-language tasks, outperforming self-attention-based methods on benchmarks such as MME, GQA, OK-VQA, and ScienceQA. Our results highlight the benefits of hierarchical feature integration, paving the way for more efficient and expressive vision-language models.
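To make the core idea more concrete, below is a minimal NumPy sketch of what hierarchical cross-attention between multi-layer visual features and LLM token states *might* look like. This is an illustration of the general technique only, not the authors' actual architecture: the function names, residual-update scheme, and single-head attention are assumptions, and the paper's implementation may differ in every detail.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d):
    # queries: (T, d) LLM token states; keys_values: (N, d) visual features.
    # Single-head, no learned projections -- purely illustrative.
    scores = queries @ keys_values.T / np.sqrt(d)   # (T, N) attention logits
    weights = softmax(scores, axis=-1)              # attend over patches
    return weights @ keys_values                    # (T, d) attended features

def hierarchical_fusion(llm_states, vision_layers, d):
    # Fuse visual features from multiple encoder layers via one
    # cross-attention pass per layer (shallow -> deep), instead of
    # flattening all layers into a single token sequence.
    fused = llm_states
    for feats in vision_layers:
        fused = fused + cross_attention(fused, feats, d)  # residual update
    return fused

# Toy shapes: 4 LLM tokens, 3 encoder layers of 16 patch features, dim 8.
rng = np.random.default_rng(0)
d = 8
llm = rng.normal(size=(4, d))
layers = [rng.normal(size=(16, d)) for _ in range(3)]
out = hierarchical_fusion(llm, layers, d)
print(out.shape)  # (4, 8): one fused state per LLM token
```

The point of the hierarchy is that shallow encoder layers contribute low-level detail and deep layers contribute semantics, and each gets its own interaction with the language states rather than being collapsed into one flat embedding sequence.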