CARPE: Context-Aware Image Representation Prioritization via Ensemble for Large Vision-Language Models

📅 2026-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a common limitation of large vision-language models: they often underperform their underlying vision encoders on vision-centric tasks such as image classification because of ineffective multimodal fusion. To overcome this, the authors propose a model-agnostic framework that introduces a visual ensemble layer coupled with a context-aware dynamic integration strategy. This approach adaptively weights visual representations against language-driven reasoning, establishing, for the first time, a context-dependent mechanism for prioritizing image features. The method is compatible with mainstream open-source architectures and consistently improves performance on image classification and diverse vision-language benchmarks, enhancing cross-task generalization.

📝 Abstract
Recent advancements in Large Vision-Language Models (LVLMs) have pushed them closer to becoming general-purpose assistants. Despite their strong performance, LVLMs still struggle with vision-centric tasks such as image classification, underperforming their base vision encoders, which are often CLIP-based models. To address this limitation, we propose Context-Aware Image Representation Prioritization via Ensemble (CARPE), a novel, model-agnostic framework that introduces vision-integration layers and a context-aware ensemble strategy to identify when to prioritize image representations and when to rely on the reasoning capabilities of the language model. This design enhances the model's ability to adaptively weight the visual and textual modalities and enables it to capture diverse aspects of image representations, leading to consistent improvements in generalization across classification and vision-language benchmarks. Extensive experiments demonstrate that CARPE not only improves performance on image classification benchmarks but also enhances results across a range of vision-language benchmarks. Finally, CARPE can be integrated with most open-source LVLMs that consist of a vision encoder and a language model, ensuring its adaptability across diverse architectures.
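The abstract describes a context-aware ensemble that decides how strongly to weight several image representations given the textual context. The paper's exact mechanism is not reproduced here; the following is a minimal sketch of one plausible realization, where a learned gating projection (`W_gate`, hypothetical) scores each candidate visual representation against a pooled text embedding and fuses them with softmax weights. All names and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def context_aware_ensemble(visual_feats, text_ctx, W_gate):
    """Fuse k candidate image representations via context-derived weights.

    visual_feats: (k, d) array -- k candidate visual representations
    text_ctx:     (d,)  array  -- pooled text/context embedding
    W_gate:       (d, k) array -- learned gating projection (hypothetical)
    """
    # Score each visual representation's relevance to the current context
    scores = text_ctx @ W_gate          # shape (k,)
    weights = softmax(scores)           # nonnegative, sums to 1
    # Convex combination of the candidate representations
    fused = weights @ visual_feats      # shape (d,)
    return fused, weights
```

In an LVLM, the fused representation would replace (or be added to) the single vision-encoder output fed to the language model, letting the context decide which visual view to prioritize.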
Problem

Research questions and friction points this paper is trying to address.

Large Vision-Language Models
image classification
vision-centric tasks
performance gap
visual representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

context-aware ensemble
image representation prioritization
vision-language models
model-agnostic framework
adaptive modality weighting