🤖 AI Summary
This work addresses the notable performance gap between large vision-language models (LVLMs) and CLIP-based methods on zero-shot and few-shot image classification, despite LVLMs commonly employing CLIP-pretrained visual encoders. To bridge this gap, the authors propose Head Ensemble Classifiers (HEC), a training-free classification framework that enhances the inter-class separability of visual features through prompt conditioning and, inspired by Gaussian discriminant analysis, automatically selects and ensembles the most discriminative vision and text attention heads within the LVLM. Evaluated across 12 benchmark datasets, HEC achieves state-of-the-art performance in both zero-shot and few-shot settings, substantially narrowing the performance disparity between LVLMs and CLIP-based methods.
📝 Abstract
Current Large Vision-Language Models (LVLMs) excel at many zero-shot tasks such as image captioning, visual question answering, and OCR. However, these same models perform poorly at image classification, underperforming CLIP-based methods. This gap is surprising because many LVLMs use CLIP-pretrained vision encoders. Unlike CLIP, however, LVLMs are not inherently limited by an architecture with independent vision and text encoders; in CLIP, this separation biases classification toward class-name matching rather than joint visual-text reasoning. In this paper we show that, despite their poor raw performance, LVLMs can improve the class separability of visual features at inference through prompt conditioning, and that LVLMs' internal representations, especially attention heads, can outperform the model itself at zero-shot and few-shot classification. We introduce Head Ensemble Classifiers (HEC) to bridge the performance gap between CLIP-based and LVLM-based classification methods. Inspired by Gaussian Discriminant Analysis, HEC ranks the most discriminative vision and text heads and combines them into a training-free classifier. We show that HEC achieves state-of-the-art performance in few-shot and zero-shot classification across 12 datasets.
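To make the core idea concrete, here is a minimal sketch of GDA-inspired head ranking and ensembling. It is not the paper's implementation: the function names (`head_discriminability`, `rank_and_ensemble`), the Fisher-style between/within-class scatter score, and the nearest-class-mean vote are all illustrative assumptions standing in for whatever criterion and combination rule HEC actually uses.

```python
import numpy as np

def head_discriminability(feats, labels):
    """Score one attention head's features (n_samples, dim) by a
    Fisher-style ratio: between-class scatter / within-class scatter,
    in the spirit of Gaussian discriminant analysis (illustrative)."""
    classes = np.unique(labels)
    mu = feats.mean(axis=0)
    between, within = 0.0, 0.0
    for c in classes:
        fc = feats[labels == c]
        mc = fc.mean(axis=0)
        between += len(fc) * np.sum((mc - mu) ** 2)
        within += np.sum((fc - mc) ** 2)
    return between / (within + 1e-8)

def rank_and_ensemble(head_feats, labels, query_feats, top_k=2):
    """Rank heads by discriminability, keep the top_k, and classify
    queries by summed nearest-class-mean similarity over those heads.
    head_feats / query_feats: dict head_name -> feature matrix."""
    scores = {h: head_discriminability(f, labels) for h, f in head_feats.items()}
    top = sorted(scores, key=scores.get, reverse=True)[:top_k]
    classes = np.unique(labels)
    n_queries = next(iter(query_feats.values())).shape[0]
    logits = np.zeros((n_queries, len(classes)))
    for h in top:
        means = np.stack([head_feats[h][labels == c].mean(axis=0) for c in classes])
        # negative squared distance to each class mean acts as a similarity
        d = ((query_feats[h][:, None, :] - means[None, :, :]) ** 2).sum(-1)
        logits += -d
    return classes[np.argmax(logits, axis=1)]
```

In this toy form, a head whose per-class feature clusters are tight and well separated gets a high score, so the ensemble is dominated by discriminative heads; the whole procedure uses only stored features and class labels, which is what makes a classifier of this kind training-free.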