🤖 AI Summary
This study addresses the notable gap between vision-language models' (VLMs') weak performance on fine-grained image classification tasks and their strong results on general visual question answering benchmarks. The authors systematically evaluate multiple classes of VLMs on established fine-grained classification benchmarks and conduct ablation studies to dissect the contributions of visual encoders, language models, and pretraining strategies. Their analysis reveals that high-quality visual encoders disproportionately enhance fine-grained classification performance, and that unfreezing language model weights during pretraining is critical for developing robust fine-grained visual understanding. This work provides key empirical evidence and actionable insights for improving the visually grounded capabilities of VLMs.
📝 Abstract
Vision-language models (VLMs) have made substantial progress across a wide range of visual question answering benchmarks, spanning visual reasoning, document understanding, and multimodal dialogue. These improvements are evident across many VLMs built on a variety of base models, alignment architectures, and training data. However, recent works show that these models lag behind on traditional image classification benchmarks, which test fine-grained visual knowledge. We test a large number of recent VLMs on fine-grained classification benchmarks and identify potential factors behind the disconnect between fine-grained knowledge and other vision benchmarks. Through a series of ablation experiments, we find that using a better LLM improves all benchmark scores equally, while a better vision encoder disproportionately improves fine-grained classification performance. Furthermore, we find that the pretraining stage is also vital to fine-grained performance, particularly when the language model weights are unfrozen during pretraining. These insights pave the way for enhancing fine-grained visual understanding and vision-centric capabilities in VLMs.
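To make the evaluation setup concrete: one common way to test a VLM on an image classification benchmark is to frame each image as a multiple-choice question over the benchmark's label set and score the parsed answer. Below is a minimal, hedged sketch of that scoring loop; the exact prompting and parsing used in the paper may differ, and `stub_model` is a placeholder standing in for a real VLM call (all names here are hypothetical).

```python
# Hypothetical sketch: scoring a VLM on a fine-grained classification
# benchmark by posing each image as a multiple-choice question.

def build_prompt(class_names):
    """Format the candidate labels as a numbered multiple-choice question."""
    options = "\n".join(f"({i}) {name}" for i, name in enumerate(class_names))
    return (
        "Which category best describes the image?\n"
        f"{options}\n"
        "Answer with the option number."
    )

def parse_answer(reply, num_classes):
    """Extract the first valid option index from the reply; None if unparseable."""
    for token in reply.replace("(", " ").replace(")", " ").split():
        if token.isdigit() and int(token) < num_classes:
            return int(token)
    return None

def accuracy(model, dataset, class_names):
    """dataset: iterable of (image, true_label_index) pairs."""
    prompt = build_prompt(class_names)
    correct = total = 0
    for image, label in dataset:
        pred = parse_answer(model(image, prompt), len(class_names))
        correct += int(pred == label)
        total += 1
    return correct / total if total else 0.0

# Stubbed "model" that always answers option 0, for illustration only.
stub_model = lambda image, prompt: "(0)"
print(accuracy(stub_model, [(None, 0), (None, 1)], ["sparrow", "finch"]))  # → 0.5
```

A parsing step like `parse_answer` matters in practice because open-ended VLM outputs rarely match label strings exactly, which is one reason classification accuracy can diverge from VQA-style scores.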