🤖 AI Summary
This work addresses the challenge that existing vision-language models struggle with accurate object-centric reasoning for rare objects underrepresented in training data. To overcome this limitation without requiring model fine-tuning, the authors propose a plug-and-play module that constructs multimodal class embeddings by fusing priors from foundation vision models with synonym-augmented textual descriptions. A lightweight attention mechanism is introduced to refine visual features and generate object-aware prompts, effectively guiding the model to focus on relevant image regions. Evaluated on two benchmark datasets, the method significantly enhances the ability of pretrained vision-language models to recognize and reason about rare objects, demonstrating its efficacy and generalizability.
📝 Abstract
Vision-language models (VLMs) have achieved remarkable success in broad visual understanding, yet they remain challenged by object-centric reasoning on rare objects due to the scarcity of such instances in pretraining data. While prior efforts alleviate this issue by retrieving additional data or introducing stronger vision encoders, these methods remain computationally intensive because they require finetuning the VLMs, and they do not fully exploit the original training data. In this paper, we introduce an efficient plug-and-play module that substantially improves VLMs' reasoning over rare objects by refining visual tokens and enriching input text prompts, without finetuning the VLMs. Specifically, we propose to learn multimodal class embeddings for rare objects by leveraging prior knowledge from vision foundation models and synonym-augmented text descriptions, compensating for the limited training examples. These embeddings refine the visual tokens in VLMs through a lightweight attention-based enhancement module that sharpens fine-grained object details. In addition, we use the learned embeddings as object-aware detectors to generate informative hints, which are injected into the text prompts to guide the VLM's attention toward relevant image regions. Experiments on two benchmarks show consistent and substantial gains for pretrained VLMs in rare-object recognition and reasoning. Further analysis reveals how our method strengthens the VLM's ability to focus on and reason about rare objects.
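To make the attention-based enhancement concrete, below is a minimal sketch of one plausible instantiation: visual tokens attend to the learned class embeddings via single-head cross-attention, and the attended result is added back through a residual connection. The function name, the single-head formulation, and the residual fusion are assumptions for illustration; the abstract does not specify the module's exact architecture.

```python
import numpy as np

def refine_visual_tokens(visual_tokens, class_embeddings):
    """Hypothetical sketch of the enhancement module: single-head
    cross-attention where visual tokens (queries) attend to the
    multimodal class embeddings (keys/values), with a residual add.

    visual_tokens:    (num_tokens, dim) array of VLM visual tokens
    class_embeddings: (num_classes, dim) array of learned embeddings
    """
    dim = visual_tokens.shape[-1]
    # Scaled dot-product attention scores: (num_tokens, num_classes)
    scores = visual_tokens @ class_embeddings.T / np.sqrt(dim)
    # Numerically stable softmax over the class axis
    scores -= scores.max(axis=-1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    # Aggregate class-embedding information per token: (num_tokens, dim)
    enhancement = attn @ class_embeddings
    # Residual connection keeps the original visual content intact
    return visual_tokens + enhancement
```

In this sketch the residual connection ensures the refined tokens never discard the original visual features, which is one common way a plug-and-play module can avoid disrupting a frozen pretrained VLM.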