🤖 AI Summary
Few-shot fine-grained visual classification (FGVC) methods that fine-tune pre-trained models tend to overfit and generalize poorly under limited labeled data. To address this, we propose the first training-free multimodal retrieval framework, which reformulates classification as an attribute-aware image–text matching task. Leveraging a multimodal large language model (MLLM), our method generates discriminative, structured class descriptions; hallucination is mitigated via chain-of-thought prompting and reference-image guidance, yielding highly distinctive image–text templates. Cross-modal alignment is achieved with off-the-shelf vision and text encoders, without any fine-tuning. Our core contribution is an end-to-end, training-free inference paradigm that requires no parameter adaptation and no labeled data beyond the few-shot samples used to build category templates. Evaluated on 12 fine-grained benchmarks, our approach consistently outperforms existing CLIP-based few-shot methods and even surpasses several fully supervised MLLM-based approaches, demonstrating substantial gains in few-shot generalization and class discriminability.
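The summary mentions that hallucination is reduced through chain-of-thought prompting with visually similar reference images. Below is a minimal sketch of how such a prompt could be assembled; the wording, the attribute examples, and the `query_mllm` helper are all illustrative assumptions, not the paper's actual prompt or API.

```python
# Hedged sketch of an attribute-aware, chain-of-thought captioning prompt.
# `query_mllm` is a hypothetical placeholder for whatever MLLM client is used.

def build_cdv_prompt(candidate_classes, num_references):
    """Build a chain-of-thought prompt that asks the MLLM to describe only
    observable, discriminative attributes of the query image, contrasted
    against visually similar reference images (assumed prompt structure)."""
    return (
        "You are given a query image followed by "
        f"{num_references} reference images of visually similar categories "
        f"({', '.join(candidate_classes)}).\n"
        "Step 1: List the fine-grained attributes visible in the query image "
        "(e.g., beak shape, wing bars, plumage pattern).\n"
        "Step 2: For each attribute, explain how it differs from the reference images.\n"
        "Step 3: Output a structured description of the query image using only "
        "attributes you can actually observe, to avoid hallucinated details."
    )

# Hypothetical usage:
# prompt = build_cdv_prompt(["Laysan Albatross", "Sooty Albatross"], num_references=2)
# description = query_mllm(prompt, images=[query_image, *reference_images])
```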
📝 Abstract
Few-shot fine-grained visual classification (FGVC) aims to leverage limited data to enable models to discriminate subtly distinct categories. Recent works mostly fine-tune pre-trained vision-language models to achieve performance gains, yet they suffer from overfitting and weak generalization. To address this, we introduce UniFGVC, a universal training-free framework that reformulates few-shot FGVC as multimodal retrieval. First, we propose the Category-Discriminative Visual Captioner (CDV-Captioner), which exploits the open-world knowledge of multimodal large language models (MLLMs) to generate structured text descriptions capturing the fine-grained attributes that distinguish closely related classes. CDV-Captioner uses chain-of-thought prompting and visually similar reference images to reduce hallucination and enhance the discriminability of the generated captions. With it, each image is converted into an image-description pair, enabling more comprehensive feature representation, and multimodal category templates are constructed from the few-shot samples for the subsequent retrieval pipeline. Then, off-the-shelf vision and text encoders embed the query and template pairs, and FGVC is accomplished by retrieving the nearest template in the joint embedding space. UniFGVC ensures broad compatibility with diverse MLLMs and encoders, offering reliable generalization and adaptability across few-shot FGVC scenarios. Extensive experiments on 12 FGVC benchmarks demonstrate its consistent superiority over prior few-shot CLIP-based methods and even several fully supervised MLLM-based approaches.
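To make the retrieval step concrete, here is a minimal sketch of training-free nearest-template classification over image-description pairs. It assumes CLIP as the off-the-shelf encoder pair and a simple concatenation of the normalized image and text embeddings; the paper's actual encoders, fusion strategy, and template construction may differ.

```python
# Hedged sketch: embed (image, description) pairs with frozen CLIP encoders and
# classify a query by retrieving the nearest few-shot category template.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_pair(image: Image.Image, description: str) -> torch.Tensor:
    """Embed an (image, description) pair into a joint space by concatenating
    L2-normalized CLIP image and text features (one possible fusion choice)."""
    with torch.no_grad():
        img_inputs = processor(images=image, return_tensors="pt")
        txt_inputs = processor(text=[description], return_tensors="pt",
                               padding=True, truncation=True)
        img_feat = model.get_image_features(**img_inputs)
        txt_feat = model.get_text_features(**txt_inputs)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    return torch.cat([img_feat, txt_feat], dim=-1).squeeze(0)

def classify(query_image, query_description, templates):
    """templates: {class_name: [pair_embedding, ...]} built from the few-shot
    samples and their generated descriptions. Returns the class whose template
    is nearest to the query pair under cosine similarity."""
    query = embed_pair(query_image, query_description)
    best_cls, best_sim = None, -float("inf")
    for cls, embeddings in templates.items():
        for template in embeddings:
            sim = torch.nn.functional.cosine_similarity(query, template, dim=0).item()
            if sim > best_sim:
                best_cls, best_sim = cls, sim
    return best_cls
```

Because both encoders stay frozen and the templates are just cached embeddings of the few-shot pairs, swapping in a different MLLM or encoder only changes the description and embedding steps, which is the compatibility the abstract highlights.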