🤖 AI Summary
Fine-grained image classification is fundamentally constrained by fixed vocabularies and closed-set paradigms, limiting adaptability to real-world open scenarios where novel categories continually emerge. To address this, we propose Enriched-FineR, a training-free framework that, for the first time, deeply integrates large language models (LLMs) into the vision-language model (VLM) inference pipeline. It employs context grounding for zero-shot and few-shot classification, while leveraging LLMs for semantic analysis, ambiguity resolution, and generative refinement of candidate class names, substantially improving label reliability and interpretability. Crucially, Enriched-FineR eliminates reliance on predefined category sets and model fine-tuning. It achieves state-of-the-art performance on standard fine-grained benchmarks, including CUB, Cars, and Aircraft, and supports high-similarity subcategory tasks such as species identification and vehicle model recognition. The code is publicly available.
📝 Abstract
Fine-grained image classification, the task of distinguishing between visually similar subcategories within a broader category (e.g., bird species, car models, flower types), is a challenging computer vision problem. Traditional approaches rely heavily on fixed vocabularies and closed-set classification paradigms, limiting their scalability and adaptability in real-world settings where novel classes frequently emerge. Recent research has shown that combining large language models (LLMs) with vision-language models (VLMs) enables open-set recognition without the need for predefined class labels. However, existing methods make only limited use of LLMs at the classification stage, and rely heavily on class names guessed by an LLM without thorough analysis or refinement. To address these bottlenecks, we propose a training-free method, Enriched-FineR (E-FineR for short), which achieves state-of-the-art results in fine-grained visual recognition while offering greater interpretability, highlighting its strong potential in real-world scenarios and new domains where expert annotations are difficult to obtain. We further apply our approach to zero-shot and few-shot classification, where it performs on par with existing SOTA methods while remaining training-free and requiring no human intervention. Overall, our vocabulary-free framework supports the shift in image classification from rigid label prediction to flexible, language-driven understanding, enabling scalable and generalizable systems for real-world applications. Well-documented code is available at https://github.com/demidovd98/e-finer.
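The vocabulary-free pipeline described above (an LLM proposes and refines candidate class names, and a VLM scores the image against them in a shared embedding space) can be sketched as follows. This is a minimal illustrative sketch, not the actual E-FineR implementation: the `classify` helper, the candidate names, and all embedding vectors are hypothetical mock data; in practice the embeddings would come from a CLIP-like image/text encoder and the candidate names from an LLM.

```python
# Sketch of one vocabulary-free classification step, assuming a CLIP-like
# VLM that embeds images and text into a shared space. All vectors below
# are mock data standing in for real encoder outputs.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def classify(image_emb, candidate_names, text_embs):
    """Score the image against each LLM-proposed candidate class name and
    return the best match. Open-set: the candidate list is generated per
    image rather than drawn from a fixed vocabulary."""
    scores = {name: cosine(image_emb, text_embs[name]) for name in candidate_names}
    return max(scores, key=scores.get)

# Hypothetical candidates an LLM might propose for a bird image,
# with mock text embeddings for each name.
candidates = ["Northern Cardinal", "Scarlet Tanager", "Summer Tanager"]
text_embs = {
    "Northern Cardinal": [0.9, 0.1, 0.2],
    "Scarlet Tanager":   [0.2, 0.8, 0.1],
    "Summer Tanager":    [0.1, 0.7, 0.3],
}
image_emb = [0.85, 0.15, 0.25]  # mock image embedding

print(classify(image_emb, candidates, text_embs))  # → Northern Cardinal
```

In the full method, this scoring step would be preceded by the LLM-side semantic analysis and refinement of the candidate names, so that ambiguous or near-duplicate guesses are resolved before the VLM comparison.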