You May Speak Freely: Improving the Fine-Grained Visual Recognition Capabilities of Multimodal Large Language Models with Answer Extraction

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
In fine-grained visual classification (FGVC), autoregressive multimodal large language models (MLLMs) face two key challenges when generating free-form answers over hundreds to thousands of highly similar candidate classes: (1) fine-grained visual discrimination is difficult, and (2) computing probabilities over such large option sets is prohibitively expensive. This paper proposes *nlg2choice*, a two-stage method: first, open-ended visual question answering produces a natural-language response; second, text-only constrained decoding maps that response to the most likely candidate class, and in retrieval settings an early-stopping probability computation keeps scoring over large choice sets tractable. To our knowledge, *nlg2choice* is the first approach enabling scalable, low-overhead choice extraction for MLLMs in ultra-fine-grained, high-cardinality classification settings. Evaluated on seven FGVC benchmarks, it consistently outperforms existing state-of-the-art methods on both classification and retrieval, and the gains hold across the varied ways users phrase tasks in natural language.
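The second stage can be sketched as greedy constrained decoding over the candidate set: at each step only tokens that keep the partial output a valid prefix of some choice are allowed, and a scorer picks among them. This is a minimal illustration with hypothetical helper names and a toy word-overlap scorer standing in for the MLLM's token probabilities, not the authors' implementation:

```python
def constrained_decode(choices, score_next):
    """Greedily build an answer, allowing only tokens that keep the
    partial output a prefix of at least one candidate choice.

    choices: list of choices, each a list of word tokens.
    score_next: prefix -> (token -> score), a stand-in for the LM.
    """
    prefix = []
    while True:
        # Tokens that legally extend the current prefix toward some choice.
        allowed = {c[len(prefix)] for c in choices
                   if len(c) > len(prefix) and c[:len(prefix)] == prefix}
        if not allowed:
            break  # prefix is a complete choice: stop
        prefix.append(max(allowed, key=score_next(prefix)))
    return " ".join(prefix)


def make_overlap_scorer(free_form_answer):
    """Toy scorer: prefer tokens that appear in the open-ended answer.
    A real system would use the model's next-token log-probabilities."""
    words = set(free_form_answer.lower().split())
    def score_next(prefix):
        return lambda tok: 1.0 if tok in words else 0.0
    return score_next


choices = [["song", "sparrow"], ["house", "sparrow"], ["house", "finch"]]
scorer = make_overlap_scorer("this looks like a house sparrow to me")
print(constrained_decode(choices, scorer))  # → house sparrow
```

Because every decoding step is restricted to valid continuations, the output is guaranteed to be exactly one of the candidate class names, with no post-hoc string matching needed.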

Technology Category

Application Category

📝 Abstract
Despite the renewed interest in zero-shot visual classification due to the rise of Multimodal Large Language Models (MLLMs), the problem of evaluating free-form responses of auto-regressive models remains a persistent challenge. Most existing works focus on language-only tasks or don't consider Multiple Choice Questions (MCQs) beyond 5-way options, both of which are critical capabilities to solve tasks in Fine-Grained Visual Classification (FGVC) where choice counts are in the hundreds to thousands and the choices are highly related. Furthermore, in this highly multi-way MCQ setting it is not clear how to extend LLM choice extraction to retrieval-based problems, where computing probabilities over the choice set is computationally costly. In this work we investigate nlg2choice, a simple two-stage method which first asks the MLLM an open-ended question for the task with minimal constraints, then uses text-only constrained decoding to predict the most likely choice. In retrieval settings, we compute the probability of the constrained response taking that choice with an early stopping method to significantly improve throughput. Our results show improvement over a suite of seven fine-grained visual datasets when evaluating in terms of classification and retrieval, and show that this performance holds over the various ways that users of LLMs can implement tasks in natural language.
Problem

Research questions and friction points this paper is trying to address.

Evaluating free-form responses of MLLMs in visual classification tasks
Extending choice extraction to high-way MCQ and retrieval settings
Improving fine-grained visual recognition with constrained decoding methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage method with open-ended question answering
Text-only constrained decoding for choice prediction
Early stopping for efficient retrieval probability computation
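The early-stopping idea in the last bullet can be sketched as follows: a candidate's score is a sum of token log-probabilities, which are non-positive, so the running sum only decreases; once it drops below the best complete score seen so far, the candidate can be abandoned. This is a toy sketch (hypothetical names, a dummy scorer in place of the MLLM), not the paper's code:

```python
from math import log

def rank_with_early_stop(candidates, token_logprob):
    """Return the best-scoring candidate, pruning any candidate whose
    running log-prob falls below the best complete score so far."""
    best_score, best = float("-inf"), None
    tokens_scored = 0  # bookkeeping to show the saved work
    for cand in candidates:
        running = 0.0
        for i, tok in enumerate(cand):
            running += token_logprob(cand[:i], tok)
            tokens_scored += 1
            if running <= best_score:
                break  # sum can only decrease: prune this candidate
        else:
            best_score, best = running, cand
    return best, best_score, tokens_scored


# Dummy per-token scorer: high probability for tokens that occurred
# in the open-ended answer, low otherwise.
answer_words = {"house", "sparrow"}
def toy_logprob(prefix, tok):
    return log(0.9) if tok in answer_words else log(0.01)

cands = [("house", "sparrow"), ("song", "sparrow"), ("house", "finch")]
best, score, n = rank_with_early_stop(cands, toy_logprob)
print(best, n)  # → ('house', 'sparrow') 5  (vs. 6 tokens without pruning)
```

Evaluating the most promising candidate early tightens the pruning bound sooner, which is where the throughput gain over naively scoring every full choice comes from.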