On Large Multimodal Models as Open-World Image Classifiers

📅 2025-03-27
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work addresses the lack of standardized evaluation for large multimodal models (LMMs) in open-world image classification, a task the authors formalize for the first time: moving beyond traditional closed-world, predefined category sets toward zero-shot recognition of fine-grained and non-prototypical classes expressed in natural language. The paper proposes a systematic evaluation protocol and alignment metrics that quantify semantic consistency and granularity generalization between predicted and ground-truth classes. Empirical analysis of 13 state-of-the-art LMMs on 10 diverse benchmarks, combined with prompt engineering, chain-of-thought reasoning, and error attribution, reveals consistent performance degradation on fine-grained and non-prototypical classes, and shows that tailored prompting and structured reasoning mitigate granularity mismatch. The contributions include an evaluation protocol, alignment metrics, and actionable insights for advancing open-world multimodal understanding.

📝 Abstract
Traditional image classification requires a predefined list of semantic categories. In contrast, Large Multimodal Models (LMMs) can sidestep this requirement by classifying images directly using natural language (e.g., answering the prompt "What is the main object in the image?"). Despite this remarkable capability, most existing studies on LMM classification performance are surprisingly limited in scope, often assuming a closed-world setting with a predefined set of categories. In this work, we address this gap by thoroughly evaluating LMM classification performance in a truly open-world setting. We first formalize the task and introduce an evaluation protocol, defining various metrics to assess the alignment between predicted and ground truth classes. We then evaluate 13 models across 10 benchmarks, encompassing prototypical, non-prototypical, fine-grained, and very fine-grained classes, demonstrating the challenges LMMs face in this task. Further analyses based on the proposed metrics reveal the types of errors LMMs make, highlighting challenges related to granularity and fine-grained capabilities, and showing how tailored prompting and reasoning can alleviate them.
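To make the alignment problem concrete: when an LMM answers in free-form language, the prediction and the ground-truth label rarely match verbatim, so an evaluation protocol needs some notion of semantic alignment. The sketch below is a deliberately crude word-overlap heuristic, not the paper's actual metrics (which are not reproduced here); real semantic metrics would use embeddings or a lexical hierarchy such as WordNet.

```python
def alignment(pred: str, gold: str) -> str:
    """Crudely classify how a free-form prediction relates to a ground-truth label.

    Toy heuristic for illustration only: compares lowercased word sets.
    """
    p = set(pred.lower().split())
    g = set(gold.lower().split())
    if p == g:
        return "exact"      # same label up to case and word order
    if p & g:
        return "partial"    # shared words often signal a granularity mismatch,
                            # e.g. "golden retriever" vs. "retriever"
    return "mismatch"       # no lexical overlap at all
```

Even this toy check separates granularity errors (a correct but more general or more specific answer) from outright misclassifications, which is the kind of distinction the paper's error analysis relies on.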
Problem

Research questions and friction points this paper is trying to address.

Evaluating LMMs in open-world image classification
Assessing alignment between predicted and ground truth classes
Analyzing LMM errors in granularity and fine-grained tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses natural language for image classification
Evaluates LMMs in open-world settings
Introduces tailored prompting and reasoning
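As one concrete reading of "tailored prompting," the requested answer granularity can be steered directly in the question posed to the model. The helper below is a hypothetical sketch under that assumption; the prompt text and granularity levels are illustrative and not taken from the paper.

```python
def make_prompt(granularity: str = "coarse") -> str:
    """Build an open-world classification prompt at a chosen granularity.

    Hypothetical example of tailoring a prompt, not the paper's actual prompts.
    """
    base = "What is the main object in the image?"
    if granularity == "fine":
        return base + " Answer with the most specific category name you can."
    return base + " Answer with a single common category name."
```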