MLLM-CompBench: A Comparative Reasoning Benchmark for Multimodal LLMs

📅 2024-07-23
📈 Citations: 5
Influential: 0
🤖 AI Summary
Existing multimodal large language models (MLLMs) lack rigorous evaluation of their ability to reason about relative attributes across images (e.g., freshness, aesthetic appeal, quantity, or quality), leaving a critical gap in assessing fine-grained visual understanding. Method: We introduce the first systematic benchmark of this capability, comprising 40K human-annotated image pairs covering eight semantic dimensions (e.g., existence, state, emotion). We formally define and quantify MLLM comparative reasoning; propose a vision-driven strategy for constructing paired images across dimensions; and combine CLIP-based similarity scoring with multi-source metadata filtering to select high-quality pairs. Contribution/Results: Experiments reveal that state-of-the-art models, including GPT-4V, Gemini-Pro, and LLaVA-1.6, achieve below 65% average accuracy, exposing fundamental limitations. The benchmark provides a reproducible evaluation baseline and standardized protocol, enabling principled advancement of MLLMs' nuanced visual reasoning capabilities.

📝 Abstract
The ability to compare objects, scenes, or situations is crucial for effective decision-making and problem-solving in everyday life. For instance, comparing the freshness of apples enables better choices during grocery shopping, while comparing sofa designs helps optimize the aesthetics of our living space. Despite its significance, this comparative capability is largely unexplored in artificial general intelligence (AGI). In this paper, we introduce MLLM-CompBench, a benchmark designed to evaluate the comparative reasoning capability of multimodal large language models (MLLMs). MLLM-CompBench mines and pairs images through visually oriented questions covering eight dimensions of relative comparison: visual attribute, existence, state, emotion, temporality, spatiality, quantity, and quality. We curate a collection of around 40K image pairs using metadata from diverse vision datasets and CLIP similarity scores. These image pairs span a broad array of visual domains, including animals, fashion, sports, and both outdoor and indoor scenes. The questions are carefully crafted to discern relative characteristics between two images and are labeled by human annotators for accuracy and relevance. We use MLLM-CompBench to evaluate recent MLLMs, including GPT-4V(ision), Gemini-Pro, and LLaVA-1.6. Our results reveal notable shortcomings in their comparative abilities. We believe MLLM-CompBench not only sheds light on these limitations but also establishes a solid foundation for future enhancements in the comparative capability of MLLMs.
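The pair-curation step described in the abstract (selecting image pairs via CLIP similarity scores) can be sketched as follows. This is a minimal illustration, not the authors' released pipeline: the function name `select_pairs`, the similarity band `low`/`high`, and the use of precomputed embeddings are all assumptions; in practice the embeddings would come from a CLIP image encoder, and the paper additionally filters candidates using multi-source dataset metadata.

```python
import numpy as np

def select_pairs(embeddings, ids, low=0.7, high=0.95):
    """Pair images whose cosine similarity falls in [low, high]:
    similar enough to share context, distinct enough to compare.

    embeddings: (N, D) array of image feature vectors
    ids:        list of N image identifiers
    """
    # Normalize rows so the dot product equals cosine similarity.
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = unit @ unit.T
    pairs = []
    n = len(ids)
    for i in range(n):
        for j in range(i + 1, n):
            if low <= sim[i, j] <= high:
                pairs.append((ids[i], ids[j], float(sim[i, j])))
    return pairs
```

The band rather than a single threshold reflects the curation goal: near-duplicate pairs (similarity ≈ 1) offer nothing to compare, while unrelated pairs make relative questions ill-posed.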
Problem


Large Language Models
Relative Attribute Judgement
Cognitive Function
Innovation


MLLM-CompBench
comparative reasoning
multimodal large language models
Jihyung Kil
Adobe Research
GUI/Computer-Using Agent, AI Agent, Embodied Agent, Vision and Language

Zheda Mai
The Ohio State University
Continual Learning, Parameter-Efficient Fine-Tuning, Vision Foundation Models

Justin Lee
The Ohio State University

Zihe Wang
The Ohio State University

Kerrie Cheng
The Ohio State University

Lemeng Wang
Undergraduate Student, The Ohio State University
Computer Vision, Machine Learning

Ye Liu
The Ohio State University

A. Chowdhury
The Ohio State University

Wei-Lun Chao
The Ohio State University