🤖 AI Summary
The comparative reasoning ability of multimodal large language models (MLLMs), i.e., judging relative attributes across images such as freshness, aesthetic appeal, quantity, or quality, has not been rigorously evaluated, leaving a critical gap in assessing fine-grained visual understanding.
Method: We introduce the first systematic benchmark for this capability, comprising around 40K human-annotated image pairs that span eight dimensions of relative comparison (e.g., existence, state, emotion). We formally define and quantify MLLMs’ comparative reasoning ability, propose a visually oriented strategy for constructing image pairs across these dimensions, and combine CLIP-based similarity scoring with metadata from diverse vision datasets to select high-quality pairs, as sketched below.
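To make the pair-selection step concrete, here is a minimal sketch that scores candidate image pairs with CLIP embeddings and keeps pairs that share a metadata category and fall within a mid-range similarity band (similar enough to be comparable, but not near-duplicates). The checkpoint name, the `category` metadata field, and the thresholds are illustrative assumptions, not the benchmark's actual settings.

```python
# Hedged sketch of CLIP-based pair filtering. The model checkpoint,
# metadata schema, and similarity thresholds below are assumptions
# for illustration, not MLLM-CompBench's released pipeline.
from itertools import combinations

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_images(paths):
    """Return L2-normalized CLIP image embeddings for a list of image paths."""
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def select_pairs(paths, metadata, lo=0.6, hi=0.95):
    """Keep pairs that share a metadata category and whose CLIP cosine
    similarity falls inside [lo, hi]; thresholds are hypothetical."""
    feats = embed_images(paths)
    sims = feats @ feats.T
    pairs = []
    for i, j in combinations(range(len(paths)), 2):
        same_category = metadata[i]["category"] == metadata[j]["category"]
        if same_category and lo <= sims[i, j].item() <= hi:
            pairs.append((paths[i], paths[j], sims[i, j].item()))
    return pairs
```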
Contribution/Results: Experiments reveal that state-of-the-art models, including GPT-4V, Gemini-Pro, and LLaVA-1.6, reach average accuracy below 65%, exposing fundamental limitations in comparative reasoning. The benchmark provides a reproducible evaluation baseline and a standardized protocol, enabling principled advancement of MLLMs’ nuanced visual reasoning capabilities.
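The sketch below shows the kind of accuracy-based scoring such a protocol implies: each item pairs two images with a relative-comparison question and a ground-truth answer, and accuracy is averaged per dimension. The item schema, prompt wording, and the `query_mllm` helper are hypothetical stand-ins rather than the benchmark's released evaluation code.

```python
# Hedged sketch of per-dimension accuracy scoring over paired-comparison
# items. `query_mllm` is a hypothetical model-API wrapper; the field names
# and answer format are assumptions for illustration.
from collections import defaultdict

def evaluate(items, query_mllm):
    """items: iterable of dicts with keys
    'image_a', 'image_b', 'question', 'answer', 'dimension'."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for item in items:
        prompt = f"{item['question']} Answer with 'first' or 'second' only."
        prediction = query_mllm(item["image_a"], item["image_b"], prompt)
        dim = item["dimension"]
        total[dim] += 1
        if prediction.strip().lower().startswith(item["answer"]):
            correct[dim] += 1
    per_dim = {d: correct[d] / total[d] for d in total}
    per_dim["average"] = sum(per_dim.values()) / len(per_dim)
    return per_dim
```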
📝 Abstract
The ability to compare objects, scenes, or situations is crucial for effective decision-making and problem-solving in everyday life. For instance, comparing the freshness of apples enables better choices during grocery shopping, while comparing sofa designs helps optimize the aesthetics of our living space. Despite its significance, the comparative capability is largely unexplored in artificial general intelligence (AGI). In this paper, we introduce MLLM-CompBench, a benchmark designed to evaluate the comparative reasoning capability of multimodal large language models (MLLMs). MLLM-CompBench mines and pairs images through visually oriented questions covering eight dimensions of relative comparison: visual attribute, existence, state, emotion, temporality, spatiality, quantity, and quality. We curate a collection of around 40K image pairs using metadata from diverse vision datasets and CLIP similarity scores. These image pairs span a broad array of visual domains, including animals, fashion, sports, and both outdoor and indoor scenes. The questions are carefully crafted to discern relative characteristics between two images and are labeled by human annotators for accuracy and relevance. We use MLLM-CompBench to evaluate recent MLLMs, including GPT-4V(ision), Gemini-Pro, and LLaVA-1.6. Our results reveal notable shortcomings in their comparative abilities. We believe MLLM-CompBench not only sheds light on these limitations but also establishes a solid foundation for future enhancements in the comparative capability of MLLMs.