🤖 AI Summary
Multimodal large language models (MLLMs) exhibit severe limitations in object counting under realistic, complex scenarios; existing benchmarks inadequately assess this fundamental cognitive capability because they feature sparse object densities or narrow domain coverage. Method: We introduce CountQA, a real-world counting benchmark targeting high-density scenes with occlusion and cluttered backgrounds, comprising over 1,500 high-quality question-answer pairs. Contribution/Results: Using CountQA, we systematically evaluate 15 state-of-the-art MLLMs and find that the best-performing model achieves only 42.9% accuracy, with performance deteriorating sharply as object count increases. This work empirically exposes intrinsic deficiencies in MLLMs' numerical perception and spatial reasoning, providing both a critical evaluation tool and concrete evidence to advance foundational visual reasoning capabilities.
📝 Abstract
Multimodal Large Language Models (MLLMs) demonstrate remarkable fluency in understanding visual scenes, yet they critically lack a fundamental cognitive skill: object counting. This blind spot severely limits their reliability in real-world applications. To date, this capability has gone largely unevaluated in complex scenarios, as existing benchmarks either feature sparse object densities or are confined to specific visual domains, failing to test models under realistic conditions. Addressing this gap, we introduce CountQA, a challenging new benchmark designed to probe this deficiency. Comprising over 1,500 question-answer pairs, CountQA features real-world images with high object density, clutter, and occlusion. We evaluate 15 prominent MLLMs on CountQA and find that the top-performing model achieves a mere 42.9% accuracy, with performance declining as object counts rise. By providing a dedicated benchmark to diagnose and rectify this core weakness, CountQA paves the way for a new generation of MLLMs that are not only descriptively fluent but also numerically grounded and spatially aware. We will open-source the dataset and code upon paper acceptance to foster further research.
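The evaluation described above reduces to exact-match accuracy over numeric answers. Below is a minimal sketch of such a scoring loop, assuming a hypothetical JSONL layout with `image`, `question`, and `answer` fields and a placeholder `ask_model` callable; the released dataset format and inference interface may differ.

```python
import json
import re

def extract_count(text: str):
    """Pull the first integer out of a model's free-form answer, if any."""
    match = re.search(r"\d+", text.replace(",", ""))
    return int(match.group()) if match else None

def evaluate(jsonl_path: str, ask_model) -> float:
    """Exact-match counting accuracy over (image, question, answer) records.

    `ask_model(image_path, question) -> str` stands in for any MLLM call.
    Field names below are illustrative, not the benchmark's actual schema.
    """
    correct = total = 0
    with open(jsonl_path) as f:
        for line in f:
            item = json.loads(line)
            pred = extract_count(ask_model(item["image"], item["question"]))
            correct += int(pred == int(item["answer"]))
            total += 1
    return correct / total if total else 0.0
```

In this setup, a model is counted correct only when the parsed integer exactly matches the ground-truth count, which is consistent with reporting a single accuracy figure such as 42.9%.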