🤖 AI Summary
This study systematically evaluates vision-language models (VLMs) against specialized counting models on open-set visual counting. To address the coarse granularity and limited controllability of existing benchmarks, we introduce a fine-grained, configurable counting benchmark and a prompt engineering framework integrating object localization and label generation, enabling zero-shot and few-shot evaluation across diverse models. Key findings are: (1) most VLMs match or surpass specialized counting models without fine-tuning; (2) explicit intermediate representations, jointly encoding object locations and textual labels, significantly improve counting accuracy; and (3) VLMs still exhibit robustness bottlenecks in complex, real-world scenes. This work is the first to empirically uncover the inherent counting capability of pre-trained VLMs, and it proposes an interpretable, scalable intermediate-representation paradigm. It advances foundational research in general visual understanding and embodied reasoning by establishing a principled approach to numerical cognition in vision-language systems.
📝 Abstract
Counting the number of items in a visual scene remains a fundamental yet challenging task in computer vision. Traditional approaches to this problem rely on domain-specific counting architectures, which are trained on datasets annotated with a predefined set of object categories. However, recent progress in large-scale multimodal vision-language models (VLMs) suggests that these domain-general architectures may offer a flexible alternative for open-set object counting. In this study, we therefore systematically compare the performance of state-of-the-art specialized counting architectures against VLMs on two popular counting datasets, as well as on a novel benchmark specifically designed to allow finer-grained control over the visual properties of test images. Our findings show that most VLMs can approximately enumerate the number of items in a visual scene, matching or even surpassing the performance of specialized computer vision architectures. Notably, enumeration accuracy significantly improves when VLMs are prompted to generate intermediate representations (i.e., locations and verbal labels) of each object to be counted. Nevertheless, none of the models can reliably count the number of objects in complex visual scenes, showing that further research is still needed to create AI systems that can reliably deploy counting procedures in realistic environments.
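The intermediate-representation strategy mentioned above can be sketched as follows. This is a minimal illustration, not the paper's exact protocol: the prompt wording, the JSON schema, and the mocked model response are all assumptions introduced here for clarity; a real pipeline would send the prompt to a VLM and parse its reply.

```python
import json

# Hypothetical prompt asking the VLM to emit an explicit intermediate
# representation (one entry per object: a bounding-box location plus a
# verbal label) so the final count can be derived by tallying entries,
# rather than asking the model for a bare number.
PROMPT_TEMPLATE = (
    "List every {category} in the image as a JSON array, one entry per "
    "object, with fields 'box' ([x, y, w, h]) and 'label'. Output only JSON."
)

def count_from_response(response_text: str, category: str) -> int:
    """Tally the entries whose label matches the queried category.

    Counting over an explicit list of localized, labeled objects is more
    interpretable than a direct numeric answer: each counted item can be
    inspected and verified against the image.
    """
    objects = json.loads(response_text)
    return sum(1 for obj in objects if obj.get("label") == category)

# Mocked VLM response (no real model call) for the query "apple":
mock_response = json.dumps([
    {"box": [10, 12, 40, 40], "label": "apple"},
    {"box": [60, 15, 38, 42], "label": "apple"},
    {"box": [5, 70, 30, 30], "label": "banana"},
])
print(count_from_response(mock_response, "apple"))  # → 2
```

The tally step is deliberately trivial; the benefit reported in the study comes from forcing the model to externalize per-object locations and labels before the number is produced.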