🤖 AI Summary
This work addresses the open-vocabulary, zero-shot, multi-class object counting problem by proposing the first class-agnostic, single-forward-pass general counting framework. The method integrates CLIP's semantic priors with SAM's geometric awareness, enabling end-to-end generation of precise object masks and aggregation-based counting solely from interactive prompts—such as text, points, or bounding boxes—without category-specific training or manual exemplars. Key contributions include: (1) the first counting architecture supporting multi-label input and single-pass inference; (2) OmniCount-191, the first multi-label counting benchmark featuring point, box, and VQA annotations; and (3) state-of-the-art performance on OmniCount-191 and multiple established benchmarks, significantly outperforming prior zero-shot counting methods in accuracy, speed, and multi-class capability.
📝 Abstract
Object counting is pivotal for understanding the composition of scenes. Previously, this task was dominated by class-specific methods, which have gradually evolved into more adaptable class-agnostic strategies. However, these strategies come with their own limitations, such as the need for manual exemplar input and multiple passes for multiple categories, resulting in significant inefficiencies. This paper introduces a more practical approach enabling simultaneous counting of multiple object categories using an open-vocabulary framework. Our solution, OmniCount, stands out by using semantic and geometric insights (priors) from pre-trained models to count multiple categories of objects as specified by users, all without additional training. OmniCount distinguishes itself by generating precise object masks and leveraging varied interactive prompts via the Segment Anything Model for efficient counting. To evaluate OmniCount, we created the OmniCount-191 benchmark, a first-of-its-kind dataset with multi-label object counts, including points, bounding boxes, and VQA annotations. Our comprehensive evaluation on OmniCount-191, alongside other leading benchmarks, demonstrates OmniCount's exceptional performance, significantly outpacing existing solutions. The project webpage is available at https://mondalanindya.github.io/OmniCount.
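The single-pass pipeline described above (semantic priors propose per-category prompts, SAM turns prompts into instance masks, and counts are aggregated per category) can be sketched as follows. This is a conceptual illustration, not the authors' implementation: the `semantic_prior` and `segment` functions are hypothetical stubs standing in for CLIP-derived prompt generation and SAM mask prediction, so the control flow runs on its own.

```python
from typing import Dict, FrozenSet, List, Tuple

Point = Tuple[int, int]

def semantic_prior(image, category: str) -> List[Point]:
    """Hypothetical stub for CLIP-based semantic priors: return candidate
    point prompts for `category` (e.g. peaks of a text-conditioned heatmap).
    Here it is hard-coded so the sketch is self-contained."""
    return {"apple": [(10, 12), (40, 35)], "orange": [(5, 7)]}.get(category, [])

def segment(image, point: Point) -> FrozenSet[Point]:
    """Hypothetical stub for SAM: return the object mask (as a pixel set)
    produced from a single point prompt."""
    x, y = point
    return frozenset((x + dx, y + dy) for dx in range(3) for dy in range(3))

def count_objects(image, categories: List[str]) -> Dict[str, int]:
    """Single forward pass over all requested categories: each category's
    prompts are segmented, duplicate masks are merged via the set, and the
    count is the number of distinct instance masks."""
    counts: Dict[str, int] = {}
    for cat in categories:
        masks = {segment(image, p) for p in semantic_prior(image, cat)}
        counts[cat] = len(masks)
    return counts

print(count_objects(image=None, categories=["apple", "orange", "pear"]))
# {'apple': 2, 'orange': 1, 'pear': 0}
```

Deduplicating masks before counting mirrors the aggregation step: multiple prompts can land on the same object, so the count must reflect distinct instances rather than raw prompt hits.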