🤖 AI Summary
Zero-shot object counting generalizes poorly to unseen categories because textual and visual features are insufficiently aligned. To address this, we propose RichCount, a two-stage training framework that is the first to decouple text-feature enrichment from the counting task. RichCount introduces a scalable text encoder composed of adapters and feed-forward networks, jointly optimized with text–image similarity supervision and CLIP-style multimodal alignment to achieve precise cross-category semantic–visual matching. On three standard benchmarks, RichCount sets a new state of the art in zero-shot counting, improving average accuracy on unseen categories by 12.6% over prior methods. The framework markedly improves robustness and generalization in open-world scenarios, enabling reliable counting across diverse, previously unobserved object categories without fine-tuning.
📝 Abstract
Expanding pre-trained zero-shot counting models to handle unseen categories requires more than simply adding new prompts, as this approach does not achieve the alignment between text and visual features needed for accurate counting. We introduce RichCount, the first framework to address this limitation, employing a two-stage training strategy that enhances text encoding and strengthens the model's association with objects in images. RichCount improves zero-shot counting for unseen categories through two key objectives: (1) enriching text features with a feed-forward network and adapter trained on text–image similarity, thereby creating robust, aligned representations; and (2) applying this refined encoder to counting tasks, enabling effective generalization across diverse prompts and complex images. In this manner, RichCount goes beyond simple prompt expansion to establish meaningful feature alignment that supports accurate counting across novel categories. Extensive experiments on three benchmark datasets demonstrate the effectiveness of RichCount, achieving state-of-the-art performance in zero-shot counting and significantly enhancing generalization to unseen categories in open-world scenarios.
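To make the stage-one objective concrete, the sketch below illustrates the general pattern the abstract describes: a bottleneck feed-forward adapter that enriches a frozen text embedding, trained with a CLIP-style symmetric contrastive loss over text–image similarity. This is a minimal illustration under assumed shapes and names (`adapter`, `clip_alignment_loss`, the 512-d embedding size, and the bottleneck width are all hypothetical), not RichCount's actual implementation.

```python
import numpy as np

def adapter(text_feat, W_down, W_up):
    # Bottleneck feed-forward adapter with a residual connection:
    # enriches a frozen text embedding without retraining the backbone.
    # (Hypothetical module shape; RichCount's actual design may differ.)
    h = np.maximum(text_feat @ W_down, 0.0)   # down-project + ReLU
    return text_feat + h @ W_up               # up-project + residual

def clip_alignment_loss(text_feats, image_feats, temperature=0.07):
    # CLIP-style symmetric InfoNCE loss: matched text/image pairs sit on
    # the diagonal of the pairwise cosine-similarity matrix.
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    v = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    logits = (t @ v.T) / temperature
    idx = np.arange(len(t))

    def xent(lg):
        # Cross-entropy with the diagonal as the target class.
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    return 0.5 * (xent(logits) + xent(logits.T))

# Toy forward pass with random features (illustrative dimensions only).
rng = np.random.default_rng(0)
d, bottleneck, batch = 512, 64, 4
W_down = rng.normal(scale=0.02, size=(d, bottleneck))
W_up = rng.normal(scale=0.02, size=(bottleneck, d))
text = rng.normal(size=(batch, d))
image = rng.normal(size=(batch, d))

enriched = adapter(text, W_down, W_up)
loss = clip_alignment_loss(enriched, image)
```

In stage one, only the adapter weights (`W_down`, `W_up` here) would be optimized against this loss; the refined encoder is then reused for the counting task in stage two.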