Expanding Zero-Shot Object Counting with Rich Prompts

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Zero-shot object counting suffers from poor generalization to unseen categories due to insufficient alignment between textual and visual features. To address this, we propose RichCount—a novel two-stage training framework that decouples text feature enrichment from the counting task for the first time. RichCount introduces a scalable text encoder composed of adapters and feed-forward networks, jointly optimized via text–image similarity supervision and CLIP-style multimodal alignment to achieve precise cross-category semantic–visual matching. Evaluated on three standard benchmarks, RichCount establishes new state-of-the-art zero-shot counting performance, improving average accuracy on unseen categories by 12.6% over prior methods. The framework significantly enhances robustness and generalization in open-world scenarios, enabling reliable counting across diverse, previously unobserved object categories without fine-tuning.

📝 Abstract
Expanding pre-trained zero-shot counting models to handle unseen categories requires more than simply adding new prompts, as this approach does not achieve the necessary alignment between text and visual features for accurate counting. We introduce RichCount, the first framework to address these limitations, employing a two-stage training strategy that enhances text encoding and strengthens the model's association with objects in images. RichCount improves zero-shot counting for unseen categories through two key objectives: (1) enriching text features with a feed-forward network and adapter trained on text-image similarity, thereby creating robust, aligned representations; and (2) applying this refined encoder to counting tasks, enabling effective generalization across diverse prompts and complex images. In this manner, RichCount goes beyond simple prompt expansion to establish meaningful feature alignment that supports accurate counting across novel categories. Extensive experiments on three benchmark datasets demonstrate the effectiveness of RichCount, achieving state-of-the-art performance in zero-shot counting and significantly enhancing generalization to unseen categories in open-world scenarios.
Problem

Research questions and friction points this paper is trying to address.

Enhancing zero-shot counting for unseen object categories
Improving text-visual feature alignment for accurate counting
Generalizing counting models across diverse prompts and images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage training enhances text-image alignment
Feed-forward network enriches text features
Refined encoder generalizes across diverse prompts
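The two training objectives above — enriching frozen text features with a small adapter/feed-forward network, and supervising it with CLIP-style text–image alignment — can be sketched minimally. This is an illustrative NumPy sketch, not the paper's implementation; the function names, residual blend, and all dimensions are assumptions (RichCount itself builds on CLIP encoders and a separate counting head).

```python
import numpy as np

def adapter(text_feat, W1, b1, W2, b2, alpha=0.5):
    """Hypothetical residual adapter: a small feed-forward network that
    enriches frozen text features, blended with the original embedding."""
    h = np.maximum(0, text_feat @ W1 + b1)             # ReLU hidden layer
    enriched = h @ W2 + b2
    return alpha * enriched + (1 - alpha) * text_feat  # residual blend

def clip_style_alignment_loss(text_feats, image_feats, temperature=0.07):
    """Symmetric InfoNCE loss over matched text-image pairs (CLIP-style):
    the i-th text feature should score highest against the i-th image."""
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    v = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    logits = t @ v.T / temperature
    labels = np.arange(len(t))

    def ce(l):  # cross-entropy with diagonal (matched-pair) targets
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    return 0.5 * (ce(logits) + ce(logits.T))

# Toy usage with random features (all sizes are made up for illustration)
rng = np.random.default_rng(0)
d, hdim, n = 16, 32, 4
W1, b1 = rng.normal(size=(d, hdim)) * 0.1, np.zeros(hdim)
W2, b2 = rng.normal(size=(hdim, d)) * 0.1, np.zeros(d)
text = rng.normal(size=(n, d))
image = rng.normal(size=(n, d))
loss = clip_style_alignment_loss(adapter(text, W1, b1, W2, b2), image)
print(float(loss))
```

In stage one, only the adapter weights would be optimized against this alignment loss; in stage two the enriched encoder is reused, frozen or lightly tuned, by the counting task.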
Huilin Zhu
Hubei Key Laboratory of Transportation Internet of Things, Wuhan University of Technology; School of Computing and Information Systems, Singapore Management University
Senyao Li
Hubei Key Laboratory of Transportation Internet of Things, Wuhan University of Technology
Jingling Yuan
Hubei Key Laboratory of Transportation Internet of Things, Wuhan University of Technology
Zhengwei Yang
Wuhan University | A*STAR
Computer Vision · Causal Inference · Re-identification
Yu Guo
Hubei Key Laboratory of Inland Shipping Technology, Wuhan University of Technology; School of Computing and Information Systems, Singapore Management University
Wenxuan Liu
Hubei Key Laboratory of Inland Shipping Technology, Wuhan University of Technology; School of Computer Science, Peking University
Xian Zhong
Hubei Key Laboratory of Transportation Internet of Things, Wuhan University of Technology
Shengfeng He
Singapore Management University
Visual Computing · Generative Models · Computer Vision · Computational Photography · Computer Graphics