🤖 AI Summary
This work addresses the challenge of precise open-vocabulary object-count control in text-to-image generation, particularly in settings where varying object scales and spatial distributions degrade count estimation accuracy. We propose the first differentiable, open-vocabulary counting model. Methodologically, we introduce a *cardinality map* as a continuous regression target, combined with cross-modal representation alignment and a hybrid strong-weak supervision strategy, enabling end-to-end gradient-based optimization of the counting module jointly with the diffusion generator. This constitutes the first unified modeling paradigm bridging open-vocabulary counting and generative control. Experiments demonstrate state-of-the-art counting accuracy across multiple benchmarks and significantly improved robustness and controllability over target object quantities in text-to-image synthesis.
📝 Abstract
We propose YOLO-Count, a differentiable open-vocabulary object counting model that both tackles general counting challenges and enables precise quantity control for text-to-image (T2I) generation. A core contribution is the 'cardinality' map, a novel regression target that accounts for variations in object size and spatial distribution. Leveraging representation alignment and a hybrid strong-weak supervision scheme, YOLO-Count bridges the gap between open-vocabulary counting and T2I generation control. Its fully differentiable architecture supports gradient-based optimization, enabling accurate object count estimation and fine-grained quantity guidance for generative models. Extensive experiments demonstrate that YOLO-Count achieves state-of-the-art counting accuracy while providing robust and effective quantity control for T2I systems.
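The key property the abstract relies on, that a differentiable counter lets a count error backpropagate into the generator, can be sketched in miniature. Everything below is a hypothetical stand-in: a sigmoid activation map plays the role of the cardinality map, a flat list of values plays the role of an image, and the gradient is worked out by hand via the chain rule. This is illustrative only, not YOLO-Count's actual architecture.

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def count_and_grad(pixels, target):
    """Toy differentiable counting: the 'count' is the sum of a sigmoid
    activation map (a stand-in for a cardinality map). Returns the count,
    the squared-error loss against the target count, and the loss gradient
    w.r.t. each pixel, derived analytically by the chain rule."""
    acts = [sigmoid(p) for p in pixels]        # stand-in cardinality map
    count = sum(acts)                          # predicted object count
    loss = (count - target) ** 2               # quantity-control objective
    # d loss / d pixel_i = 2*(count - target) * sigmoid'(pixel_i)
    grad = [2.0 * (count - target) * a * (1.0 - a) for a in acts]
    return count, loss, grad

pixels = [0.0] * 8                             # toy "image" of 8 values
count, loss, grad = count_and_grad(pixels, target=5.0)
# count = 8 * sigmoid(0) = 4.0, so loss = (4 - 5)^2 = 1.0, and every
# gradient entry is negative: a descent step raises the predicted count.
```

In an actual T2I pipeline this gradient would flow into the diffusion model's latents (or guidance signal) via autograd rather than a hand-derived rule, steering generation toward the requested object quantity.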