🤖 AI Summary
This work systematically evaluates the numerical reasoning capabilities of text-to-image (T2I) models, focusing on precise object counting, quantifier comprehension (e.g., "few", "zero"), recognition of the zero concept, and fractional expressions (e.g., "half"). To this end, the authors introduce GeckoNum, the first dedicated open-source benchmark for numerical understanding in T2I generation, built from a hierarchical prompt templating strategy, human-annotated verification, and a controlled evaluation protocol for numerical generation. Experimental results reveal that state-of-the-art T2I models reliably generate only up to three objects, that accuracy on core numerical semantics remains below 40% across tasks, and that numerical fidelity degrades significantly as target counts increase. By quantitatively exposing these systemic limitations, GeckoNum has already been adopted by multiple research groups to drive measurable, targeted improvement of numerical reasoning in generative vision-language models.
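The hierarchical prompt templating mentioned above can be illustrated with a minimal sketch. The tier names, entity/attribute choices, and phrasing below are illustrative assumptions, not GeckoNum's actual prompt set:

```python
# Hypothetical sketch of hierarchical prompt templating for numerical
# T2I evaluation. Template wording and tiers are assumptions for
# illustration, not the benchmark's real prompts.

NUMBER_WORDS = {0: "zero", 1: "one", 2: "two", 3: "three", 4: "four", 5: "five"}

def exact_count_prompts(entity, max_count=5, attribute=None):
    """Exact-count tier: 'A photo of three cats.', optionally with an
    attribute modifier ('three red cats')."""
    prompts = []
    for n in range(1, max_count + 1):
        noun = entity if n == 1 else entity + "s"  # naive pluralization
        desc = f"{attribute} {noun}" if attribute else noun
        prompts.append(f"A photo of {NUMBER_WORDS[n]} {desc}.")
    return prompts

def quantifier_prompts(entity, quantifiers=("a few", "many", "no")):
    """Quantifier tier: approximate quantities and the zero concept."""
    return [f"A photo of {q} {entity}s." for q in quantifiers]
```

Pairing each prompt with the intended count (or quantifier range) lets an evaluator score generated images by comparing detected object counts against the target.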
📝 Abstract
Text-to-image generative models are capable of producing high-quality images that often faithfully depict concepts described in natural language. In this work, we comprehensively evaluate a range of text-to-image models on numerical reasoning tasks of varying difficulty, and show that even the most advanced models have only rudimentary numerical skills. Specifically, their ability to correctly generate an exact number of objects in an image is limited to small numbers, is highly dependent on the context in which the number term appears, and deteriorates quickly with each successive number. We also demonstrate that models have a poor understanding of linguistic quantifiers (such as "a few" or "as many as") and of the concept of zero, and that they struggle with more advanced notions such as partial quantities and fractional representations. We bundle prompts, generated images, and human annotations into GeckoNum, a novel benchmark for the evaluation of numerical reasoning.