🤖 AI Summary
Current text-to-video (T2V) models systematically fail to adhere to basic numerical constraints, such as generating exactly 1–9 objects, with accuracy consistently below 12%. Method: We introduce T2VCountBench, the first dedicated benchmark for evaluating counting capability in T2V generation, covering multilingual, multi-style, and long-duration scenarios; we further design controllable and task-decomposition prompting strategies, and propose a multidimensional human evaluation framework for fine-grained quantification of numerical adherence. Contribution/Results: Ablation studies reveal that existing prompting optimizations, including decomposition, style control, and temporal adjustment, fail to meaningfully improve counting performance, pointing to a fundamental limitation of current T2V model architectures. This work establishes a new benchmark, evaluation methodology, and conceptual understanding for advancing controllable T2V generation.
📝 Abstract
Generative models have driven significant progress in a variety of AI tasks, including text-to-video generation, where models like Video LDM and Stable Video Diffusion can produce realistic, movie-level videos from textual instructions. Despite these advances, current text-to-video models still face fundamental challenges in reliably following human commands, particularly in adhering to simple numerical constraints. In this work, we present T2VCountBench, a specialized benchmark aimed at evaluating the counting capability of state-of-the-art text-to-video models as of 2025. Our benchmark employs rigorous human evaluations to measure the number of generated objects and spans a diverse range of generators, including both open-source and commercial models. Extensive experiments reveal that all existing models struggle with basic numerical tasks, almost always failing to produce the requested object count even when it is 9 or fewer. Furthermore, our comprehensive ablation studies explore how factors like video style, temporal dynamics, and multilingual inputs may influence counting performance. We also explore prompt refinement techniques and demonstrate that decomposing the task into smaller subtasks does not easily alleviate these limitations. Our findings highlight important challenges in current text-to-video generation and provide insights for future research aimed at improving adherence to basic numerical constraints.