🤖 AI Summary
Existing text-to-video diffusion models struggle to accurately generate the specified number of objects described in prompts. This work proposes NUMINA, a novel framework that achieves numerical alignment without requiring additional training. NUMINA identifies discrepancies between the input prompt and the generated spatial layout, then extracts countable latent layouts using self-attention and cross-attention heads. Through a conservative optimization step, it modulates cross-attention to guide video regeneration. The method significantly improves object counting accuracy while preserving temporal consistency. On CountBench, it boosts counting accuracy by 7.4%, 4.9%, and 5.5% for Wan2.1-1.3B, 5B, and 14B models, respectively, and concurrently enhances CLIP-based semantic alignment.
📝 Abstract
Text-to-video diffusion models have enabled open-ended video synthesis, but they often struggle to generate the correct number of objects specified in a prompt. We introduce NUMINA, a training-free identify-then-guide framework for improved numerical alignment. NUMINA identifies prompt-layout inconsistencies by selecting discriminative self- and cross-attention heads to derive a countable latent layout. It then refines this layout conservatively and modulates cross-attention to guide regeneration. On the newly introduced CountBench benchmark, NUMINA improves counting accuracy by up to 7.4% on Wan2.1-1.3B, and by 4.9% and 5.5% on the 5B and 14B models, respectively. It also improves CLIP-based semantic alignment while preserving temporal consistency. These results demonstrate that structural guidance complements seed search and prompt enhancement, offering a practical path toward count-accurate text-to-video diffusion. The code is available at https://github.com/H-EmbodVis/NUMINA.
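The identify-then-guide idea can be illustrated with a toy sketch: threshold a cross-attention map for the counted noun into a binary layout, count its regions, and, if the count falls short of the prompt, conservatively boost the attention logits and re-check. This is a minimal illustration only; the function names, the sigmoid attention proxy, and the uniform-boost update are assumptions for exposition, not NUMINA's actual head-selection or optimization procedure.

```python
import numpy as np

def count_blobs(mask):
    """Count 4-connected components in a binary mask (toy stand-in for
    counting object regions in a latent layout)."""
    mask = mask.copy()
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                count += 1
                stack = [(i, j)]
                while stack:  # flood-fill the component
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x]:
                        mask[y, x] = False
                        stack += [(y + 1, x), (y - 1, x),
                                  (y, x + 1), (y, x - 1)]
    return count

def identify_then_guide(attn_logits, target_count,
                        thresh=0.5, step=0.5, max_iters=10):
    """Toy identify-then-guide loop (illustrative, not the paper's
    algorithm): while the thresholded attention map yields fewer regions
    than the prompt specifies, apply a small uniform boost to the logits."""
    logits = attn_logits.copy()
    n = 0
    for _ in range(max_iters):
        attn = 1.0 / (1.0 + np.exp(-logits))   # per-pixel attention strength
        n = count_blobs(attn > thresh)         # "identify": count layout regions
        if n == target_count:
            break
        logits = logits + step                 # "guide": conservative adjustment
    return logits, n
```

In this sketch, a faint attention response just below threshold (a "missing" object) is recovered after a couple of boosts, while the background stays below threshold; the real method instead modulates per-head cross-attention during regeneration.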