🤖 AI Summary
Current text-to-video generation models lack fine-grained temporal control, making it difficult to specify the exact onset time of visual elements in generated videos. To address this, we propose a training-free cross-attention modulation method that dynamically refines the temporal alignment between visual concepts and textual prompts in diffusion-based video generation. Grounded in three principles—correlation, energy, and entropy—our approach operates solely via inference-time manipulation of cross-attention maps, requiring no additional annotations, model fine-tuning, or architectural modifications. It enables multi-object temporal reordering and synchronized audio-visual generation. Experiments demonstrate substantial improvements in action-timing accuracy and temporal consistency, while preserving high visual fidelity and diversity. To our knowledge, this is the first method to achieve fine-grained, temporally controllable video generation without model retraining.
📝 Abstract
Recent advances in generative video models have enabled the creation of high-quality videos from natural language prompts. However, these models lack fine-grained temporal control: users cannot specify when particular visual elements should appear within a generated sequence. In this work, we introduce TempoControl, a method that temporally aligns visual concepts during inference, without requiring retraining or additional supervision. TempoControl utilizes cross-attention maps, a key component of text-to-video diffusion models, to guide the timing of concepts through a novel optimization approach. Our method steers attention using three complementary principles: aligning its temporal shape with a control signal (via correlation), amplifying it where visibility is needed (via energy), and maintaining spatial focus (via entropy). TempoControl allows precise control over timing while preserving video quality and diversity. We demonstrate its effectiveness across a range of video generation applications, including temporal reordering for single and multiple objects, as well as action- and audio-aligned generation.
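To make the three guidance principles concrete, here is a minimal, hypothetical sketch of the kind of loss they describe, written in NumPy over a toy cross-attention tensor. The function name, weighting coefficients, and exact term definitions are illustrative assumptions, not the paper's implementation; the point is only how correlation, energy, and entropy could each read off a cross-attention map for one concept token.

```python
import numpy as np

def tempo_loss(attn, signal, lam_energy=0.1, lam_entropy=0.01):
    """Toy guidance loss for one concept's cross-attention map (illustrative only).

    attn   : (T, H, W) non-negative attention over T frames
    signal : (T,) desired temporal visibility, e.g. 1 when the concept
             should be visible and 0 otherwise
    The weights lam_energy / lam_entropy are placeholder assumptions.
    """
    T = attn.shape[0]
    per_frame = attn.reshape(T, -1)        # flatten spatial dimensions
    energy_t = per_frame.mean(axis=1)      # temporal "shape" of the attention

    # (1) correlation: match the attention's temporal shape to the signal
    e = energy_t - energy_t.mean()
    s = signal - signal.mean()
    corr = (e @ s) / (np.linalg.norm(e) * np.linalg.norm(s) + 1e-8)

    # (2) energy: amplify attention in frames where visibility is needed
    energy = energy_t[signal > 0].mean() if (signal > 0).any() else 0.0

    # (3) entropy: keep each frame's attention spatially focused
    p = per_frame / (per_frame.sum(axis=1, keepdims=True) + 1e-8)
    entropy = -(p * np.log(p + 1e-8)).sum(axis=1).mean()

    # lower is better: maximize correlation and energy, minimize entropy
    return -corr - lam_energy * energy + lam_entropy * entropy
```

In a diffusion pipeline such a loss would be differentiated with respect to the latents at each denoising step; here it only illustrates that an attention map peaking in the requested frames scores lower than a misaligned one.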