Infinity-RoPE: Action-Controllable Infinite Video Generation Emerges From Autoregressive Self-Rollout

πŸ“… 2025-11-25
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses three key limitations of autoregressive video diffusion models: restricted temporal extent, delayed action-control responsiveness in long sequences, and the inability to generate discontinuous, cinematic scene transitions in a single pass. To overcome these, the authors propose three training-free mechanisms: Block-Relativistic RoPE, KV Flush, and RoPE Cut. Block-Relativistic RoPE introduces block-level relative spatiotemporal positional encoding to break the temporal length limitation inherent in standard 3D-RoPE. KV Flush dynamically refreshes the key-value cache to ensure immediate prompt responsiveness, while RoPE Cut truncates RoPE coordinates to enable abrupt, shot-to-shot transitions. Collectively, these techniques enable, for the first time, continuous video generation with theoretically unlimited duration, fine-grained action controllability, and seamless multi-shot editing, all within a single inference pass. On VBench, the approach comprehensively outperforms state-of-the-art methods, delivering significant improvements in temporal coherence, semantic controllability, and visual fidelity for long videos.
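The RoPE Cut idea, truncating temporal coordinates to force a shot boundary, can be illustrated with a small sketch. This is a hypothetical illustration, not the paper's implementation: the function name `rope_cut` and the `cut_gap` parameter are assumptions, and the idea is simply that the next block's temporal positions start a large gap after the previous shot, so attention sees the distance of a hard cut rather than a smooth continuation.

```python
def rope_cut(prev_positions: list[int], shot_len: int, cut_gap: int) -> list[int]:
    """Hypothetical sketch of RoPE Cut: start the next shot's temporal RoPE
    positions `cut_gap` frames after the previous shot's last position, so the
    model treats the new block as a discontinuous scene change.
    """
    start = prev_positions[-1] + cut_gap
    return [start + k for k in range(shot_len)]

# A 3-frame shot placed 50 positions after a shot ending at position 3:
rope_cut([0, 1, 2, 3], shot_len=3, cut_gap=50)  # → [53, 54, 55]
```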

πŸ“ Abstract
Current autoregressive video diffusion models are constrained by three core bottlenecks: (i) the finite temporal horizon imposed by the base model's 3D Rotary Positional Embedding (3D-RoPE), (ii) slow prompt responsiveness in maintaining fine-grained action control during long-form rollouts, and (iii) the inability to realize discontinuous cinematic transitions within a single generation stream. We introduce ∞-RoPE, a unified inference-time framework that addresses all three limitations through three interconnected components: Block-Relativistic RoPE, KV Flush, and RoPE Cut. Block-Relativistic RoPE reformulates temporal encoding as a moving local reference frame, where each newly generated latent block is rotated relative to the base model's maximum frame horizon while earlier blocks are rotated backward to preserve relative temporal geometry. This relativistic formulation eliminates fixed temporal positions, enabling continuous video generation far beyond the base positional limits. To obtain fine-grained action control without re-encoding, KV Flush renews the KV cache by retaining only two latent frames, the global sink and the last generated latent frame, thereby ensuring immediate prompt responsiveness. Finally, RoPE Cut introduces controlled discontinuities in temporal RoPE coordinates, enabling multi-cut scene transitions within a single continuous rollout. Together, these components establish ∞-RoPE as a training-free foundation for infinite-horizon, controllable, and cinematic video diffusion. Comprehensive experiments show that ∞-RoPE consistently surpasses previous autoregressive models in overall VBench scores.
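The moving-local-reference-frame idea behind Block-Relativistic RoPE can be sketched as a position-assignment rule. The sketch below is an assumption-laden illustration rather than the paper's code: the function name `sliding_rope_positions` and the `window_len` parameter are invented for this example, and the point is only that the newest cached frame is pinned at the base model's last valid position while earlier frames are shifted backward by their relative distance, so absolute video time never exceeds the trained horizon.

```python
def sliding_rope_positions(window_len: int, max_frames: int) -> list[int]:
    """Hypothetical sketch of a moving local reference frame for temporal RoPE.

    The newest cached latent frame is pinned at the base model's last trained
    position (max_frames - 1); each earlier frame in the attention window is
    rotated backward by its distance from the newest frame. Because only
    relative offsets are encoded, generation can continue indefinitely.
    """
    assert window_len <= max_frames, "window must fit inside the trained horizon"
    newest = max_frames - 1
    # Oldest cached frame first, newest last.
    return [newest - k for k in reversed(range(window_len))]

# A 4-frame window under a 21-frame trained horizon, regardless of how long
# the video has already run:
sliding_rope_positions(4, 21)  # → [17, 18, 19, 20]
```

The design point is that the same position list is reused for every new block, so no frame ever receives a coordinate outside what the base model saw during training.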
Problem

Research questions and friction points this paper is trying to address.

Overcoming finite temporal horizon limits in autoregressive video diffusion models
Improving prompt responsiveness for fine-grained action control in long videos
Enabling discontinuous cinematic transitions within single generation streams
Innovation

Methods, ideas, or system contributions that make the work stand out.

Block-Relativistic RoPE enables infinite video generation
KV Flush maintains prompt responsiveness via cache renewal
RoPE Cut introduces controlled cinematic scene transitions
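The KV Flush mechanism described above, retaining only the global sink and the last generated latent frame, reduces to a simple cache-pruning rule. This is a minimal sketch under assumed data structures: `cache` here is just a list of per-frame key/value entries (oldest first), which is an illustration rather than the paper's actual cache layout.

```python
def kv_flush(cache: list) -> list:
    """Hypothetical sketch of KV Flush: on a prompt switch, drop all cached
    key/value entries except the global sink (first latent frame) and the
    most recent latent frame, so the next block responds immediately to the
    new prompt instead of being anchored to stale context.

    `cache` is a list of per-frame (key, value) entries, oldest first.
    """
    if len(cache) <= 2:
        return list(cache)
    return [cache[0], cache[-1]]  # sink frame + last generated frame

# Flushing a 4-frame cache keeps only the sink and the newest frame:
kv_flush(["f0", "f1", "f2", "f3"])  # → ["f0", "f3"]
```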