AI Summary
In video generation, long-sequence diffusion models often suffer from motion repetition and deceleration due to frequency mismatch in positional encoding. This work first identifies that the dominant intrinsic frequency of positional encoding governs temporal extrapolation behavior. Building on this insight, we propose a training-free inference-time frequency scaling method: reducing this dominant frequency enables high-fidelity sequence-length extrapolation without retraining. We further introduce a frequency-domain-driven positional encoding modulation scheme coupled with lightweight fine-tuning, enhancing dynamic fidelity while preserving motion consistency and diversity. Evaluated on state-of-the-art video diffusion models, our approach achieves 2× zero-shot temporal extrapolation and, with minimal fine-tuning, supports up to 3× extrapolation. It effectively suppresses temporal artifacts (e.g., looping and slowdown) and retains high-fidelity motion details. The method establishes an efficient, general-purpose paradigm for temporal extension in long-video generation.
Abstract
Recent advancements in video generation have enabled models to synthesize high-quality, minute-long videos. However, generating even longer videos with temporal coherence remains a major challenge, and existing length extrapolation methods lead to temporal repetition or motion deceleration. In this work, we systematically analyze the role of frequency components in positional embeddings and identify an intrinsic frequency that primarily governs extrapolation behavior. Based on this insight, we propose RIFLEx, a minimal yet effective approach that reduces the intrinsic frequency to suppress repetition while preserving motion consistency, without requiring any additional modifications. RIFLEx offers a true free lunch, achieving high-quality 2× extrapolation on state-of-the-art video diffusion transformers in a completely training-free manner. Moreover, it enhances quality and enables 3× extrapolation by minimal fine-tuning without long videos. Project page and code: https://riflex-video.github.io/
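The core mechanism described above, lowering one dominant frequency component of a RoPE-style positional embedding at inference time so that the extended sequence stays within a single period, can be sketched as follows. This is an illustrative sketch, not the exact RIFLEx formulation: the function name `scaled_rope_freqs` and the rule used to pick the "intrinsic" component are assumptions made for demonstration.

```python
import numpy as np

def scaled_rope_freqs(dim, train_len, k=2, base=10000.0):
    """Illustrative sketch of inference-time frequency scaling for
    RoPE-style positional encodings (not the exact RIFLEx rule)."""
    # Standard RoPE frequency schedule: theta_i = base^(-2i/dim).
    freqs = base ** (-np.arange(0, dim // 2) * 2.0 / dim)
    periods = 2 * np.pi / freqs
    # Assumption for this sketch: treat the first (highest-frequency)
    # component whose period exceeds the training length as the
    # "intrinsic" frequency governing extrapolation.
    intrinsic = int(np.argmax(periods > train_len))
    # Reduce that frequency by the extrapolation factor k, so the
    # extended sequence of length k * train_len still fits in one period.
    freqs[intrinsic] /= k
    return freqs
```

All other frequency components are left untouched, which is why the approach is training-free: only a single scalar in the positional-embedding schedule changes at inference time.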