Stable Video Infinity: Infinite-Length Video Generation with Error Recycling

📅 2025-10-10
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing long-video generation methods face two key bottlenecks: (1) reliance on single-prompt extrapolation, which leads to scene homogeneity and motion repetition; and (2) an unrealistic training assumption of clean data, while autoregressive inference conditions on noisy self-generated outputs, causing error accumulation and quality degradation. This paper proposes an infinite-length video generation framework built on the Diffusion Transformer, centered on a closed-loop error-recycling fine-tuning mechanism. It explicitly bridges the train-inference gap by computing residual errors, performing one-step bidirectional integral prediction, and dynamically banking errors across discretized timesteps for recurrent injection into training. The method achieves high temporal consistency, natural scene transitions, and controllable narrative flow. It sets new state-of-the-art results across multiple benchmarks, supports generation from seconds to arbitrarily long durations, accommodates multimodal conditioning (e.g., audio, skeletal poses, text), and incurs no additional inference overhead.

๐Ÿ“ Abstract
We propose Stable Video Infinity (SVI) that is able to generate infinite-length videos with high temporal consistency, plausible scene transitions, and controllable streaming storylines. While existing long-video methods attempt to mitigate accumulated errors via handcrafted anti-drifting (e.g., modified noise scheduler, frame anchoring), they remain limited to single-prompt extrapolation, producing homogeneous scenes with repetitive motions. We identify that the fundamental challenge extends beyond error accumulation to a critical discrepancy between the training assumption (seeing clean data) and the test-time autoregressive reality (conditioning on self-generated, error-prone outputs). To bridge this hypothesis gap, SVI incorporates Error-Recycling Fine-Tuning, a new type of efficient training that recycles the Diffusion Transformer (DiT)'s self-generated errors into supervisory prompts, thereby encouraging DiT to actively identify and correct its own errors. This is achieved by injecting, collecting, and banking errors through closed-loop recycling, autoregressively learning from error-injected feedback. Specifically, we (i) inject historical errors made by DiT to intervene on clean inputs, simulating error-accumulated trajectories in flow matching; (ii) efficiently approximate predictions with one-step bidirectional integration and calculate errors with residuals; (iii) dynamically bank errors into replay memory across discretized timesteps, which are resampled for new input. SVI is able to scale videos from seconds to infinite durations with no additional inference cost, while remaining compatible with diverse conditions (e.g., audio, skeleton, and text streams). We evaluate SVI on three benchmarks, including consistent, creative, and conditional settings, thoroughly verifying its versatility and state-of-the-art performance.
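Steps (i)-(iii) of the abstract can be sketched as a toy closed loop. Everything below is an illustrative assumption rather than the authors' implementation: errors and latents are scalars, `velocity_fn` stands in for the DiT's flow-matching velocity estimate, and `ErrorBank` is a hypothetical replay memory bucketed by discretized timestep.

```python
import random
from collections import defaultdict


class ErrorBank:
    """Hypothetical replay memory of residual errors, bucketed by timestep."""

    def __init__(self, num_bins=10, max_per_bin=100):
        self.num_bins = num_bins
        self.max_per_bin = max_per_bin
        self.bins = defaultdict(list)

    def _bin(self, t):
        # Map continuous t in [0, 1] to a discrete bucket index.
        return min(int(t * self.num_bins), self.num_bins - 1)

    def push(self, t, error):
        bucket = self.bins[self._bin(t)]
        bucket.append(error)
        if len(bucket) > self.max_per_bin:
            bucket.pop(0)  # drop the oldest banked error

    def sample(self, t):
        bucket = self.bins[self._bin(t)]
        return random.choice(bucket) if bucket else 0.0


def error_recycled_step(x_clean, t, velocity_fn, bank):
    """One toy fine-tuning step with closed-loop error recycling (scalars)."""
    # (i) inject a banked historical error to intervene on the clean input,
    # simulating an error-accumulated trajectory.
    x_err = x_clean + bank.sample(t)
    v = velocity_fn(x_err, t)
    # (ii) one-step bidirectional integration: a single Euler step toward
    # each endpoint of the flow-matching trajectory.
    x0_pred = x_err - t * v          # backward, toward the noise endpoint
    x1_pred = x_err + (1.0 - t) * v  # forward, toward the data endpoint
    # Residual against the clean target, then (iii) banked for resampling.
    residual = x1_pred - x_clean
    bank.push(t, residual)
    return x0_pred, x1_pred, residual
```

With a zero velocity stand-in and one banked error of 0.5 at t = 0.3, the step injects that error, predicts 1.5 at both endpoints from a clean input of 1.0, and banks the 0.5 residual for reuse.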
Problem

Research questions and friction points this paper is trying to address.

Generating infinite-length videos with high temporal consistency
Addressing error accumulation in autoregressive video generation
Bridging training-testing gap using self-generated error recycling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Error-Recycling Fine-Tuning recycles self-generated errors for training
Closed-loop recycling injects and banks errors for correction learning
One-step bidirectional integration efficiently approximates prediction errors
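The train-inference discrepancy these contributions target stems from autoregressive rollout: at test time each chunk is conditioned on the model's own previous output. A toy scalar sketch (with `drift` as a hypothetical stand-in for an imperfect sampler) shows how a small per-step bias compounds:

```python
def autoregressive_rollout(first_chunk, generate_chunk, num_chunks):
    """Toy autoregressive rollout: each new chunk is conditioned on the
    previous self-generated one, so any error it carries propagates."""
    chunks = [first_chunk]
    for _ in range(num_chunks - 1):
        chunks.append(generate_chunk(chunks[-1]))
    return chunks


# Hypothetical imperfect sampler: copies its condition plus a small bias.
drift = lambda prev: prev + 0.01
```

Training only on clean conditions never exposes the model to inputs like `chunks[1:]`; error-recycling fine-tuning closes that gap by injecting banked self-generated errors into the training inputs instead.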