Towards Holistic Modeling for Video Frame Interpolation with Auto-regressive Diffusion Transformers

📅 2026-01-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes LDF-VFI, a video frame interpolation framework that departs from conventional frame-centric processing of short clips, which often suffers from temporal inconsistency and motion artifacts. LDF-VFI adopts a holistic, video-level modeling paradigm, using an auto-regressive diffusion Transformer to model the entire sequence and thereby preserve long-range temporal coherence. To mitigate error accumulation, the method employs a skip-concatenate sampling strategy and integrates sparse local attention, tiled VAE encoding, and a multi-scale conditional VAE decoder, enabling inference at arbitrary resolutions, including 4K, without retraining. On long-sequence benchmarks, LDF-VFI achieves state-of-the-art performance, improving both per-frame quality and temporal consistency, with particularly notable gains in scenes involving large motion.

📝 Abstract
Existing video frame interpolation (VFI) methods often adopt a frame-centric approach, processing videos as independent short segments (e.g., triplets), which leads to temporal inconsistencies and motion artifacts. To overcome this, we propose a holistic, video-centric paradigm named **L**ocal **D**iffusion **F**orcing for **V**ideo **F**rame **I**nterpolation (LDF-VFI). Our framework is built upon an auto-regressive diffusion transformer that models the entire video sequence to ensure long-range temporal coherence. To mitigate error accumulation inherent in auto-regressive generation, we introduce a novel skip-concatenate sampling strategy that effectively maintains temporal stability. Furthermore, LDF-VFI incorporates sparse local attention and tiled VAE encoding, a combination that not only enables efficient processing of long sequences but also allows generalization to arbitrary spatial resolutions (e.g., 4K) at inference without retraining. An enhanced conditional VAE decoder, which leverages multi-scale features from the input video, further improves reconstruction fidelity. Empirically, LDF-VFI achieves state-of-the-art performance on challenging long-sequence benchmarks, demonstrating superior per-frame quality and temporal consistency, especially in scenes with large motion. The source code is available at https://github.com/xypeng9903/LDF-VFI.
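The abstract credits tiled VAE encoding with enabling inference at arbitrary resolutions without retraining. The paper's exact tiling scheme is not given here; the sketch below illustrates only the general technique — encode overlapping spatial tiles independently, then feather-blend the overlaps to hide seams. The `encode_tile` function is a pointwise placeholder standing in for a real VAE encoder, and all tile/overlap sizes are illustrative assumptions.

```python
import numpy as np

def encode_tile(tile):
    # Placeholder for a real VAE encoder. A pointwise map is used here so
    # that the tiled result can be checked against a single global pass.
    return tile * 0.5

def tiled_encode(img, tile=64, overlap=16, fn=encode_tile):
    """Encode a (H, W) array in overlapping tiles and feather-blend seams.

    Assumes H >= tile and W >= tile (illustrative sketch only).
    """
    H, W = img.shape
    out = np.zeros((H, W), dtype=np.float64)
    weight = np.zeros((H, W), dtype=np.float64)
    step = tile - overlap
    # Tile origins; append a final tile flush with each border if needed.
    ys = list(range(0, H - tile + 1, step))
    xs = list(range(0, W - tile + 1, step))
    if ys[-1] + tile < H:
        ys.append(H - tile)
    if xs[-1] + tile < W:
        xs.append(W - tile)
    # Triangular 1D ramp -> 2D blending mask (weights taper toward edges).
    ramp = np.minimum(np.arange(1, tile + 1), np.arange(tile, 0, -1))
    mask = np.minimum.outer(ramp, ramp).astype(np.float64)
    for y in ys:
        for x in xs:
            out[y:y + tile, x:x + tile] += fn(img[y:y + tile, x:x + tile]) * mask
            weight[y:y + tile, x:x + tile] += mask
    return out / weight
```

Because the placeholder encoder is pointwise, the tiled pass reproduces the global pass exactly; with a real convolutional VAE, the overlap region is what keeps tile borders consistent.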
Problem

Research questions and friction points this paper is trying to address.

video frame interpolation
temporal inconsistency
motion artifacts
long-range temporal coherence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Auto-regressive Diffusion Transformer
Video Frame Interpolation
Temporal Coherence
Skip-Concatenate Sampling
Tiled VAE Encoding