AI Summary
Existing video frame interpolation methods often suffer from motion drift, directional ambiguity, and boundary misalignment due to unidirectional generation, and they lack temporal consistency over long sequences. This work proposes a bidirectionally cycle-consistent video diffusion interpolation framework that employs learnable directional tokens to guide a shared backbone network, jointly optimizing forward synthesis and backward reconstruction within a unified architecture to achieve logically invertible motion trajectories. During training, bidirectional cycle consistency is enforced as a regularizer, complemented by a curriculum learning strategy that progressively optimizes from short to long sequences. At inference, the model requires only a single forward pass. The proposed method significantly outperforms strong baselines on 37- and 73-frame interpolation tasks, achieving state-of-the-art performance in image quality, motion smoothness, and dynamic control without incurring additional computational overhead.
Abstract
Video frame interpolation aims to synthesize realistic intermediate frames between given endpoints while adhering to specific motion semantics. While recent generative models have improved visual fidelity, they predominantly operate in a unidirectional manner, lacking mechanisms to self-verify temporal consistency. This often leads to motion drift, directional ambiguity, and boundary misalignment, especially in long-range sequences. Inspired by the principle of temporal cycle-consistency in self-supervised learning, we propose a novel bidirectional framework that enforces symmetry between forward and backward generation trajectories. Our approach introduces learnable directional tokens to explicitly condition a shared backbone on temporal orientation, enabling the model to jointly optimize forward synthesis and backward reconstruction within a single unified architecture. This cycle-consistent supervision acts as a powerful regularizer, ensuring that generated motion paths are logically reversible. Furthermore, we employ a curriculum learning strategy that progressively trains the model from short to long sequences, stabilizing dynamics across varying durations. Crucially, our cyclic constraints are applied only during training; inference requires a single forward pass, maintaining the high efficiency of the base model. Extensive experiments show that our method achieves state-of-the-art performance in imaging quality, motion smoothness, and dynamic control on both 37-frame and 73-frame tasks, outperforming strong baselines while incurring no additional computational overhead.
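The core training idea described above — a shared backbone conditioned on a temporal-orientation token, with a cycle-consistency penalty tying forward synthesis to backward reconstruction — can be illustrated with a deliberately tiny numeric sketch. Everything below is an assumption for illustration only: the paper's actual backbone is a video diffusion model, whereas here `backbone`, `rollout`, and the linear weights `W` are hypothetical toy stand-ins that just show how the two learnable direction tokens steer one shared network and how the cycle loss is formed.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8  # toy frame dimensionality (illustrative only)

# One shared set of weights: the same network serves both directions.
W = rng.normal(scale=0.1, size=(D, 2 * D))

# Learnable directional tokens; in training these would be optimized jointly
# with the backbone so that each token encodes a temporal orientation.
tok_fwd = rng.normal(size=D)
tok_bwd = rng.normal(size=D)

def backbone(frame, token):
    """Shared backbone, conditioned on direction by concatenating the token."""
    return W @ np.concatenate([frame, token])

def rollout(frame, token, steps):
    """Autoregressively generate `steps` frames in the given direction."""
    frames = []
    for _ in range(steps):
        frame = backbone(frame, token)
        frames.append(frame)
    return frames

x0 = rng.normal(size=D)                       # boundary frame
fwd_traj = rollout(x0, tok_fwd, steps=4)      # forward synthesis

# Cycle-consistency regularizer: running the backward direction over the
# forward endpoint should recover the starting frame; the mismatch below is
# what training penalizes. At inference this backward pass is skipped.
recon = rollout(fwd_traj[-1], tok_bwd, steps=4)[-1]
cycle_loss = float(np.mean((recon - x0) ** 2))
print(f"cycle-consistency loss: {cycle_loss:.4f}")
```

A curriculum schedule in this sketch would simply grow `steps` over training (short rollouts first, then longer ones), matching the short-to-long progression the abstract describes; the backward pass exists only inside the loss, so inference remains a single forward rollout.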