🤖 AI Summary
This work explores how to achieve video frame interpolation using only an image foundation model with spatial editing capabilities, without introducing explicit temporal modeling or motion estimation modules. By applying parameter-efficient fine-tuning via Low-Rank Adaptation (LoRA) to the pre-trained Qwen-Image-Edit model, the method activates its latent temporal reasoning ability with merely 64–256 training samples. This study is the first to demonstrate that static image editing models inherently possess transferable temporal understanding, enabling cross-modal generalization from spatial editing to video interpolation. Notably, this approach achieves data-efficient video synthesis without any architectural modifications, offering a novel paradigm particularly suitable for resource-constrained scenarios.
📝 Abstract
Pre-trained image editing models exhibit strong spatial reasoning and object-aware transformation capabilities acquired from billions of image-text pairs, yet they possess no explicit temporal modeling. This paper demonstrates that these spatial priors can be repurposed to unlock temporal synthesis capabilities through minimal adaptation, without introducing any video-specific architecture or motion estimation modules. We show that a large image editing model (Qwen-Image-Edit), originally designed solely for static instruction-based edits, can be adapted for Video Frame Interpolation (VFI) using only 64–256 training samples via Low-Rank Adaptation (LoRA). Our core contribution is revealing that the model's inherent understanding of "how objects transform" in static scenes contains latent temporal reasoning that can be activated through few-shot fine-tuning. While the baseline model completely fails at producing coherent intermediate frames, our parameter-efficient adaptation successfully unlocks its interpolation capability. Rather than competing with task-specific VFI methods trained from scratch on massive datasets, our work establishes that foundation image editing models possess untapped potential for temporal tasks, offering a data-efficient pathway for video synthesis in resource-constrained scenarios. This bridges the gap between image manipulation and video understanding, suggesting that spatial and temporal reasoning may be more intertwined in foundation models than previously recognized.
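To make the parameter-efficiency claim concrete, here is a minimal NumPy sketch of the LoRA update underlying this kind of adaptation: a frozen weight matrix W gets a trainable rank-r residual, W' = W + (alpha/r)·B·A. The dimensions, rank, and scaling below are illustrative assumptions, not the paper's actual Qwen-Image-Edit configuration.

```python
import numpy as np

# Illustrative dimensions (assumptions, not the paper's actual config).
d_out, d_in, r, alpha = 512, 512, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weights
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero-init)

def lora_forward(x):
    """Forward pass: frozen path plus the scaled low-rank residual."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)

# With B zero-initialized, the adapted layer starts identical to the base layer,
# so fine-tuning begins from the pre-trained model's behavior.
assert np.allclose(y, W @ x)

# Trainable parameters: r * (d_in + d_out) instead of the full d_in * d_out.
full_params = d_in * d_out
lora_params = r * (d_in + d_out)
print(f"trainable fraction: {lora_params / full_params:.3%}")
```

Only A and B are updated during fine-tuning; with these toy dimensions the trainable fraction is about 3%, which is what makes adaptation from 64–256 samples plausible without touching the base architecture.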