Edit2Interp: Adapting Image Foundation Models from Spatial Editing to Video Frame Interpolation with Few-Shot Learning

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work explores how to achieve video frame interpolation using only an image foundation model with spatial editing capabilities, without introducing explicit temporal modeling or motion estimation modules. By applying parameter-efficient fine-tuning via Low-Rank Adaptation (LoRA) to the pre-trained Qwen-Image-Edit model, the method activates its latent temporal reasoning ability with merely 64–256 training samples. This study is the first to demonstrate that static image editing models inherently possess transferable temporal understanding, enabling cross-modal generalization from spatial editing to video interpolation. Notably, this approach achieves data-efficient video synthesis without any architectural modifications, offering a novel paradigm particularly suitable for resource-constrained scenarios.

📝 Abstract
Pre-trained image editing models exhibit strong spatial reasoning and object-aware transformation capabilities acquired from billions of image-text pairs, yet they possess no explicit temporal modeling. This paper demonstrates that these spatial priors can be repurposed to unlock temporal synthesis capabilities through minimal adaptation, without introducing any video-specific architecture or motion estimation modules. We show that a large image editing model (Qwen-Image-Edit), originally designed solely for static instruction-based edits, can be adapted for Video Frame Interpolation (VFI) using only 64–256 training samples via Low-Rank Adaptation (LoRA). Our core contribution is revealing that the model's inherent understanding of "how objects transform" in static scenes contains latent temporal reasoning that can be activated through few-shot fine-tuning. While the baseline model completely fails at producing coherent intermediate frames, our parameter-efficient adaptation successfully unlocks its interpolation capability. Rather than competing with task-specific VFI methods trained from scratch on massive datasets, our work establishes that foundation image editing models possess untapped potential for temporal tasks, offering a data-efficient pathway for video synthesis in resource-constrained scenarios. This bridges the gap between image manipulation and video understanding, suggesting that spatial and temporal reasoning may be more intertwined in foundation models than previously recognized.
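The parameter-efficient adaptation the abstract describes can be illustrated with a back-of-envelope sketch of LoRA's low-rank update on a single frozen projection matrix. This is a minimal illustration only: the dimensions, rank, and scaling factor below are hypothetical and are not Qwen-Image-Edit's actual configuration.

```python
import numpy as np

# Hypothetical sizes for illustration (not the real model's config).
d, r = 1024, 8        # hidden size, LoRA rank
alpha = 16            # LoRA scaling factor
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))         # frozen pre-trained weight
A = rng.standard_normal((r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                    # trainable up-projection, zero-init

# LoRA's effective weight: the frozen W plus a scaled low-rank update.
W_eff = W + (alpha / r) * B @ A

# With B initialized to zero, the adapted layer starts identical to
# the base model, so fine-tuning departs smoothly from the prior.
assert np.allclose(W_eff, W)

# Parameter efficiency: only A and B are trained, not W.
full_params = d * d        # updated by full fine-tuning
lora_params = 2 * r * d    # A (r x d) + B (d x r)
ratio = lora_params / full_params  # 2r/d = 0.015625 here
print(f"trainable fraction of this layer: {ratio:.4%}")
```

This trainable fraction (about 1.6% per adapted layer at these illustrative sizes) is what makes fitting the adaptation on only 64–256 samples plausible: few trainable parameters reduce the risk of overfitting a tiny dataset.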
Problem

Research questions and friction points this paper is trying to address.

Video Frame Interpolation
Foundation Models
Few-Shot Learning
Temporal Synthesis
Image Editing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Video Frame Interpolation
Foundation Models
Few-Shot Learning
Low-Rank Adaptation
Spatial-Temporal Reasoning
👥 Authors
Nasrin Rahimi (Codeway AI Research)
Mısra Yavuz (Koç University)
Burak Can Biner (Codeway AI Research)
Yunus Bilge Kurt (Codeway AI Research)
Ahmet Rasim Emirdağı (Codeway AI Research)
Süleyman Aslan (Codeway AI Research)
Görkay Aydemir (Koç University)
M. Akın Yılmaz (Codeway AI Research)
A. Murat Tekalp (Dept. of Electrical & Electronics Engineering, Koç University, Istanbul, Türkiye)