🤖 AI Summary
Existing video editing methods rely heavily on large-scale pretraining and lack fine-grained controllability; first-frame guidance approaches struggle to maintain temporal consistency across subsequent frames. To address this, we propose a mask-aware LoRA fine-tuning framework—the first to enable controllable video editing under first-frame guidance. Our method introduces a mask-driven dual-source learning mechanism that jointly models the spatiotemporal structure of the input video and the appearance prior from a reference image. Additionally, we incorporate multi-view or multi-state references as visual anchors to enhance inter-frame consistency. Built upon image-to-video diffusion models, our approach performs lightweight low-rank adaptation (LoRA) combined with spatial mask modulation, without altering the original architecture. Extensive experiments demonstrate significant improvements over state-of-the-art methods across multiple benchmarks, achieving superior editing accuracy, temporal coherence, and region-level controllability.
📝 Abstract
Video editing using diffusion models has achieved remarkable results in generating high-quality edits. However, current methods often rely on large-scale pretraining, limiting flexibility for specific edits. First-frame-guided editing provides control over the first frame but lacks flexibility over subsequent frames. To address this, we propose a mask-based LoRA (Low-Rank Adaptation) tuning method that adapts pretrained Image-to-Video (I2V) models for flexible video editing. Our approach preserves background regions while enabling controllable edit propagation, offering efficient and adaptable video editing without altering the model architecture. To better steer this process, we incorporate additional references, such as alternate viewpoints or representative scene states, which serve as visual anchors for how content should unfold. The model must learn from two distinct sources: the input video provides spatial structure and motion cues, while reference images offer appearance guidance. A spatial mask enables region-specific learning by dynamically modulating what the model attends to, ensuring that each region draws from the appropriate source. Experimental results show that our method achieves superior video editing performance compared to state-of-the-art methods.
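The mask-gated LoRA mechanism described in the abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name, the rank, the token layout, and the elementwise way the spatial mask gates the low-rank update are all choices made here for clarity, written in plain NumPy rather than a diffusion framework.

```python
import numpy as np

def lora_forward(x, W, A, B, mask, alpha=1.0):
    """Frozen linear layer plus a spatially mask-gated LoRA update (illustrative).

    x:    (num_tokens, d_in)  token features (e.g., flattened video latents)
    W:    (d_in, d_out)       frozen pretrained weight of the I2V model
    A:    (d_in, r)           trainable LoRA down-projection
    B:    (r, d_out)          trainable LoRA up-projection
    mask: (num_tokens, 1)     1 inside the editable region, 0 in background
    """
    base = x @ W                 # frozen path: original model behavior
    delta = (x @ A) @ B          # low-rank learned update
    # The spatial mask modulates where the adapted behavior applies:
    # edited-region tokens draw on the LoRA update (reference appearance),
    # background tokens keep the frozen model's output (input video).
    return base + alpha * mask * delta

# Toy example: 6 tokens, the first 3 fall inside the edit mask.
rng = np.random.default_rng(0)
tokens, d_in, d_out, r = 6, 8, 8, 2
x = rng.standard_normal((tokens, d_in))
W = rng.standard_normal((d_in, d_out))
A = rng.standard_normal((d_in, r)) * 0.1
B = rng.standard_normal((r, d_out)) * 0.1
mask = np.array([[1.0], [1.0], [1.0], [0.0], [0.0], [0.0]])

out = lora_forward(x, W, A, B, mask)
# Background tokens (mask == 0) are untouched by the LoRA update.
print(np.allclose(out[3:], x[3:] @ W))  # True
```

Because only `A` and `B` are trained while `W` stays frozen, the adaptation is lightweight and the architecture is unchanged; the mask makes background preservation exact in this sketch, since the update contributes nothing where the mask is zero.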