🤖 AI Summary
Existing generative video models struggle to support fine-grained, script-driven editing of recorded talking-head videos while preserving speaker identity, temporal coherence, and precise lip-sync. This work proposes a diffusion transformer (DiT)-based video-to-video editing framework that supports transcript-level manipulations, such as inserting, deleting, or retiming spoken content, guided by audio conditioning and enhanced through region-aware training. By integrating spatiotemporal inpainting, the method synthesizes natural facial dynamics and lip movements that align accurately with the edited speech. The approach achieves high-fidelity identity preservation and long-range temporal consistency alongside accurate audio-visual synchronization, offering a controllable tool for professional post-production editing of talking-head videos.
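For illustration, here is a minimal PyTorch sketch of how such spatiotemporal inpainting conditioning might be wired up: the edit region and the edited frame span combine into a mask that hides the content the model must regenerate. All names, tensor shapes, and the channel-concatenation scheme are assumptions for the sketch, not details taken from the paper.

```python
# Sketch of spatiotemporal inpainting conditioning for a talking-head edit.
# Everything here (shapes, function names, concatenation layout) is illustrative.
import torch

def build_edit_conditioning(latents, region_mask, edit_frames):
    """Mask out the edited region so the model must inpaint it.

    latents:     (B, C, T, H, W) video latents from a pretrained VAE
    region_mask: (B, 1, 1, H, W) soft mask over the face/mouth region
    edit_frames: (B, 1, T, 1, 1) 1.0 on frames covered by the script edit
    """
    # Spatiotemporal mask: 1 where content must be regenerated.
    mask = region_mask * edit_frames              # broadcasts to (B, 1, T, H, W)
    # Keep unedited content as context; zero out the edit region.
    masked_latents = latents * (1.0 - mask)
    # Channel-concatenate context and mask for the DiT's conditioning input.
    return torch.cat([masked_latents, mask], dim=1)

if __name__ == "__main__":
    B, C, T, H, W = 1, 16, 24, 32, 32
    latents = torch.randn(B, C, T, H, W)
    region = torch.zeros(B, 1, 1, H, W)
    region[..., 16:, 8:24] = 1.0                  # lower-face region
    frames = torch.zeros(B, 1, T, 1, 1)
    frames[:, :, 8:16] = 1.0                      # frames spanned by the edit
    print(build_edit_conditioning(latents, region, frames).shape)
    # torch.Size([1, 17, 24, 32, 32])
```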
📝 Abstract
Current generative video models excel at producing novel content from text and image prompts, but leave a critical gap in editing existing pre-recorded videos, where minor alterations to the spoken script require preserving motion, temporal coherence, speaker identity, and accurate lip synchronization. We introduce EditYourself, a DiT-based framework for audio-driven video-to-video (V2V) editing that enables transcript-based modification of talking-head videos, including the seamless addition, removal, and retiming of visually spoken content. Building on a general-purpose video diffusion model, EditYourself augments its V2V capabilities with audio conditioning and region-aware, edit-focused training extensions. These extensions enable precise lip synchronization and temporally coherent restructuring of existing performances via spatiotemporal inpainting, including the synthesis of realistic human motion in newly added segments, while maintaining visual fidelity and identity consistency over long durations. This work represents a foundational step toward generative video models as practical tools for professional video post-production.
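One plausible reading of the audio-conditioning extension is cross-attention from video latent tokens to speech features inside each DiT block, so the edited audio can drive lip motion. The PyTorch sketch below illustrates that pattern; the block structure, dimensions, and choice of audio encoder are assumptions rather than the paper's actual design.

```python
# Sketch of audio conditioning in a DiT block via cross-attention.
# Module layout and all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class AudioConditionedDiTBlock(nn.Module):
    def __init__(self, dim=512, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(dim) for _ in range(3))
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x, audio_tokens):
        # x:            (B, N, dim) spatiotemporal video latent tokens
        # audio_tokens: (B, M, dim) speech features projected to the model
        #               width (e.g. from a pretrained audio encoder -- assumed)
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x)
        # Video tokens attend to the edited audio to drive lip motion.
        x = x + self.cross_attn(h, audio_tokens, audio_tokens,
                                need_weights=False)[0]
        return x + self.mlp(self.norm3(x))

if __name__ == "__main__":
    block = AudioConditionedDiTBlock()
    video = torch.randn(2, 256, 512)   # 2 clips, 256 latent tokens each
    audio = torch.randn(2, 50, 512)    # 50 audio frames, already projected
    print(block(video, audio).shape)   # torch.Size([2, 256, 512])
```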