V-LASIK: Consistent Glasses-Removal from Videos Using Synthetic Data

📅 2024-06-20
🏛️ arXiv.org
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Localized attribute editing in video (e.g., eyeglasses removal) remains challenging due to temporal inconsistency and generation artifacts, particularly in preserving identity across frames. To address this, we adopt a weakly supervised synthetic-data approach: an adjusted pretrained diffusion model generates semantically valid yet imperfect training pairs, eliminating reliance on costly real-world paired data. A diffusion-based video editing model trained on this data leverages the prior of pretrained diffusion models to perform the desired edit consistently while preserving the original video content, without requiring real paired data or explicit temporal modeling. Experiments show substantial improvements over existing methods in inter-frame coherence and identity fidelity on glasses removal, and the method generalizes to other localized editing tasks, such as facial sticker removal.

📝 Abstract
Diffusion-based generative models have recently shown remarkable image and video editing capabilities. However, local video editing, particularly removal of small attributes like glasses, remains a challenge. Existing methods either alter the videos excessively, generate unrealistic artifacts, or fail to perform the requested edit consistently throughout the video. In this work, we focus on consistent and identity-preserving removal of glasses in videos, using it as a case study for consistent local attribute removal in videos. Due to the lack of paired data, we adopt a weakly supervised approach and generate synthetic imperfect data, using an adjusted pretrained diffusion model. We show that despite data imperfection, by learning from our generated data and leveraging the prior of pretrained diffusion models, our model is able to perform the desired edit consistently while preserving the original video content. Furthermore, we exemplify the generalization ability of our method to other local video editing tasks by applying it successfully to facial sticker-removal. Our approach demonstrates significant improvement over existing methods, showcasing the potential of leveraging synthetic data and strong video priors for local video editing tasks.
Problem

Research questions and friction points this paper is trying to address.

Achieving temporally consistent removal of small local attributes (e.g., glasses) in videos
Existing methods alter videos excessively, produce unrealistic artifacts, or fail to apply the edit consistently across frames
Lack of paired real-world training data for local attribute removal
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates synthetic, imperfect paired training data with an adjusted pretrained diffusion model (weak supervision)
Leverages the priors of pretrained diffusion models to edit consistently despite data imperfection
Generalizes to other local video edits, demonstrated on facial sticker-removal
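
The weakly supervised setup described above can be illustrated with a toy sketch: synthesize an imperfect "glasses removed" target for each frame, then train toward that target inside the edited region while penalizing changes outside it. This is a minimal numpy illustration only; the stub pair generator, the mask, and the loss weighting are assumptions for demonstration, not the authors' actual diffusion-based pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_synthetic_pair(frame, glasses_mask, fill_value=0.5):
    """Stand-in for the adjusted pretrained diffusion model: produce an
    imperfect 'glasses removed' target by overwriting the masked region.
    In the paper this target is generated by a diffusion model and is
    noisy, not a constant fill."""
    target = frame.copy()
    target[glasses_mask] = fill_value
    return target

def edit_loss(pred, target, frame, mask, preserve_weight=1.0):
    """Toy training objective: match the synthetic target inside the
    edited region, preserve the original frame content outside it."""
    inside = np.mean((pred[mask] - target[mask]) ** 2)
    outside = np.mean((pred[~mask] - frame[~mask]) ** 2)
    return inside + preserve_weight * outside

# A tiny fake 4-frame grayscale "video" with a fixed hypothetical glasses region.
video = rng.random((4, 8, 8))
mask = np.zeros((8, 8), dtype=bool)
mask[3:5, 2:6] = True

pairs = [(f, make_synthetic_pair(f, mask)) for f in video]

# A prediction equal to the synthetic target has zero loss;
# leaving the frame unedited is penalized inside the mask.
frame, target = pairs[0]
print(edit_loss(target, target, frame, mask))      # 0.0
print(edit_loss(frame, target, frame, mask) > 0)   # True
```

The point of the sketch is the loss structure: supervision inside the edit region comes entirely from synthetic (and therefore imperfect) targets, while the preservation term keeps the rest of the video untouched, mirroring the identity-preserving behavior the paper reports.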