VidStyleODE: Disentangled Video Editing via StyleGAN and NeuralODEs

📅 2023-04-12
🏛️ IEEE International Conference on Computer Vision
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of entanglement between appearance and motion in video generation by proposing the first spatiotemporally continuous, disentangled video latent space representation. Methodologically, it maps video appearance to the pre-trained StyleGAN’s W⁺ space while modeling its temporal evolution via Neural Ordinary Differential Equations (Neural ODEs), enabling explicit separation of content and motion. The approach introduces a differentiable continuous-time latent space, supporting arbitrary-frame-rate synthesis, bidirectional temporal extrapolation, and text-guided editing. Extensive experiments on real-world videos validate its effectiveness across diverse tasks—including text-driven appearance editing, motion retargeting, single-image animation, temporal interpolation, and extrapolation—achieving a PSNR of 32.7 dB and improving temporal consistency by 41% over baseline methods.
📝 Abstract
We propose VidStyleODE, a spatiotemporally continuous disentangled video representation based upon StyleGAN and Neural-ODEs. Effective traversal of the latent space learned by Generative Adversarial Networks (GANs) has been the basis for recent breakthroughs in image editing. However, the applicability of such advancements to the video domain has been hindered by the difficulty of representing and controlling videos in the latent space of GANs. In particular, videos are composed of content (i.e., appearance) and complex motion components that require a special mechanism to disentangle and control. To achieve this, VidStyleODE encodes the video content in a pre-trained StyleGAN ${mathcal{W}_ + }$ space and benefits from a latent ODE component to summarize the spatiotemporal dynamics of the input video. Our novel continuous video generation process then combines the two to generate high-quality and temporally consistent videos with varying frame rates. We show that our proposed method enables a variety of applications on real videos: text-guided appearance manipulation, motion manipulation, image animation, and video interpolation and extrapolation. Project website: https://cyberiada.github.io/VidStyleODE
Problem

Research questions and friction points this paper is trying to address.

Disentangled video representation
StyleGAN and NeuralODEs
High-quality video generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

StyleGAN for video content encoding
Neural ODEs for spatiotemporal dynamics
Continuous high-quality video generation
🔎 Similar Papers
No similar papers found.