UniPaint: Unified Space-time Video Inpainting via Mixture-of-Experts

📅 2024-12-09
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
Existing methods treat video inpainting and frame interpolation as disjoint tasks, overlooking their intrinsic spatiotemporal coupling. This paper proposes UniPaint, the first unified spatiotemporal video restoration framework that jointly models both tasks as a single generative problem. Its core contributions are: (1) a spatiotemporal joint masking training strategy that explicitly captures spatiotemporal dependencies to enable mutual task enhancement; (2) a multi-task-aware Mixture-of-Experts (MoE) attention mechanism that dynamically routes task-specific features within a shared backbone; and (3) a plug-and-play spatiotemporal adapter enabling flexible integration with both diffusion and autoregressive architectures. Evaluated on multiple video inpainting and frame interpolation benchmarks, UniPaint achieves significant improvements over state-of-the-art methods, simultaneously enhancing reconstruction fidelity and perceptual quality.
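To make the multi-task-aware MoE attention concrete, here is a minimal numpy sketch of gate-based expert routing conditioned on a task embedding. All names, shapes, and the use of a simple value projection in place of full attention experts are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moe_attention(tokens, task_emb, expert_weights, gate_weights):
    """Route space-time tokens through task-conditioned experts.

    tokens:         (n, d)    flattened space-time tokens
    task_emb:       (d,)      embedding of the current task (hypothetical)
    expert_weights: (k, d, d) one projection per expert (a stand-in for
                              full attention experts)
    gate_weights:   (k, d)    gating projection
    """
    # Gate on the task embedding: a soft distribution over the k experts.
    gates = softmax(gate_weights @ task_emb)                       # (k,)
    # Each expert transforms every token independently.
    expert_out = np.einsum('kde,ne->knd', expert_weights, tokens)  # (k, n, d)
    # Mixture: gate-weighted sum over experts.
    return np.einsum('k,knd->nd', gates, expert_out)               # (n, d)
```

Because the gate depends only on the task embedding, the same shared backbone can emphasize different experts for inpainting versus interpolation, which is the routing idea the summary describes.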

📝 Abstract
In this paper, we present UniPaint, a unified generative space-time video inpainting framework that enables spatial-temporal inpainting and interpolation. Different from existing methods that treat video inpainting and video interpolation as two distinct tasks, we leverage a unified inpainting framework to tackle them and observe that these two tasks can mutually enhance synthesis performance. Specifically, we first introduce a plug-and-play space-time video inpainting adapter, which can be employed in various personalized models. The key insight is to propose a Mixture of Experts (MoE) attention to cover various tasks. Then, we design a spatial-temporal masking strategy during the training stage so that the two tasks mutually enhance each other and improve performance. UniPaint produces high-quality and aesthetically pleasing results, achieving the best quantitative results across various tasks and scale setups. The code and checkpoints will be available soon.
Problem

Research questions and friction points this paper is trying to address.

Video inpainting and frame interpolation are treated as disjoint tasks, ignoring their spatiotemporal coupling
How a single framework can let the two tasks mutually enhance synthesis performance
How to train with spatial-temporal masking so that each task improves the other
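The joint spatial-temporal masking idea can be sketched as follows. This is a toy illustration under stated assumptions: interpolation is simulated by masking entire frames with some probability, and inpainting by a random rectangular hole in the remaining frames; the paper's actual masking distribution is not specified here, and all parameter names (`p_frame`, `hole`) are hypothetical:

```python
import numpy as np

def joint_spacetime_mask(t, h, w, p_frame=0.3, hole=0.4, rng=None):
    """Sample a joint space-time mask (True = region to synthesize).

    p_frame: probability a frame is fully masked (interpolation case)
    hole:    fraction of each side covered by the rectangular hole
             in frames that are kept (inpainting case)
    """
    rng = rng or np.random.default_rng()
    mask = np.zeros((t, h, w), dtype=bool)
    for i in range(t):
        if rng.random() < p_frame:
            mask[i] = True                     # temporal mask: drop whole frame
        else:
            hh, ww = max(1, int(h * hole)), max(1, int(w * hole))
            y = rng.integers(0, h - hh + 1)
            x = rng.integers(0, w - ww + 1)
            mask[i, y:y + hh, x:x + ww] = True  # spatial mask: hole to inpaint
    return mask
```

Training on masks drawn jointly from both regimes is what lets one model see inpainting and interpolation as instances of the same space-time completion problem.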
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified space-time video inpainting framework
Plug-and-play adapter for personalized models
Mixture of Experts attention mechanism