TACO: Taming Diffusion for in-the-wild Video Amodal Completion

📅 2025-03-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses video amodal completion (VAC): the cross-frame-consistent reconstruction of occluded objects in unconstrained, real-world videos. Methodologically, we propose the first progressive fine-tuning framework for video diffusion models tailored to realistic scenarios: (i) we construct a multi-level synthetic occlusion dataset with varying difficulty; (ii) we design a conditional video diffusion model incorporating physics-inspired spatiotemporal consistency constraints; and (iii) we introduce a multi-stage progressive fine-tuning strategy to effectively transfer pre-trained video diffusion manifolds to the amodal completion task. Our approach achieves state-of-the-art performance on diverse real-world benchmarks, including internet-sourced wild videos, autonomous driving sequences, and robotic manipulation scenes, and yields substantial gains in downstream tasks such as object reconstruction and 6D pose estimation.
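The dataset-construction step, compositing synthetic occluders onto un-occluded clips so that the ground-truth object is known for supervision, can be sketched as follows. All names and the simple constant-fill masking scheme here are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def impose_occlusion(frames, occluder_mask, fill_value=0):
    """Composite a synthetic occluder onto each frame of an un-occluded
    video clip, returning (occluded frames, per-frame visibility mask).

    frames: (T, H, W, C) uint8 video.
    occluder_mask: (H, W) bool, True where the occluder hides pixels.
    """
    occluded = frames.copy()
    occluded[:, occluder_mask] = fill_value  # hide masked pixels in every frame
    # Visibility mask: True where the original object remains observable.
    visible = np.broadcast_to(~occluder_mask, frames.shape[:3]).copy()
    return occluded, visible

# Tiny example: a 2-frame, 4x4 gray video with a 2x2 occluder patch.
frames = np.full((2, 4, 4, 3), 128, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
occ, vis = impose_occlusion(frames, mask)
```

Varying the occluder's size, shape, and motion over frames would give the multiple difficulty levels the summary describes.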

📝 Abstract
Humans can infer complete shapes and appearances of objects from limited visual cues, relying on extensive prior knowledge of the physical world. However, completing partially observable objects while ensuring consistency across video frames remains challenging for existing models, especially for unstructured, in-the-wild videos. This paper tackles the task of Video Amodal Completion (VAC), which aims to generate the complete object consistently throughout the video given a visual prompt specifying the object of interest. Leveraging the rich, consistent manifolds learned by pre-trained video diffusion models, we propose a conditional diffusion model, TACO, that repurposes these manifolds for VAC. To enable its effective and robust generalization to challenging in-the-wild scenarios, we curate a large-scale synthetic dataset with multiple difficulty levels by systematically imposing occlusions onto un-occluded videos. Building on this, we devise a progressive fine-tuning paradigm that starts with simpler recovery tasks and gradually advances to more complex ones. We demonstrate TACO's versatility on a wide range of in-the-wild videos from the Internet, as well as on diverse, unseen datasets commonly used in autonomous driving, robotic manipulation, and scene understanding. Moreover, we show that TACO can be effectively applied to various downstream tasks like object reconstruction and pose estimation, highlighting its potential to facilitate physical world understanding and reasoning. Our project page is available at https://jason-aplp.github.io/TACO.
Problem

Research questions and friction points this paper is trying to address.

Completing partially observable objects in videos
Ensuring consistency across video frames
Generalizing to challenging in-the-wild scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses pre-trained video diffusion models
Creates synthetic dataset with occlusions
Progressive fine-tuning for complex tasks
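The progressive fine-tuning idea, resuming from the previous stage's weights while the occlusion difficulty increases, reduces to a simple curriculum loop. This is a toy sketch: `train_one_stage` stands in for a real diffusion fine-tuning step, and none of the names come from the paper's codebase:

```python
def train_one_stage(params, dataset, steps):
    # Placeholder for one fine-tuning stage on a single difficulty
    # level; records the stage so the curriculum order is visible.
    return params + [f"{dataset}x{steps}"]

def progressive_finetune(pretrained, curriculum):
    """curriculum: list of (dataset_name, steps), ordered easy -> hard.

    Each stage starts from the parameters produced by the previous
    stage, so capacity learned on easy occlusions seeds the hard ones.
    """
    params = list(pretrained)
    for dataset, steps in curriculum:
        params = train_one_stage(params, dataset, steps)
    return params

stages = [("light_occlusion", 1000), ("heavy_occlusion", 2000)]
result = progressive_finetune(["base"], stages)
```

The key design choice is that difficulty levels are visited in order rather than mixed, so the model never sees a harder recovery task before mastering an easier one.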
👥 Authors

Ruijie Lu
Peking University
Computer Vision

Yixin Chen
State Key Laboratory of General Artificial Intelligence, BIGAI

Yu Liu
State Key Laboratory of General Artificial Intelligence, BIGAI, Tsinghua University

Jiaxiang Tang
NVIDIA; Peking University
Computer Science, Computer Vision

Junfeng Ni
Tsinghua University
Computer Vision, 3D Reconstruction

Diwen Wan
AIRCAS, PKU
Computer Vision

Gang Zeng
Peking University
Computer Vision, Pattern Recognition, Computer Graphics

Siyuan Huang
State Key Laboratory of General Artificial Intelligence, BIGAI