Gaussian See, Gaussian Do: Semantic 3D Motion Transfer from Multiview Video

📅 2025-11-18
🤖 AI Summary
This paper introduces the first semantic-level 3D motion transfer framework, addressing skeleton-free, semantically meaningful motion transfer across object categories in multi-view videos. Methodologically, it proposes an anchor-guided, view-aware motion embedding mechanism to ensure cross-view motion consistency; employs conditional inversion to extract semantic motion features that drive dynamic 3D Gaussian splatting reconstruction of static targets; and incorporates noisy video supervision to enhance robustness. Key contributions include: (1) establishing the first benchmark for semantic 3D motion transfer; (2) introducing a novel 4D reconstruction pipeline that synergistically integrates implicit motion transfer with dynamic Gaussian rendering; and (3) achieving significant performance gains over state-of-the-art methods on the new benchmark—demonstrating superior motion fidelity and structural stability simultaneously.

📝 Abstract
We present Gaussian See, Gaussian Do, a novel approach for semantic 3D motion transfer from multiview video. Our method enables rig-free, cross-category motion transfer between objects with semantically meaningful correspondence. Building on implicit motion transfer techniques, we extract motion embeddings from source videos via condition inversion, apply them to rendered frames of static target shapes, and use the resulting videos to supervise dynamic 3D Gaussian Splatting reconstruction. Our approach introduces an anchor-based, view-aware motion embedding mechanism that ensures cross-view consistency and accelerates convergence, along with a robust 4D reconstruction pipeline that consolidates noisy supervision videos. We establish the first benchmark for semantic 3D motion transfer and demonstrate superior motion fidelity and structural consistency compared to adapted baselines. Code and data for this paper are available at https://gsgd-motiontransfer.github.io/
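The anchor-based, view-aware motion embedding described above can be pictured as a single shared motion code plus small per-view adjustments, so every camera view is driven by the same underlying motion. The following numpy sketch is purely illustrative (the array shapes, the `anchor`/`view_offsets` names, and the additive composition are assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
num_views, embed_dim = 4, 16

# One shared "anchor" embedding carries the semantic motion content.
anchor = rng.normal(size=embed_dim)

# Small per-view offsets adapt the anchor to each camera viewpoint.
view_offsets = 0.05 * rng.normal(size=(num_views, embed_dim))

# Every view's embedding is the anchor plus its own offset.
view_embeddings = anchor + view_offsets

# Cross-view consistency: each view embedding stays close to the anchor,
# so the rendered motion cannot drift independently per view.
deviation = np.linalg.norm(view_embeddings - anchor, axis=1)
print(deviation.max())  # small by construction
```

Tying all views to one anchor is what makes the supervision signal coherent across cameras; without it, each view could invert a slightly different motion code.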
Problem

Research questions and friction points this paper is trying to address.

Transferring 3D motion between different object categories semantically
Creating rig-free motion transfer from multiview video sources
Ensuring cross-view consistency in dynamic 3D reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Anchor-based view-aware motion embedding for consistency
4D reconstruction pipeline consolidating noisy supervision videos
Dynamic 3D Gaussian Splatting for motion transfer
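One intuition behind consolidating noisy supervision videos, as listed above, is that independent generations share the intended motion but differ in uncorrelated noise, so combining them yields a cleaner target than any single video. This toy numpy sketch demonstrates the principle with simple per-frame averaging; the shapes, noise model, and averaging scheme are illustrative assumptions, not the paper's actual consolidation method:

```python
import numpy as np

rng = np.random.default_rng(1)

# An "ideal" supervision video: T frames of H x W x C pixels.
clean = rng.uniform(size=(8, 32, 32, 3))

# Five independently generated noisy versions of the same video.
noisy_videos = clean + 0.1 * rng.normal(size=(5,) + clean.shape)

# Consolidate by averaging frame-wise across the generated videos.
consolidated = noisy_videos.mean(axis=0)

# Uncorrelated noise cancels, so the consolidated video is closer
# to the ideal target than any single noisy video.
err_single = np.abs(noisy_videos[0] - clean).mean()
err_consolidated = np.abs(consolidated - clean).mean()
print(err_consolidated < err_single)
```

A 4D reconstruction supervised on such a consolidated signal is less likely to fit generation artifacts from any one video.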
Authors
Yarin Bekor, Technion - Israel Institute of Technology
Gal Michael Harari, Technion - Israel Institute of Technology
Or Perel, NVIDIA AI Labs
O. Litany, Technion - Israel Institute of Technology and NVIDIA

Topics: Deep Learning, Computer Graphics, Computer Vision