🤖 AI Summary
This paper introduces the first semantic-level 3D motion transfer framework, enabling rig-free, semantically meaningful motion transfer across object categories from multi-view videos. Methodologically, it proposes an anchor-based, view-aware motion embedding mechanism to ensure cross-view motion consistency; employs condition inversion to extract semantic motion embeddings that drive dynamic 3D Gaussian Splatting reconstruction of static targets; and incorporates a robust 4D reconstruction pipeline that consolidates noisy supervision videos. Key contributions include: (1) establishing the first benchmark for semantic 3D motion transfer; (2) introducing a 4D reconstruction pipeline that integrates implicit motion transfer with dynamic Gaussian rendering; and (3) achieving significant gains over adapted state-of-the-art baselines on the new benchmark, demonstrating superior motion fidelity and structural consistency simultaneously.
📝 Abstract
We present Gaussian See, Gaussian Do, a novel approach for semantic 3D motion transfer from multi-view video. Our method enables rig-free, cross-category motion transfer between objects with semantically meaningful correspondence. Building on implicit motion transfer techniques, we extract motion embeddings from source videos via condition inversion, apply them to rendered frames of static target shapes, and use the resulting videos to supervise dynamic 3D Gaussian Splatting reconstruction. Our approach introduces an anchor-based, view-aware motion embedding mechanism, ensuring cross-view consistency and accelerating convergence, along with a robust 4D reconstruction pipeline that consolidates noisy supervision videos. We establish the first benchmark for semantic 3D motion transfer and demonstrate superior motion fidelity and structural consistency compared to adapted baselines. Code and data for this paper are available at https://gsgd-motiontransfer.github.io/