🤖 AI Summary
This work addresses the challenge of transferring motion from monocular 2D videos to arbitrary 3D models, a task hindered by pose ambiguity and shape diversity; existing approaches often rely on category-specific templates or 3D supervision. We propose the first general-purpose motion transfer framework that operates without category priors or 3D annotations. Our method jointly optimizes shape and pose through a deformable articulated 3D Gaussian splatting model, coupled with dense semantic correspondence matching, to disentangle the shape-pose coupling ambiguity. This approach achieves high-fidelity, visually coherent, and efficient motion transfer across diverse object categories and in-the-wild video sequences, significantly outperforming current state-of-the-art methods.
📝 Abstract
Motion transfer from 2D videos to 3D assets is a challenging problem due to inherent pose ambiguities and diverse object shapes; existing methods often require category-specific parametric templates. We propose CAMO, a category-agnostic framework that transfers motion to diverse target meshes directly from monocular 2D videos without relying on predefined templates or explicit 3D supervision. The core of CAMO is a morphology-parameterized articulated 3D Gaussian splatting model combined with dense semantic correspondences, which jointly adapts shape and pose through optimization. This design effectively alleviates shape-pose ambiguities, enabling visually faithful motion transfer across diverse categories. Experimental results demonstrate superior motion accuracy, efficiency, and visual coherence compared to existing methods, significantly advancing motion transfer across varied object categories and casual video scenarios.
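The key idea in the abstract, jointly optimizing shape and pose so that correspondence-matched points align, can be illustrated with a deliberately simplified toy. The sketch below is not the paper's implementation: it replaces the articulated Gaussian model with a single uniform scale (standing in for shape) and a 2D rotation angle (standing in for pose), and assumes dense correspondences are already given, then fits both parameters by gradient descent on a point-matching loss. All names and the setup are hypothetical.

```python
import numpy as np

def transform(points, scale, theta):
    """Apply a toy 'shape' (uniform scale) and 'pose' (2D rotation) to points."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return scale * points @ R.T

def fit(template, observed, lr=0.05, steps=500):
    """Jointly optimize scale and angle by gradient descent on the
    mean squared distance between corresponding points (gradients via
    central finite differences, to keep the sketch dependency-free)."""
    scale, theta = 1.0, 0.0
    eps = 1e-5
    for _ in range(steps):
        def loss(sc, th):
            return np.mean((transform(template, sc, th) - observed) ** 2)
        g_s = (loss(scale + eps, theta) - loss(scale - eps, theta)) / (2 * eps)
        g_t = (loss(scale, theta + eps) - loss(scale, theta - eps)) / (2 * eps)
        scale -= lr * g_s
        theta -= lr * g_t
    return scale, theta

# Synthetic data: the "video observation" is the template under an
# unknown shape (scale 1.5) and pose (0.4 rad), with correspondences
# given by index (point i matches point i).
rng = np.random.default_rng(0)
template = rng.normal(size=(50, 2))
observed = transform(template, 1.5, 0.4)
scale, theta = fit(template, observed)
```

In this toy, shape and pose are recovered together from correspondences alone, mirroring (in miniature) how joint optimization can resolve the coupling between the two; the paper's actual model optimizes far richer morphology and articulation parameters through differentiable Gaussian splatting.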