Motion2Motion: Cross-topology Motion Transfer with Sparse Correspondence

📅 2025-08-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenging problem of motion transfer across skeletons with inconsistent source/target bone topologies, where establishing one-to-one correspondences is infeasible. We propose Motion2Motion, a training-free cross-topology motion transfer framework requiring only sparse skeletal correspondences and a small set of example motions from the target skeleton. Our method integrates geometric constraints with kinematic modeling, achieving robust motion reconstruction via sparse correspondence matching and motion retargeting. Unlike existing approaches that rely on large-scale paired data or pre-trained models, ours is the first to enable generalizable, zero-shot motion transfer across species and structural classes—without assuming topological alignment. Experiments demonstrate high-fidelity animation generation on heterogeneous skeletons—including human, quadrupedal, and robotic arm configurations—and the framework has been successfully integrated into an industrial-grade animation production system.

Technology Category

Application Category

📝 Abstract
This work studies the challenge of transfer animations between characters whose skeletal topologies differ substantially. While many techniques have advanced retargeting techniques in decades, transfer motions across diverse topologies remains less-explored. The primary obstacle lies in the inherent topological inconsistency between source and target skeletons, which restricts the establishment of straightforward one-to-one bone correspondences. Besides, the current lack of large-scale paired motion datasets spanning different topological structures severely constrains the development of data-driven approaches. To address these limitations, we introduce Motion2Motion, a novel, training-free framework. Simply yet effectively, Motion2Motion works with only one or a few example motions on the target skeleton, by accessing a sparse set of bone correspondences between the source and target skeletons. Through comprehensive qualitative and quantitative evaluations, we demonstrate that Motion2Motion achieves efficient and reliable performance in both similar-skeleton and cross-species skeleton transfer scenarios. The practical utility of our approach is further evidenced by its successful integration in downstream applications and user interfaces, highlighting its potential for industrial applications. Code and data are available at https://lhchen.top/Motion2Motion.
Problem

Research questions and friction points this paper is trying to address.

Transfer animations between characters with different skeletal topologies
Overcome topological inconsistency in source and target skeletons
Address lack of paired motion datasets for diverse topologies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free framework for motion transfer
Sparse bone correspondences between skeletons
Works with minimal example motions
🔎 Similar Papers
No similar papers found.
Ling-Hao Chen
Ling-Hao Chen
Ph.D. Student, Tsinghua University, IDEA Research
Computer GraphicsComputer VisionCharacter Animation
Y
Yuhong Zhang
Tsinghua University, China
Z
Zixin Yin
The Hong Kong University of Science and Technology, China
Z
Zhiyang Dou
The University of Hong Kong, China
X
Xin Chen
ByteDance, United States of America
J
Jingbo Wang
Shanghai Artificial Intelligence Laboratory, China
Taku Komura
Taku Komura
The University of Hong Kong
Character AnimationComputer GraphicsRobotics
L
Lei Zhang
International Digital Economy Academy, China