🤖 AI Summary
Existing animal motion transfer methods primarily focus on human motions and struggle to preserve species-specific behavioral patterns, leading to unnatural and behaviorally inconsistent results. This paper introduces the first generative framework for cross-species animal motion transfer. The approach addresses this challenge through two key innovations: (1) a Habit Preservation Module coupled with a category-specific Habit Encoder, which explicitly models and enforces the typical behavioral patterns of target species; and (2) a hybrid representation integrating skeletal binding with large language model (LLM)-guided semantics, enabling semantics-driven motion alignment and generalization to unseen species. The method is evaluated on DeformingThings4D-skl, a newly curated quadruped motion dataset. Both qualitative and quantitative evaluations demonstrate significant improvements over state-of-the-art methods, achieving superior motion naturalness and behavioral consistency across species.
📝 Abstract
Animal motion embodies species-specific behavioral habits, making motion transfer across categories a critical yet complex task for applications in animation and virtual reality. Existing motion transfer methods, primarily focused on human motion, emphasize skeletal alignment (motion retargeting) or stylistic consistency (motion style transfer), often neglecting the preservation of animals' distinct habitual behaviors. To bridge this gap, we propose a novel habit-preserved motion transfer framework for cross-category animal motion. Built upon a generative framework, our model introduces a habit-preservation module with a category-specific habit encoder, allowing it to learn motion priors that capture distinctive habitual characteristics. Furthermore, we integrate a large language model (LLM) to facilitate motion transfer to previously unseen species. To evaluate the effectiveness of our approach, we introduce the DeformingThings4D-skl dataset, a quadruped dataset with skeletal bindings, and conduct extensive experiments and quantitative analyses that validate the superiority of our proposed model.
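To make the core idea concrete, the sketch below illustrates how a category-specific habit encoder might condition motion transfer: each species gets a learned "habit" latent that is injected into every frame of the source motion before decoding. This is a minimal illustrative sketch, not the paper's implementation; all dimensions, the embedding-table encoder, and the `transfer_motion` function are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen for illustration (not from the paper).
NUM_SPECIES = 5   # quadruped categories seen during training
HABIT_DIM = 16    # size of the species-specific habit latent
MOTION_DIM = 32   # size of the per-frame motion feature

# Category-specific habit encoder, sketched as a learned embedding table:
# one habit latent per species, looked up by category index. In a real model
# these would be trained jointly with the generative backbone.
habit_table = rng.normal(size=(NUM_SPECIES, HABIT_DIM))

# Decoder weights mapping [motion feature ; habit latent] -> transferred motion.
W = rng.normal(size=(MOTION_DIM + HABIT_DIM, MOTION_DIM)) * 0.1

def transfer_motion(source_motion: np.ndarray, target_species: int) -> np.ndarray:
    """Condition every frame of the source clip on the target species'
    habit latent, then decode to the transferred motion sequence."""
    num_frames = source_motion.shape[0]
    habit = habit_table[target_species]              # (HABIT_DIM,)
    habit_seq = np.tile(habit, (num_frames, 1))      # repeat across frames
    conditioned = np.concatenate([source_motion, habit_seq], axis=1)
    return np.tanh(conditioned @ W)                  # (num_frames, MOTION_DIM)

source = rng.normal(size=(120, MOTION_DIM))  # a 120-frame source clip
out = transfer_motion(source, target_species=3)
```

Because the habit latent is the only input that changes with the target category, the same source clip decodes to different motions for different species, which is the behavior the habit-preservation module is meant to enforce.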