🤖 AI Summary
Existing methods for motion interpolation between keyframes in animation rely on complex multi-module architectures, resulting in high computational overhead and cumbersome training. Method: We propose a lightweight single-encoder Transformer framework that abandons skeleton-aware designs and redundant components, shifting the focus to data-centric modeling. Contribution/Results: We systematically demonstrate, for the first time, the decisive impact of dataset scale, pose representation (joint-relative coordinates), and dynamic features (angular velocity) on interpolation quality, establishing a "data-driven over model-stacking" paradigm. Trained on large-scale motion-capture data, our approach produces smooth, physically plausible motion transitions that match or surpass those of state-of-the-art complex models across multiple benchmarks, while significantly accelerating inference and substantially simplifying training.
📝 Abstract
Motion in-betweening is a crucial tool for animators, enabling intricate control over pose-level details in each keyframe. Recent machine learning solutions for motion in-betweening rely on complex models, incorporating skeleton-aware architectures or requiring multiple modules and training steps. In this work, we introduce a simple yet effective Transformer-based framework, employing a single Transformer encoder to synthesize realistic motions for motion in-betweening tasks. We find that data modeling choices play a significant role in improving in-betweening performance. Among other findings, we show that increasing data volume can yield equivalent or improved motion transitions, that the choice of pose representation is vital for achieving high-quality results, and that incorporating velocity input features enhances animation performance. These findings challenge the assumption that model complexity is the primary determinant of animation quality and provide insights into a more data-centric approach to motion interpolation. Additional videos and supplementary material are available at https://silk-paper.github.io.
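The pose-representation and velocity findings above can be illustrated with a small feature pipeline. The sketch below is a hypothetical example, not the authors' code: it converts global joint positions to root-relative coordinates and appends finite-difference velocities, under assumed array shapes and a root joint at index 0.

```python
import numpy as np

def make_input_features(positions):
    """Hypothetical data-centric feature pipeline (illustrative only).

    positions: (T, J, 3) global joint positions over T frames, root at joint 0.
    returns:   (T, J, 6) per-joint [root-relative position, velocity] features.
    """
    # Joint-relative coordinates: express each joint w.r.t. the root,
    # removing global translation so the model sees pose, not world placement.
    rel = positions - positions[:, :1, :]

    # Dynamic features: frame-to-frame velocity via finite differences.
    # The first frame is repeated so the feature tensor keeps T frames
    # (making the first-frame velocity zero by construction).
    vel = np.diff(positions, axis=0, prepend=positions[:1])

    return np.concatenate([rel, vel], axis=-1)

# Toy usage: 8 frames, 4 joints.
feats = make_input_features(np.random.randn(8, 4, 3))
print(feats.shape)  # (8, 4, 6)
```

The paper's angular-velocity features would be computed analogously from rotation differences between consecutive frames; the linear-velocity version is shown here only to keep the sketch short.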