Controllable Single-shot Animation Blending with Temporal Conditioning

📅 2025-08-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing single-shot motion generation models lack explicit temporal control for multi-action fusion. This paper introduces the first controllable temporal motion blending framework tailored for single-shot scenarios, enabling seamless and user-controllable mixing of multiple actions within a single skeletal sequence, without requiring predefined kinematic trees or skeleton constraints. The method comprises: (1) a time-conditioned generative architecture that supports user-specified fusion timing and transition styles; (2) a skeleton-aware normalization mechanism ensuring consistent representation across diverse skeletons and motion styles; and (3) a single-shot motion representation learning strategy. The framework's generality, efficiency, and high-fidelity synthesis capability are validated across varied skeletal configurations and animation styles. Experimental results demonstrate clear improvements in both controllability (e.g., precise temporal alignment and customizable transitions) and naturalness of blended motions, establishing a new benchmark for single-shot controllable motion synthesis.

📝 Abstract
Training a generative model on a single human skeletal motion sequence without being bound to a specific kinematic tree has drawn significant attention from the animation community. Unlike text-to-motion generation, single-shot models allow animators to controllably generate variations of existing motion patterns without requiring additional data or extensive retraining. However, existing single-shot methods do not explicitly offer a controllable framework for blending two or more motions within a single generative pass. In this paper, we present the first single-shot motion blending framework that enables seamless blending by temporally conditioning the generation process. Our method introduces a skeleton-aware normalization mechanism to guide the transition between motions, allowing smooth, data-driven control over when and how motions blend. We perform extensive quantitative and qualitative evaluations across various animation styles and different kinematic skeletons, demonstrating that our approach produces plausible, smooth, and controllable motion blends in a unified and efficient manner.
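The abstract's core idea, temporally conditioning the generation so the user controls when and how two motions blend, can be illustrated with a minimal sketch. Everything below is an assumption for illustration (the function names, the smoothstep schedule, and direct linear pose interpolation); the paper conditions a generative model on this timing signal rather than interpolating poses directly:

```python
import numpy as np

def transition_weights(num_frames, start, end):
    """Hypothetical temporal-conditioning signal: a smooth 0 -> 1 blend
    weight over frames, ramping up between `start` and `end` (smoothstep)."""
    t = np.arange(num_frames, dtype=float)
    u = np.clip((t - start) / max(end - start, 1), 0.0, 1.0)
    return u * u * (3.0 - 2.0 * u)  # C1-continuous ramp, flat at both ends

def blend_motions(motion_a, motion_b, start, end):
    """Blend two pose sequences (frames x features) under the time-varying
    weight: motion_a dominates before the window, motion_b after it."""
    w = transition_weights(len(motion_a), start, end)[:, None]
    return (1.0 - w) * motion_a + w * motion_b
```

Moving the `(start, end)` window is what "user-specified fusion timing" refers to; swapping the smoothstep for another ramp changes the transition style.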
Problem

Research questions and friction points this paper is trying to address.

Enabling controllable blending of multiple motions in a single generative pass
Providing seamless transition between motions with temporal conditioning
Achieving smooth data-driven motion blending across various animation styles
Innovation

Methods, ideas, or system contributions that make the work stand out.

Temporal conditioning for seamless motion blending
Skeleton-aware normalization for smooth transitions
Single-shot framework for unified motion generation
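The skeleton-aware normalization listed above can be sketched roughly as follows. This is a guess at the general idea (root-relative poses scaled by total bone length, so differently sized skeletons share a comparable representation); the function name, the (parent, child) bone-pair input, and the choice of scaling statistic are all assumptions, not the paper's actual formulation:

```python
import numpy as np

def normalize_skeleton(joints, bone_pairs):
    """Hypothetical skeleton-aware normalization for a pose sequence.

    joints:     (frames, num_joints, 3) joint positions, joint 0 = root
    bone_pairs: list of (parent, child) joint-index pairs
    Returns root-relative positions divided by the skeleton's total bone
    length, making the representation invariant to global position and
    overall character size.
    """
    root_relative = joints - joints[:, :1, :]  # subtract root joint per frame
    # Total bone length measured on the first frame's pose.
    lengths = [np.linalg.norm(joints[0, c] - joints[0, p]) for p, c in bone_pairs]
    scale = sum(lengths) or 1.0  # guard against a degenerate skeleton
    return root_relative / scale
```

Applying the same normalization to every input sequence is what would let one generative model handle "various animation styles and different kinematic skeletons" without a fixed kinematic tree.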