🤖 AI Summary
This work addresses the challenge of high coupling between style and content in human motion data, which hinders high-quality style transfer. The authors propose a multi-level representation framework based on a Residual Vector Quantized Variational Autoencoder (RVQ-VAE), where content is modeled as coarse-grained action semantics and style as fine-grained expressive details. To enhance disentanglement across codebooks, they incorporate contrastive learning and an information leakage loss. Notably, they introduce a novel quantized code-swapping mechanism that enables plug-and-play style transfer to unseen styles without fine-tuning. The method demonstrates superior generalization and generation quality across multiple tasks, including style transfer, style removal, and motion blending.
📝 Abstract
Human motion data is inherently rich and complex, containing both semantic content and subtle stylistic features that are challenging to model. We propose a novel method for effectively disentangling style and content in human motion data to facilitate style transfer. Our approach is guided by the insight that content corresponds to coarse motion attributes, while style captures the finer, expressive details. To model this hierarchy, we employ Residual Vector Quantized Variational Autoencoders (RVQ-VAEs) to learn a coarse-to-fine representation of motion. We further enhance the disentanglement by integrating contrastive learning and a novel information-leakage loss into codebook learning, organizing content and style across different codebooks. We harness this disentangled representation with a simple and effective inference-time technique, Quantized Code Swapping, which enables motion style transfer without any fine-tuning for unseen styles. Our framework demonstrates strong versatility across multiple inference applications, including style transfer, style removal, and motion blending.
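The core idea, coarse RVQ levels carrying content and fine residual levels carrying style, can be illustrated with a minimal sketch. The snippet below is a simplified, hypothetical illustration (not the authors' implementation): it implements a plain residual vector quantizer over toy codebooks and a `swap_style` helper that keeps the coarse (content) code indices from one latent and substitutes the fine (style) indices from another, mimicking the paper's Quantized Code Swapping at inference time. The codebooks, dimensionality, and level split are invented for illustration.

```python
def nearest(codebook, vec):
    # Index of the codebook entry closest to vec (squared Euclidean distance).
    return min(range(len(codebook)),
               key=lambda i: sum((c - v) ** 2 for c, v in zip(codebook[i], vec)))

def rvq_encode(codebooks, vec):
    # Residual VQ: at each level, quantize the residual and subtract the chosen code.
    codes, residual = [], list(vec)
    for cb in codebooks:
        idx = nearest(cb, residual)
        codes.append(idx)
        residual = [r - c for r, c in zip(residual, cb[idx])]
    return codes

def rvq_decode(codebooks, codes):
    # Reconstruction is the sum of the selected code vectors across all levels.
    out = [0.0] * len(codebooks[0][0])
    for cb, idx in zip(codebooks, codes):
        out = [o + c for o, c in zip(out, cb[idx])]
    return out

def swap_style(codebooks, content_codes, style_codes, n_content_levels):
    # Quantized code swapping (sketch): keep the coarse "content" levels from
    # one motion and replace the fine "style" levels with another motion's.
    mixed = content_codes[:n_content_levels] + style_codes[n_content_levels:]
    return rvq_decode(codebooks, mixed)

# Toy 2-level RVQ: level 0 = coarse content codebook, level 1 = fine style codebook.
codebooks = [
    [[0.0, 0.0], [1.0, 1.0]],    # coarse level
    [[0.1, 0.0], [0.0, 0.1]],    # fine residual level
]
codes_a = rvq_encode(codebooks, [1.0, 1.1])   # "content" motion latent
codes_b = rvq_encode(codebooks, [0.05, 0.1])  # "style" motion latent
stylized = swap_style(codebooks, codes_a, codes_b, n_content_levels=1)
```

In the real model the swap operates on per-frame code sequences from learned codebooks, and the disentanglement losses are what make the coarse/fine split align with content/style; this sketch only shows the mechanics of recombining quantized indices without any fine-tuning.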