🤖 AI Summary
This work addresses the challenging problem of high-fidelity human motion generation conditioned on time-varying input signals such as audio, text, or control commands. We propose Temporally Conditional Mamba (TC-Mamba), the first method to introduce state-space models (SSMs) into conditional motion generation. Unlike prevailing approaches that rely on cross-attention, TC-Mamba embeds the conditioning signal directly into Mamba's recurrent state updates via selective state-space dynamics, enabling fine-grained, stepwise temporal alignment and motion control. By leveraging the SSM's inherent long-range modeling capacity and linear-time inference, TC-Mamba significantly improves motion smoothness, temporal alignment accuracy, and condition fidelity, especially for long sequences. It achieves state-of-the-art performance across multiple benchmark tasks, demonstrating the effectiveness of state-space architectures for complex, temporally conditioned generative modeling.
📝 Abstract
Learning human motion from a time-dependent input signal is a challenging yet impactful task with many applications. The goal is to generate or estimate human movement that consistently reflects the temporal patterns of the conditioning input. Existing methods typically rely on cross-attention mechanisms to fuse the condition with the motion; however, this approach primarily captures global interactions and struggles to maintain step-by-step temporal alignment. To address this limitation, we introduce Temporally Conditional Mamba, a new Mamba-based model for human motion generation. Our approach integrates conditioning information into the recurrent dynamics of the Mamba block, yielding motion that is better aligned temporally with the condition. To validate the effectiveness of our method, we evaluate it on a variety of human motion tasks. Extensive experiments demonstrate that our model significantly improves temporal alignment, motion realism, and condition consistency over state-of-the-art approaches. Our project page is available at https://zquang2202.github.io/TCM.
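The core idea of injecting the condition into the recurrence (rather than fusing it once via cross-attention) can be sketched in a few lines. The NumPy toy below is an illustrative assumption, not the paper's implementation: it uses a diagonal per-channel SSM, and all names (`conditional_selective_scan`, `W_dt`, `W_B`, `W_C`) are hypothetical. The point it demonstrates is that the selective parameters (step size, input and output projections) are computed from the concatenation of the motion input and the per-step condition embedding, so the condition steers every state update.

```python
import numpy as np

def conditional_selective_scan(x, c, params):
    """Toy condition-modulated selective SSM scan (illustrative only).

    x: (T, D) motion features; c: (T, Dc) per-step condition embedding.
    The selective parameters (dt, B, C) depend on [x_t, c_t], so the
    condition modulates every recurrent state update."""
    T, D = x.shape
    N = params["A"].shape[1]                       # state size per channel
    h = np.zeros((D, N))                           # hidden SSM state
    ys = np.empty((T, D))
    for t in range(T):
        u = np.concatenate([x[t], c[t]])           # fuse input and condition
        dt = np.log1p(np.exp(params["W_dt"] @ u))  # softplus step size, (D,)
        B = (params["W_B"] @ u)[None, :]           # condition-dependent input proj
        C = (params["W_C"] @ u)[None, :]           # condition-dependent output proj
        Abar = np.exp(dt[:, None] * params["A"])   # discretized decay in (0, 1)
        h = Abar * h + (dt[:, None] * B) * x[t][:, None]
        ys[t] = (h * C).sum(-1)                    # per-channel readout
    return ys

# Demo with random weights (shapes only; not trained parameters).
rng = np.random.default_rng(0)
T, D, Dc, N = 8, 4, 3, 5
params = {
    "A":    -np.abs(rng.standard_normal((D, N))),  # stable (negative) poles
    "W_dt": 0.1 * rng.standard_normal((D, D + Dc)),
    "W_B":  0.1 * rng.standard_normal((N, D + Dc)),
    "W_C":  0.1 * rng.standard_normal((N, D + Dc)),
}
y = conditional_selective_scan(
    rng.standard_normal((T, D)), rng.standard_normal((T, Dc)), params)
```

Because the recurrence is causal and linear-time in sequence length, a step-by-step dependence on the condition comes for free, which is the contrast the abstract draws with global cross-attention fusion.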