🤖 AI Summary
To address insufficient spatiotemporal modeling in skeleton-based action recognition, particularly the weak capture of inter-channel temporal dependencies, this paper proposes a novel Transformer-Mamba hybrid architecture. A Spatial Transformer models joint-level spatial relationships, while an enhanced Mamba module captures long-range temporal dynamics. Crucially, the paper introduces a Temporal Dynamic Modeling (TDM) module featuring a Multi-scale Temporal Interaction (MTI) mechanism that explicitly models cross-channel temporal interactions via multi-scale Cycle operators, overcoming Mamba's inherent limitation of per-channel state-space modeling. The entire framework is end-to-end differentiable, balancing expressive power and computational efficiency. Extensive experiments demonstrate state-of-the-art accuracy on four major benchmarks (NTU-RGB+D 60/120, NW-UCLA, and UAV-Human) while significantly reducing inference latency, achieving a superior trade-off between accuracy and efficiency.
📝 Abstract
Skeleton-based action recognition has garnered significant attention in the computer vision community. Inspired by the recent success of the selective state-space model (SSM) Mamba in modeling 1D temporal sequences, we propose TSkel-Mamba, a hybrid Transformer-Mamba framework that effectively captures both spatial and temporal dynamics. In particular, our approach leverages a Spatial Transformer for spatial feature learning while utilizing Mamba for temporal modeling. Mamba, however, employs separate SSM blocks for individual channels, which inherently limits its ability to model inter-channel dependencies. To better adapt Mamba to skeleton data and strengthen its temporal modeling, we introduce a Temporal Dynamic Modeling (TDM) block, a versatile plug-and-play component that integrates a novel Multi-scale Temporal Interaction (MTI) module. The MTI module employs multi-scale Cycle operators to capture cross-channel temporal interactions, a critical factor in action recognition. Extensive experiments on the NTU-RGB+D 60, NTU-RGB+D 120, NW-UCLA, and UAV-Human datasets demonstrate that TSkel-Mamba achieves state-of-the-art performance while maintaining low inference time, making it both efficient and highly effective.
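The abstract does not specify how the Cycle operators are realized; the following is a minimal, hypothetical sketch of one plausible reading, in which each channel of a (time, channel) feature map is cyclically shifted along the temporal axis by an amount proportional to its channel index and the chosen scale, so that a subsequent pointwise mix blends temporally offset channels. The function names, shift rule, and mixing step are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cycle_operator(x, scale):
    """Hypothetical Cycle operator (assumption, not the paper's definition):
    cyclically shift channel c of a (T, C) array along time by (c * scale) % T,
    so later channel mixing sees temporally offset views of each channel."""
    T, C = x.shape
    out = np.empty_like(x)
    for c in range(C):
        out[:, c] = np.roll(x[:, c], (c * scale) % T)
    return out

def multi_scale_temporal_interaction(x, scales=(1, 2, 4)):
    """Sketch of an MTI-style block: average Cycle outputs over several
    scales, then mix across channels (here a simple channel-mean residual
    stands in for a learned pointwise projection)."""
    mixed = np.mean([cycle_operator(x, s) for s in scales], axis=0)
    return mixed + mixed.mean(axis=1, keepdims=True)

# Toy usage: 6 frames, 2 channels
x = np.arange(12, dtype=float).reshape(6, 2)
y = multi_scale_temporal_interaction(x)  # shape preserved: (6, 2)
```

At scale 0 the shift vanishes and the operator reduces to the identity, which is a convenient sanity check; in practice the shifted channels let a cheap pointwise mix capture cross-channel temporal dependencies that per-channel SSM blocks cannot.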