🤖 AI Summary
To address the challenges of excessive parameter counts, high computational cost, and poor sensitivity to subtle motions in Transformer-based skeleton action recognition under resource-constrained settings, this paper proposes the lightweight FreqMixFormerV2. Building on a frequency-domain analysis paradigm designed for skeleton sequences, it introduces a redesigned lightweight frequency operator and a simplified frequency-aware attention module that explicitly model both low- and high-frequency dynamic features while compressing redundant parameters. By integrating frequency-domain feature modeling, optimized spatio-temporal encoding, and a hybrid attention mechanism, FreqMixFormerV2 achieves state-of-the-art accuracy on the NTU RGB+D, NTU RGB+D 120, and NW-UCLA benchmarks. Notably, it reduces model parameters to only 60% of those in the current best-performing methods, significantly improving deployment and energy efficiency.
📝 Abstract
Transformer-based human skeleton action recognition has been studied for years. However, the complexity and high parameter counts of these models hinder their practical application, especially in resource-constrained environments. In this work, we propose FreqMixFormerV2, built upon the Frequency-aware Mixed Transformer (FreqMixFormer), which pioneered frequency-domain analysis for identifying subtle and discriminative actions. We design a lightweight architecture that maintains robust performance while significantly reducing model complexity. This is achieved through a redesigned frequency operator that optimizes high- and low-frequency parameter adjustments, and a simplified frequency-aware attention module. These improvements yield a substantial reduction in model parameters, enabling efficient deployment with only a minimal sacrifice in accuracy. Comprehensive evaluations on standard benchmarks (NTU RGB+D, NTU RGB+D 120, and NW-UCLA) demonstrate that the proposed model achieves a superior balance between efficiency and accuracy, outperforming state-of-the-art methods with only 60% of the parameters.
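To give a concrete sense of what "frequency-domain analysis of skeleton sequences" means, here is a minimal illustrative sketch that splits a skeleton clip into low- and high-frequency temporal components with a Fourier transform. This is not the paper's actual operator: the model's frequency operator is learnable and integrated with attention, whereas the `cutoff` bin below is a fixed hypothetical hyperparameter chosen only for illustration. The low band captures slow, gross motion; the high band captures the subtle, fast dynamics the paper targets.

```python
import numpy as np

def frequency_split(seq, cutoff=4):
    """Illustrative low/high frequency decomposition of a skeleton
    sequence of shape (T frames, J joints, C coordinates) along time.

    NOTE: a simplified stand-in for the paper's learnable frequency
    operator; `cutoff` is a hypothetical fixed hyperparameter.
    """
    spec = np.fft.rfft(seq, axis=0)       # complex temporal spectrum
    low_spec = spec.copy()
    low_spec[cutoff:] = 0                 # keep only slow dynamics
    high_spec = spec - low_spec           # residual fast dynamics
    low = np.fft.irfft(low_spec, n=seq.shape[0], axis=0)
    high = np.fft.irfft(high_spec, n=seq.shape[0], axis=0)
    return low, high

# Toy clip: 64 frames, 25 joints (NTU RGB+D layout), 3-D coordinates
rng = np.random.default_rng(0)
clip = rng.standard_normal((64, 25, 3))
low, high = frequency_split(clip)
assert np.allclose(low + high, clip)      # decomposition is lossless
```

Because the two bands sum back to the original sequence, a model can process them in separate branches (or attention heads) without losing information, which is the intuition behind frequency-aware attention.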