FreqMixFormerV2: Lightweight Frequency-aware Mixed Transformer for Human Skeleton Action Recognition

📅 2024-12-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the excessive parameter count, high computational cost, and poor sensitivity to subtle motions of Transformer-based skeleton action recognition in resource-constrained settings, this paper proposes FreqMixFormerV2, a lightweight frequency-aware mixed Transformer. It builds on a frequency-domain analysis paradigm designed for skeleton sequences, incorporating a lightweight frequency operator and a simplified frequency-aware attention module that explicitly model both low- and high-frequency motion features while compressing redundant parameters. By combining frequency-domain feature modeling, optimized spatio-temporal encoding, and a mixed attention mechanism, FreqMixFormerV2 achieves a superior accuracy-efficiency trade-off on the NTU RGB+D, NTU RGB+D 120, and NW-UCLA benchmarks, using only 60% of the parameters of the best-performing prior methods and thereby improving deployment and energy efficiency.

📝 Abstract
Transformer-based human skeleton action recognition has been developed for years. However, the complexity and high parameter counts of these models hinder their practical application, especially in resource-constrained environments. In this work, we propose FreqMixFormerV2, built upon the Frequency-aware Mixed Transformer (FreqMixFormer), which pioneered frequency-domain analysis for identifying subtle and discriminative actions. We design a lightweight architecture that maintains robust performance while significantly reducing model complexity. This is achieved through a redesigned frequency operator that optimizes high-frequency and low-frequency parameter adjustments, and a simplified frequency-aware attention module. These improvements yield a substantial reduction in model parameters, enabling efficient deployment with only a minimal sacrifice in accuracy. Comprehensive evaluations on standard benchmarks (NTU RGB+D, NTU RGB+D 120, and NW-UCLA) demonstrate that the proposed model achieves a superior balance between efficiency and accuracy, outperforming state-of-the-art methods with only 60% of the parameters.
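The frequency operator described in the abstract can be illustrated with a minimal sketch: apply a discrete cosine transform (DCT) along the temporal axis of a skeleton sequence, re-weight the low- and high-frequency coefficient bands, and transform back. The function name, band split, and gain values below are illustrative assumptions, not the paper's actual implementation or hyperparameters.

```python
import numpy as np
from scipy.fft import dct, idct

def frequency_operator(x, split=0.25, low_gain=1.0, high_gain=1.5):
    """Sketch of a frequency operator for skeleton sequences.

    x: array of shape (T, J, C) -- T frames, J joints, C channels.
    split: fraction of DCT coefficients treated as the low-frequency band.
    low_gain / high_gain: illustrative band re-weighting factors.
    """
    T = x.shape[0]
    coeffs = dct(x, axis=0, norm="ortho")      # temporal DCT per joint/channel
    k = max(1, int(split * T))                 # boundary between low/high bands
    coeffs[:k] *= low_gain                     # slow, global motion components
    coeffs[k:] *= high_gain                    # subtle, fast motion components
    return idct(coeffs, axis=0, norm="ortho")  # back to the time domain

# Toy skeleton sequence: 32 frames, 25 joints (NTU layout), 3D coordinates.
seq = np.random.randn(32, 25, 3)
out = frequency_operator(seq)
print(out.shape)  # (32, 25, 3)
```

Amplifying the high-frequency band is one plausible way to make subtle motions more salient to downstream attention layers; in the paper these adjustments are learned rather than fixed.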
Problem

Research questions and friction points this paper is trying to address.

Transformer-based Skeleton Action Recognition
Resource-constrained Environment
Fine-grained Motion Identification
Innovation

Methods, ideas, or system contributions that make the work stand out.

FreqMixFormerV2
Optimized Frequency-domain Analysis
Efficient Parameter Reduction
Wenhan Wu
The University of North Carolina at Charlotte
Human Action Recognition · Human Behavior Analysis · Computer Vision
Pengfei Wang
Independent Researcher
Chen Chen
Center for Research in Computer Vision, University of Central Florida, Orlando, USA
Aidong Lu
University of North Carolina at Charlotte