🤖 AI Summary
Existing MAE-based skeleton action recognition methods predominantly reconstruct raw joint coordinates, resulting in weak semantic representation and computational redundancy. To address this, we propose the Generalized Feature Prediction (GFP) framework, which replaces low-level coordinate reconstruction with high-level semantic feature prediction to enhance both representational capacity and efficiency. GFP introduces a lightweight dynamic target generation network that constructs multi-level supervision signals in real time, coupled with a constrained-optimization mechanism that ensures feature diversity and prevents representation collapse during end-to-end training. Built upon a spatiotemporal hierarchical masked autoencoding paradigm, GFP eliminates the need for offline precomputation. Evaluated on the NTU-60, NTU-120, and PKU-MMD benchmarks, GFP achieves state-of-the-art accuracy while accelerating training by 6.2×, significantly improving downstream task performance and computational efficiency.
📝 Abstract
Recent advances in the masked autoencoder (MAE) paradigm have significantly propelled self-supervised skeleton-based action recognition. However, most existing approaches limit reconstruction targets to raw joint coordinates or their simple variants, resulting in computational redundancy and limited semantic representation. To address this, we propose a novel General Feature Prediction framework (GFP) for efficient masked skeleton modeling. Our key innovation is replacing conventional low-level reconstruction with high-level feature prediction that spans from local motion patterns to global semantic representations. Specifically, we introduce a collaborative learning framework in which a lightweight target generation network dynamically produces diversified supervision signals across spatial-temporal hierarchies, avoiding reliance on pre-computed offline features. The framework incorporates constrained optimization to ensure feature diversity while preventing model collapse. Experiments on NTU RGB+D 60, NTU RGB+D 120 and PKU-MMD demonstrate the benefits of our approach: computational efficiency (6.2× faster training than standard masked skeleton modeling methods) and superior representation quality, achieving state-of-the-art performance on various downstream tasks.
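The core idea, predicting high-level features at masked positions under a diversity constraint rather than regressing raw coordinates, can be sketched in a few lines. This is a minimal illustrative toy, not the paper's implementation: the linear "networks", the 75% masking ratio, and the variance-hinge diversity penalty (a VICReg-style choice standing in for the paper's constrained optimization) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: T frames, J joints, 3-D coordinates, D-dim feature space.
T, J, D = 16, 25, 32
skeleton = rng.normal(size=(T, J, 3))

# Random linear maps stand in for the student encoder and the lightweight
# target-generation network (assumptions; the real models are learned).
W_student = rng.normal(size=(3, D)) / np.sqrt(3)
W_target = rng.normal(size=(3, D)) / np.sqrt(3)

# Spatio-temporal mask: hide a random subset of (frame, joint) tokens.
mask = rng.random(size=(T, J)) < 0.75  # assumed 75% masking ratio

# Student predicts features at every token; the target network generates
# the supervision signal online, so no offline feature precomputation.
pred = skeleton @ W_student    # (T, J, D) predicted features
target = skeleton @ W_target   # (T, J, D) target features

# Feature-prediction loss: only masked tokens are supervised.
recon_loss = np.mean((pred[mask] - target[mask]) ** 2)

# Diversity constraint against collapse: penalize feature dimensions
# whose standard deviation across tokens falls below 1.
std = np.sqrt(target.reshape(-1, D).var(axis=0) + 1e-4)
diversity_loss = np.mean(np.maximum(0.0, 1.0 - std))

total_loss = recon_loss + diversity_loss
print(f"recon={recon_loss:.3f}  diversity={diversity_loss:.3f}")
```

If the target features collapsed to a constant, `recon_loss` could be driven to zero trivially; the variance term makes that degenerate solution costly, which is the role the abstract's constrained optimization plays.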