STGFormer: Spatio-Temporal GraphFormer for 3D Human Pose Estimation in Video

📅 2024-07-14
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address depth ambiguity in monocular video-based 3D human pose estimation caused by insufficient spatiotemporal modeling granularity, this paper proposes a novel framework integrating graph-structured priors with long-range spatiotemporal dependency modeling. Our method introduces two key innovations: (1) an STG cross-attention mechanism enabling parallel, fine-grained feature interaction across spatial and temporal dimensions; and (2) a dual-path modulated hop-wise regularized GCN, the first to jointly optimize hop-wise skip connections and structural regularization of graph convolutions. Built upon Spatio-Temporal Graph Attention and Modulated Hop-wise Regular GCN architectures, our approach establishes a criss-cross spatiotemporal modeling paradigm. Evaluated on Human3.6M and MPI-INF-3DHP, it achieves state-of-the-art performance, significantly reducing MPJPE. Results demonstrate that high-order spatiotemporal structural modeling effectively mitigates depth ambiguity.

📝 Abstract
Current methods for video-based 3D human pose estimation have achieved significant progress. However, they still face pressing challenges, such as the underutilization of spatiotemporal body-structure features in transformers and the inadequate granularity of spatiotemporal interaction modeling in graph convolutional networks, which lead to pervasive depth ambiguity in monocular 3D human pose estimation. To address these limitations, this paper presents the Spatio-Temporal GraphFormer framework (STGFormer) for 3D human pose estimation in videos. First, we introduce a Spatio-Temporal criss-cross Graph (STG) attention mechanism, designed to more effectively leverage the inherent graph priors of the human body within continuous sequence distributions while capturing spatiotemporal long-range dependencies. Next, we present a dual-path Modulated Hop-wise Regular GCN (MHR-GCN) that processes the temporal and spatial dimensions independently and in parallel, preserving features rich in temporal dynamics alongside the original or high-dimensional representations of spatial structure. Furthermore, the module leverages modulation to optimize parameter efficiency and incorporates spatiotemporal hop-wise skip connections to capture higher-order information. Finally, we demonstrate that our method achieves state-of-the-art performance on the Human3.6M and MPI-INF-3DHP datasets.
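The criss-cross idea described above can be illustrated with a minimal sketch. This is not the authors' implementation; it is a hedged NumPy toy assuming a pose tensor of shape (frames, joints, channels), where one path runs attention across joints within each frame (spatial) and a parallel path runs attention across frames for each joint (temporal), with the two paths summed. Learned projections, multi-head structure, and the graph prior are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention over the last two axes
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    return softmax(scores, axis=-1) @ v

def criss_cross_attention(x):
    """x: (T frames, J joints, C channels).
    Spatial path: joints attend to joints within each frame.
    Temporal path: frames attend to frames for each joint.
    Both paths run in parallel and are summed (a simplification)."""
    spatial = attention(x, x, x)                      # (T, J, C)
    xt = x.swapaxes(0, 1)                             # (J, T, C)
    temporal = attention(xt, xt, xt).swapaxes(0, 1)   # back to (T, J, C)
    return spatial + temporal

# toy input: 9-frame window, 17 joints (a common skeleton size), 32 channels
x = np.random.randn(9, 17, 32)
y = criss_cross_attention(x)
print(y.shape)
```

The output keeps the input shape, so such a block can be stacked like a standard transformer layer.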
Problem

Research questions and friction points this paper is trying to address.

Underutilized spatiotemporal features in transformers
Inadequate granularity in graph networks
Depth ambiguity in monocular pose estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spatio-Temporal Graph attention mechanism
Dual-path Modulated Hop-wise GCN
Modulation for parameter efficiency; hop-wise skip connections for higher-order information
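To make the hop-wise GCN contribution concrete, the sketch below shows a generic graph convolution with hop-wise skip connections: each power of the normalized adjacency matrix (each "hop") gets its own weight matrix, and the hop outputs are summed. This is an assumption-laden illustration of the general technique, not the paper's MHR-GCN; the modulation and dual spatial/temporal paths are omitted.

```python
import numpy as np

def normalize_adj(A):
    # symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def hopwise_gcn(X, A, weights, K=3):
    """Hop-wise GCN layer with skip connections (sketch).
    X: (N nodes, C_in), A: (N, N) skeleton adjacency,
    weights: list of K+1 matrices (C_in, C_out), one per hop.
    Hop 0 is the identity/skip path; hops 1..K use A^k."""
    A_norm = normalize_adj(A)
    out = X @ weights[0]               # hop 0: skip connection
    Ak = np.eye(A.shape[0])
    for k in range(1, K + 1):
        Ak = Ak @ A_norm               # k-hop propagation
        out = out + Ak @ X @ weights[k]
    return out

rng = np.random.default_rng(0)
# toy 5-joint chain skeleton: 0-1-2-3-4
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
C_in, C_out, K = 8, 16, 3
X = rng.standard_normal((5, C_in))
weights = [rng.standard_normal((C_in, C_out)) * 0.1 for _ in range(K + 1)]
Y = hopwise_gcn(X, A, weights, K=K)
print(Y.shape)
```

Summing per-hop terms lets distant joints (e.g. wrist and hip) exchange information in a single layer, which is the higher-order structural signal the innovation list refers to.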
Yang Liu
School of Electronics and Communication Engineering, Sun Yat-sen University, Shenzhen, Guangdong, China
Zhiyong Zhang
School of Electronics and Communication Engineering, Sun Yat-sen University, Shenzhen, Guangdong, China