🤖 AI Summary
Multi-view 3D human pose estimation suffers from poor generalization to unseen camera configurations, largely because existing attention mechanisms fail to explicitly model the geometric structure of keypoints and tend to overfit to training-specific camera layouts and occlusion patterns. To address this, we propose a Projective State Space (PSS) block together with a Grid Token-guided Bidirectional Scanning (GTBS) mechanism, which jointly model multi-view feature correlations and joint spatial sequence dependencies, thereby improving robustness to arbitrary, previously unseen camera geometries. Our approach integrates state-space modeling, learnable projection-sequence encoding, and collaborative multi-view feature optimization. On the CMU Panoptic dataset with a three-camera setup, our method achieves a +10.8 AP25 improvement (+24% relative gain); on the cross-dataset Campus A1 benchmark, it attains a +15.3 PCP gain (+38% relative improvement), significantly outperforming state-of-the-art methods.
📝 Abstract
While significant progress has been made in single-view 3D human pose estimation, multi-view 3D human pose estimation remains challenging, particularly in terms of generalizing to new camera configurations. Existing attention-based transformers often struggle to accurately model the spatial arrangement of keypoints, especially in occluded scenarios. Additionally, they tend to overfit to specific camera arrangements and visual scenes from the training data, resulting in substantial performance drops in new settings. In this study, we introduce a novel Multi-View State Space Modeling framework, named MV-SSM, for robustly estimating 3D human keypoints. We explicitly model the joint spatial sequence at two distinct levels: the feature level from multi-view images and the person keypoint level. We propose a Projective State Space (PSS) block to learn a generalized representation of joint spatial arrangements using state space modeling. Moreover, we replace Mamba's conventional scan with an effective Grid Token-guided Bidirectional Scanning (GTBS), which is integral to the PSS block. Multiple experiments demonstrate that MV-SSM achieves strong generalization, outperforming state-of-the-art methods: +10.8 on AP25 (+24%) on the challenging three-camera setting in CMU Panoptic, +7.0 on AP25 (+13%) on varying camera arrangements, and +15.3 PCP (+38%) on Campus A1 in cross-dataset evaluations. Project Website: https://aviralchharia.github.io/MV-SSM
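To make the scanning idea concrete, below is a minimal NumPy sketch of a bidirectional state-space scan over a grid of tokens, in the spirit of GTBS. This is not the paper's implementation: the function names, the row-major flattening order, the simple linear recurrence, and the fusion-by-averaging of the two scan directions are all illustrative assumptions.

```python
import numpy as np

def ssm_scan(seq, A, B, C):
    """Toy linear state-space recurrence over a token sequence:
    h_t = A h_{t-1} + B x_t,  y_t = C h_t  (illustrative, not Mamba's
    selective/input-dependent parameterization)."""
    h = np.zeros(A.shape[0])
    ys = []
    for x in seq:
        h = A @ h + B @ x
        ys.append(C @ h)
    return np.stack(ys)

def grid_bidirectional_scan(grid, A, B, C):
    """Sketch of a grid-token bidirectional scan: flatten the H x W x D
    token grid row-major, run the recurrence forward and backward so each
    token aggregates context from both directions, then fuse by averaging."""
    H, W, D = grid.shape
    seq = grid.reshape(H * W, D)
    fwd = ssm_scan(seq, A, B, C)            # left-to-right pass
    bwd = ssm_scan(seq[::-1], A, B, C)[::-1]  # right-to-left pass, re-aligned
    return ((fwd + bwd) / 2.0).reshape(H, W, -1)
```

A usage note: with token dimension `D`, the output has one channel per row of `C`; stacking several such blocks (with learned, input-dependent `A`, `B`, `C`) is what a Mamba-style model would do, whereas here the matrices are fixed for clarity.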