🤖 AI Summary
Monocular 3D human pose estimation suffers from inherent 2D-to-3D ambiguity, while reliance on costly 3D ground-truth annotations limits generalization. To address this, we propose a self-supervised framework requiring only synchronized, uncalibrated dual-view image pairs, without any 3D supervision. Our method enforces geometric consistency across views via rigid alignment, introducing a cross-view pose consistency loss that constrains the 3D predictions to satisfy multi-view rigidity constraints. This enables 3D-annotation-free domain adaptation for monocular pose estimation using multi-view weak supervision alone. Crucially, it operates without camera parameters or 3D labels. Evaluated on multiple benchmarks, it achieves state-of-the-art performance when training from scratch in a semi-supervised manner, substantially improves transferability in zero-3D-annotation settings, and supports plug-and-play dual-camera acquisition with no calibration required, advancing practical deployment of monocular 3D pose estimation.
📝 Abstract
Deducing a 3D human pose from a single 2D image is inherently challenging because multiple 3D poses can correspond to the same 2D representation. 3D data can resolve this pose ambiguity, but it is expensive to record and requires an intricate setup that is often restricted to controlled lab environments. We propose a method that improves the performance of deep learning-based monocular 3D human pose estimation models by using multi-view data only during training, not during inference. We introduce a novel loss function, the consistency loss, which operates on two synchronized views. This approach is simpler than previous methods that require 3D ground truth or intrinsic and extrinsic camera parameters. Our consistency loss penalizes differences between two pose sequences after rigid alignment. We demonstrate that our consistency loss substantially improves performance when fine-tuning without 3D data. Furthermore, we show that training models from scratch with our consistency loss in a semi-supervised manner yields state-of-the-art performance. Our findings provide a simple way to capture new data, e.g., in a new domain; such data can be collected with off-the-shelf cameras and no calibration. We make all our code and data publicly available.
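The abstract does not give the exact form of the consistency loss, but the description ("penalizes differences in two pose sequences after rigid alignment") matches a standard Procrustes/Kabsch-style construction. The sketch below is a minimal, hypothetical NumPy version: it rigidly aligns (rotation plus translation, no scaling) the 3D joints predicted from one view onto those predicted from the other, then takes the mean per-joint distance. Function names and shapes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def rigid_align(A, B):
    """Kabsch algorithm: find rotation R and translation t that best
    map joint set A onto joint set B (both of shape (J, 3))."""
    muA, muB = A.mean(axis=0), B.mean(axis=0)
    A0, B0 = A - muA, B - muB            # center both point sets
    H = A0.T @ B0                        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # correct for a possible reflection so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = muB - R @ muA
    return R, t

def consistency_loss(P1, P2):
    """Mean per-joint distance between the two views' 3D predictions
    after rigidly aligning P1 onto P2 (hypothetical loss form)."""
    R, t = rigid_align(P1, P2)
    P1_aligned = P1 @ R.T + t            # apply R, t to each joint
    return np.mean(np.linalg.norm(P1_aligned - P2, axis=-1))
```

Because the alignment absorbs the unknown relative camera pose, the loss needs no intrinsic or extrinsic calibration: only the shape of the two predicted skeletons is compared, which is what permits training with uncalibrated camera pairs.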