🤖 AI Summary
Video-based 3D human pose estimation in unconstrained environments suffers significant accuracy degradation during self-contact (e.g., hand-to-face). To address this, we propose BioTUCH, a novel framework that, to our knowledge, is the first to introduce bioimpedance sensing into pose reconstruction. It employs miniature wearable sensors to detect skin-to-skin contact events in real time. We further construct the first synchronized dataset comprising RGB video, bioimpedance signals, and high-fidelity motion-capture ground truth, explicitly annotated for contact. Our contact-aware optimization initializes from a vision-based pose estimate and jointly minimizes reprojection error and deviation from that estimate while enforcing mesh vertex proximity constraints, which are activated only when the impedance signal indicates contact. Evaluated with three mainstream pose estimators as inputs, BioTUCH improves reconstruction accuracy by 11.7% on average. It substantially enhances the fidelity of pseudo-ground truth under self-contact, enabling robust motion generation and high-quality training data for downstream models.
📝 Abstract
Capturing accurate 3D human pose in the wild would provide valuable data for training pose estimation and motion generation methods. While video-based estimation approaches have become increasingly accurate, they often fail in common scenarios involving self-contact, such as a hand touching the face. In contrast, wearable bioimpedance sensing can cheaply and unobtrusively measure ground-truth skin-to-skin contact. Consequently, we propose a novel framework that combines visual pose estimators with bioimpedance sensing to capture the 3D pose of people while taking self-contact into account. Our method, BioTUCH, initializes the pose using an off-the-shelf estimator and introduces contact-aware pose optimization during measured self-contact: reprojection error and deviations from the input estimate are minimized while enforcing vertex proximity constraints. We validate our approach using a new dataset of synchronized RGB video, bioimpedance measurements, and 3D motion capture. Testing with three input pose estimators, we demonstrate an average 11.7% improvement in reconstruction accuracy. We also present a miniature wearable bioimpedance sensor that enables efficient large-scale collection of contact-aware training data for improving pose estimation and generation using BioTUCH. Code and data are available at biotuch.is.tue.mpg.de.
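To make the optimization objective concrete, the following is a minimal sketch, not the authors' implementation: all function and parameter names are illustrative assumptions. It combines the three terms described in the abstract: a reprojection term, a regularizer toward the initial vision-based estimate, and a vertex-proximity term that is gated by the measured bioimpedance contact signal.

```python
import numpy as np

def contact_aware_loss(joints_2d_proj, joints_2d_obs,
                       pose, pose_init,
                       verts_a, verts_b, contact_detected,
                       w_reproj=1.0, w_init=0.1, w_contact=10.0):
    """Illustrative contact-aware objective (names and weights are assumptions).

    joints_2d_proj / joints_2d_obs: projected vs. detected 2D joints, (J, 2).
    pose / pose_init: current pose parameters vs. the off-the-shelf estimate.
    verts_a / verts_b: candidate contact vertex pairs on the body mesh, (V, 3).
    contact_detected: 0/1 flag from the bioimpedance sensor.
    """
    # Reprojection term: projected 3D joints should match 2D detections.
    l_reproj = np.sum((joints_2d_proj - joints_2d_obs) ** 2)
    # Regularizer: stay close to the initial vision-based pose estimate.
    l_init = np.sum((pose - pose_init) ** 2)
    # Contact term: pull candidate vertex pairs together, but only when
    # the impedance signal reports actual skin-to-skin contact.
    dists = np.linalg.norm(verts_a - verts_b, axis=-1)
    l_contact = contact_detected * np.sum(dists ** 2)
    return w_reproj * l_reproj + w_init * l_init + w_contact * l_contact
```

In a full pipeline this scalar would be minimized over the pose parameters with a gradient-based optimizer; the gating means the proximity term contributes nothing during frames without measured contact, so the solution reduces to the standard reprojection fit there.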