🤖 AI Summary
This work addresses the challenge of accurate, comfortable, and privacy-preserving lower-limb motion capture under loose-fitting clothing. We propose a non-contact textile capacitive sensing system: conductive fabric electrodes are integrated into relaxed-fit trousers and coupled with a low-power signal acquisition unit and a lightweight Transformer model, enabling end-to-end, real-time mapping from capacitance measurements to joint angles. To our knowledge, this is the first capacitive textile-based lower-limb motion capture system that operates without subject-specific calibration while ensuring privacy (no cameras or biosignals), comfort (non-restrictive wear), and strong cross-subject generalizability. Evaluated on an 11-subject dataset, the system achieves a mean joint position error of 11.96 cm and a mean joint angle error of 12.3°. The model reduces parameter count by 22× relative to baseline architectures and runs at 42 FPS, enabling deployment on resource-constrained edge devices such as smartwatches.
📝 Abstract
We present VersaPants, the first loose-fitting, textile-based capacitive sensing system for lower-body motion capture, built on the open-hardware VersaSens platform. By integrating conductive textile patches and a compact acquisition unit into a pair of pants, the system reconstructs lower-body pose without compromising comfort. Unlike IMU-based systems that require user-specific fitting or camera-based methods that raise privacy concerns, our approach operates without fitting adjustments and preserves user privacy. VersaPants is a custom-designed smart garment featuring 6 capacitive channels per leg. We employ a lightweight Transformer-based deep learning model that maps capacitance signals to joint angles, enabling embedded implementation on edge platforms. To test our system, we collected approximately 3.7 hours of motion data from 11 participants performing 16 daily and exercise-based movements. The model achieves a mean per-joint position error (MPJPE) of 11.96 cm and a mean per-joint angle error (MPJAE) of 12.3 degrees across the hip, knee, and ankle joints, indicating the model's ability to generalize to unseen users and movements. A comparative analysis of existing textile-based deep learning architectures shows that our model achieves competitive reconstruction performance with up to 22 times fewer parameters and 18 times fewer FLOPs, enabling real-time inference at 42 FPS on a commercial smartwatch without quantization. These results position VersaPants as a promising step toward scalable, comfortable, and embedded motion-capture solutions for fitness, healthcare, and wellbeing applications.
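To make the abstract's pipeline concrete (12 capacitive channels, 6 per leg, mapped by a lightweight Transformer-style model to joint angles), here is a minimal NumPy sketch of a single attention block followed by a regression head. All dimensions, weight initializations, and the 6-angle output are illustrative assumptions, not the paper's actual architecture; trained parameters are replaced by random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): a window of T frames,
# C = 12 capacitive channels (6 per leg), D-dim embeddings, and
# J = 6 output joint angles (hip, knee, ankle per leg in this sketch).
T, C, D, J = 20, 12, 32, 6

# Random weights stand in for trained parameters.
W_in = rng.standard_normal((C, D)) * 0.1   # per-frame channel embedding
W_q = rng.standard_normal((D, D)) * 0.1    # query projection
W_k = rng.standard_normal((D, D)) * 0.1    # key projection
W_v = rng.standard_normal((D, D)) * 0.1    # value projection
W_out = rng.standard_normal((D, J)) * 0.1  # joint-angle regression head

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def predict_angles(cap_window):
    """Map a (T, C) window of capacitance samples to J joint angles."""
    x = cap_window @ W_in                    # (T, D) frame embeddings
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    attn = softmax(q @ k.T / np.sqrt(D))     # (T, T) temporal attention
    h = attn @ v                             # (T, D) attention-mixed features
    return h.mean(axis=0) @ W_out            # mean-pool over time -> (J,)

angles = predict_angles(rng.standard_normal((T, C)))
```

A deployed model would add the usual Transformer components (multi-head attention, feed-forward layers, normalization, positional encoding) and be trained on synchronized capacitance/mocap pairs; this sketch only shows the shape of the capacitance-to-angle mapping.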