FIP: Endowing Robust Motion Capture on Daily Garment by Fusing Flex and Inertial Sensors

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the degradation in joint pose estimation accuracy caused by sensor displacement in loose-fitting daily apparel, this paper proposes the Flexible Inertial Poser (FIP). FIP introduces a Displacement Latent Diffusion Model that models and compensates for positional discrepancies of the flex sensors and IMUs, incorporates a Physics-informed Calibrator to enforce kinematic consistency, and designs a Pose Fusion Predictor that integrates flex-sensor signals, IMU measurements, and the latent displacement representations. Compared with state-of-the-art IMU approaches, FIP improves overall angular error, elbow angular error, and end-effector position error by 19.5%, 26.4%, and 30.1%, respectively. Moreover, FIP demonstrates strong robustness across diverse body shapes and dynamic motion sequences.

📝 Abstract
What if our clothes could capture our body motion accurately? This paper introduces Flexible Inertial Poser (FIP), a novel motion-capture system using daily garments with two elbow-attached flex sensors and four Inertial Measurement Units (IMUs). To address the inevitable sensor displacements in loose wearables, which significantly degrade joint tracking accuracy, we identify the distinct characteristics of the flex and inertial sensor displacements and develop a Displacement Latent Diffusion Model and a Physics-informed Calibrator to compensate for sensor displacements based on such observations, resulting in a substantial improvement in motion capture accuracy. We also introduce a Pose Fusion Predictor to enhance multimodal sensor fusion. Extensive experiments demonstrate that our method achieves robust performance across varying body shapes and motions, significantly outperforming SOTA IMU approaches with a 19.5% improvement in angular error, a 26.4% improvement in elbow angular error, and a 30.1% improvement in positional error. FIP opens up opportunities for ubiquitous human-computer interaction and diverse interactive applications such as the Metaverse, rehabilitation, and fitness analysis.
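The sensor layout described above (two elbow flex sensors plus four IMUs, with a learned displacement compensation step before fusion) can be illustrated with a minimal sketch. All names and the simple additive compensation below are hypothetical and stand in for FIP's learned components (the Displacement Latent Diffusion Model and Pose Fusion Predictor are neural models, not the fixed arithmetic shown here); this only shows one plausible way the per-frame multimodal input could be assembled.

```python
import numpy as np

def fuse_pose_features(flex_angles, imu_quats, displacement_offset=None):
    """Toy multimodal feature assembly for a FIP-style pipeline.

    flex_angles:         (T, 2) elbow bend angles in radians
    imu_quats:           (T, 4, 4) one (w, x, y, z) quaternion per IMU per frame
    displacement_offset: optional (2,) correction for flex readings; in FIP
                         this compensation is inferred by a learned model,
                         the constant offset here is purely illustrative
    Returns a (T, 18) fused feature matrix (2 flex + 4*4 quaternion values).
    """
    flex = np.asarray(flex_angles, dtype=float)
    quats = np.asarray(imu_quats, dtype=float)
    if displacement_offset is not None:
        # Stand-in for displacement compensation of the flex channel.
        flex = flex + np.asarray(displacement_offset, dtype=float)
    # Re-normalize so orientation features are unit quaternions.
    quats = quats / np.linalg.norm(quats, axis=-1, keepdims=True)
    # Concatenate both modalities into one per-frame feature vector.
    return np.concatenate([flex, quats.reshape(len(flex), -1)], axis=1)
```

A downstream pose regressor would consume windows of these fused features; the actual system additionally conditions on the latent displacement representation rather than a fixed offset.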
Problem

Research questions and friction points this paper is trying to address.

Enhance motion capture accuracy
Compensate sensor displacements in wearables
Improve multimodal sensor fusion efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fuses flex and inertial sensors
Develops Displacement Latent Diffusion Model
Introduces Pose Fusion Predictor
Jiawei Fang
Xiamen University, Xiamen, China
Ruonan Zheng
Xiamen University, Xiamen, China
Yuan Yao
Xiamen University, Xiamen, China
Xiaoxia Gao
Xiamen University, Xiamen, China
Chengxu Zuo
Xiamen University, Xiamen, China
Shihui Guo
School of Informatics, Xiamen University
Human-Computer Interaction · Virtual/Augmented Reality · Computer Animation
Yiyue Luo
Assistant Professor, University of Washington
Intelligent Textiles · Digital Fabrication · HCI · Applied Machine Learning