🤖 AI Summary
This work proposes a closed-loop CPR training glove that addresses the limitations of traditional audiovisual instruction, which often causes visual distraction and hinders self-directed practice. The system integrates a high-resolution tactile sensor array with a vibrotactile feedback module and employs a lightweight statistical model capable of sub-millisecond inference to estimate compression frequency, force, and hand posture in real time. Tactile cues are then used to guide the user without requiring visual attention. The glove achieves a force sensitivity of approximately 0.85 over a 0–600 N range, with force and posture estimation accuracy exceeding 92%. User studies demonstrate that this approach significantly reduces visual interference, enabling, for the first time, fully autonomous CPR training based on tactile perception and feedback, thereby minimizing reliance on external displays and enhancing both immersion and safety during training.
📝 Abstract
Cardiopulmonary resuscitation (CPR) is a critical life-saving procedure, and effective training benefits from self-directed practice beyond instructor-led sessions. In this paper, we propose a closed-loop CPR training glove that integrates a high-resolution tactile sensing array and vibrotactile actuators for self-directed practice. The tactile sensing array measures distributed pressures across the palm and dorsum to enable real-time estimation of compression rate, force, and hand pose. Based on these estimations, the glove delivers immediate haptic feedback to guide the user for proper CPR, reducing reliance on external audio-visual displays. We quantified the tactile sensor performance by measuring wide-range sensitivity (~0.85 over 0-600 N), computing hysteresis (56.04%), testing stability (11.05% drift over 300 cycles), and estimating global signal-to-noise ratio (18.90 +/- 2.41 dB at 600 N). Our closed-loop pipeline provides continuous modeling and feedback of key performance metrics essential for high-quality CPR. Our lightweight statistical models achieves>92% accuracy for force estimation and hand pose classification within sub-millisecond inference time. Our user study (N=8) showed that haptic feedback reduced visual distraction compared to audio-visual cues, though simplified patterns were required for reliable perception under dynamic load. These results highlight the feasibility of the proposed system and offer design insights for future haptic CPR self-training system.