🤖 AI Summary
To address the sensitivity and durability bottlenecks of flexible wearable gloves, this work proposes an anatomically adaptive, modular capacitive-sensing glove. Stretchable capacitive sensors are constructed using wire-shaped electrodes and liquid metal (EGaIn), with independently tailored sensing modules aligned to hand anatomy for high-precision dynamic measurement of single-joint flexion and inter-finger distances. A novel hybrid architecture integrates a CNN-MLP classifier with a Transformer-driven point cloud reconstruction network, enabling simultaneous gesture recognition and 3D hand pose estimation. Experiments demonstrate 99.15% accuracy on a 30-class gesture recognition benchmark; dynamic hand reconstruction achieves a mean distance error of 2.076 ± 3.231 mm, with keypoint accuracy improvements of 9.7%–64.9% over state-of-the-art methods. This advances high-fidelity, real-time human-hand interaction sensing.
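The summary's classification branch (a CNN feeding an MLP over the glove's capacitance channels) can be illustrated with a minimal forward-pass sketch. The paper does not state layer sizes or channel counts, so everything below — 15 sensor channels, a 50-frame window, one conv layer, one hidden layer — is an assumed stand-in with random weights in place of trained parameters; only the 30-class output matches the reported benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed input layout: 15 capacitance channels (joint flexion + inter-finger
# spacing) over a 50-frame window; the 30 classes match the paper's benchmark.
N_CHANNELS, N_FRAMES, N_CLASSES = 15, 50, 30

def conv1d(x, w, b):
    """Valid 1-D convolution over time: x is (C_in, T), w is (C_out, C_in, K)."""
    c_out, _, k = w.shape
    t_out = x.shape[1] - k + 1
    out = np.empty((c_out, t_out))
    for o in range(c_out):
        for t in range(t_out):
            out[o, t] = np.sum(w[o] * x[:, t:t + k]) + b[o]
    return out

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Random weights stand in for trained parameters (sizes are assumptions).
w1, b1 = rng.normal(0, 0.1, (8, N_CHANNELS, 5)), np.zeros(8)
w2, b2 = rng.normal(0, 0.1, (64, 8 * (N_FRAMES - 5 + 1))), np.zeros(64)
w3, b3 = rng.normal(0, 0.1, (N_CLASSES, 64)), np.zeros(N_CLASSES)

window = rng.normal(size=(N_CHANNELS, N_FRAMES))   # one sensor window
feat = relu(conv1d(window, w1, b1)).ravel()        # CNN feature extractor
probs = softmax(w3 @ relu(w2 @ feat + b2) + b3)    # MLP classification head
```

The Transformer-driven point cloud branch would consume the same feature stream; it is omitted here since the paper's decoder structure is not described in this summary.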
📝 Abstract
With the increasing demand for human-computer interaction (HCI), flexible wearable gloves have emerged as a promising solution in virtual reality, medical rehabilitation, and industrial automation. However, current devices still suffer from insufficient sensitivity and limited durability, which hinder their wide application. This paper presents a highly sensitive, modular, and flexible capacitive sensor based on line-shaped electrodes and liquid metal (EGaIn), integrated into a sensor module tailored to the human hand's anatomy. The proposed system independently captures bending information from each finger joint, while additional measurements between adjacent fingers enable the recording of subtle variations in inter-finger spacing. This design enables accurate gesture recognition and dynamic hand morphological reconstruction of complex movements using point clouds. Experimental results demonstrate that our classifier based on a Convolutional Neural Network (CNN) and a Multilayer Perceptron (MLP) achieves an accuracy of 99.15% across 30 gestures. Meanwhile, a Transformer-based Deep Neural Network (DNN) accurately reconstructs dynamic hand shapes with an Average Distance (AD) of 2.076 ± 3.231 mm, with the reconstruction accuracy at individual key points surpassing state-of-the-art (SOTA) benchmarks by 9.7% to 64.9%. The proposed glove shows excellent accuracy, robustness, and scalability in gesture recognition and hand reconstruction, making it a promising solution for next-generation HCI systems.
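To see why a stretchable liquid-metal sensor of this kind responds to joint flexion, a first-order model (an assumption for illustration, not taken from the paper) treats each sensor as a coaxial capacitor whose capacitance grows linearly with electrode length, so flexion that stretches the sensor changes the capacitance roughly in proportion to the axial strain:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def coaxial_capacitance(length_m, r_outer, r_inner, eps_r=3.0):
    """Ideal coaxial capacitor: C = 2*pi*eps0*eps_r*L / ln(b/a).
    eps_r ~ 3 is an assumed elastomer dielectric constant; the paper's
    actual sensor geometry and materials may differ."""
    return 2 * math.pi * EPS0 * eps_r * length_m / math.log(r_outer / r_inner)

# A hypothetical 40 mm sensor stretched by 10% axial strain. Holding the radii
# fixed (Poisson contraction ignored), C scales with length, so the relative
# capacitance change equals the strain to first order.
c0 = coaxial_capacitance(0.040, 1.0e-3, 0.3e-3)
c1 = coaxial_capacitance(0.044, 1.0e-3, 0.3e-3)
rel_change = (c1 - c0) / c0  # ≈ 0.10 for 10% strain
```

This linear length dependence is what makes a simple readout of each channel a usable proxy for per-joint bend angle and inter-finger spacing.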