Towards Biosignals-Free Autonomous Prosthetic Hand Control via Imitation Learning

📅 2025-06-10
🤖 AI Summary
Traditional myoelectric (sEMG)-based prosthetic control imposes high cognitive and motor loads on users. To address this, we propose a fully sEMG-free, vision-driven autonomous prosthetic hand control system. The system employs a single wrist-mounted RGB camera to perceive object geometry in real time and leverages a lightweight imitation learning framework trained exclusively on minimal, user-specific teleoperation demonstrations, enabling end-to-end learning of grasp/release policies. Our approach supports cross-user transfer and generalization to unseen objects, while incorporating adaptive force control and natural, responsive behavior. Experimental results demonstrate high success rates across diverse object shapes, significantly reducing both physiological and psychological burdens. To our knowledge, this is the first demonstration of a general-purpose, vision-based autonomous control paradigm for prosthetic hands that eliminates reliance on biological signals entirely.

📝 Abstract
Limb loss affects millions globally, impairing physical function and reducing quality of life. Most traditional surface electromyography (sEMG) and semi-autonomous methods require users to generate myoelectric signals for every control action, imposing physically and mentally taxing demands. This study develops a fully autonomous control system that enables a prosthetic hand to automatically grasp and release objects of various shapes using only a camera attached to the wrist. When the hand is placed near an object, the system automatically executes a grasp with appropriate grip force in response to the hand's movements and the environment; to release a grasped object, the user simply moves it close to the table and the system automatically opens the hand. Such a system would give individuals with limb loss a very easy-to-use prosthetic control interface and greatly reduce the mental effort of operating it. To achieve this goal, we developed a teleoperation system to collect human demonstration data for training the prosthetic hand control model with imitation learning, so that the prosthetic hand mimics human actions. By training the model on data from only a few objects and a single participant, we show that the imitation learning algorithm achieves high success rates and generalizes to additional individuals and to unseen objects of varying weights. Demonstrations are available at https://sites.google.com/view/autonomous-prosthetic-hand
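The core of the approach described in the abstract, learning a grasp/release policy from teleoperation demonstrations, is a form of behavior cloning: supervised regression from observations to demonstrated commands. The sketch below illustrates that idea on synthetic data with a simple linear policy; the feature names, dimensions, and the least-squares fit are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for visual features extracted from wrist-camera frames
# (e.g. object size/distance cues): 200 demonstration timesteps, 4 features.
X = rng.normal(size=(200, 4))

# Hypothetical "expert" mapping from features to a grip-aperture command,
# plus small teleoperation noise on the recorded demonstrations.
w_true = np.array([0.8, -0.3, 0.5, 0.1])
y = X @ w_true + 0.01 * rng.normal(size=200)

# Behavior cloning as least-squares regression: fit policy weights that
# reproduce the demonstrated commands from the observed features.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# The learned policy predicts a grip command for a new observation.
obs = np.array([0.5, 0.2, -0.1, 0.3])
command = obs @ w_hat
```

In the paper's setting, the linear map would be replaced by a vision model over raw RGB frames, but the training signal is the same: minimize the gap between the policy's output and the human demonstrator's actions.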
Problem

Research questions and friction points this paper is trying to address.

Develop autonomous prosthetic hand control without biosignals
Enable automatic grasping and releasing using wrist camera
Reduce mental effort with imitation learning from human demonstrations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Camera-based autonomous prosthetic hand control
Imitation learning for grasp and release actions
Generalizes to unseen objects and users
Kaijie Shi
Department of Computer Science, Memorial University of Newfoundland, St. John’s, NL A1B 3X5, Canada, and also with College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou, 325000, China.
Wanglong Lu
Department of Computer Science, Memorial University of Newfoundland, St. John’s, NL A1B 3X5, Canada, and also with College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou, 325000, China.
Hanli Zhao
College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou, 325000, China.
Vinicius Prado da Fonseca
Memorial University of Newfoundland
Robotic Manipulation · Tactile Sensing · Artificial Intelligence · Applied Machine Learning
Ting Zou
Department of Mechanical and Mechatronics Engineering, Memorial University of Newfoundland, St. John’s, NL A1B 3X5, Canada.
Xianta Jiang
Department of Computer Science, Memorial University of Newfoundland, St. John’s, NL A1B 3X5, Canada.