Point Cloud-based Grasping for Soft Hand Exoskeleton

📅 2025-04-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of dexterous grasping for individuals with hand impairments in complex, unstructured environments, this paper proposes a point-cloud-based visual predictive control framework. The framework integrates depth sensing with 3D geometric modeling to enable environmental context understanding, real-time target grasp prediction, and closed-loop control of a soft robotic hand exoskeleton. Its core innovation is a geometry-driven grasping paradigm that eliminates reliance on large-scale annotated datasets, improving generalization across diverse objects and scenes and robustness to environmental variations. The authors introduce the Grasping Ability Score (GAS) as a quantitative evaluation metric; the method achieves a state-of-the-art GAS of 91% across 15 object categories and maintains high reconstruction success rates on previously unseen objects.

📝 Abstract
Grasping is a fundamental skill for interacting with and manipulating objects in the environment. However, this ability can be challenging for individuals with hand impairments. Soft hand exoskeletons designed to assist grasping can enhance or restore essential hand functions, yet controlling these soft exoskeletons to support users effectively remains difficult due to the complexity of understanding the environment. This study presents a vision-based predictive control framework that leverages contextual awareness from depth perception to predict the grasping target and determine the next control state for activation. Unlike data-driven approaches that require extensive labelled datasets and struggle with generalizability, our method is grounded in geometric modelling, enabling robust adaptation across diverse grasping scenarios. The Grasping Ability Score (GAS) was used to evaluate performance, with our system achieving a state-of-the-art GAS of 91% across 15 objects and healthy participants, demonstrating its effectiveness across different object types. The proposed approach maintained reconstruction success for unseen objects, underscoring its enhanced generalizability compared to learning-based models.
Problem

Research questions and friction points this paper is trying to address.

Enhancing grasping for hand-impaired individuals using soft exoskeletons
Predicting grasping targets via vision-based control and depth perception
Improving generalizability in grasping with geometric modeling over data-driven methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-based predictive control framework
Geometric modelling for robust adaptation
Depth perception for contextual awareness
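To make the geometry-driven idea concrete, here is a minimal illustrative sketch (not the authors' implementation, whose details are not given in this summary): it estimates a grasp aperture directly from an object's point cloud via principal-axis analysis, with no learned model or labelled data. The function name, the 20% clearance margin, and the 9 cm maximum aperture are assumptions for illustration only.

```python
import numpy as np

def predict_grasp_aperture(points, max_aperture=0.09):
    """Estimate a target hand opening (m) from an object point cloud.

    points: (N, 3) array of 3D points on the segmented object.
    The grasp is taken across the object's smallest principal extent,
    with 20% clearance, clipped to the exoskeleton's aperture range.
    """
    centered = points - points.mean(axis=0)
    # Principal axes of the object's geometry (purely geometric, no training).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    projected = centered @ vt.T                 # coordinates along principal axes
    extents = np.ptp(projected, axis=0)         # object size along each axis
    grasp_width = extents.min()                 # grasp the narrowest dimension
    return float(np.clip(grasp_width * 1.2, 0.0, max_aperture))

# Synthetic cylinder-like object, ~4 cm in diameter and 12 cm tall:
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)
pts = np.stack([0.02 * np.cos(theta),
                0.02 * np.sin(theta),
                rng.uniform(0.0, 0.12, 500)], axis=1)
aperture = predict_grasp_aperture(pts)          # roughly 0.048 m for this object
```

In a closed-loop setup along the lines described above, an estimate like this would be recomputed from each depth frame and used to set the next activation state of the soft exoskeleton.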
Chen Hu
School of Artificial Intelligence and Computer Science, Jiangnan University
Geometric Deep Learning, Machine Learning
Enrica Tricomi
Institut für Technische Informatik (ZITI), Heidelberg University, 69120 Heidelberg, Germany
Eojin Rho
School of Computing, KAIST, Daejeon 34141, South Korea
Daekyum Kim
Assistant Professor, Korea University
Robotics, Artificial Intelligence, Computer Vision, Wearables, Soft Robotics
L. Masia
Munich Institute for Robotics and Machine Intelligence, Technical University of Munich, 80333 Munich, Germany
Shan Luo
Reader (Associate Professor), King's College London
Robotics, Robot Perception, Tactile Sensing, Computer Vision, Machine Learning
Letizia Gionfrida
King's College London, London, WC2R 2LS, UK