🤖 AI Summary
Traditional semi-autonomous prosthetic hands rely on multi-stage perception pipelines (e.g., object detection, pose estimation, grasp planning), which are computationally heavy and lack end-to-end coherence. Method: This work investigates vision-language models (VLMs) as unified embodied perception modules for myoelectric hand control. We introduce the first VLM benchmark framework tailored to prosthetic control, enabling joint end-to-end inference of object attributes (name, shape, orientation, size) and grasp parameters (grasp type, wrist rotation, finger aperture). We evaluate eight state-of-the-art VLMs using structured JSON prompting on a dataset of 34 everyday objects. Contribution/Results: VLMs achieve high accuracy in object recognition and shape classification but show marked limitations in metric size estimation and fine-grained grasp parameter prediction. This study provides the first systematic empirical validation of VLMs' capabilities and constraints in embodied perception for prosthetics, establishing a novel paradigm for lightweight, intelligent, and integrated perceptual architectures in assistive robotics.
📝 Abstract
This study examines the potential of Vision-Language Models (VLMs) to improve the perceptual capabilities of semi-autonomous prosthetic hands. We introduce a unified benchmark for end-to-end perception and grasp inference, evaluating whether a single VLM can perform tasks that traditionally require complex pipelines with separate modules for object detection, pose estimation, and grasp planning. To establish the feasibility and current limitations of this approach, we benchmark eight contemporary VLMs on a unified task essential for bionic grasping: from a single static image, each model must (1) identify common objects and their key properties (name, shape, orientation, and dimensions) and (2) infer appropriate grasp parameters (grasp type, wrist rotation, hand aperture, and number of fingers). A single prompt requesting structured JSON output was applied to a dataset of 34 snapshots of common objects. We analyzed key performance metrics, including accuracy for categorical attributes (e.g., object name, shape) and error in numerical estimates (e.g., dimensions, hand aperture), along with latency and cost. The results show that most models performed well at object identification and shape recognition, whereas accuracy in estimating dimensions and inferring optimal grasp parameters, particularly wrist rotation and hand aperture, varied considerably. This work highlights the current capabilities and limitations of VLMs as advanced perceptual modules for semi-autonomous control of bionic limbs, demonstrating their potential for effective prosthetic applications.
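To make the structured-output protocol concrete, the sketch below shows what parsing and minimally validating such a VLM response could look like. The field names, units, and example values are illustrative assumptions based on the attributes listed above (name, shape, orientation, dimensions; grasp type, wrist rotation, hand aperture, number of fingers); they are not the paper's actual schema.

```python
import json

# Hypothetical VLM response; field names and units (mm, degrees) are
# assumptions inferred from the attributes described in the abstract.
EXAMPLE_RESPONSE = """
{
  "object": {
    "name": "coffee mug",
    "shape": "cylinder",
    "orientation": "upright",
    "dimensions_mm": {"width": 80, "height": 95, "depth": 80}
  },
  "grasp": {
    "grasp_type": "power",
    "wrist_rotation_deg": 0,
    "hand_aperture_mm": 85,
    "num_fingers": 5
  }
}
"""

def parse_vlm_output(raw: str) -> dict:
    """Parse a structured VLM reply and check that every expected
    field is present before it is handed to the grasp controller."""
    data = json.loads(raw)
    expected = {
        "object": ("name", "shape", "orientation", "dimensions_mm"),
        "grasp": ("grasp_type", "wrist_rotation_deg",
                  "hand_aperture_mm", "num_fingers"),
    }
    for section, fields in expected.items():
        missing = [f for f in fields if f not in data.get(section, {})]
        if missing:
            raise ValueError(f"missing {section} fields: {missing}")
    return data

result = parse_vlm_output(EXAMPLE_RESPONSE)
print(result["grasp"]["grasp_type"])  # prints "power"
```

Requiring JSON output in the prompt makes the reply machine-parseable, and a validation step like this is one way a controller could reject malformed or incomplete replies before actuating the hand.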