🤖 AI Summary
Individuals with upper-limb impairments (e.g., post-stroke patients) face significant challenges in performing daily bimanual tasks independently due to high reliance on manual dexterity and coordination.
Method: This study proposes a head-mounted laser-guided human–robot interface integrating vision-based servoing with a lightweight neural network for real-time object detection, combined with laser-projected spatial localization, 6-degree-of-freedom (6DOF) robotic arm motion planning, and 1-degree-of-freedom (1DOF) beak-inspired gripper control.
Contribution/Results: We introduce the novel “head-mounted laser + dual-mode interaction” paradigm—enabling spatial point selection and virtual paper-keyboard input—to support single-handed execution of bimanual activities. The system emphasizes intuitiveness, low cost, and accessibility. Experimental evaluation demonstrates high task accuracy and low latency in grasp, transport, and placement tasks, significantly improving user independence and task completion rates. This work establishes a scalable, clinically viable technical pathway for rehabilitation-assistive human–robot interaction.
📝 Abstract
Robotics has shown significant potential in assisting people with disabilities to enhance their independence and involvement in daily activities. Indeed, a societal long-term impact is expected in home-care assistance with the deployment of intelligent robotic interfaces. This work presents a human-robot interface developed to help people with upper limbs impairments, such as those affected by stroke injuries, in activities of everyday life. The proposed interface leverages on a visual servoing guidance component, which utilizes an inexpensive but effective laser emitter device. By projecting the laser on a surface within the workspace of the robot, the user is able to guide the robotic manipulator to desired locations, to reach, grasp and manipulate objects. Considering the targeted users, the laser emitter is worn on the head, enabling to intuitively control the robot motions with head movements that point the laser in the environment, which projection is detected with a neural network based perception module. The interface implements two control modalities: the first allows the user to select specific locations directly, commanding the robot to reach those points; the second employs a paper keyboard with buttons that can be virtually pressed by pointing the laser at them. These buttons enable a more direct control of the Cartesian velocity of the end-effector and provides additional functionalities such as commanding the action of the gripper. The proposed interface is evaluated in a series of manipulation tasks involving a 6DOF assistive robot manipulator equipped with 1DOF beak-like gripper. The two interface modalities are combined to successfully accomplish tasks requiring bimanual capacity that is usually affected in people with upper limbs impairments.