🤖 AI Summary
This study addresses the challenges of personalizing rehabilitation robots to individual anatomical structures and ensuring safe human–robot interaction by proposing a rehabilitation training framework based on RGB-D video demonstration. The approach encodes therapist demonstrations into body-centered 6-degree-of-freedom trajectories and leverages Cartesian dynamic movement primitives (DMPs) combined with Gaussian mixture regression (GMR) to achieve anatomy-agnostic, precise motion reproduction. A decoupled hybrid control architecture integrates a virtual compliant tunnel with a tangential-force-based temporal scaling mechanism, enabling seamless transitions among passive, active-assistive, and resistive training modes while supporting adaptive rhythm modulation and real-time anomalous force detection. Experimental results demonstrate a mean trajectory reproduction error of 3.7 cm and a joint range-of-motion error of 5.5°, with the system maintaining path accuracy and dynamically adjusting training intensity even under deliberate external disturbances.
📝 Abstract
In this paper, we propose a novel framework that allows therapists to teach robot-assisted rehabilitation exercises remotely via RGB-D video. Our system encodes demonstrations as 6-DoF body-centric trajectories using Cartesian Dynamic Movement Primitives (DMPs), ensuring accurate, posture-independent spatial generalization across diverse patient anatomies. Crucially, we execute these trajectories through a decoupled hybrid control architecture that constructs a spatially compliant virtual tunnel, paired with an effort-based temporal dilation mechanism. This architecture supports three distinct rehabilitation modalities (Passive, Active-Assisted, and Active-Resistive) by dynamically linking the exercise's execution phase to the patient's tangential force contribution. To guarantee safety, a Gaussian Mixture Regression (GMR) model is learned on-the-fly from the patient's own limb, enabling the detection of abnormal interaction forces and, when necessary, trajectory reversal to prevent injury. Experimental validation demonstrates the system's precision, achieving an average trajectory reproduction error of 3.7 cm and a range-of-motion (ROM) error of 5.5 degrees. Furthermore, dynamic interaction trials confirm that the controller enforces effort-based progression while maintaining strict spatial path adherence under human disturbances.
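To make the learning-from-demonstration core concrete, the sketch below implements a minimal one-axis discrete DMP: forcing weights are fit from a demonstrated trajectory, and an `effort_gate` factor slows phase progression as a stand-in for the tangential-force-based temporal dilation described above. This is an illustrative sketch only; the class name, gain values, and the gate function are assumptions, not the paper's implementation, and the actual system operates on full 6-DoF Cartesian poses.

```python
import numpy as np

class CartesianDMP1D:
    """Minimal 1-D dynamic movement primitive (one Cartesian axis).
    Hypothetical sketch: names, gains, and the effort gate are
    illustrative, not taken from the paper."""

    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_s=4.0):
        self.n_basis = n_basis
        self.alpha, self.beta, self.alpha_s = alpha, beta, alpha_s
        # Gaussian basis centers spaced along the decaying phase s in (0, 1]
        self.c = np.exp(-alpha_s * np.linspace(0, 1, n_basis))
        self.h = 1.0 / np.gradient(self.c) ** 2   # basis widths
        self.w = np.zeros(n_basis)                # forcing weights

    def _forcing(self, s):
        psi = np.exp(-self.h * (s - self.c) ** 2)
        return s * (psi @ self.w) / (psi.sum() + 1e-10)

    def fit(self, y, dt):
        """Learn forcing weights from a demonstrated trajectory y."""
        T = len(y)
        self.y0, self.g, self.tau = y[0], y[-1], (T - 1) * dt
        dy, ddy = np.gradient(y, dt), np.gradient(np.gradient(y, dt), dt)
        s = np.exp(-self.alpha_s * np.linspace(0, 1, T))
        # Target forcing term that reproduces the demonstration dynamics
        f_target = (self.tau ** 2 * ddy
                    - self.alpha * (self.beta * (self.g - y) - self.tau * dy))
        # Locally weighted regression, one weight per basis function
        for i in range(self.n_basis):
            psi = np.exp(-self.h[i] * (s - self.c[i]) ** 2)
            self.w[i] = np.sum(s * psi * f_target) / (np.sum(s ** 2 * psi) + 1e-10)

    def rollout(self, dt, effort_gate=lambda t: 1.0):
        """Integrate the DMP; effort_gate in [0, 1] slows the phase,
        standing in for tangential-force-based temporal scaling."""
        y, dy, s, t, out = self.y0, 0.0, 1.0, 0.0, [self.y0]
        while s > 1e-3 and t < 10 * self.tau:
            gate = np.clip(effort_gate(t), 0.0, 1.0)
            ddy = (self.alpha * (self.beta * (self.g - y) - self.tau * dy)
                   + self._forcing(s)) / self.tau ** 2
            dy += ddy * dt * gate          # gate dilates time uniformly,
            y += dy * dt * gate            # so the spatial path is preserved
            s += (-self.alpha_s / self.tau * s) * dt * gate
            t += dt
            out.append(y)
        return np.array(out)
```

Because the gate scales phase, velocity, and position updates together, reducing effort stretches the motion in time without deforming its spatial shape, which mirrors the paper's separation of spatial compliance (the virtual tunnel) from temporal progression.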