🤖 AI Summary
This study addresses the challenge of accurately recognizing user locomotion modes in lightweight exoskeletons, which is hindered by limited sensing capabilities. To overcome this, the authors propose a machine learning approach that relies solely on two inertial sensors and incorporates a bidirectional temporal perception module capable of simultaneously estimating both past and future gait states, such as level-ground walking and stair ascent or descent. The method integrates an online self-labeling mechanism with a user-specific learning framework, enabling personalized model adaptation and real-time updates without manual annotation. Validated on real-world datasets and in a single-subject online closed-loop experiment, the system was successfully deployed on an exoskeleton platform, demonstrating its efficiency, lightweight design, and rapid adaptability to new users.
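The self-labeling mechanism described above can be sketched as follows. This is an illustrative toy, not the authors' implementation: the class, feature layout, and nearest-centroid model are assumptions. The key idea it demonstrates is that the estimate for a *past* timestep (made with more context, hence more reliable) can serve as a free pseudo-label to adapt a per-user model online.

```python
import numpy as np

# Three locomotion modes recognized by the perception module.
MODES = ["level_ground", "stairs_up", "stairs_down"]

class SelfLabelingClassifier:
    """Toy nearest-centroid classifier over inertial feature windows,
    adapted online via self-labels (illustrative assumption, not the
    paper's model)."""

    def __init__(self, n_features, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # One centroid per locomotion mode, refined online per user.
        self.centroids = rng.normal(size=(len(MODES), n_features))
        self.lr = lr

    def predict(self, x):
        # Nearest-centroid decision over the feature window.
        d = np.linalg.norm(self.centroids - x, axis=1)
        return int(np.argmin(d))

    def self_label_update(self, x_past):
        # The past-timestep estimate acts as a pseudo-label: pull the
        # winning centroid toward the new user's data (online adaptation,
        # no manual annotation required).
        k = self.predict(x_past)
        self.centroids[k] += self.lr * (x_past - self.centroids[k])
        return k
```

Each online update nudges only the centroid of the predicted mode, so the model drifts toward the current wearer's gait statistics without any labeled data.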
📄 Abstract
Assistive robotic devices, like soft lower-limb exoskeletons or exosuits, are becoming widespread with the promise of helping people in everyday life. To make such systems adaptive to the variety of users wearing them, it is desirable to endow exosuits with advanced perception systems. However, exosuits carry little sensory equipment because they need to be light and easy to wear. This paper presents a perception module based on machine learning that estimates three walking modes (i.e., ascending stairs, descending stairs, and walking on level ground) of users wearing an exosuit. We tackle this perception problem using only inertial data from two sensors. Our approach provides an estimate for both future and past timesteps, which supports control and enables a self-labeling procedure for online model adaptation. Indeed, we show that our estimate can label data acquired online and refine the model for new users. A thorough analysis carried out on real-life datasets shows the effectiveness of our user-tailored perception module. Finally, we integrate our system with the exosuit in a closed-loop controller, validating its performance in an online single-subject experiment.
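The bidirectional "past and future" estimation can be made concrete by how training pairs are constructed. A minimal sketch, assuming the model sees the feature window ending at time t and is supervised with the walking-mode label at t - past_lag (backward target, usable for self-labeling) and at t + future_lead (forward target, usable by the controller); the function name and lag values are illustrative, not from the paper.

```python
import numpy as np

def make_bidirectional_pairs(features, labels, past_lag, future_lead):
    """Pair each feature window with a past and a future mode label.

    features : array of per-timestep feature vectors (or scalars)
    labels   : array of per-timestep walking-mode labels
    Returns (X, y_past, y_future) restricted to timesteps where both
    targets exist inside the sequence.
    """
    X, y_past, y_future = [], [], []
    # Valid timesteps: far enough from both ends of the sequence.
    for t in range(past_lag, len(features) - future_lead):
        X.append(features[t])
        y_past.append(labels[t - past_lag])     # backward target
        y_future.append(labels[t + future_lead])  # forward target
    return np.asarray(X), np.asarray(y_past), np.asarray(y_future)
```

Training one model on both targets is what lets a single forward pass serve two roles: the future estimate feeds the controller, while the (more reliable) past estimate labels freshly acquired data for online refinement.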