🤖 AI Summary
Surface electromyography (sEMG)-based human–machine interfaces suffer from poor cross-subject generalization, reliance on time-consuming calibration, and high response latency. Method: This paper proposes a zero-shot, low-latency real-time intent detection framework. It introduces a self-supervised masked modeling strategy tailored for sEMG time-series signals, integrated with an online sequence segmentation mechanism to dynamically model muscle activation and achieve fine-grained alignment with user intent—enabling rapid onset detection and stable continuous tracking even during incomplete gesture execution. Results: Experiments demonstrate that, without subject-specific calibration, the method significantly outperforms existing zero-shot transfer approaches in cross-subject gesture recognition, achieving both higher classification accuracy (+8.2% average accuracy) and substantially reduced control jitter (−37% variance). This work establishes a practical, plug-and-play paradigm for intent decoding in wearable robotics and intelligent prosthetics.
📝 Abstract
Surface electromyography (sEMG) signals show promise for effective human-computer interfaces, particularly in rehabilitation and prosthetics. However, challenges remain in developing systems that respond quickly and reliably to user intent, across different subjects and without requiring time-consuming calibration. In this work, we propose a framework for EMG-based intent detection that addresses these challenges. Unlike traditional gesture recognition models that wait until a gesture is completed before classifying it, our approach uses a segmentation strategy to assign intent labels at every timestep as the gesture unfolds. We introduce a novel masked modeling strategy that aligns muscle activations with their corresponding user intents, enabling rapid onset detection and stable tracking of ongoing gestures. In evaluations against baseline methods, considering both accuracy and stability for device control, our approach surpasses state-of-the-art performance in zero-shot transfer conditions, demonstrating its potential for wearable robotics and next-generation prosthetic systems. Our project page is available at: https://reactemg.github.io