Robotic Grasping and Placement Controlled by EEG-Based Hybrid Visual and Motor Imagery

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a dual-channel EEG-based intent interface that enables prompt-free, purely imagination-driven control of robotic grasping and placing tasks by integrating visual imagery and motor imagery. The system uniquely deploys an offline pre-trained neural decoder in an online setting without fine-tuning, directly mapping high-level visual cognition to robotic actions and supporting complex scenarios such as object occlusion and multiple poses. Experimental results demonstrate online decoding accuracies of 40.23% for visual imagery and 62.59% for motor imagery, with an end-to-end task success rate of 20.88%, thereby validating the feasibility and potential of purely imagination-based brain–computer interfaces in human–robot collaboration.

📝 Abstract
We present a framework that integrates EEG-based visual and motor imagery (VI/MI) with robotic control to enable real-time, intention-driven grasping and placement. Motivated by the promise of BCI-driven robotics to enhance human–robot interaction, this system bridges neural signals with physical control by deploying offline-pretrained decoders in a zero-shot manner within an online streaming pipeline. This establishes a dual-channel intent interface that translates visual intent into robotic actions, with VI identifying objects for grasping and MI determining placement poses, enabling intuitive control over both what to grasp and where to place. The system operates solely on EEG via a cue-free imagery protocol, achieving end-to-end system integration and online validation. Implemented on a Base robotic platform and evaluated across diverse scenarios, including occluded targets and varying participant postures, the system achieves online decoding accuracies of 40.23% (VI) and 62.59% (MI), with an end-to-end task success rate of 20.88%. These results demonstrate that high-level visual cognition can be decoded in real time and translated into executable robot commands, bridging the gap between neural signals and physical interaction, and validating the flexibility of a purely imagery-based BCI paradigm for practical human–robot collaboration.
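The dual-channel design described above (VI selects the grasp target, MI selects the placement pose, with frozen offline-pretrained decoders deployed zero-shot online) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class labels, decoder stand-ins, and function names are all hypothetical, and real decoders would operate on separate VI and MI trial segments from the EEG stream.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical label sets; the paper's actual object and pose vocabularies
# are not specified here.
VI_CLASSES = ["cup", "bottle", "box"]          # visual imagery: what to grasp
MI_CLASSES = ["place_left", "place_right"]     # motor imagery: where to place

@dataclass
class RobotCommand:
    grasp_target: str
    placement_pose: str

def decode_intent(eeg_window: List[float],
                  vi_decoder: Callable[[List[float]], str],
                  mi_decoder: Callable[[List[float]], str]) -> RobotCommand:
    """Dual-channel decoding: the EEG window is routed to two frozen,
    offline-pretrained decoders with no online fine-tuning (zero-shot
    deployment), and the two predictions are fused into one command."""
    target = vi_decoder(eeg_window)   # visual imagery  -> object to grasp
    pose = mi_decoder(eeg_window)     # motor imagery   -> placement pose
    return RobotCommand(grasp_target=target, placement_pose=pose)

# Stand-in decoders that threshold the window sum, purely for illustration.
def toy_vi(window: List[float]) -> str:
    return VI_CLASSES[int(sum(window) > 0)]

def toy_mi(window: List[float]) -> str:
    return MI_CLASSES[int(sum(window) > 0)]

cmd = decode_intent([0.2, -0.1, 0.4], toy_vi, toy_mi)
print(cmd)  # RobotCommand(grasp_target='bottle', placement_pose='place_right')
```

In an actual system the two decoders would be trained offline on separate VI and MI calibration data and the resulting `RobotCommand` handed to the robot's grasp-and-place planner; keeping the decoders frozen online is what makes the deployment zero-shot.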
Problem

Research questions and friction points this paper is trying to address.

EEG
robotic grasping
visual imagery
motor imagery
brain–computer interface
Innovation

Methods, ideas, or system contributions that make the work stand out.

EEG-based BCI
visual imagery
motor imagery
zero-shot decoding
robotic grasping and placement