🤖 AI Summary
This study addresses the challenge of “mental image reconstruction” in non-invasive brain–computer interfaces (BCIs).
Method: We propose a single-channel EEG-driven, AI-augmented human–machine symbiosis framework that integrates SSVEP decoding, Gabor-inspired dynamic visual probe placement optimization, and Stable Diffusion-based generative modeling to achieve adaptive visual stimulus encoding and end-to-end reconstruction of imagined images.
Contribution/Results: To our knowledge, this is the first work to introduce AI-driven, dynamic spatial exploration of visual probes into SSVEP-BCI systems, increasing the information transfer rate more than fivefold. Using only one EEG channel, the framework reconstructs simple graphical stimuli with high fidelity within two minutes. Experimental validation demonstrates significant advances in reconstruction efficiency, accuracy, and practical feasibility, establishing a novel paradigm for lightweight, intelligent non-invasive BCIs.
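The fivefold gain refers to the information transfer rate (ITR), conventionally computed with the Wolpaw formula from the number of selectable targets, selection accuracy, and trial duration. A minimal sketch of that standard computation (illustrative function names, not code from this work):

```python
from math import log2

def itr_bits_per_trial(n_targets: int, accuracy: float) -> float:
    """Wolpaw ITR in bits per selection for n_targets choices at the given accuracy."""
    if accuracy >= 1.0:
        return log2(n_targets)
    if accuracy <= 1.0 / n_targets:  # at or below chance: no information
        return 0.0
    return (log2(n_targets)
            + accuracy * log2(accuracy)
            + (1.0 - accuracy) * log2((1.0 - accuracy) / (n_targets - 1)))

def itr_bits_per_minute(n_targets: int, accuracy: float, trial_s: float) -> float:
    """Scale bits per selection to bits per minute given the trial duration in seconds."""
    return itr_bits_per_trial(n_targets, accuracy) * 60.0 / trial_s

# e.g. 4 flicker frequencies decoded at 90% accuracy in 4-second trials:
# itr_bits_per_minute(4, 0.9, 4.0) ≈ 20.6 bits/min
```

The hypothetical target count, accuracy, and trial duration in the comment are placeholders; the actual operating point of the system is reported in the paper itself.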
📝 Abstract
Brain–computer interfaces (BCIs) are evolving from research prototypes into clinical, assistive, and performance-enhancement technologies. Despite the rapid rise and promise of implantable technologies, there is a need for more capable wearable and non-invasive approaches that also minimise hardware requirements. We present a non-invasive BCI for mind-drawing that iteratively infers a subject's internal visual intent by adaptively presenting visual stimuli (probes) on a screen, encoded at different flicker frequencies, and analysing the resulting steady-state visual evoked potentials (SSVEPs). Gabor-inspired or machine-learned policies dynamically update the spatial placement of the visual probes on the screen to explore the image space and reconstruct simple imagined shapes within approximately two minutes using just single-channel EEG data. Additionally, by leveraging Stable Diffusion models, reconstructed mental images can be transformed into realistic and detailed visual representations. Whilst we expect that similar results might be achievable with, e.g., eye-tracking techniques, our work shows that symbiotic human–AI interaction can increase BCI bit-rates by more than a factor of five, providing a platform for future development of AI-augmented BCIs.
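To make the decoding step concrete: SSVEP systems commonly score each candidate flicker frequency by canonical correlation analysis (CCA) against sinusoidal reference templates. With a single EEG channel, CCA reduces to the multiple correlation between the signal and its least-squares projection onto the sin/cos harmonics. A hedged sketch under that assumption (function names are illustrative, not the authors' implementation):

```python
import numpy as np

def ssvep_score(x, fs, freq, n_harmonics=2):
    """Correlation between single-channel EEG x and its least-squares
    projection onto sin/cos references at freq (1-channel CCA)."""
    t = np.arange(len(x)) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    Y = np.column_stack(refs)          # reference matrix, one column per harmonic term
    x = x - x.mean()                   # remove DC offset before correlating
    w, *_ = np.linalg.lstsq(Y, x, rcond=None)
    xhat = Y @ w                       # best reconstruction of x from the references
    return float(np.corrcoef(x, xhat)[0, 1])

def detect_frequency(x, fs, candidate_freqs):
    """Pick the candidate flicker frequency with the highest SSVEP score."""
    scores = [ssvep_score(x, fs, f) for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]
```

In an adaptive-probe loop, the detected frequency identifies which on-screen probe the subject attended, and the placement policy then moves the probes for the next iteration.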