🤖 AI Summary
This work addresses the challenge of label-free brain–computer interfaces (BCIs) by proposing CURSOR, the first fully self-calibrating brain-decoding framework that requires neither labeled data nor pre-trained models. Given only single-trial EEG responses elicited by natural face images during interactive sessions, CURSOR autonomously infers the user's mental target (e.g., a specific face) by jointly aligning the neural and semantic spaces. The method integrates contrastive representation learning, unsupervised similarity modeling, and iterative stimulus optimization. Experimental results demonstrate that (1) unsupervised image-similarity predictions correlate strongly with human perceptual judgments (Pearson's *r* > 0.82); (2) candidate images are ranked against the unknown target with high accuracy; and (3) the framework generates stimuli matching the target, validated in a 53-participant user study in which participants could not distinguish generated stimuli from the true targets better than chance (*p* = 0.49), consistent with perceptual equivalence.
📝 Abstract
We consider the problem of recovering a mental target (e.g., an image of a face) that a participant has in mind from paired EEG (i.e., brain responses) and image (i.e., perceived faces) data collected during interactive sessions, without access to label information. The problem has previously been explored with labeled data, but not in the self-calibration setting, where labeled data are unavailable. Here, we present the first framework and an algorithm, CURSOR, that learns to recover unknown mental targets without access to labeled data or pre-trained decoders. Our experiments on naturalistic face images demonstrate that CURSOR can (1) predict image-similarity scores that correlate with human perceptual judgments without any label information, (2) use these scores to rank stimuli against an unknown mental target, and (3) generate new stimuli indistinguishable from the unknown mental target (validated via a user study, N=53).
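To make the ranking step concrete, here is a minimal sketch of how similarity scores could be used to rank candidate stimuli against an unknown target. This is an illustration under simplifying assumptions, not the paper's actual algorithm: we assume EEG responses and candidate images have already been projected into a shared embedding space (e.g., via contrastive learning), and we rank candidates by their mean cosine similarity to the response embeddings. The function `rank_candidates` and all array shapes are hypothetical.

```python
import numpy as np

def rank_candidates(eeg_embeddings: np.ndarray, image_embeddings: np.ndarray):
    """Rank candidate images by mean cosine similarity to EEG response embeddings.

    eeg_embeddings:   (n_trials, d)     -- embedded brain responses
    image_embeddings: (n_candidates, d) -- embedded candidate stimuli
    Returns (ranking, scores): indices best-first, and per-candidate scores.
    """
    # L2-normalize both sets so dot products equal cosine similarities
    e = eeg_embeddings / np.linalg.norm(eeg_embeddings, axis=1, keepdims=True)
    v = image_embeddings / np.linalg.norm(image_embeddings, axis=1, keepdims=True)
    scores = (e @ v.T).mean(axis=0)        # average similarity per candidate
    ranking = np.argsort(-scores)          # best candidate first
    return ranking, scores

# Toy demonstration: responses are noisy copies of a hidden target embedding,
# and one candidate happens to match that target.
rng = np.random.default_rng(0)
target = rng.normal(size=8)
eeg = target + 0.1 * rng.normal(size=(5, 8))   # noisy "responses" near target
candidates = rng.normal(size=(4, 8))
candidates[2] = target                          # candidate 2 is the true target

ranking, scores = rank_candidates(eeg, candidates)
print(ranking[0])  # the matching candidate ranks first
```

In this toy setup the matching candidate receives a near-maximal score, so it tops the ranking; in the self-calibration setting the quality of such a ranking depends entirely on how well the learned embeddings align neural and image spaces.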