🤖 AI Summary
This study investigates how electroencephalography (EEG) signals can be leveraged to decode users' cognitive load and implicit agreement during natural spoken human–AI conversations, thereby providing implicit feedback to large language models. The authors develop an end-to-end pipeline that synchronizes speech transcription, dialogue event annotation, and continuous EEG classification, enabling precise temporal alignment between word-level events and neural signals. The work presents a first transfer of EEG-based mental state decoding from controlled laboratory settings to real-world conversational contexts, providing pilot evidence for the cross-paradigm generalizability of cognitive load and implicit agreement signals while exposing limitations of current classifiers in handling asynchronous events. Results reveal interpretable dynamic patterns in cognitive load and high temporal precision in implicit agreement detection, offering both empirical support and critical constraints for integrating passive brain–computer interfaces into conversational AI systems.
📝 Abstract
Passive brain–computer interfaces offer a potential source of implicit feedback for alignment of large language models, but most mental state decoding has been demonstrated only in controlled laboratory tasks. This paper investigates whether established EEG classifiers for mental workload and implicit agreement can be transferred to spoken human–AI dialogue. We introduce two conversational paradigms - a Spelling Bee task and a sentence completion task - and an end-to-end pipeline for transcribing, annotating, and aligning word-level conversational events with continuous EEG classifier output. In a pilot study, workload decoding showed interpretable trends during spoken interaction, supporting cross-paradigm transfer. For implicit agreement, we demonstrate continuous application and precise temporal alignment to conversational events, while identifying limitations related to construct transfer and asynchronous application of event-based classifiers. Overall, the results establish both the feasibility of and the constraints on integrating passive BCI signals into conversational AI systems.
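The alignment step the abstract describes - matching word-level conversational events to a continuously running EEG classifier - can be illustrated with a minimal sketch. The function names, timestamps, and sampling rate below are hypothetical, not the authors' implementation; the point is only the nearest-neighbour matching of event times to classifier output times.

```python
from bisect import bisect_left

def align_events(event_times, clf_times, clf_scores):
    """For each word-level event time (seconds), find the classifier
    output whose timestamp is closest (nearest-neighbour alignment).
    Assumes clf_times is sorted in ascending order."""
    aligned = []
    for t in event_times:
        i = bisect_left(clf_times, t)
        # compare the neighbours on either side of the insertion point
        candidates = [j for j in (i - 1, i) if 0 <= j < len(clf_times)]
        j = min(candidates, key=lambda k: abs(clf_times[k] - t))
        aligned.append((t, clf_times[j], clf_scores[j]))
    return aligned

# hypothetical example: a workload classifier emitting a score every 0.5 s
clf_times = [0.0, 0.5, 1.0, 1.5, 2.0]
clf_scores = [0.2, 0.3, 0.6, 0.7, 0.4]
print(align_events([0.4, 1.6], clf_times, clf_scores))
# → [(0.4, 0.5, 0.3), (1.6, 1.5, 0.7)]
```

In practice such alignment also has to account for transcription latency and the classifier's window length, which is where the asynchrony limitations discussed in the abstract arise.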