Mapping Neural Signals to Agent Performance: A Step Towards Reinforcement Learning from Neural Feedback

📅 2025-06-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Human-in-the-loop reinforcement learning (HITL-RL) heavily relies on explicit, active human interventions, imposing significant cognitive load on users. Method: We propose NEURO-LOOP, a novel implicit brain–computer interface (BCI)-driven RL paradigm. In its first stage, we establish a predictive mapping between prefrontal cortical activity—measured via functional near-infrared spectroscopy (fNIRS)—and agent performance, enabling passive, instruction-free human–agent co-training. We employ machine learning models—including SVM and random forests—to decode neural correlates of performance. Contribution/Results: Using real human fNIRS data, we demonstrate a statistically significant correlation between prefrontal fNIRS signals and agent performance (p < 0.01), achieving an average prediction accuracy of 72.3%. This work is the first to validate prefrontal fNIRS as a seamless, implicit neural feedback source for RL, eliminating dependence on explicit human guidance. It establishes a foundational framework for implicit, load-free human–agent co-learning.

📝 Abstract
Implicit Human-in-the-Loop Reinforcement Learning (HITL-RL) is a methodology that integrates passive human feedback into autonomous agent training while minimizing human workload. However, existing methods often rely on active instruction, requiring participants to teach an agent through unnatural expression or gesture. We introduce NEURO-LOOP, an implicit feedback framework that utilizes the intrinsic human reward system to drive human-agent interaction. This work demonstrates the feasibility of a critical first step in the NEURO-LOOP framework: mapping brain signals to agent performance. Using functional near-infrared spectroscopy (fNIRS), we design a dataset to enable future research using passive Brain-Computer Interfaces for Human-in-the-Loop Reinforcement Learning. Participants are instructed to observe or guide a reinforcement learning agent in its environment while signals from the prefrontal cortex are collected. We conclude that a relationship between fNIRS data and agent performance exists using classical machine learning techniques. Finally, we highlight the potential that neural interfaces may offer to future applications of human-agent interaction, assistive AI, and adaptive autonomous systems.
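The abstract's core step is decoding agent performance from prefrontal fNIRS with classical machine learning. The sketch below illustrates that kind of pipeline using scikit-learn's SVM on synthetic data; it is not the authors' code, and the feature layout (windowed hemoglobin-change features per channel) and the injected class signal are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's pipeline): classifying a binary
# "agent performing well vs. poorly" label from windowed fNIRS features
# with an SVM, one of the classical models the paper employs.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical features: e.g., mean oxy-/deoxy-hemoglobin change per
# prefrontal channel over a trial window (8 channels x 2 chromophores).
n_trials, n_features = 200, 16
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)   # 0 = low performance, 1 = high
X[y == 1, :4] += 0.8                    # inject a weak synthetic class signal

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Standardize features, then fit an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)

acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.3f}")
```

On real fNIRS data, trial windows would come from preprocessed hemodynamic signals rather than random draws, and a random forest could be substituted for the SVM with the same pipeline structure.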
Problem

Research questions and friction points this paper is trying to address.

Mapping brain signals to agent performance using fNIRS.
Developing passive neural feedback for human-agent interaction.
Reducing human workload in reinforcement learning with implicit feedback.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes fNIRS for neural signal mapping.
Integrates passive human neural feedback.
Links brain signals to agent performance.
Julia Santaniello
Department of Computer Science, Tufts University
Matthew Russell
Department of Computer Science, Tufts University
Benson Jiang
Department of Computer Science, Tufts University
Donatello Sassaroli
Department of Computer Science, Tufts University
Robert Jacob
Department of Computer Science, Tufts University
Jivko Sinapov
Associate Professor, Tufts University
Robotics · Machine Learning · Artificial Intelligence