🤖 AI Summary
This study addresses a core challenge in XR systems: multisensory feedback that fails to adapt dynamically to users' real-time cognitive and physiological states, resulting in reduced immersion and excessive cognitive load. To this end, we propose a neuroadaptive haptic framework that integrates real-time EEG-based neural decoding (mean F1 = 0.8) with proximal policy optimization (PPO) reinforcement learning, forming a closed-loop system that modulates multimodal feedback using either explicit user ratings or implicit EEG signals. Our key contribution is the first empirical demonstration that online EEG decoding alone, without any explicit user input, can robustly support RL-driven adaptation. Critically, under this zero-explicit-feedback condition, the system significantly enhances immersion and interaction naturalness while reducing cognitive load. This work establishes a new paradigm for unobtrusive, high-fidelity human–XR collaboration.
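To make the closed loop concrete, below is a minimal sketch of how such an RL-driven adaptation loop could be wired up. It assumes a gymnasium-style environment trained with stable-baselines3's PPO; the `HapticFeedbackEnv`, its action/observation dimensions, and the `decoder_reward` stand-in for the live EEG decoder are all hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class HapticFeedbackEnv(gym.Env):
    """Hypothetical environment: the agent chooses haptic feedback
    parameters; the reward is a scalar preference signal supplied
    either by an explicit user rating or by an implicit EEG decoder."""

    def __init__(self, reward_source):
        super().__init__()
        # Action: e.g. vibration intensity and frequency, normalized to [0, 1].
        self.action_space = spaces.Box(0.0, 1.0, shape=(2,), dtype=np.float32)
        # Observation: a placeholder interaction-context vector.
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(8,), dtype=np.float32)
        self.reward_source = reward_source  # callable: action -> scalar

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        return self.observation_space.sample(), {}

    def step(self, action):
        obs = self.observation_space.sample()
        # In the real system this call would block on a user rating
        # prompt or a live EEG decoder output; here it is simulated.
        reward = float(self.reward_source(action))
        return obs, reward, False, False, {}


def decoder_reward(action):
    """Simulated EEG-decoder preference signal: peaks at an unknown target."""
    target = np.array([0.7, 0.4], dtype=np.float32)
    return np.exp(-np.sum((action - target) ** 2))


env = HapticFeedbackEnv(reward_source=decoder_reward)
agent = PPO("MlpPolicy", env, n_steps=64, batch_size=64, verbose=0)
agent.learn(total_timesteps=2_048)  # online adaptation
```

Swapping `decoder_reward` for a function that prompts the user for a rating yields the explicit-feedback condition; the agent code itself is unchanged, which is what makes the two feedback sources directly comparable.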
📝 Abstract
Neuroadaptive haptics offers a path to more immersive extended reality (XR) experiences by dynamically tuning multisensory feedback to user preferences. We present a neuroadaptive haptics system that adapts XR feedback through reinforcement learning (RL) from explicit user ratings and brain-decoded neural signals. In a user study, participants interacted with virtual objects in VR while electroencephalography (EEG) data were recorded. An RL agent adjusted haptic feedback based either on explicit ratings or on outputs from a neural decoder. Results show that the RL agent's performance was comparable across feedback sources, suggesting that implicit neural feedback can effectively guide personalization without requiring active user input. The EEG-based neural decoder achieved a mean F1 score of 0.8, supporting reliable classification of user experience. These findings demonstrate the feasibility of combining brain–computer interfaces (BCIs) and RL to autonomously adapt XR interactions, reducing cognitive load and enhancing immersion.
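For intuition on how a decoder's mean F1 score might be estimated, the sketch below cross-validates a simple classifier on per-trial features. The data are synthetic placeholders and the logistic-regression pipeline is an assumption for illustration; the abstract does not specify the actual decoder architecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-ins: X holds per-trial EEG feature vectors,
# y holds binary user-experience labels (e.g. preferred vs. not).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))    # 200 trials x 32 features
y = rng.integers(0, 2, size=200)  # placeholder labels

# Simple decoder: standardize features, then logistic regression.
decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Mean F1 across 5 cross-validation folds, analogous to the
# "mean F1 score" reported for the EEG decoder.
scores = cross_val_score(decoder, X, y, cv=5, scoring="f1")
print(f"mean F1: {scores.mean():.2f}")
```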