Neuroadaptive Haptics: Comparing Reinforcement Learning from Explicit Ratings and Neural Signals for Adaptive XR Systems

📅 2025-04-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses a gap in XR systems: multisensory feedback rarely adapts to users' real-time cognitive and physiological states, which reduces immersion and increases cognitive load. The authors propose a neuroadaptive haptic framework that couples real-time EEG-based neural decoding (mean F1 = 0.8) with proximal policy optimization (PPO) reinforcement learning, forming a closed loop that modulates multimodal feedback from either explicit user ratings or implicit EEG signals. The key contribution is empirical evidence that online EEG decoding alone, without any explicit user input, can support RL-driven adaptation: the agent's performance with implicit neural feedback was comparable to that with explicit ratings, pointing toward unobtrusive personalization that preserves immersion and interaction naturalness while reducing cognitive load. This work outlines a paradigm for unobtrusive, high-fidelity human–XR collaboration.

📝 Abstract
Neuroadaptive haptics offers a path to more immersive extended reality (XR) experiences by dynamically tuning multisensory feedback to user preferences. We present a neuroadaptive haptics system that adapts XR feedback through reinforcement learning (RL) from explicit user ratings and brain-decoded neural signals. In a user study, participants interacted with virtual objects in VR while electroencephalography (EEG) data were recorded. An RL agent adjusted haptic feedback based either on explicit ratings or on outputs from a neural decoder. Results show that the RL agent's performance was comparable across feedback sources, suggesting that implicit neural feedback can effectively guide personalization without requiring active user input. The EEG-based neural decoder achieved a mean F1 score of 0.8, supporting reliable classification of user experience. These findings demonstrate the feasibility of combining brain-computer interfaces (BCIs) and RL to autonomously adapt XR interactions, reducing cognitive load and enhancing immersion.
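The closed loop described in the abstract can be sketched in miniature: an agent picks a haptic setting, a decoder scores the user's implicit response, and that score serves as the RL reward. The sketch below is an illustrative assumption, not the authors' implementation: it replaces the real EEG decoder with a simulated "liked / not liked" classifier, and uses a simple REINFORCE-style bandit update in place of PPO. The names `HAPTIC_LEVELS`, `PREFERRED`, and `neural_decoder` are all hypothetical.

```python
import math
import random

# Candidate haptic intensities the agent can choose from (hypothetical).
HAPTIC_LEVELS = [0.2, 0.5, 0.8]
# Hidden preference of the simulated user; the agent never sees this.
PREFERRED = 0.5

def neural_decoder(level):
    """Stand-in for the EEG decoder: noisy binary 'liked' signal.

    The closer the chosen level is to the user's hidden preference,
    the more likely the decoder reports a positive response.
    """
    p_like = max(0.0, 1.0 - 2.0 * abs(level - PREFERRED))
    return 1 if random.random() < p_like else 0

def softmax(prefs):
    exps = [math.exp(p) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def run_loop(steps=2000, lr=0.1, seed=0):
    """Closed loop: act -> decode implicit feedback -> policy update."""
    random.seed(seed)
    prefs = [0.0] * len(HAPTIC_LEVELS)  # action preferences (logits)
    baseline = 0.0                      # running-average reward baseline
    for t in range(1, steps + 1):
        probs = softmax(prefs)
        a = random.choices(range(len(HAPTIC_LEVELS)), weights=probs)[0]
        r = neural_decoder(HAPTIC_LEVELS[a])
        baseline += (r - baseline) / t
        # REINFORCE-style update toward actions the decoder 'liked'
        for i in range(len(prefs)):
            grad = (1.0 if i == a else 0.0) - probs[i]
            prefs[i] += lr * (r - baseline) * grad
    return prefs

if __name__ == "__main__":
    prefs = run_loop()
    best = max(range(len(prefs)), key=lambda i: prefs[i])
    print("learned preferred haptic level:", HAPTIC_LEVELS[best])
```

Because the decoder's reward is noisy but centered on the user's preference, the agent converges toward the preferred intensity without any explicit rating, which is the core idea the study tests with real EEG and PPO.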
Problem

Research questions and friction points this paper is trying to address.

Adapting XR haptic feedback using reinforcement learning
Comparing explicit ratings and neural signals for feedback
Reducing cognitive load in XR with brain-computer interfaces
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning adapts XR feedback dynamically
EEG neural signals replace explicit user ratings
BCI and RL combine for autonomous XR adaptation