Immersive Explainability: Visualizing Robot Navigation Decisions through XAI Semantic Scene Projections in Virtual Reality

📅 2025-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of low human interpretability and trust in reinforcement learning–based robotic navigation policies due to their “black-box” nature. We propose a VR-enabled explainability enhancement method that introduces a novel XAI semantic scene projection mechanism: it maps gradient-based attribution maps (e.g., Integrated Gradients) onto a 3D virtual environment and, for the first time, fuses LiDAR point cloud data in real time within VR to achieve embodied, semantically grounded visualization of navigation decisions. Implemented using Unity VR, ROS middleware, and real-time point cloud rendering, the system targets non-expert users. User studies demonstrate that semantic projection improves objective comprehension accuracy by 37% and subjective predictability ratings by 42%; further integration of LiDAR data significantly enhances user trust and situational consistency awareness.
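The gradient-based attribution method the summary names, Integrated Gradients, can be sketched in a few lines: it averages gradients along a straight path from a baseline input to the actual input and scales by the input difference. This is a generic midpoint-rule approximation, not the paper's implementation; the toy linear "policy output" `f`, the weights `w`, and the `grad_fn` callback are illustrative assumptions (a real RL policy would supply gradients via backpropagation).

```python
import numpy as np

def integrated_gradients(x, baseline, grad_fn, steps=50):
    """Midpoint-rule approximation of Integrated Gradients:
    (x - baseline) times the average gradient along the straight path
    from baseline to x."""
    alphas = (np.arange(steps) + 0.5) / steps
    grad_sum = np.zeros_like(x, dtype=float)
    for a in alphas:
        grad_sum += grad_fn(baseline + a * (x - baseline))
    return (x - baseline) * grad_sum / steps

# Toy "policy output" f(x) = w . x, whose gradient is the constant w.
# (Illustrative stand-in for a trained navigation policy.)
w = np.array([0.5, -1.0, 2.0])
f = lambda x: float(w @ x)
grad_f = lambda x: w

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(x, baseline, grad_f)
# Completeness axiom: the attributions sum to f(x) - f(baseline).
```

For this linear toy model the result is exact, `(x - baseline) * w`; the completeness property (attributions summing to the output difference) is what makes per-object scores meaningful to project onto scene objects.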

📝 Abstract
End-to-end robot policies achieve high performance through neural networks trained via reinforcement learning (RL). Yet, their black-box nature and abstract reasoning pose challenges for human-robot interaction (HRI), because humans may experience difficulty in understanding and predicting the robot's navigation decisions, hindering trust development. We present a virtual reality (VR) interface that visualizes explainable AI (XAI) outputs and the robot's lidar perception to support intuitive interpretation of RL-based navigation behavior. By visually highlighting objects based on their attribution scores, the interface grounds abstract policy explanations in the scene context. This XAI visualization bridges the gap between obscure numerical XAI attribution scores and a human-centric semantic level of explanation. A within-subjects study with 24 participants evaluated the effectiveness of our interface for four visualization conditions combining XAI and lidar. Participants ranked scene objects across navigation scenarios based on their importance to the robot, followed by a questionnaire assessing subjective understanding and predictability. Results show that semantic projection of attributions significantly enhances non-expert users' objective understanding and subjective awareness of robot behavior. In addition, lidar visualization further improves perceived predictability, underscoring the value of integrating XAI and sensor visualization for transparent, trustworthy HRI.
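The highlighting step the abstract describes ("visually highlighting objects based on their attribution scores") amounts to mapping per-object scores onto a color scale. A minimal sketch of such a mapping, assuming min-max normalization across the objects in a scenario and a red-tinted RGBA highlight; both choices are illustrative, not taken from the paper:

```python
import numpy as np

def attribution_to_highlight(scores):
    """Map per-object attribution scores to RGBA highlight colors:
    more important objects get a more saturated, more opaque red."""
    s = np.asarray(scores, dtype=float)
    span = s.max() - s.min()
    norm = (s - s.min()) / span if span > 0 else np.zeros_like(s)
    # White and translucent (unimportant) -> pure opaque red (important).
    return [(1.0, float(1.0 - n), float(1.0 - n), float(0.3 + 0.7 * n))
            for n in norm]

colors = attribution_to_highlight([0.1, 0.9, 0.4])
```

In a system like the one described, each such color could be applied to a semantic object's material in the Unity scene; normalizing within the current scenario keeps the most important object fully saturated regardless of absolute score magnitudes.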
Problem

Research questions and friction points this paper is trying to address.

Enhancing human understanding of robot navigation decisions
Bridging abstract XAI scores with human-centric explanations
Improving trust via XAI and sensor data visualization
Innovation

Methods, ideas, or system contributions that make the work stand out.

VR interface visualizes XAI and lidar data
Semantic projection enhances user understanding
Combines XAI and sensor visualization for transparent HRI