HRI-SA: A Multimodal Dataset for Online Assessment of Human Situational Awareness during Remote Human-Robot Teaming

📅 2026-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing situational awareness (SA) assessment methods struggle to capture operators' SA fluctuations in real time during remote human-robot collaboration without disrupting task performance, and there is a lack of publicly available datasets for online SA evaluation. To address this gap, this work introduces HRI-SA, a multimodal dataset collected from 30 participants collaborating with a robot in a search-and-rescue task, comprising eye-tracking data, pupil diameter, physiological signals, interaction logs, and robot state data. Ground-truth SA delays, categorized as perceptual or comprehension delays, are annotated based on predefined events. As the first public multimodal SA dataset covering entire remote human-robot teaming tasks, HRI-SA enables continuous SA delay detection: eye-tracking features alone achieve 88.91% recall and 67.63% F1-score, and incorporating contextual information further improves performance to 91.51% recall and 80.38% F1-score.
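
The summary does not specify how the ground-truth delays are derived beyond the onset-to-resolution timing described in the abstract. As a rough illustration only, a latency-annotation step of that kind might look like the sketch below; the column names, function name, and timestamps are hypothetical and are not taken from the released dataset.

```python
import pandas as pd

def annotate_sa_latency(events: pd.DataFrame) -> pd.DataFrame:
    """Compute per-event SA latency as resolution time minus onset time.

    Assumes one row per predefined event with hypothetical columns:
    'event_id', 'sa_type' ('perceptual' or 'comprehension'),
    'onset_s' (assistance need onset) and 'resolved_s' (operator response).
    """
    out = events.copy()
    out["sa_latency_s"] = out["resolved_s"] - out["onset_s"]
    return out

# Example usage with made-up timestamps (seconds from mission start)
events = pd.DataFrame({
    "event_id": [1, 2],
    "sa_type": ["perceptual", "comprehension"],
    "onset_s": [12.0, 47.5],
    "resolved_s": [15.2, 56.0],
})
print(annotate_sa_latency(events))
```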

📝 Abstract
Maintaining situational awareness (SA) is critical in human-robot teams. Yet, under high workload and dynamic conditions, operators often experience SA gaps. Automated detection of SA gaps could enable timely assistance for operators. However, conventional SA measures either disrupt task flow or cannot capture real-time fluctuations, limiting their operational utility. To the best of our knowledge, no publicly available dataset currently supports the systematic evaluation of online human SA assessment in human-robot teaming. To advance the development of online SA assessment tools, we introduce HRI-SA, a multimodal dataset from 30 participants in a realistic search-and-rescue human-robot teaming context, incorporating eye movements, pupil diameter, biosignals, user interactions, and robot data. The experimental protocol included predefined events requiring timely operator assistance, with ground-truth SA latencies of two types (perceptual and comprehension) obtained systematically by measuring the time between the onset of an assistance need and its resolution. We illustrate the utility of this dataset by evaluating standard machine learning models for detecting perceptual SA latencies using generic eye-tracking features and contextual features. Results show that eye-tracking features alone effectively classified perceptual SA latency (recall=88.91%, F1=67.63%) under leave-one-group-out cross-validation, with performance further improved through contextual data fusion (recall=91.51%, F1=80.38%). This paper contributes the first public dataset supporting the systematic evaluation of SA throughout a human-robot teaming mission, while also demonstrating the potential of generic eye-tracking features for continuous perceptual SA latency detection in remote human-robot teaming.
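
The abstract names the evaluation protocol (standard classifiers under leave-one-group-out cross-validation) but not the exact models or feature sets. The sketch below shows one plausible way to set up that protocol with scikit-learn, using synthetic placeholder features in place of the dataset's eye-tracking and contextual modalities; the classifier choice, feature dimensions, and variable names are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, recall_score
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict

# Hypothetical window-level data: X_eye holds eye-tracking features,
# X_ctx holds contextual features, y is a binary SA-latency label,
# and groups assigns each window to one of 30 participants.
rng = np.random.default_rng(0)
n_windows = 600
groups = np.repeat(np.arange(30), n_windows // 30)
X_eye = rng.normal(size=(n_windows, 12))
X_ctx = rng.normal(size=(n_windows, 6))
y = rng.integers(0, 2, size=n_windows)

logo = LeaveOneGroupOut()  # hold out one participant per fold
clf = RandomForestClassifier(
    n_estimators=200, class_weight="balanced", random_state=0
)

for name, X in [("eye only", X_eye), ("eye + context", np.hstack([X_eye, X_ctx]))]:
    y_pred = cross_val_predict(clf, X, y, groups=groups, cv=logo)
    print(f"{name}: recall={recall_score(y, y_pred):.3f}, F1={f1_score(y, y_pred):.3f}")
```

With real labels and features, the per-fold predictions would come from participants unseen during training, matching the cross-participant generalization the abstract reports.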
Problem

Research questions and friction points this paper is trying to address.

situational awareness
human-robot teaming
multimodal dataset
online assessment
remote operation
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal dataset
situational awareness
human-robot teaming
eye-tracking
online assessment
🔎 Similar Papers
2024-03-18 · IEEE/RSJ International Conference on Intelligent Robots and Systems · Citations: 0
2024-07-31 · arXiv.org · Citations: 2