First Responders' Perceptions of Semantic Information for Situational Awareness in Robot-Assisted Emergency Response

📅 2025-10-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Emergency responders' requirements and trust mechanisms regarding semantics-enhanced situational awareness (SA) in robots remain poorly understood, hindering deployment in real-world crisis response. Method: A structured, cross-national survey was conducted with 22 frontline practitioners from eight countries, with multidimensional analysis by role, experience, and education. Contribution/Results: This study is among the first to empirically identify three semantic information categories critical for SA: object identity, spatial relations, and risk context. Their utility for predicting unforeseen emergencies (mean 3.9/5) and overall usefulness (mean 3.6/5) were quantified. Respondents reported a trust threshold of 74.6% accuracy, with basic operational utility at 67.8%. A significant gap between laboratory capabilities and field deployment was observed. The work establishes an empirically grounded framework for semantic SA requirements and a validated accuracy benchmark for trust, offering culturally informed, task-driven design principles for robot systems deployed in authentic emergency scenarios.

📝 Abstract
This study investigates First Responders' (FRs) attitudes toward the use of semantic information and Situational Awareness (SA) in robotic systems during emergency operations. A structured questionnaire was administered to 22 FRs across eight countries, capturing their demographic profiles, general attitudes toward robots, and experiences with semantics-enhanced SA. Results show that most FRs expressed positive attitudes toward robots and rated the usefulness of semantic information for building SA at an average of 3.6 out of 5. Semantic information was also valued for its role in predicting unforeseen emergencies (mean 3.9). Participants reported requiring an average of 74.6% accuracy to trust semantic outputs and 67.8% for them to be considered useful, revealing a willingness to use imperfect but informative AI support tools. To the best of our knowledge, this study offers novel insights by being one of the first to directly survey FRs on semantics-based SA in a cross-national context. It reveals the types of semantic information most valued in the field, such as object identity, spatial relationships, and risk context, and connects these preferences to the respondents' roles, experience, and education levels. The findings also expose a critical gap between lab-based robotics capabilities and the realities of field deployment, highlighting the need for more meaningful collaboration between FRs and robotics researchers. These insights contribute to the development of more user-aligned and situationally aware robotic systems for emergency response.
Problem

Research questions and friction points this paper is trying to address.

Investigating first responders' attitudes toward semantic information in robots
Assessing usefulness of semantic data for situational awareness in emergencies
Identifying gaps between lab capabilities and field deployment needs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-national survey of 22 first responders on semantics-enhanced situational awareness
Identified the most valued semantic types: object identity, spatial relationships, and risk context
Revealed the gap between lab-based robotics capabilities and field deployment
Tianshu Ruan
Extreme Robotics Lab (ERL) and National Center for Nuclear Robotics (NCNR), University of Birmingham, UK
Zoe Betta
Department of Computer Science, Bioengineering, Robotics and Systems Engineering, University of Genova, Italy
Georgios Tzoumas
School of Engineering Mathematics and Technology, University of Bristol, UK
Rustam Stolkin
Chair of Robotics, UoB. Royal Society Industry Fellow. Director A.R.M Robotics Ltd.
Robotics, AI, Computer Vision, Manipulation, Human-Robot Interaction
Manolis Chiou
Assistant Professor in Computer Science, Queen Mary University of London
Robotics, Human-Robot Teaming, Human-Robot Interaction, Variable Autonomy