Human-Robot Red Teaming for Safety-Aware Reasoning

📅 2025-08-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Ensuring safe robot operation and establishing human trust in high-risk scenarios remains a critical challenge. Method: This paper introduces a human-robot red-teaming collaboration paradigm—the first application of red-teaming to human-robot collaborative safety reasoning—integrating safety-aware reasoning, dynamic risk assessment, context-aware collaborative planning, and feedback-driven learning. The approach enables adaptive safety-policy generation and cross-domain transfer for heterogeneous robots operating across diverse environments. Contribution/Results: Its novelty lies in having humans and robots jointly challenge assumptions about the environment, thereby enhancing risk identification, decision transparency, and behavioral explainability. Experimental validation in lunar habitat and household settings demonstrates that robots autonomously acquire context-appropriate safety behaviors and generate verifiable safety reports, significantly improving trustworthy collaboration.

📝 Abstract
While much research explores improving robot capabilities, there is a deficit in researching how robots are expected to perform tasks safely, especially in high-risk problem domains. Robots must earn the trust of human operators in order to be effective collaborators in safety-critical tasks, specifically those where robots operate in human environments. We propose the human-robot red teaming paradigm for safety-aware reasoning. We expect humans and robots to work together to challenge assumptions about an environment and explore the space of hazards that may arise. This exploration will enable robots to perform safety-aware reasoning, specifically hazard identification, risk assessment, risk mitigation, and safety reporting. We demonstrate that: (a) human-robot red teaming allows human-robot teams to plan to perform tasks safely in a variety of domains, and (b) robots with different embodiments can learn to operate safely in two different environments -- a lunar habitat and a household -- with varying definitions of safety. Taken together, our work on human-robot red teaming for safety-aware reasoning demonstrates the feasibility of this approach for safely operating and promoting trust on human-robot teams in safety-critical problem domains.
Problem

Research questions and friction points this paper is trying to address.

Enhancing robot safety in high-risk environments
Building trust between humans and safety-critical robots
Developing hazard identification and risk mitigation strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-robot red teaming for safety-aware reasoning
Hazard identification and risk mitigation strategies
Cross-domain safety learning for robots
Emily Sheetz
University of Michigan and NASA Johnson Space Center
Emma Zemler
NASA Johnson Space Center
Misha Savchenko
NASA Johnson Space Center
Connor Rainen
NASA Johnson Space Center
Erik Holum
NASA Johnson Space Center
Jodi Graf
NASA Johnson Space Center
Andrew Albright
NASA Johnson Space Center
Shaun Azimi
NASA Johnson Space Center
Benjamin Kuipers
Professor of Computer Science & Engineering, University of Michigan
Artificial Intelligence, Robotics, Autonomous Learning