🤖 AI Summary
Ensuring safe robot operation and establishing human trust in high-risk scenarios remain critical challenges. Method: This paper introduces a human-robot red-teaming collaboration paradigm (the first application of red teaming to human-robot collaborative safety reasoning) that integrates safety-aware reasoning, dynamic risk assessment, context-aware collaborative planning, and feedback-driven learning. The approach enables adaptive safety-policy generation and cross-domain transfer for heterogeneous robots operating in diverse environments. Contribution/Results: Its novelty lies in having humans and robots jointly challenge assumptions about the environment, which improves risk identification, decision transparency, and behavioral explainability. Experimental validation in lunar-habitat and domestic settings demonstrates that robots autonomously acquire context-appropriate safety behaviors and generate verifiable safety reports, significantly improving trustworthy collaboration.
📝 Abstract
While much research explores improving robot capabilities, comparatively little work investigates how robots can be expected to perform tasks safely, especially in high-risk problem domains. Robots must earn the trust of human operators to be effective collaborators in safety-critical tasks, particularly those where robots operate in human environments. We propose the human-robot red teaming paradigm for safety-aware reasoning. We expect humans and robots to work together to challenge assumptions about an environment and explore the space of hazards that may arise. This exploration will enable robots to perform safety-aware reasoning, specifically hazard identification, risk assessment, risk mitigation, and safety reporting. We demonstrate that: (a) human-robot red teaming allows human-robot teams to plan to perform tasks safely in a variety of domains, and (b) robots with different embodiments can learn to operate safely in two different environments -- a lunar habitat and a household -- with varying definitions of safety. Taken together, our work on human-robot red teaming for safety-aware reasoning demonstrates the feasibility of this approach for operating safely and promoting trust on human-robot teams in safety-critical problem domains.