Dialogue-Based Interactive Explanations for Safety Decisions in Human-Robot Collaboration

📅 2026-04-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the lack of interpretability in robotic safety decisions during human-robot collaboration, which often leaves human partners unable to understand unexpected halts or mode switches. To bridge this gap, the authors propose a dialogue-based interactive explanation framework that unifies safety-decision explanation with constraint-based safety evaluation through a shared representation of system states and constraints. The framework treats explanation as an operational interface for safe control, enabling users to pose causal, contrastive, and counterfactual queries while the certified safety parameters remain fixed. By integrating constraint-aware dialogue mechanisms with bounded counterfactual reasoning, the approach preserves formal safety guarantees rather than relaxing them. A case study in a construction robotics scenario illustrates, through a structured operational trace, how the method articulates the logic behind safety interventions, supporting transparency and coordinated task recovery in human-robot teamwork.
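To make the shared-representation idea concrete, here is a minimal Python sketch of the evaluation side, assuming a safety constraint is a named, certified threshold over an observed quantity. All names here (`SafetyConstraint`, `SafetyMonitor`, the `HALT`/`CONTINUE` labels) are illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch only: names and structure are assumptions, not the
# paper's implementation. A constraint is a certified threshold over an
# observed quantity; each evaluation is logged to a decision trace that
# later grounds the explanation dialogue.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class SafetyConstraint:
    name: str
    limit: float                          # certified threshold, fixed at deployment
    observe: Callable[[Dict], float]      # maps a system state to the monitored value

    def evaluate(self, state: Dict) -> Dict:
        """Evaluate the constraint on one state and return an explainable record."""
        value = self.observe(state)
        return {"constraint": self.name, "value": value,
                "limit": self.limit, "violated": value > self.limit}


@dataclass
class SafetyMonitor:
    constraints: List[SafetyConstraint]
    trace: List[Dict] = field(default_factory=list)   # recorded decision trace

    def step(self, state: Dict) -> str:
        """Evaluate all constraints, log one trace entry, and select a behaviour."""
        records = [c.evaluate(state) for c in self.constraints]
        action = "HALT" if any(r["violated"] for r in records) else "CONTINUE"
        self.trace.append({"state": state, "records": records, "action": action})
        return action
```

Because the same records drive both behaviour selection and logging, every explanation is grounded in exactly the state and constraint representation that produced the decision.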
📝 Abstract
As robots increasingly operate in shared, safety-critical environments, acting safely is no longer sufficient: robots must also make their safety decisions intelligible to human collaborators. In human-robot collaboration (HRC), behaviours such as stopping or switching modes are often triggered by internal safety constraints that remain opaque to nearby workers. We present a dialogue-based framework for interactive explanation of safety decisions in HRC. The approach tightly couples explanation with constraint-based safety evaluation, grounding dialogue in the same state and constraint representations that govern behaviour selection. Explanations are derived directly from the recorded decision trace, enabling users to pose causal ("Why?"), contrastive ("Why not?"), and counterfactual ("What if?") queries about safety interventions. Counterfactual reasoning is evaluated in a bounded manner under fixed, certified safety parameters, ensuring that interactive exploration does not relax operational guarantees. We instantiate the framework in a construction robotics scenario and provide a structured operational trace illustrating how constraint-aware dialogue clarifies safety interventions and supports coordinated task recovery. By treating explanation as an operational interface to safety control, this work advances a design perspective for interactive, safety-aware autonomy in HRC.
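Building on the sketch above, the three query types could then be answered straight from the recorded trace. The function names and message formats below are assumptions, and the "What if?" query deliberately re-evaluates against the fixed certified limits rather than altering them:

```python
# Illustrative query side, building on the SafetyMonitor sketch above.
# Function names and message formats are assumptions; the "What if?" query
# re-uses the fixed certified limits instead of modifying them.
from typing import Dict, List


def why(entry: Dict) -> List[str]:
    """Causal query: which constraints triggered the logged action?"""
    return [f"{r['constraint']} = {r['value']:.2f} exceeds limit {r['limit']:.2f}"
            for r in entry["records"] if r["violated"]]


def why_not(entry: Dict, alternative: str = "CONTINUE") -> List[str]:
    """Contrastive query: what rules out the alternative behaviour?"""
    if entry["action"] == alternative:
        return [f"The robot did choose {alternative}."]
    return [f"{alternative} is ruled out while {r['constraint']} is violated"
            for r in entry["records"] if r["violated"]]


def what_if(monitor: "SafetyMonitor", state: Dict, change: Dict) -> str:
    """Bounded counterfactual: re-run the same certified constraints on a
    hypothetically modified state, leaving the limits untouched."""
    records = [c.evaluate({**state, **change}) for c in monitor.constraints]
    return "HALT" if any(r["violated"] for r in records) else "CONTINUE"


# Example with made-up numbers: a 0.25 m/s tool-speed cap in a shared workspace.
speed_cap = SafetyConstraint("tool_speed", limit=0.25,
                             observe=lambda s: s["tool_speed"])
monitor = SafetyMonitor([speed_cap])
monitor.step({"tool_speed": 0.40})                       # -> "HALT", logged in trace
print(why(monitor.trace[-1]))                            # causal explanation
print(what_if(monitor, monitor.trace[-1]["state"],
              {"tool_speed": 0.20}))                     # -> "CONTINUE"
```

Keeping `limit` out of the mutable query path is what makes the interactive exploration bounded: users can vary hypothetical states, never the certified parameters.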
Problem

Research questions and friction points this paper is trying to address.

human-robot collaboration
safety decisions
interactive explanation
intelligibility
constraint-based reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

interactive explanation
constraint-based safety
counterfactual reasoning
human-robot collaboration
dialogue framework
🔎 Similar Papers
No similar papers found.
Yifan Xu
The University of Manchester
Explainable AI · Human-agent Interaction · Neuro-Symbolic AI
Xiao Zhan
King's College London
Security & Privacy · Human-Computer Interaction · Artificial Intelligence
Akilu Yunusa Kaltungo
Department of Mechanical and Aerospace Engineering, Faculty of Science and Engineering, The University of Manchester, Manchester, United Kingdom
Ming Shan Ng
Center for the Possible Futures, Kyoto Institute of Technology, Kyoto, Japan
Tsukasa Ishizawa
Institute of Industrial Science, The University of Tokyo, Japan
Kota Fujimoto
Graduate School of Frontier Sciences, The University of Tokyo, Japan
Clara Cheung
Department of Civil Engineering and Management, Faculty of Science and Engineering, The University of Manchester, Manchester, United Kingdom