🤖 AI Summary
This work addresses the lack of interpretability in robotic safety decisions during human-robot collaboration, which often leaves human partners unable to understand unexpected halts or mode switches. To bridge this gap, the authors propose a dialogue-based interactive explanation framework that unifies safety-decision explanation with constraint-based safety evaluation through a shared representation of system states and constraints. The framework treats explanation as an operational interface for safe control: users can pose causal, contrastive, and counterfactual queries about safety interventions, while certified safety parameters remain fixed so that interactive exploration never relaxes the formal safety guarantees. A case study in a construction robotics scenario, presented as a structured operational trace, illustrates how the method articulates the logic behind safety interventions and supports transparency and coordinated task recovery in human-robot teamwork.
📝 Abstract
As robots increasingly operate in shared, safety-critical environments, acting safely is no longer sufficient: robots must also make their safety decisions intelligible to human collaborators. In human-robot collaboration (HRC), behaviours such as stopping or switching modes are often triggered by internal safety constraints that remain opaque to nearby workers. We present a dialogue-based framework for interactive explanation of safety decisions in HRC. The approach tightly couples explanation with constraint-based safety evaluation, grounding dialogue in the same state and constraint representations that govern behaviour selection. Explanations are derived directly from the recorded decision trace, enabling users to pose causal ("Why?"), contrastive ("Why not?"), and counterfactual ("What if?") queries about safety interventions. Counterfactual reasoning is evaluated in a bounded manner under fixed, certified safety parameters, ensuring that interactive exploration does not relax operational guarantees. We instantiate the framework in a construction robotics scenario and provide a structured operational trace illustrating how constraint-aware dialogue clarifies safety interventions and supports coordinated task recovery. By treating explanation as an operational interface to safety control, this work advances a design perspective for interactive, safety-aware autonomy in HRC.
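To make the abstract's query types concrete, the sketch below shows one way a decision trace could answer causal ("Why?"), contrastive ("Why not?"), and bounded counterfactual ("What if?") queries over a fixed constraint set. This is a minimal illustration, not the paper's implementation: the `Constraint` and `DecisionTrace` classes and the two example constraints (minimum human separation, speed cap) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical types for illustration; not taken from the paper.

@dataclass(frozen=True)
class Constraint:
    name: str
    predicate: Callable[[Dict[str, float]], bool]  # True means satisfied

    def satisfied(self, state: Dict[str, float]) -> bool:
        return self.predicate(state)

class DecisionTrace:
    """Records the state snapshot and constraint set behind one safety decision."""
    def __init__(self, state: Dict[str, float], constraints: List[Constraint]):
        self.state = dict(state)        # frozen snapshot at decision time
        self.constraints = constraints  # certified parameters stay fixed

    def why(self) -> List[str]:
        """Causal query: which constraints were violated, triggering the intervention?"""
        return [c.name for c in self.constraints if not c.satisfied(self.state)]

    def why_not(self, alt_state: Dict[str, float]) -> List[str]:
        """Contrastive query: which constraints rule out an alternative behaviour?"""
        return [c.name for c in self.constraints if not c.satisfied(alt_state)]

    def what_if(self, **changes: float) -> Dict[str, bool]:
        """Bounded counterfactual: re-evaluate a perturbed state while the
        certified constraint set itself is left unmodified."""
        hypothetical = {**self.state, **changes}
        return {c.name: c.satisfied(hypothetical) for c in self.constraints}

# Illustrative constraints: minimum human separation and a speed cap.
constraints = [
    Constraint("min_separation", lambda s: s["human_dist_m"] >= 1.5),
    Constraint("speed_limit",    lambda s: s["speed_mps"] <= 0.25),
]

trace = DecisionTrace({"human_dist_m": 1.1, "speed_mps": 0.2}, constraints)
print(trace.why())                      # ['min_separation']
print(trace.what_if(human_dist_m=2.0))  # {'min_separation': True, 'speed_limit': True}
```

The key design point mirrored from the abstract is that `what_if` only perturbs the state, never the constraints, so interactive exploration cannot relax the certified safety parameters.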