🤖 AI Summary
This work addresses the lack of causal interpretability in systems driven by behaviour trees (BTs). We propose a method for automatically generating counterfactual explanations, whose core contribution is the automated construction of a causal model directly from the BT structure, integrating domain knowledge about the state and individual nodes with time-aware causal reasoning. Using tailored query formulations and search algorithms, the method generates diverse, semantically coherent temporal counterfactual scenarios, enabling contrastive causal queries such as "Why was this action executed instead of that one?" Experiments show that the approach correctly explains the behaviour of a wide range of BT structures and states, consistently producing accurate, human-understandable explanations. By answering such causal queries, it represents a step towards more transparent and ultimately more trustworthy BT-driven autonomous systems.
📝 Abstract
Explainability is a critical tool in helping stakeholders understand robots. In particular, the ability of robots to explain why they have made a particular decision or behaved in a certain way is useful in this regard. Behaviour trees are a popular framework for controlling the decision-making of robots and other software systems, and thus a natural question to ask is whether a system driven by a behaviour tree is capable of answering "why" questions. While explainability for behaviour trees has seen some prior attention, no existing methods are capable of generating causal, counterfactual explanations which detail the reasons for robot decisions and behaviour. Therefore, in this work, we introduce a novel approach which automatically generates counterfactual explanations in response to contrastive "why" questions. Our method achieves this by first automatically building a causal model from the structure of the behaviour tree as well as domain knowledge about the state and individual behaviour tree nodes. The resultant causal model is then queried and searched to find a set of diverse counterfactual explanations. We demonstrate that our approach correctly explains the behaviour of a wide range of behaviour tree structures and states. By being able to answer a wide range of causal queries, our approach represents a step towards more transparent, understandable and ultimately trustworthy robotic systems.
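To give a flavour of the kind of contrastive query the abstract describes, the toy sketch below builds a minimal behaviour tree and answers "Why did the robot recharge instead of delivering?" by searching over single-variable state flips for a counterfactual in which the other action would have run. This is purely illustrative and is not the paper's method or API; all class names, node names, and state variables here are hypothetical, and the brute-force flip search stands in for the paper's causal-model query and search algorithms.

```python
# Minimal behaviour-tree nodes (hypothetical names, not the paper's API).
class Node:
    children = ()
    def tick(self, state):
        raise NotImplementedError

class Condition(Node):
    def __init__(self, key):
        self.key = key
    def tick(self, state):
        return "SUCCESS" if state[self.key] else "FAILURE"

class Action(Node):
    def __init__(self, name):
        self.name = name
        self.executed = False
    def tick(self, state):
        self.executed = True
        return "SUCCESS"

class Sequence(Node):  # succeeds only if all children succeed, in order
    def __init__(self, children):
        self.children = children
    def tick(self, state):
        for c in self.children:
            if c.tick(state) != "SUCCESS":
                return "FAILURE"
        return "SUCCESS"

class Fallback(Node):  # tries children in order until one succeeds
    def __init__(self, children):
        self.children = children
    def tick(self, state):
        for c in self.children:
            if c.tick(state) == "SUCCESS":
                return "SUCCESS"
        return "FAILURE"

def executed_actions(root, state):
    """Tick the tree once and report which actions actually ran."""
    def walk(n):
        yield n
        for c in n.children:
            yield from walk(c)
    for n in walk(root):
        if isinstance(n, Action):
            n.executed = False
    root.tick(state)
    return [n.name for n in walk(root) if isinstance(n, Action) and n.executed]

def counterfactuals(root, state, desired_action):
    """Which single boolean state flips would have made desired_action run?
    A stand-in for the paper's causal-model search, for illustration only."""
    found = []
    for key in state:
        alt = dict(state, **{key: not state[key]})
        if desired_action in executed_actions(root, alt):
            found.append((key, alt[key]))
    return found

# "Why did the robot recharge instead of delivering?"
tree = Fallback([
    Sequence([Condition("battery_ok"), Action("deliver")]),
    Action("recharge"),
])
state = {"battery_ok": False}
print(executed_actions(tree, state))            # ['recharge']
print(counterfactuals(tree, state, "deliver"))  # [('battery_ok', True)]
```

The counterfactual answer reads off directly: had `battery_ok` been true, the fallback's first branch would have succeeded and `deliver` would have executed instead of `recharge`. The paper's actual approach replaces this brute-force flip search with queries over an automatically constructed, time-aware causal model.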