HEXAR: a Hierarchical Explainability Architecture for Robots

📅 2026-01-06
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the challenge of balancing high-level behavioral understanding with modular architecture in robot explainability, a trade-off that often confines existing methods to either overly localized or excessively holistic explanations. To overcome this limitation, the authors propose a plug-and-play hierarchical explainability framework that equips individual robot modules with dedicated explainers and uses an explainer selector to dynamically pick the explainer best suited to each query. The framework is the first to combine modularity, hierarchy, and diverse explanation techniques, including large language model (LLM) reasoning, causal models, and feature importance, enabling precise, efficient, and customizable interpretations of complex robotic behaviors. Evaluated on TIAGo assistive tasks for elderly care, the approach significantly outperforms end-to-end and aggregated baselines in root-cause identification, erroneous-information filtering, and computational efficiency.

📝 Abstract
As robotic systems become increasingly complex, the need for explainable decision-making becomes critical. Existing explainability approaches in robotics typically either focus on individual modules, which can be difficult to query from the perspective of high-level behaviour, or employ monolithic approaches, which do not exploit the modularity of robotic architectures. We present HEXAR (Hierarchical EXplainability Architecture for Robots), a novel framework that provides a plug-in, hierarchical approach to generate explanations about robotic systems. HEXAR consists of specialised component explainers using diverse explanation techniques (e.g., LLM-based reasoning, causal models, feature importance) tailored to specific robot modules, orchestrated by an explainer selector that chooses the most appropriate one for a given query. We implement and evaluate HEXAR on a TIAGo robot performing assistive tasks in a home environment, comparing it against end-to-end and aggregated baseline approaches across 180 scenario-query variations. We observe that HEXAR significantly outperforms baselines in root cause identification, incorrect information exclusion, and runtime, offering a promising direction for transparent autonomous systems.
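The plug-in pattern the abstract describes, per-module component explainers orchestrated by an explainer selector, can be sketched as follows. This is a minimal illustration under stated assumptions: the class names, the keyword-overlap routing rule, and the canned explanation strings are all hypothetical, not the paper's actual implementation (which selects among LLM-based, causal, and feature-importance explainers).

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of a HEXAR-style plug-in architecture: each robot
# module registers a dedicated explainer, and a selector routes an
# incoming query to the most appropriate one.

@dataclass
class ComponentExplainer:
    module: str                    # robot module this explainer covers
    keywords: tuple[str, ...]      # query terms this explainer can answer
    explain: Callable[[str], str]  # the explanation technique itself

class ExplainerSelector:
    def __init__(self) -> None:
        self.explainers: list[ComponentExplainer] = []

    def register(self, explainer: ComponentExplainer) -> None:
        # Plug-in registration: new modules add explainers without
        # touching the selector.
        self.explainers.append(explainer)

    def select(self, query: str) -> ComponentExplainer:
        # Illustrative routing rule: score each explainer by keyword
        # overlap with the query and pick the highest-scoring one.
        scored = [(sum(k in query.lower() for k in e.keywords), e)
                  for e in self.explainers]
        _, best = max(scored, key=lambda pair: pair[0])
        return best

    def answer(self, query: str) -> str:
        return self.select(query).explain(query)

selector = ExplainerSelector()
selector.register(ComponentExplainer(
    "navigation", ("path", "route", "move"),
    lambda q: "navigation: replanned around a detected obstacle"))
selector.register(ComponentExplainer(
    "manipulation", ("grasp", "pick", "drop"),
    lambda q: "manipulation: grasp aborted, object pose was uncertain"))

print(selector.answer("Why did you change your path?"))
# → navigation: replanned around a detected obstacle
```

Because each explainer only answers queries about its own module, this structure gives the localized, query-specific routing the paper credits for its gains in root-cause identification and runtime over monolithic baselines.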
Problem

Research questions and friction points this paper is trying to address.

explainability
robotics
modular architecture
hierarchical explanation
autonomous systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical Explainability
Modular Explanation
Explainer Selector
Robot Transparency
Explainable AI for Robotics