A Multi-Modal Explainability Approach for Human-Aware Robots in Multi-Party Conversation

📅 2024-05-20
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In multi-party human-robot dialogues, inaccurate addressee identification and opaque robot behavior undermine human trust. Method: this paper proposes an inherently explainable addressee estimation model that embeds interpretability directly into the model itself, using multimodal fusion (speech, gaze, posture) and attention mechanisms to generate human-understandable decision rationales. The model is integrated as a module into the iCub robot's cognitive architecture, enabling real-time, online explanation of multimodal interaction. Results: experiments demonstrate state-of-the-art accuracy in addressee recognition, and on the iCub platform the system achieves millisecond-scale explainable inference. A user study confirms that structured explanations significantly improve subjective perceptions of robot trustworthiness (+32.7%) and naturalness (+28.4%).
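The attention-based explanation idea summarized above can be sketched as follows. This is an illustrative toy, not the authors' model: per-modality feature vectors are fused by attention, and the resulting attention weights double as a human-readable rationale (e.g. "the decision relied mostly on gaze"). The function names, the additive scoring scheme, and the feature dimensions are all assumptions made for the sketch.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_with_attention(features, query):
    """Fuse per-modality feature vectors with a simple attention step.

    features: dict mapping modality name -> 1-D feature vector (same dim each)
    query:    1-D vector used to score each modality's relevance
    Returns the fused vector and the attention weights; the weights
    serve as the decision rationale exposed to the human.
    """
    names = list(features)
    stacked = np.stack([features[n] for n in names])  # (n_modalities, dim)
    scores = stacked @ query                          # one relevance score per modality
    weights = softmax(scores)                         # normalized attention weights
    fused = weights @ stacked                         # weighted sum of modality features
    rationale = dict(zip(names, weights.round(3)))    # human-readable breakdown
    return fused, rationale

# Toy usage with random features for the three modalities named in the summary.
rng = np.random.default_rng(0)
dim = 4
feats = {m: rng.normal(size=dim) for m in ("speech", "gaze", "posture")}
fused, rationale = fuse_with_attention(feats, rng.normal(size=dim))
print(rationale)  # e.g. a weight per modality, summing to ~1
```

In the paper's setting the weights would come from trained attention segments rather than a random query, but the interface is the same: the explanation is a by-product of the fusion step, not a post-hoc add-on.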

📝 Abstract
The addressee estimation (understanding to whom somebody is talking) is a fundamental task for human activity recognition in multi-party conversation scenarios. Specifically, in the field of human-robot interaction, it becomes even more crucial to enable social robots to participate in such interactive contexts. However, it is usually implemented as a binary classification task, restricting the robot's capability to estimate whether it was addressed or not, which limits its interactive skills. For a social robot to gain the trust of humans, it is also important to manifest a certain level of transparency and explainability. Explainable artificial intelligence thus plays a significant role in the current machine learning applications and models, to provide explanations for their decisions besides excellent performance. In our work, we a) present an addressee estimation model with improved performance in comparison with the previous state-of-the-art; b) further modify this model to include inherently explainable attention-based segments; c) implement the explainable addressee estimation as part of a modular cognitive architecture for multi-party conversation in an iCub robot; d) validate the real-time performance of the explainable model in multi-party human-robot interaction; e) propose several ways to incorporate explainability and transparency in the aforementioned architecture; and f) perform an online user study to analyze the effect of various explanations on how human participants perceive the robot.
Problem

Research questions and friction points this paper is trying to address.

Multi-agent Chat
Human-like Understanding
Explainable AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enhanced Interaction
Explainable AI
Social Adaptability
Iveta Becková
Faculty of Mathematics, Physics and Informatics, Comenius University Bratislava, Bratislava, 842 48, Slovak Republic
Stefan Pócos
Faculty of Mathematics, Physics and Informatics, Comenius University Bratislava, Bratislava, 842 48, Slovak Republic
G. Belgiovine
CONTACT Unit, Italian Institute of Technology, Genova, 16152, Italy
Marco Matarese
CONTACT Unit, Italian Institute of Technology, Genova, 16152, Italy
A. Sciutti
CONTACT Unit, Italian Institute of Technology, Genova, 16152, Italy
Carlo Mazzola
Istituto Italiano di Tecnologia (CONTACT)
Human-Robot Interaction · Cognitive Robotics · Social Robotics · Perception · Deep Learning