Social AI Agents Too Need to Explain Themselves

📅 2025-01-19
🏛️ International Conference on Intelligent Tutoring Systems
📈 Citations: 1 · Influential: 0
🤖 AI Summary
Social AI agents in multi-agent interactions offer limited explainability, which makes it difficult for them to establish user trust. To address this, the paper proposes a socially situated framework for dynamic explanation generation. Methodologically, it integrates social cognition theories, including Theory of Mind and Face Theory, into AI explanation mechanisms, combining large language models, social relationship graph modeling, and an intent-driven explanation planning module to generate adaptive natural-language explanations conditioned on user role, relational context, and interaction intent. The framework supports real-time, role-aware explanation delivery and is evaluated with a multi-turn dialogue explainability assessment protocol. On the SocialExplain benchmark, it improves explanation relevance by 37%, user trust by 29%, and collaborative efficiency by 22%, advancing both social adaptability and interaction consistency.
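
The summary describes the pipeline only at a high level. Purely as an illustrative sketch of the idea it outlines (pick an explanation goal from the social context, then condition the language model on role, relation, and that goal), the minimal Python below shows one way such a flow could be wired up. Every class, function, and rule here is hypothetical; none of it is taken from the paper's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class SocialContext:
    """Who is asking for an explanation, and in what social setting."""
    user_role: str      # e.g. "student" or "instructor"
    relationship: str   # e.g. "first contact" or "returning user"
    intent: str         # e.g. "capability" or "rationale"


def plan_explanation(ctx: SocialContext) -> str:
    """Intent-driven planning step: choose an explanation goal from
    role and intent (illustrative rules, not the paper's)."""
    if ctx.intent == "capability":
        return "state plainly what the agent can and cannot do"
    if ctx.user_role == "instructor":
        return "summarize the reasoning behind the agent's last action"
    return "give a short plain-language account of the last action"


def build_prompt(ctx: SocialContext) -> str:
    """Condition the language model on role, relation, and planned goal,
    keeping the face-preserving tone the summary mentions."""
    return (
        "You are a social AI agent explaining yourself.\n"
        f"Audience: a {ctx.user_role} ({ctx.relationship}).\n"
        f"Goal: {plan_explanation(ctx)}.\n"
        "Answer in two or three polite, face-preserving sentences."
    )


if __name__ == "__main__":
    # A first-time student asking what the agent can do.
    print(build_prompt(SocialContext("student", "first contact", "capability")))
```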

Problem

Research questions and friction points this paper is trying to address.

Social AI
Self-introduction
User Trust
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable AI
Self-introduction Method
Trust Enhancement
Rhea Basappa
Clemson University, Georgia Institute of Technology
Artificial Intelligence · Human-AI Interaction · Human-Agent Teaming · Educational Technology
Mustafa Tekman
Georgia Institute of Technology
Hong Lu
Tufts University
Benjamin Faught
Georgia Institute of Technology, Atlanta, GA 30332, USA
Sandeep Kakar
Georgia Institute of Technology, Atlanta, GA 30332, USA
Ashok K. Goel
Georgia Institute of Technology, Atlanta, GA 30332, USA