🤖 AI Summary
To address the limited explainability of social AI in multi-agent interactions, and the resulting difficulty in establishing user trust, this paper proposes a socially situated framework for dynamic explanation generation. Methodologically, it integrates social cognition theories (including Theory of Mind and Face Theory) into the AI explanation mechanism, combining large language models, social relationship graph modeling, and an intent-driven explanation planning module to generate adaptive natural-language explanations conditioned on user role, relational context, and interaction intent. The framework supports real-time, role-aware explanation delivery and is evaluated with a multi-turn dialogue explainability assessment protocol. On the SocialExplain benchmark it improves explanation relevance by 37%, user trust by 29%, and collaborative efficiency by 22%, advancing both social adaptability and interaction consistency.
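
The summary gives no implementation details, so the following is only a minimal sketch of how such a pipeline might be wired together. All names here (`SocialGraph`, `ExplanationPlanner`, the role/intent heuristics, and the `echo_llm` stand-in) are assumptions for illustration, not the paper's actual interfaces:

```python
# Hypothetical sketch only: the paper's real components are not specified in
# the summary. Class and field names are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class SocialGraph:
    """Minimal social relationship graph: directed edges labeled with a
    relation type (e.g. 'supervisor', 'peer', 'client')."""
    edges: dict[tuple[str, str], str] = field(default_factory=dict)

    def relation(self, source: str, target: str) -> str:
        return self.edges.get((source, target), "stranger")


@dataclass
class ExplanationRequest:
    agent_action: str   # the AI decision being explained
    user_id: str        # who is asking
    agent_id: str       # which agent acted
    user_role: str      # e.g. 'novice operator', 'domain expert'
    intent: str         # e.g. 'verify safety', 'learn the system'


class ExplanationPlanner:
    """Intent-driven planner: derives an explanation stance from the user's
    role, their relation to the agent, and their stated intent, then asks an
    LLM to realize that stance as a natural-language explanation."""

    def __init__(self, graph: SocialGraph, llm):
        self.graph = graph
        self.llm = llm  # any callable: prompt str -> completion str

    def plan(self, req: ExplanationRequest) -> str:
        relation = self.graph.relation(req.user_id, req.agent_id)
        # Crude stand-ins for Theory-of-Mind and Face-Theory adaptation:
        # estimate how much detail the user can absorb, and how much
        # politeness the relationship calls for.
        detail = "high" if req.user_role == "domain expert" else "low"
        face_care = "high" if relation in ("supervisor", "client") else "moderate"
        return (
            f"Explain the action '{req.agent_action}' to a {req.user_role} "
            f"(relation to agent: {relation}) whose goal is to {req.intent}. "
            f"Use {detail} technical detail and {face_care} politeness; "
            f"avoid face-threatening phrasing."
        )

    def explain(self, req: ExplanationRequest) -> str:
        return self.llm(self.plan(req))


if __name__ == "__main__":
    graph = SocialGraph(edges={("alice", "agent_7"): "supervisor"})
    echo_llm = lambda prompt: f"[LLM completion for: {prompt}]"  # placeholder
    planner = ExplanationPlanner(graph, echo_llm)
    req = ExplanationRequest(
        agent_action="rerouted the delivery drone",
        user_id="alice", agent_id="agent_7",
        user_role="novice operator", intent="verify safety",
    )
    print(planner.explain(req))
```

The design choice sketched here, planning an explanation stance first and only then invoking the language model, is one plausible reading of the summary's "intent-driven explanation planning module"; the paper may condition the LLM on the social graph in an entirely different way.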