🤖 AI Summary
This chapter addresses core challenges in human-AI-robot collaboration: asymmetric knowledge understanding, the difficulty of dynamic governance, and weak cross-modal knowledge exchange. It proposes a three-dimensional characterization of human-AI mutual understanding (sharing knowledge, exchanging knowledge, and governing knowledge) and a cognitive framework integrating neurosymbolic AI with knowledge graphs. By coupling the interpretability of symbolic reasoning with the representational capacity of deep learning, and by modeling knowledge interaction among multiple agents, the approach supports dynamic knowledge alignment and trustworthy evolution. Use case scenarios spanning human, artificial, and robotic agents illustrate improved knowledge consistency and interaction reliability, while exposing open bottlenecks in dynamic knowledge governance and cross-modal semantic alignment. The primary contributions are: (1) a systematic three-dimensional paradigm for human-AI mutual understanding; and (2) an interpretable, evolvable neurosymbolic knowledge governance mechanism, together with an analysis of the gaps current solutions leave along each dimension.
📝 Abstract
This chapter investigates the concept of mutual understanding between humans and systems, positing that Neuro-symbolic Artificial Intelligence (NeSy AI) methods can significantly enhance this mutual understanding by combining explicit symbolic knowledge representations with data-driven learning models. We start by introducing three critical dimensions that characterize mutual understanding: sharing knowledge, exchanging knowledge, and governing knowledge. Sharing knowledge involves aligning the conceptual models of different agents to enable a shared understanding of the domain of interest. Exchanging knowledge relates to ensuring effective and accurate communication between agents. Governing knowledge concerns establishing rules and processes to regulate the interaction between agents. We then present several use case scenarios that demonstrate the application of NeSy AI and Knowledge Graphs to support meaningful exchanges among human, artificial, and robotic agents. These scenarios highlight both the potential and the challenges of combining top-down symbolic reasoning with bottom-up neural learning, guiding a discussion of how well current solutions cover the dimensions of sharing, exchanging, and governing knowledge. This analysis also identifies gaps and underdeveloped aspects of mutual understanding to address in future research.