Mutual Understanding between People and Systems via Neurosymbolic AI and Knowledge Graphs

📅 2025-04-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This chapter addresses core challenges in human-AI-robot collaboration: asymmetric knowledge between agents, the difficulty of dynamic knowledge governance, and weak cross-modal knowledge exchange. It proposes a three-dimensional characterization of human-AI mutual understanding (sharing, exchanging, and governing knowledge) and a cognitive framework that integrates neurosymbolic AI with knowledge graphs. By coupling the interpretability of symbolic reasoning with the representational capacity of deep learning, and by modeling knowledge interaction among multiple agents, the approach supports dynamic knowledge alignment and trustworthy evolution. Use case scenarios spanning human, artificial, and robotic agents illustrate improved knowledge consistency and interaction reliability, while exposing open bottlenecks in dynamic knowledge governance and cross-modal semantic alignment. The main contributions are: (1) a three-dimensional paradigm for characterizing human-AI mutual understanding; and (2) an interpretable, evolvable neurosymbolic knowledge governance mechanism.

📝 Abstract
This chapter investigates the concept of mutual understanding between humans and systems, positing that Neuro-symbolic Artificial Intelligence (NeSy AI) methods can significantly enhance this mutual understanding by combining explicit symbolic knowledge representations with data-driven learning models. We start by introducing three critical dimensions to characterize mutual understanding: sharing knowledge, exchanging knowledge, and governing knowledge. Sharing knowledge involves aligning the conceptual models of different agents to enable a shared understanding of the domain of interest. Exchanging knowledge relates to ensuring effective and accurate communication between agents. Governing knowledge concerns establishing rules and processes to regulate the interaction between agents. We then present several use case scenarios that demonstrate the application of NeSy AI and Knowledge Graphs to aid meaningful exchanges between human, artificial, and robotic agents. These scenarios highlight both the potential and the challenges of combining top-down symbolic reasoning with bottom-up neural learning, guiding the discussion of the coverage provided by current solutions along the dimensions of sharing, exchanging, and governing knowledge. Concurrently, this analysis facilitates the identification of gaps and less developed aspects of mutual understanding to address in future research.
Problem

Research questions and friction points this paper is trying to address.

Enhancing human-system mutual understanding via Neuro-symbolic AI and Knowledge Graphs.
Characterizing mutual understanding through sharing, exchanging, and governing knowledge.
Addressing gaps in combining symbolic reasoning with neural learning for interaction.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neurosymbolic AI methods enhance human-system mutual understanding.
Knowledge Graphs support sharing, exchanging, and governing knowledge.
Combines top-down symbolic reasoning with bottom-up neural learning for agent interaction.
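The three dimensions above can be made concrete with a toy Python sketch: a shared schema plays the role of sharing knowledge, triples proposed between agents play the role of exchanging knowledge, and a validation rule applied before merging plays the role of governing knowledge. The schema, relation names, and agents here are illustrative assumptions, not the chapter's actual system.

```python
# Shared schema (sharing): which relations are allowed between which entity types.
SCHEMA = {
    ("Robot", "locatedIn", "Room"),
    ("Human", "requests", "Task"),
    ("Robot", "performs", "Task"),
}

def entity_type(kg, entity):
    """Look up an entity's declared type in the knowledge graph."""
    for s, p, o in kg:
        if s == entity and p == "type":
            return o
    return None

def govern(kg, triple):
    """Governance rule: accept a new triple only if it fits the shared schema."""
    s, p, o = triple
    return (entity_type(kg, s), p, entity_type(kg, o)) in SCHEMA

def exchange(kg, proposed):
    """Exchanging: an agent proposes triples (e.g. from a neural perception
    module); only schema-conformant ones are merged into the shared graph."""
    accepted = [t for t in proposed if govern(kg, t)]
    return kg | set(accepted), accepted

# A small shared graph with typed entities.
kg = {
    ("r1", "type", "Robot"),
    ("kitchen", "type", "Room"),
    ("alice", "type", "Human"),
    ("t1", "type", "Task"),
}
kg, ok = exchange(kg, [
    ("r1", "locatedIn", "kitchen"),  # conforms to the schema -> accepted
    ("alice", "locatedIn", "t1"),    # violates the schema -> rejected
])
print(ok)  # [('r1', 'locatedIn', 'kitchen')]
```

The governance step here is a stand-in for the richer symbolic reasoning (e.g. ontology or shape constraints) a real NeSy system would apply before trusting neurally extracted knowledge.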