Neuro-Conceptual Artificial Intelligence: Integrating OPM with Deep Learning to Enhance Question Answering Quality

📅 2025-02-12
🤖 AI Summary
This study addresses the opacity and lack of traceability in knowledge representation and reasoning within explainable AI. Methodologically, it introduces a neuro-symbolic question-answering framework that embeds the Object-Process Methodology (OPM) conceptual model from the ISO 19450:2024 standard into neural QA pipelines, integrating large language model in-context learning, structured knowledge distillation, and joint neuro-symbolic inference. A key contribution is the design of a transparency quantification metric grounded in OPM's logical consistency. Experiments demonstrate significant improvements in answer accuracy and reasoning traceability across multi-turn, complex QA tasks; furthermore, the proposed metric empirically validates high alignment between generated reasoning paths and domain-level conceptual logic. This work establishes a novel paradigm for explainable, knowledge-driven AI.

📝 Abstract
Knowledge representation and reasoning are critical challenges in Artificial Intelligence (AI), particularly in integrating neural and symbolic approaches to achieve explainable and transparent AI systems. Traditional knowledge representation methods often fall short of capturing complex processes and state changes. We introduce Neuro-Conceptual Artificial Intelligence (NCAI), a specialization of the neuro-symbolic AI approach that integrates conceptual modeling using Object-Process Methodology (OPM) ISO 19450:2024 with deep learning to enhance question-answering (QA) quality. By converting natural language text into OPM models using in-context learning, NCAI leverages the expressive power of OPM to represent complex OPM elements (processes, objects, and states) beyond what traditional triplet-based knowledge graphs can easily capture. This rich structured knowledge representation improves reasoning transparency and answer accuracy in an OPM-QA system. We further propose transparency evaluation metrics to quantitatively measure how faithfully the predicted reasoning aligns with OPM-based conceptual logic. Our experiments demonstrate that NCAI outperforms traditional methods, highlighting its potential for advancing neuro-symbolic AI by providing rich knowledge representations, measurable transparency, and improved reasoning.
Problem

Research questions and friction points this paper is trying to address.

Enhance question-answering quality through AI integration.
Address challenges in knowledge representation and reasoning.
Improve AI explainability with neuro-symbolic approaches.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates OPM with deep learning
Enhances QA using NCAI
Measures reasoning transparency quantitatively
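The paper's exact transparency metric is not given on this page; as an illustrative assumption only, one simple way to quantify how much of a predicted reasoning path is grounded in an OPM model's elements (objects, processes, states) is a coverage score, sketched below with hypothetical names:

```python
# Hypothetical sketch of a transparency-style metric: the fraction of
# reasoning steps that mention at least one known OPM model element.
# This is NOT the paper's metric, just a minimal illustration.

def transparency_score(reasoning_steps, opm_elements):
    """Fraction of reasoning steps grounded in a known OPM element."""
    elements = {e.lower() for e in opm_elements}
    if not reasoning_steps:
        return 0.0
    grounded = sum(
        any(elem in step.lower() for elem in elements)
        for step in reasoning_steps
    )
    return grounded / len(reasoning_steps)

# Toy example: a tiny OPM model for a "Payment Processing" process.
model = ["Payment Processing", "Order", "Invoice", "Paid"]
steps = [
    "The Order triggers Payment Processing.",
    "Payment Processing changes the Invoice state to Paid.",
    "The customer receives a confirmation email.",  # not in the model
]
print(transparency_score(steps, model))  # 2 of 3 steps are grounded
```

A higher score indicates that more of the generated reasoning path can be traced back to elements of the conceptual model, which is the intuition behind measuring alignment between reasoning and OPM-based logic.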
Xin Kang
Associate Professor, Tokushima University
Natural Language Processing, Affective Computing, AI Transparency and Trustworthiness
Veronika Shteingardt
Technion – Israel Institute of Technology, Haifa, Israel
Yuhan Wang
Tokushima University, Tokushima, Japan
D. Dori
Technion – Israel Institute of Technology, Haifa, Israel