🤖 AI Summary
This study addresses the opacity and lack of traceability in knowledge representation and reasoning within explainable AI. Methodologically, it introduces a neuro-symbolic question-answering framework that—uniquely—embeds the Object-Process Methodology (OPM) conceptual model from the ISO 19450:2024 standard into neural QA pipelines, integrating in-context learning with large language models, structured knowledge representation, and joint neuro-symbolic inference. A key contribution is the design of a transparency quantification metric grounded in OPM’s logical consistency. Experiments demonstrate significant improvements in answer accuracy and reasoning traceability on complex QA tasks; furthermore, the proposed metric empirically validates high alignment between generated reasoning paths and domain-level conceptual logic. This work establishes a novel paradigm for explainable, knowledge-driven AI.
📝 Abstract
Knowledge representation and reasoning are critical challenges in Artificial Intelligence (AI), particularly in integrating neural and symbolic approaches to achieve explainable and transparent AI systems. Traditional knowledge representation methods often fall short of capturing complex processes and state changes. We introduce Neuro-Conceptual Artificial Intelligence (NCAI), a specialization of the neuro-symbolic AI approach that integrates conceptual modeling using Object-Process Methodology (OPM, ISO 19450:2024) with deep learning to enhance question-answering (QA) quality. By converting natural language text into OPM models using in-context learning, NCAI leverages the expressive power of OPM to represent complex OPM elements (processes, objects, and states) beyond what traditional triplet-based knowledge graphs can easily capture. This rich, structured knowledge representation improves reasoning transparency and answer accuracy in an OPM-QA system. We further propose transparency evaluation metrics to quantitatively measure how faithfully the predicted reasoning aligns with OPM-based conceptual logic. Our experiments demonstrate that NCAI outperforms traditional methods, highlighting its potential for advancing neuro-symbolic AI by providing rich knowledge representations, measurable transparency, and improved reasoning.
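To make the idea of a transparency metric concrete, here is a minimal toy sketch of one plausible formulation: scoring how many steps of a predicted reasoning path are grounded in elements of an OPM conceptual model. All names (`transparency_score`, the element tuples, the example model) are illustrative assumptions, not the paper's actual metric or API.

```python
# Hypothetical sketch of an OPM-grounded transparency metric:
# the fraction of steps in a predicted reasoning path that appear
# as elements (objects, processes, states) of the OPM model.
# This is an assumed formulation for illustration only.

def transparency_score(reasoning_path, opm_elements):
    """Return the fraction of reasoning steps grounded in the OPM model."""
    if not reasoning_path:
        return 0.0
    grounded = sum(1 for step in reasoning_path if step in opm_elements)
    return grounded / len(reasoning_path)

# Toy OPM model extracted from text: objects, a process, and states.
opm_elements = {
    ("Water", "object"),
    ("Boiling", "process"),
    ("Water", "state", "liquid"),
    ("Water", "state", "gaseous"),
}

# A predicted reasoning path produced by a QA system (one step ungrounded).
path = [("Water", "object"), ("Boiling", "process"), ("Steam", "object")]

print(round(transparency_score(path, opm_elements), 3))  # 2 of 3 steps grounded
```

A higher score indicates that the generated reasoning stays within the conceptual vocabulary of the OPM model, which is the kind of path-to-model alignment the abstract's metrics are meant to quantify.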