Verified Language Processing with Hybrid Explainability: A Technical Report

📅 2025-07-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current NLP models lack interpretability in text similarity assessment and logical relation classification (entailment, contradiction, neutral), hindering reliable semantic structure modeling and logical reasoning. To address this, we propose the first hybrid interpretable framework integrating Montague semantics, graph embedding, and first-order logic. Our method synergistically combines generative language models with logical prompting to explicitly encode syntactic structure, logical connectives, and spatiotemporal constraints. It is the first approach to precisely distinguish all three logical relations in text classification tasks. Evaluated on three independently annotated datasets, it significantly outperforms state-of-the-art baselines—particularly excelling in sentence structural equivalence detection, sensitivity to logical connectives, and spatiotemporal reasoning. The framework substantially enhances transparency and trustworthiness in information retrieval systems.

Technology Category

Application Category

📝 Abstract
The volume and diversity of digital information have led to a growing reliance on Machine Learning techniques, such as Natural Language Processing, for interpreting and accessing appropriate data. While vector and graph embeddings represent data for similarity tasks, current state-of-the-art pipelines lack guaranteed explainability, failing to determine similarity for given full texts accurately. These considerations can also be applied to classifiers exploiting generative language models with logical prompts, which fail to correctly distinguish between logical implication, indifference, and inconsistency, despite being explicitly trained to recognise the first two classes. We present a novel pipeline designed for hybrid explainability to address this. Our methodology combines graphs and logic to produce First-Order Logic representations, creating machine- and human-readable representations through Montague Grammar. Preliminary results indicate the effectiveness of this approach in accurately capturing full text similarity. To the best of our knowledge, this is the first approach to differentiate between implication, inconsistency, and indifference for text classification tasks. To address the limitations of existing approaches, we use three self-contained datasets annotated for the former classification task to determine the suitability of these approaches in capturing sentence structure equivalence, logical connectives, and spatiotemporal reasoning. We also use these data to compare the proposed method with language models pre-trained for detecting sentence entailment. The results show that the proposed method outperforms state-of-the-art models, indicating that natural language understanding cannot be easily generalised by training over extensive document corpora. This work offers a step toward more transparent and reliable Information Retrieval from extensive textual data.
Problem

Research questions and friction points this paper is trying to address.

Lack of guaranteed explainability in NLP similarity tasks
Failure to distinguish logical implication, indifference, inconsistency
Need for transparent and reliable Information Retrieval methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid explainability combining graphs and logic
First-Order Logic representations with Montague Grammar
Differentiates implication, inconsistency, and indifference
🔎 Similar Papers
No similar papers found.
O
Oliver Robert Fox
School of Computing, Faculty of Science, Agriculture and Engineering, Newcastle University, Newcastle Upon Tyne NE4 5TG, UK
G
Giacomo Bergami
School of Computing, Faculty of Science, Agriculture and Engineering, Newcastle University, Newcastle Upon Tyne NE4 5TG, UK
Graham Morgan
Graham Morgan
Computing Science, Newcastle University
Distributed SystemsVideo Gamesncl-cs-sys