Bridging LLMs and Symbolic Reasoning in Educational QA Systems: Insights from the XAI Challenge at IJCNN 2025

📅 2025-08-02
📈 Citations: 0 · Influential citations: 0
🤖 AI Summary
To address the limited interpretability of large language models (LLMs) in educational question-answering systems, this work proposes a lightweight neuro-symbolic architecture that integrates a fine-tuned, parameter-efficient LLM with a logic-template-based symbolic reasoning module. The Z3 theorem prover is employed to formally encode and automatically verify institutional policy rules, ensuring logical consistency and factual reliability of generated answers. Crucially, symbolic reasoning is embedded directly into the LLM’s response generation pipeline, enabling natural-language explanations tailored to university policy queries. As part of this effort, we organized an international hackathon and released the first high-quality, explanation-annotated dataset for education-policy QA. Experiments demonstrate significant improvements in answer transparency and user trust. This work establishes a reproducible technical pathway and practical paradigm for explainable AI (XAI) in educational AI applications.
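The summary names Z3 as the verification engine but does not show how a policy rule is encoded or checked. Below is a minimal sketch, assuming the z3-solver Python bindings; the graduation rule, variable names, and thresholds are hypothetical illustrations, not the paper's actual encoding.

```python
# Sketch: checking an LLM-drafted answer against a formally encoded policy.
# The rule and thresholds are hypothetical. Requires: pip install z3-solver
from z3 import Solver, Bool, Real, And, Not, unsat

gpa = Real("gpa")                    # student's grade point average
credits_done = Bool("credits_done")  # all required credits completed
can_graduate = Bool("can_graduate")

# Hypothetical institutional rule: graduation iff GPA >= 2.0 and credits done.
policy = can_graduate == And(gpa >= 2.0, credits_done)

def entailed_by_policy(facts, claim):
    """A claim is entailed iff policy + facts + Not(claim) is unsatisfiable."""
    s = Solver()
    s.add(policy, *facts, Not(claim))
    return s.check() == unsat

# Facts parsed from the student's query; the claim is what the LLM draft asserts.
facts = [gpa == 1.8, credits_done]
print(entailed_by_policy(facts, can_graduate))       # False: "yes" is unsupported
print(entailed_by_policy(facts, Not(can_graduate)))  # True: "no" is the verified answer
```

Under this reading, the symbolic module acts as a gate in the response pipeline: only claims entailed by the encoded rules survive into the final natural-language answer.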

📝 Abstract
The growing integration of Artificial Intelligence (AI) into education has intensified the need for transparency and interpretability. While hackathons have long served as agile environments for rapid AI prototyping, few have directly addressed eXplainable AI (XAI) in real-world educational contexts. This paper presents a comprehensive analysis of the XAI Challenge 2025, a hackathon-style competition jointly organized by Ho Chi Minh City University of Technology (HCMUT) and the International Workshop on Trustworthiness and Reliability in Neurosymbolic AI (TRNS-AI), held as part of the International Joint Conference on Neural Networks (IJCNN 2025). The challenge tasked participants with building Question-Answering (QA) systems capable of answering student queries about university policies while generating clear, logic-based natural language explanations. To promote transparency and trustworthiness, solutions were required to use lightweight Large Language Models (LLMs) or hybrid LLM-symbolic systems. A high-quality dataset was provided, constructed via logic-based templates with Z3 validation and refined through expert student review to ensure alignment with real-world academic scenarios. We describe the challenge's motivation, structure, dataset construction, and evaluation protocol. Situating the competition within the broader evolution of AI hackathons, we argue that it represents a novel effort to bridge LLMs and symbolic reasoning in service of explainability. Our findings offer actionable insights for future XAI-centered educational systems and competitive research initiatives.
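The abstract's dataset construction, "logic-based templates with Z3 validation," can be made concrete with a small sketch: a question template is instantiated with parameter values, and the gold label is derived by an entailment check against a formal rule rather than annotated by hand. The template wording and rule below are assumptions for illustration, not the released dataset's actual templates.

```python
# Sketch: template-based QA generation with Z3-derived gold labels.
# Template wording and rule are hypothetical. Requires: pip install z3-solver
from z3 import Solver, Real, And, Not, unsat

TEMPLATE = "Can a student with a GPA of {gpa} and {pct}% of required credits graduate?"

def z3_label(gpa_val, pct_val):
    """Derive the gold answer from the formal rule instead of hand labeling."""
    gpa, pct = Real("gpa"), Real("pct")
    rule = And(gpa >= 2.0, pct >= 100)  # hypothetical graduation rule
    s = Solver()
    s.add(gpa == gpa_val, pct == pct_val, Not(rule))
    return "yes" if s.check() == unsat else "no"  # unsat => rule is entailed

for gpa_val, pct_val in [(3.2, 100), (1.9, 100), (2.5, 80)]:
    print(TEMPLATE.format(gpa=gpa_val, pct=pct_val), "->", z3_label(gpa_val, pct_val))
```

Labels produced this way are consistent by construction, which matches the abstract's emphasis on validation before the expert student review pass.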
Problem

Research questions and friction points this paper is trying to address.

Bridging LLMs and symbolic reasoning for educational QA systems
Enhancing transparency in AI for real-world educational contexts
Developing explainable AI solutions for university policy queries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid LLM-symbolic systems for explainability (a Z3 unsat-core sketch follows this list)
Logic-based templates with Z3 validation
Lightweight LLMs in educational QA
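One plausible mechanism behind the explainability item above, sketched under assumptions (the policy, rule names, and messages are hypothetical, not the competition solutions' actual code), is to verbalize a Z3 unsat core: track each policy requirement under a label, and when a claim fails verification, report the violated requirements in natural language.

```python
# Sketch: verbalizing a Z3 unsat core as a policy explanation.
# Rule names and messages are hypothetical. Requires: pip install z3-solver
from z3 import Solver, Real, Bool, unsat

EXPLANATIONS = {
    "rule_min_gpa": "the policy requires a GPA of at least 2.0",
    "rule_credits": "the policy requires all credits to be completed",
}

gpa = Real("gpa")
credits_done = Bool("credits_done")

s = Solver()
s.set(unsat_core=True)
s.add(gpa == 1.8, credits_done)  # untracked facts from the student's query
# Track each requirement behind a "yes, you can graduate" claim,
# so violated ones show up in the unsat core by name.
s.assert_and_track(gpa >= 2.0, "rule_min_gpa")
s.assert_and_track(credits_done, "rule_credits")

if s.check() == unsat:
    reasons = [EXPLANATIONS[str(label)] for label in s.unsat_core()]
    # Likely output: "No: the policy requires a GPA of at least 2.0."
    print("No: " + "; ".join(reasons) + ".")
```

Because only the tracked requirements can appear in the core, the explanation names exactly the policy clauses the student's situation violates, which is the kind of logic-based justification the challenge asked systems to produce.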
👥 Authors

Long S. T. Nguyen
URA Research Group, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Khang H. N. Vo
URA Research Group, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Thu H. A. Nguyen
URA Research Group, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Tuan C. Bui
URA Research Group, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Duc Q. Nguyen
National University of Singapore
Thanh-Tung Tran
Ho Chi Minh City International University (HCMIU), Vietnam
Anh D. Nguyen
University of South-Eastern Norway, Norway
Minh L. Nguyen
Purdue University
Fabien Baldacci
Univ. Bordeaux, CNRS, Bordeaux INP, LaBRI, UMR 5800, F-33400 Talence, France
Thang H. Bui
URA Research Group, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Emanuel Di Nardo
University of Naples "Parthenope", Italy
Angelo Ciaramella
DiST, University of Naples "Parthenope", Italy
Son H. Le
VNU Information Technology Institute, Vietnam National University, Vietnam
Ihsan Ullah
University of Balochistan, Quetta, Pakistan
Lorenzo Di Rocco
Sapienza University of Rome, Italy
Tho T. Quan
URA Research Group, Ho Chi Minh City University of Technology (HCMUT), Vietnam