iLLuMinaTE: An LLM-XAI Framework Leveraging Social Science Explanation Theories Towards Actionable Student Performance Feedback

📅 2024-09-12
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Explanations produced by AI models in educational settings are often hard for non-technical users such as teachers and students to interpret. Method: The study proposes iLLuMinaTE, a theory-driven, zero-shot chain-of-prompts LLM-XAI pipeline that grounds explanations in Miller's cognitive model of explanation and eight social science explanation theories (e.g., Abnormal Conditions, Contrastive Explanation). It combines three underlying XAI methods (LIME, counterfactual explanations, MC-LIME) with three LLMs (GPT-4o, Gemma2-9B, Llama3-70B) in a three-stage prompting chain (causal connection, explanation selection, explanation presentation) to produce theory-aligned, actionable feedback. Contribution/Results: Evaluated across three diverse online courses, the framework generated 21,915 natural-language explanations, which were assessed for alignment with the underlying social science theory, understandability, and actionability; in a user study with 114 university students, participants preferred iLLuMinaTE explanations over traditional explainers 89.52% of the time.
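The three-stage chain described above lends itself to a compact implementation sketch. The snippet below mirrors only the stage structure reported in the paper (causal connection, then explanation selection, then explanation presentation); the prompt wording, the function names, and the `llm` callable are illustrative assumptions, not the paper's actual prompts.

```python
# Minimal sketch of a three-stage chain-of-prompts pipeline in the spirit
# of iLLuMinaTE. Stage order follows the paper; all prompt text, function
# names, and the `llm` callable are illustrative assumptions.

from typing import Callable


def causal_connection(llm, xai_output: str, model_desc: str, theory: str) -> str:
    """Stage 1: link XAI feature attributions to causes of the predicted
    outcome, framed by one social science explanation theory."""
    prompt = (
        f"You are explaining a student-performance model.\n"
        f"Model: {model_desc}\n"
        f"XAI output (e.g., LIME feature importances): {xai_output}\n"
        f"Using the lens of {theory}, reason step by step about which "
        f"features causally connect to the predicted outcome."
    )
    return llm(prompt)


def explanation_selection(llm, causal_analysis: str, theory: str) -> str:
    """Stage 2: select the subset of causes the theory deems most relevant
    to communicate (e.g., abnormal or contrastive conditions)."""
    prompt = (
        f"Causal analysis:\n{causal_analysis}\n"
        f"Following {theory}, select the few causes a student most needs "
        f"to hear, and justify each selection."
    )
    return llm(prompt)


def explanation_presentation(llm, selected: str) -> str:
    """Stage 3: rewrite the selected explanation as concise, actionable
    feedback addressed directly to the student."""
    prompt = (
        f"Selected explanation:\n{selected}\n"
        f"Rewrite this as short, encouraging, actionable feedback for the "
        f"student, with concrete next steps."
    )
    return llm(prompt)


def illuminate(llm: Callable[[str], str], xai_output: str,
               model_desc: str, theory: str) -> str:
    """Run the full chain: each stage consumes the previous stage's free-text
    output, so the pipeline runs zero-shot against any chat-style LLM."""
    causes = causal_connection(llm, xai_output, model_desc, theory)
    selected = explanation_selection(llm, causes, theory)
    return explanation_presentation(llm, selected)
```

Because each stage consumes the previous stage's output as plain text, swapping the LLM (GPT-4o, Gemma2-9B, Llama3-70B) or the theory variant only changes the `llm` callable and the `theory` string, not the pipeline itself.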

📝 Abstract
Recent advances in eXplainable AI (XAI) for education have highlighted a critical challenge: ensuring that explanations for state-of-the-art AI models are understandable for non-technical users such as educators and students. In response, we introduce iLLuMinaTE, a zero-shot, chain-of-prompts LLM-XAI pipeline inspired by Miller's cognitive model of explanation. iLLuMinaTE is designed to deliver theory-driven, actionable feedback to students in online courses. iLLuMinaTE navigates three main stages - causal connection, explanation selection, and explanation presentation - with variations drawing from eight social science theories (e.g. Abnormal Conditions, Pearl's Model of Explanation, Necessity and Robustness Selection, Contrastive Explanation). We extensively evaluate 21,915 natural language explanations of iLLuMinaTE extracted from three LLMs (GPT-4o, Gemma2-9B, Llama3-70B), with three different underlying XAI methods (LIME, Counterfactuals, MC-LIME), across students from three diverse online courses. Our evaluation involves analyses of explanation alignment to the social science theory, understandability of the explanation, and a real-world user preference study with 114 university students containing a novel actionability simulation. We find that students prefer iLLuMinaTE explanations over traditional explainers 89.52% of the time. Our work provides a robust, ready-to-use framework for effectively communicating hybrid XAI-driven insights in education, with significant generalization potential for other human-centric fields.
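As a concrete illustration of the underlying XAI input, the hedged sketch below generates LIME feature attributions for a single synthetic student and serializes them into the plain-text form a chain-of-prompts stage could consume. The feature names, model, and data are hypothetical stand-ins, not the paper's actual course features or predictors.

```python
# Hedged sketch: producing the underlying XAI signal (LIME attributions)
# for one student and serializing it for a prompt. Features, model, and
# data below are hypothetical stand-ins for the paper's course pipelines.

import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["video_time", "quiz_attempts", "forum_posts", "regularity"]
X = rng.random((200, len(feature_names)))      # synthetic behavioral features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)      # synthetic pass/fail labels

clf = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["fail", "pass"], mode="classification",
)
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)

# Serialize (feature, weight) pairs into the plain-text `xai_output`
# string that the first prompting stage would consume.
xai_output = "; ".join(f"{feat}: {weight:+.3f}" for feat, weight in exp.as_list())
print(xai_output)
```

Counterfactual or MC-LIME outputs would slot into the same `xai_output` string in place of the LIME weights.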
Problem

Research questions and friction points this paper is trying to address.

AI interpretability
Educational application
Student performance

Innovation

Methods, ideas, or system contributions that make the work stand out.

Artificial Intelligence in Education
Interpretable Machine Learning
Social Science Theories Integration

👥 Authors

Vinitra Swamy
EPFL, UC Berkeley, Microsoft AI
Explainable AI, AI for education
Davide Romano
EPFL, Switzerland
Bhargav Srinivasa Desikan
Institute for Public Policy Research, UK
Oana-Maria Camburu
Assistant Professor at Imperial College London
Explainability, ML, AI Alignment, Safe AI
Tanja Käser
EPFL, Switzerland