🤖 AI Summary
Problem: Contemporary AI-driven adaptive learning systems suffer from opaque, "black-box" decision-making, and existing explainable AI (XAI) methods prioritize technical outputs over the cognitive needs and role-specific interpretation requirements of educational stakeholders such as teachers and students.
Method: We propose a user-centered, multimodal XAI framework that redefines explainability as a role-aware, dynamic communication process. Integrating generative AI, fine-grained user modeling, and established XAI techniques, the framework enables personalized explanation generation and multimodal delivery (e.g., natural language and interactive visualizations).
Contribution/Results: The framework is designed to preserve explanation fidelity and algorithmic fairness while enhancing system transparency and stakeholder trust. The paper outlines the framework's design, key limitations of current XAI in education, and research directions for evaluating interpretability across user roles and task contexts. This work proposes a paradigm for building intelligible, trustworthy intelligent educational systems grounded in human-centered design principles.
📝 Abstract
Artificial intelligence-driven adaptive learning systems are reshaping education through data-driven adaptation of learning experiences. Yet many of these systems lack transparency, offering limited insight into how decisions are made. Most explainable AI (XAI) techniques focus on technical outputs but neglect user roles and comprehension. This paper proposes a hybrid framework that integrates traditional XAI techniques with generative AI models and user modelling to generate multimodal, personalised explanations tailored to user needs. We redefine explainability as a dynamic communication process shaped by user roles and learning goals. We outline the framework's design, discuss key limitations of XAI in education, and identify research directions on accuracy, fairness, and personalisation. Our aim is to move towards explainable AI that enhances transparency while supporting user-centred experiences.
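The role-aware explanation step described above can be sketched in a few lines of Python. This is a minimal illustration only: the role profiles, feature names, and attribution weights are hypothetical assumptions, not taken from the paper, and the attributions stand in for the output of an established XAI technique such as SHAP.

```python
from dataclasses import dataclass

# Hypothetical role profiles (illustrative, not from the paper):
# how many factors to surface and whether to show technical detail.
ROLE_PROFILES = {
    "student": {"max_factors": 2, "technical": False},
    "teacher": {"max_factors": 4, "technical": True},
}

@dataclass
class Attribution:
    feature: str   # e.g. "quiz_accuracy"
    weight: float  # signed contribution from an XAI method (e.g. SHAP-style)

def explain(decision: str, attributions: list[Attribution], role: str) -> str:
    """Render a role-tailored natural-language explanation from
    feature attributions produced by a standard XAI technique."""
    profile = ROLE_PROFILES[role]
    # Rank factors by absolute contribution; keep the top-k for this role.
    top = sorted(attributions, key=lambda a: abs(a.weight), reverse=True)
    top = top[: profile["max_factors"]]
    if profile["technical"]:
        # Teachers see feature names with signed weights.
        factors = ", ".join(f"{a.feature} ({a.weight:+.2f})" for a in top)
    else:
        # Students see a shorter, plain-language factor list.
        factors = ", ".join(a.feature.replace("_", " ") for a in top)
    return f"{decision} because of: {factors}"

attrs = [
    Attribution("quiz_accuracy", -0.42),
    Attribution("time_on_task", 0.31),
    Attribution("hint_usage", 0.18),
]
print(explain("Recommended a review module", attrs, "student"))
# → Recommended a review module because of: quiz accuracy, time on task
print(explain("Recommended a review module", attrs, "teacher"))
```

In the full framework, the final rendering step would be handled by a generative model and could also produce interactive visualizations; the fixed templates here only illustrate how the same attribution data yields different explanations per role.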