Transparent Adaptive Learning via Data-Centric Multimodal Explainable AI

📅 2025-08-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Contemporary AI-driven adaptive learning systems suffer from opaque, "black-box" decision-making, and existing eXplainable AI (XAI) methods prioritize technical outputs over the diverse cognitive needs and role-specific interpretation requirements of stakeholders, such as teachers and students, in educational contexts. Method: The paper proposes a user-centered, multimodal XAI framework that redefines explainability as a role-aware, dynamic communication process. Integrating generative AI, fine-grained user modeling, and established XAI techniques, the framework enables personalized explanation generation and multimodal delivery (e.g., natural language and interactive visualizations). Contribution: The framework is designed to preserve explanation fidelity and algorithmic fairness while enhancing system transparency and stakeholder trust. Rather than reporting an empirical evaluation, the paper outlines the framework's design, key limitations of XAI in education, and open research directions on accuracy, fairness, and personalization, proposing a paradigm for intelligible, trustworthy intelligent educational systems grounded in human-centered design principles.

📝 Abstract
Artificial intelligence-driven adaptive learning systems are reshaping education through data-driven adaptation of learning experiences. Yet many of these systems lack transparency, offering limited insight into how decisions are made. Most explainable AI (XAI) techniques focus on technical outputs but neglect user roles and comprehension. This paper proposes a hybrid framework that integrates traditional XAI techniques with generative AI models and user personalisation to generate multimodal, personalised explanations tailored to user needs. We redefine explainability as a dynamic communication process tailored to user roles and learning goals. We outline the framework's design, key XAI limitations in education, and research directions on accuracy, fairness, and personalisation. Our aim is to move towards explainable AI that enhances transparency while supporting user-centred experiences.
Problem

Research questions and friction points this paper is trying to address.

Lack of transparency in AI-driven adaptive learning systems
Neglect of user roles in traditional explainable AI techniques
Need for personalized multimodal explanations in education
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid framework combines XAI and generative AI
Multimodal explanations tailored to user needs
Dynamic explainability based on user roles
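The paper describes this role-aware delivery conceptually rather than as code. As a minimal, hypothetical sketch (not the authors' implementation), the example below shows the core idea: the same attribution evidence, e.g. feature-importance scores from a standard XAI technique such as SHAP or LIME, is rendered differently depending on the stakeholder's role. All feature names and role labels here are illustrative assumptions.

```python
# Hypothetical sketch of role-aware explanation delivery (not the paper's code).
# Assumes feature attributions for a recommendation have already been computed
# (e.g., via SHAP or LIME) as a mapping from learner-model features to scores.

def explain(attributions: dict[str, float], role: str) -> str:
    """Render the same attribution evidence differently per stakeholder role."""
    # Rank features by magnitude of influence, regardless of sign.
    top = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    if role == "student":
        # Plain-language, goal-oriented phrasing for learners.
        feature, _ = top[0]
        return f"This exercise was chosen mainly because of your recent {feature}."
    if role == "teacher":
        # Compact evidence summary supporting pedagogical oversight.
        parts = ", ".join(f"{f}={w:+.2f}" for f, w in top[:3])
        return f"Recommendation drivers: {parts}"
    # Developers/auditors get the full attribution table.
    return "\n".join(f"{f}\t{w:+.4f}" for f, w in top)

example = {"quiz_accuracy": -0.42, "time_on_task": 0.18, "hint_usage": 0.31}
print(explain(example, "student"))
print(explain(example, "teacher"))
```

A full realization of the framework would replace the fixed templates with generative-AI phrasing and add visual modalities, but the dispatch-by-role structure is the part this sketch illustrates.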