🤖 AI Summary
Educational AI systems face heightened privacy risks due to centralized collection of sensitive student data, and existing privacy-preserving approaches struggle to balance utility and security. Method: This work presents the first systematic empirical validation of federated learning (FL) for learning analytics prediction in educational settings. We integrate differential privacy mechanisms with rigorous adversarial evaluation, including model inversion and membership inference attacks, to establish a privacy-preserving paradigm that jointly optimizes predictive accuracy and robustness. Contribution/Results: Experiments demonstrate that our approach achieves prediction accuracy comparable to centralized training while exhibiting significantly smaller accuracy degradation under diverse adversarial attacks, outperforming baseline methods in the privacy–utility trade-off. The framework is reproducible, scalable, and provides a principled technical pathway toward trustworthy, privacy-aware educational AI.
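The summary does not include the paper's implementation; as a rough illustration of the general technique it names, a minimal sketch of one federated-averaging round with differentially private noise is shown below. All function names, hyperparameters (`clip_norm`, `noise_std`), and the toy data are hypothetical, not taken from the paper.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Clip a client's model update to bound its L2 norm (the DP sensitivity)."""
    norm = np.linalg.norm(update)
    if norm > clip_norm:
        update = update * (clip_norm / norm)
    return update

def dp_fedavg_round(global_model, client_updates, clip_norm=1.0,
                    noise_std=0.1, rng=None):
    """One round of federated averaging: clip each client update,
    average the clipped updates, and add calibrated Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_std * clip_norm / len(clipped),
                       size=avg.shape)
    return global_model + avg + noise

# Toy usage: two clients send updates; the second exceeds the clip norm.
model = np.zeros(3)
updates = [np.array([0.5, -0.2, 0.1]), np.array([3.0, 0.0, 0.0])]
model = dp_fedavg_round(model, updates)
```

In this sketch, raw student data never leaves a client; only clipped, noised aggregates reach the server, which is the privacy mechanism the summary refers to.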
📝 Abstract
The increasing adoption of data-driven applications in education, such as learning analytics and AI in education, has raised significant privacy and data protection concerns. While these challenges have been widely discussed in previous works, practical solutions remain limited. Federated learning has recently been proposed as a promising privacy-preserving technique, yet its application in education remains scarce. This paper presents an experimental evaluation of federated learning for educational data prediction, comparing its performance to traditional non-federated approaches. Our findings indicate that federated learning achieves comparable predictive accuracy. Furthermore, under adversarial attacks, federated learning demonstrates greater resilience than non-federated settings. These results reinforce the value of federated learning as a practical approach for balancing predictive performance and privacy in educational contexts.
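The abstract does not specify its attack setup, so the following is only a sketch of one common baseline the adversarial evaluation likely includes: a loss-thresholding membership inference attack, which guesses that a sample was in the training set when the model's loss on it is unusually low. The threshold and the toy loss values here are invented for illustration.

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Predict membership: a loss below the threshold suggests the
    sample was seen during training (the model has memorized it)."""
    return losses < threshold

# Toy data: trained-on samples typically incur lower loss than held-out ones.
member_losses = np.array([0.05, 0.10, 0.20])
nonmember_losses = np.array([0.80, 1.20, 0.95])
preds_members = loss_threshold_mia(member_losses, threshold=0.5)
preds_nonmembers = loss_threshold_mia(nonmember_losses, threshold=0.5)
```

Resilience in the paper's sense would mean that, under federated training with privacy noise, the loss gap between members and non-members shrinks, so this attack's accuracy drops toward chance.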