🤖 AI Summary
Problem: Large language models (LLMs) can generate hallucinated, pedagogically unsafe, or ineffective personalized self-regulated learning (SRL) scaffolds, undermining their reliability and ethical standing in educational settings.
Method: We propose a multi-agent collaborative verification framework integrating a dual-path “LLM-as-a-Judge” quality control mechanism. Our approach introduces pre-presentation reliability assessment (scaffolds are evaluated before being shown to students) and multi-agent cross-verification, using a domain-specific evaluation dataset.
Contribution/Results: By embedding evaluation *before* scaffolds reach students and leveraging interpretable, education-aligned judgment criteria, our framework substantially suppresses hallucinations and improves fidelity to learner needs. Compared with single-agent and conventional machine-learning baselines, it achieves near-expert human performance in hallucination detection and content appropriateness (Cohen’s κ = 0.89). The core innovation lies in shifting quality assurance upstream, before delivery, enabling trustworthy, personalized, ethically grounded SRL scaffolding with markedly improved accuracy, adaptability, and pedagogical validity.
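The multi-agent cross-verification idea can be sketched minimally as independent agents judging a scaffold and a majority vote aggregating their verdicts. Everything below is an illustrative stand-in: `agent_judgment` is a toy keyword heuristic, not an actual LLM call, and the function names and prompts are invented for this sketch.

```python
from collections import Counter

# Hypothetical stand-in for querying an LLM agent with an agent-specific
# evaluation prompt; here a toy heuristic checks whether the scaffold
# mentions any SRL process at all.
def agent_judgment(agent_prompt: str, scaffold: str) -> str:
    srl_terms = ("goal", "plan", "monitor", "reflect")
    return "reliable" if any(t in scaffold.lower() for t in srl_terms) else "unreliable"

def cross_verify(scaffold: str, agent_prompts: list[str]) -> str:
    """Aggregate independent agent judgments by majority vote."""
    votes = Counter(agent_judgment(p, scaffold) for p in agent_prompts)
    return votes.most_common(1)[0][0]

prompts = ["Check SRL targeting.", "Check factual grounding.", "Check tone."]
print(cross_verify("Set a goal and monitor your progress weekly.", prompts))
```

In a real pipeline each agent would run a distinct evaluation prompt against an LLM, and only scaffolds passing the aggregated verdict would move on to students.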
📝 Abstract
Generative Artificial Intelligence (GenAI) holds the potential to advance existing educational technologies through its capability to automatically generate personalised scaffolds that support students' self-regulated learning (SRL). While advancements in large language models (LLMs) promise improvements in the adaptability and quality of educational technologies for SRL, concerns remain about hallucinations in LLM-generated content, which can compromise both the learning experience and ethical standards. To address these challenges, we propose GenAI-enabled approaches for evaluating personalised SRL scaffolds before they are presented to students, aiming to reduce hallucinations and improve the overall quality of LLM-generated personalised scaffolds. Specifically, two approaches are investigated. The first develops a multi-agent system for reliability evaluation, assessing the extent to which LLM-generated scaffolds accurately target relevant SRL processes. The second applies the "LLM-as-a-Judge" technique for quality evaluation, assessing LLM-generated scaffolds for their helpfulness in supporting students. We constructed evaluation datasets and compared our results against single-agent LLM systems and machine-learning baselines. Our findings indicate that the reliability evaluation approach is highly effective, outperforming the baselines and showing almost perfect alignment with human experts' evaluations. Moreover, both proposed evaluation approaches can be harnessed to effectively reduce hallucinations. Additionally, we identify and discuss bias limitations of the "LLM-as-a-Judge" technique in evaluating LLM-generated scaffolds. We suggest incorporating these approaches into GenAI-powered personalised SRL scaffolding systems to mitigate hallucination issues and improve overall scaffolding quality.
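The "LLM-as-a-Judge" quality gate described above can be illustrated as: score each generated scaffold on a helpfulness rubric, and present it to the student only if the score clears a threshold. This is a minimal sketch, not the paper's implementation: `judge_helpfulness` is a hypothetical stand-in for prompting a judge model with a rubric, and its heuristic and the threshold value are invented for illustration.

```python
def judge_helpfulness(scaffold: str) -> int:
    """Hypothetical stand-in for an LLM-as-a-Judge call that returns a
    helpfulness score on a 1-5 scale; a real system would prompt a judge
    model with an education-aligned rubric."""
    score = 1
    if "because" in scaffold.lower():  # toy proxy: explains its reasoning
        score += 2
    if len(scaffold.split()) >= 8:     # toy proxy: enough substance
        score += 2
    return score

def gate_scaffold(scaffold: str, threshold: int = 4) -> bool:
    """Deliver the scaffold to the student only if the judge deems it helpful."""
    return judge_helpfulness(scaffold) >= threshold

print(gate_scaffold("Review chapter 3 because the quizzes show gaps in that topic."))
```

Gating on the judge's score before presentation is what makes this a pre-presentation safeguard: low-scoring (potentially hallucinated or unhelpful) scaffolds are filtered out rather than shown and corrected afterwards.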