🤖 AI Summary
To address context drift and weak pedagogical alignment in automated question generation for education, this paper proposes the first education-oriented hybrid framework integrating In-Context Learning (ICL) and Retrieval-Augmented Generation (RAG). Methodologically, it synergizes GPT-4’s ICL capability with a BART-based retriever coupled with FAISS, augmented by a pedagogy-aware few-shot prompting template and curriculum-knowledge-enhanced retrieval. Its key contribution is a teaching-guided hybrid generation architecture that jointly optimizes semantic relevance, factual accuracy, and pedagogical soundness. Experiments demonstrate average improvements of 23.6% over baselines on BLEU-4, ROUGE-L, and human evaluation metrics (relevance and teachability). Moreover, 92% of generated questions were validated as usable by frontline educators—significantly outperforming existing approaches.
📝 Abstract
Question generation in education is a time-consuming and cognitively demanding task, as it requires creating questions that are both contextually relevant and pedagogically sound. Current automated question generation methods often generate questions that are out of context. In this work, we explore advanced techniques for automated question generation in educational contexts, focusing on In-Context Learning (ICL), Retrieval-Augmented Generation (RAG), and a novel Hybrid Model that merges both methods. We implement GPT-4 for ICL using few-shot examples and BART with a retrieval module for RAG. The Hybrid Model combines RAG and ICL to address these issues and improve question quality. Evaluation is conducted using automated metrics, followed by human evaluation metrics. Our results show that both the ICL approach and the Hybrid Model consistently outperform other methods, including baseline models, by generating more contextually accurate and relevant questions.