🤖 AI Summary
Existing question generation (QG) methods for English learners in kindergarten through second grade (K–2) lack adaptability and diversity, failing to align with early literacy development stages. Method: We propose a multilingual-model-based adaptive QG framework that integrates the FairytaleQA dataset with fine-grained difficulty modeling. It supports core early reading competencies—including word recognition, sentence comprehension, and discourse-level inference—and jointly controls question format (e.g., fill-in-the-blank, true/false, short-answer) and cognitive difficulty. Contribution/Results: Experiments show our method significantly outperforms baselines in relevance, diversity, and readability; on the FairytaleQA test set, human evaluators rated 87.3% of generated questions as acceptable. To our knowledge, this is the first end-to-end automated QG framework explicitly designed for assessing foundational literacy skills in young English learners, demonstrating strong potential for integration into AI-enhanced educational systems.
📝 Abstract
Assessment of reading comprehension through content-based interactions plays an important role in the reading acquisition process. In this paper, we propose a novel approach for generating comprehension questions geared toward K-2 English learners. Our method ensures complete coverage of the underlying material, adapts to the learner's specific proficiency level, and can generate a wide variety of question types at various difficulty levels to ensure a thorough evaluation. We evaluate the performance of various language models within this framework using the FairytaleQA dataset as the source material. Ultimately, the proposed approach has the potential to become an important component of autonomous AI-driven English instructors.
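The abstract describes jointly controlling question format (fill-in-the-blank, true/false, short-answer) and difficulty level. One common way to realize such joint control with a fine-tuned seq2seq model is to prefix the input passage with control tokens. The sketch below is purely illustrative and hypothetical — the paper does not publish its prompt format, and all token names and function names here are assumptions, not the authors' implementation:

```python
# Hypothetical sketch of control-token prompting for joint control of
# question format and difficulty (NOT the paper's actual code).

QUESTION_TYPES = {"fill_blank", "true_false", "short_answer"}
DIFFICULTY_LEVELS = {"word", "sentence", "discourse"}  # early-literacy stages

def build_prompt(passage: str, q_type: str, level: str) -> str:
    """Prefix the passage with control tokens that a fine-tuned
    seq2seq model could condition on when generating one question."""
    if q_type not in QUESTION_TYPES:
        raise ValueError(f"unknown question type: {q_type}")
    if level not in DIFFICULTY_LEVELS:
        raise ValueError(f"unknown difficulty level: {level}")
    return f"<type={q_type}> <level={level}> generate question: {passage}"
```

In such a setup, the returned string would be fed to the model's generation API; varying the two control tokens over a passage yields the diverse question set the paper evaluates for coverage.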