Exploring Iterative Enhancement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models

📅 2023-09-19
📈 Citations: 1
Influential: 0
🤖 AI Summary
Learnersourced multiple-choice explanation generation often suffers from low explanatory quality and limited pedagogical effectiveness due to students’ domain knowledge constraints. Method: We propose ILearner-LLM, a novel framework for learnersourcing that integrates generative models (LLaMA2-13B/GPT-4) with an automated evaluator in a closed-loop iterative optimization pipeline. It employs instruction tuning to align the generator with student cognition, dynamically quantifies explanation quality via an automatic assessment module, and injects evaluation feedback into prompts for successive refinement. Contribution/Results: ILearner-LLM overcomes the limitations of single-shot generation. Evaluated on five PeerWise datasets, it significantly outperforms baselines; human evaluation shows a 27% improvement in inter-rater consistency. Generated explanations better reflect students’ linguistic style and conceptual depth, demonstrating strong cross-disciplinary transferability.
📝 Abstract
Large language models exhibit superior capabilities in processing and understanding language, yet their applications in educational contexts remain underexplored. Learnersourcing enhances learning by engaging students in creating their own educational content. When learnersourcing multiple-choice questions, creating explanations for the solution of a question is a crucial step; it helps other students understand the solution and promotes a deeper understanding of related concepts. However, it is often difficult for students to craft effective solution explanations, due to limited subject understanding. To help scaffold the task of automated explanation generation, we present and evaluate a framework called "ILearner-LLM" that iteratively enhances the generated explanations for the given questions with large language models. Comprising an explanation generation model and an explanation evaluation model, the framework generates high-quality, student-aligned explanations by iteratively feeding the quality rating score from the evaluation model back into the instruction prompt of the explanation generation model. Experimental results demonstrate the effectiveness of ILearner-LLM on LLaMA2-13B and GPT-4 in generating higher-quality explanations that are closer to those written by students on five PeerWise datasets. Our findings represent a promising path to enrich the learnersourcing experience for students and to enhance the capabilities of large language models for educational applications.
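The closed loop described in the abstract (generate an explanation, score it with an evaluator model, inject the score back into the next generation prompt) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `generate`/`evaluate` interfaces, the rating scale, and the stopping threshold are all assumptions, and the toy stand-ins below replace the actual LLaMA2-13B/GPT-4 calls.

```python
def iterative_refine(question, generate, evaluate, max_rounds=3, target=4.5):
    """Sketch of an ILearner-LLM-style loop: generate, score, and feed the
    score back into the next generation prompt until the rating reaches a
    target or the round budget is exhausted (hypothetical interface)."""
    explanation = generate(question, prior_score=None)
    score = evaluate(question, explanation)
    for _ in range(max_rounds):
        if score >= target:
            break
        # Feedback injection: the previous rating conditions the next attempt.
        explanation = generate(question, prior_score=score)
        score = evaluate(question, explanation)
    return explanation, score

# Toy stand-ins for the generator and evaluator models, for illustration only.
def toy_generate(question, prior_score):
    base = f"Explanation of: {question}"
    # A real generator would embed prior_score in the instruction prompt.
    return base if prior_score is None else base + " (refined)"

def toy_evaluate(question, explanation):
    # A real evaluator would return a learned quality rating.
    return 5.0 if "(refined)" in explanation else 3.0

explanation, score = iterative_refine("Why is the sky blue?",
                                      toy_generate, toy_evaluate)
```

In the paper's setting the generator and evaluator are both LLMs; the sketch only shows the control flow of the feedback loop.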
Problem

Research questions and friction points this paper is trying to address.

Educational Quality
Knowledge Limitation
Learning Materials
Innovation

Methods, ideas, or system contributions that make the work stand out.

ILearner-LLM
Large Language Model
Educational Application
Qiming Bao
University of Auckland
Artificial Intelligence, Natural Language Processing, LLMs, Reasoning, Neurosymbolic AI
Juho Leinonen
Aalto University
Computing Education, Learning Analytics, Generative AI, AI in Education, Educational Technologies
A. Peng
Strong AI Lab, NAOInstitute, Waipapa Taumata Rau - The University of Auckland
Wanjun Zhong
Bytedance Seed Research
NLP
Tim Pistotti
Strong AI Lab, NAOInstitute, Waipapa Taumata Rau - The University of Auckland
Alice Huang
School of Life and Environmental Sciences, University of Sydney
Paul Denny
Professor, University of Auckland
Educational Technology, Computer Science Education
M. Witbrock
Strong AI Lab, NAOInstitute, Waipapa Taumata Rau - The University of Auckland
J. Liu
Strong AI Lab, NAOInstitute, Waipapa Taumata Rau - The University of Auckland