LLM-Driven Personalized Answer Generation and Evaluation

📅 2025-06-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In online learning environments, enhancing learner engagement while alleviating instructor workload remains challenging. Method: This study proposes the first LLM-based framework for personalized answer generation and multidimensional evaluation in education, incorporating answers from similar learners as in-context examples (0-/1-/few-shot) to improve personalization. Evaluation is conducted on a custom dual-domain StackExchange dataset (language learning and programming), combining BERTScore-based automatic assessment, LLM-based evaluation, and human evaluation. Contribution/Results: All three evaluation modalities show high agreement, and empirical results demonstrate that example-guided prompting systematically enhances personalization fidelity, offering a verifiable, reproducible methodological foundation for LLM-driven adaptive educational systems.

📝 Abstract
Online learning has experienced rapid growth due to its flexibility and accessibility. Personalization, adapting to the needs of individual learners, is crucial for enhancing the learning experience, particularly in online settings. A key aspect of personalization is providing learners with answers customized to their specific questions. This paper therefore explores the potential of Large Language Models (LLMs) to generate personalized answers to learners' questions, thereby enhancing engagement and reducing the workload on educators. To evaluate the effectiveness of LLMs in this context, we conducted a comprehensive study using the StackExchange platform in two distinct areas: language learning and programming. We developed a framework and a dataset for validating automatically generated personalized answers. Subsequently, we generated personalized answers using different strategies, including 0-shot, 1-shot, and few-shot scenarios. The generated answers were evaluated using three methods: (1) BERTScore, (2) LLM evaluation, and (3) human evaluation. Our findings indicated that providing LLMs with examples of desired answers (from the learner or similar learners) can significantly enhance the LLMs' ability to tailor responses to individual learners' needs.
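
The 0-shot, 1-shot, and few-shot prompting strategies described above can be pictured with a short sketch. The code below is a minimal illustration, assuming the OpenAI chat API as the backend; the model name, system instruction, and all identifiers are hypothetical, since the paper's exact prompt template and model are not reproduced here. Prior (question, answer) pairs from the learner or similar learners are injected as in-context examples before the new question.

```python
# Minimal sketch of 0-/1-/few-shot prompting with similar learners' answers
# as in-context examples. All names and the model choice are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; the paper does not fix a provider

def build_messages(question: str, examples: list[tuple[str, str]]) -> list[dict]:
    """Build a chat prompt with 0, 1, or several (question, answer) examples."""
    messages = [{
        "role": "system",
        "content": ("You are a helpful tutor. Answer the learner's question "
                    "in the style and at the level of the example answers."),
    }]
    # 0-shot: `examples` is empty; 1-shot / few-shot: one or more prior pairs
    # from the learner or from similar learners.
    for ex_question, ex_answer in examples:
        messages.append({"role": "user", "content": ex_question})
        messages.append({"role": "assistant", "content": ex_answer})
    messages.append({"role": "user", "content": question})
    return messages

# Hypothetical 1-shot usage with one example from a similar learner.
examples = [("How do I reverse a list in Python?",
             "Use my_list[::-1]; slicing returns a reversed copy.")]
messages = build_messages("How can I sort a list of tuples by the second item?",
                          examples)
response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```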
Problem

Research questions and friction points this paper is trying to address.

Enhancing online learning via LLM-generated personalized answers
Evaluating LLM effectiveness in personalized answer generation (see the LLM-as-judge sketch after this list)
Reducing educator workload with automated tailored responses
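
One way to picture the LLM-evaluation step named above is an LLM-as-judge call that rates how well a generated answer fits a given learner. This is a sketch under assumptions: the rubric, the 1-5 scale, and the judge model below are illustrative, not the paper's exact evaluation protocol.

```python
# Sketch of LLM-based evaluation (LLM-as-judge): a separate model call rates
# personalization on a 1-5 scale. Rubric and scale are assumptions, not the
# paper's exact protocol.
from openai import OpenAI

client = OpenAI()

def judge_personalization(question: str, answer: str, learner_profile: str) -> int:
    prompt = (
        "Rate from 1 (generic) to 5 (highly personalized) how well the answer "
        "below is tailored to this learner.\n\n"
        f"Learner profile: {learner_profile}\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n\n"
        "Reply with the number only."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed judge model; the paper does not name one here
        messages=[{"role": "user", "content": prompt}],
    )
    return int(response.choices[0].message.content.strip())
```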
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs generate personalized answers for learners
Framework validates answers via BERTScore and LLMs (see the BERTScore sketch after this list)
Few-shot examples enhance LLM response personalization
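
As a concrete illustration of the automatic-assessment step, the sketch below scores a generated answer against a reference with the `bert-score` package (pip install bert-score). The candidate and reference strings are invented placeholders; the paper scores generated answers against learners' actual answers.

```python
# Minimal BERTScore sketch using the bert-score package. The candidate and
# reference strings are illustrative placeholders.
from bert_score import score

candidates = ["Use my_list[::-1] to get a reversed copy of the list."]
references = ["You can reverse a list with slicing: my_list[::-1]."]

# score() returns per-pair precision, recall, and F1 tensors.
P, R, F1 = score(candidates, references, lang="en", verbose=False)
print(f"BERTScore F1: {F1.mean().item():.4f}")
```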
M. Molavi
Leibniz Information Centre for Science and Technology (TIB)
Mohammadreza Tavakoli
TIB – Leibniz Information Centre for Science and Technology, Hannover, Germany
Recommender Systems · User Behavior Analysis · Learning Analytics · Technology Enhanced Learning · Open Educational Resources
Mohammad Moein
Leibniz Information Centre for Science and Technology (TIB)
Abdolali Faraji
Leibniz Information Centre for Science and Technology (TIB)
Gábor Kismihók
Leibniz Information Centre for Science and Technology (TIB)