Humanizing Automated Programming Feedback: Fine-Tuning Generative Models with Student-Written Feedback

📅 2025-09-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
AI-generated feedback in programming education—typically produced via prompt engineering—can be rigid and unnatural, failing to reflect the style of feedback written by human tutors or peers. Method: The authors propose a learnersourcing approach, collecting a dataset of ~1,900 student-written feedback comments on buggy programs, with students acting in the flipped role of a tutor. A 300-instance sample is analyzed for correctness, length, and how bugs are described to establish a baseline, and open-weight LLMs (Llama3 and Phi3) are then fine-tuned on the full dataset. Contribution/Results: The fine-tuned models generate feedback that better matches the style of student-written feedback and is more accurate than feedback from prompt engineering alone, even though some of the student-written training feedback is itself incorrect. The work points to student-centered fine-tuning as an alternative to purely prompt-based approaches for automated programming feedback.

📝 Abstract
The growing need for automated and personalized feedback in programming education has led to recent interest in leveraging generative AI for feedback generation. However, current approaches tend to rely on prompt engineering techniques in which predefined prompts guide the AI to generate feedback. This can result in rigid and constrained responses that fail to accommodate the diverse needs of students and do not reflect the style of human-written feedback from tutors or peers. In this study, we explore learnersourcing as a means to fine-tune language models for generating feedback that is more similar to that written by humans, particularly peer students. Specifically, we asked students to act in the flipped role of a tutor and write feedback on programs containing bugs. We collected approximately 1,900 instances of student-written feedback on multiple programming problems and buggy programs. To establish a baseline for comparison, we analyzed a sample of 300 instances based on correctness, length, and how the bugs are described. Using this data, we fine-tuned open-access generative models, specifically Llama3 and Phi3. Our findings indicate that fine-tuning models on learnersourced data not only produces feedback that better matches the style of feedback written by students, but also improves accuracy compared to feedback generated through prompt engineering alone, even though some student-written feedback is incorrect. This surprising finding highlights the potential of student-centered fine-tuning to improve automated feedback systems in programming education.
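The fine-tuning setup described in the abstract starts from (problem, buggy program, student feedback) triples. A minimal sketch of how one learnersourced instance might be formatted into a prompt/completion pair for supervised fine-tuning is shown below; the field names, prompt wording, and schema are assumptions for illustration, not taken from the paper.

```python
# Hypothetical sketch: format one learnersourced instance as a
# prompt/completion pair for supervised fine-tuning (SFT).
# The prompt casts the model in the flipped peer-tutor role the paper describes.
def build_sft_example(problem: str, buggy_code: str, feedback: str) -> dict:
    """Return a prompt/completion dict for one (problem, buggy program, feedback) triple."""
    prompt = (
        "You are a peer student reviewing a classmate's program.\n"
        f"Problem description:\n{problem}\n\n"
        f"Buggy program:\n{buggy_code}\n\n"
        "Write short, natural feedback describing the bug:"
    )
    return {"prompt": prompt, "completion": feedback}

# Toy instance (invented for illustration)
example = build_sft_example(
    problem="Return the sum of a list of integers.",
    buggy_code="def total(xs):\n    s = 0\n    for x in xs:\n        s = x\n    return s",
    feedback="You overwrite s on each iteration; use s += x to accumulate the sum.",
)
print(example["prompt"])
print(example["completion"])
```

A dataset of such pairs could then be passed to a standard SFT pipeline for open-weight models such as Llama3 or Phi3; the exact training configuration used in the paper is not specified here.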
Problem

Research questions and friction points this paper is trying to address.

Generating human-like automated programming feedback
Improving feedback style and accuracy via fine-tuning
Leveraging student-written feedback for model training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned generative models using student feedback
Learnersourced data collection from peer tutors
Improved accuracy and human-like feedback style