A Survey of LLM-Based Applications in Programming Education: Balancing Automation and Human Oversight

πŸ“… 2025-10-04
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Addressing the dual challenges of teacher shortages and poor pedagogical alignment of large language models (LLMs) in programming education, this study proposes an educator-centered β€œhuman–AI collaborative closed-loop” design paradigm. Methodologically, it integrates scaffolding-based instructional design, curriculum-specific adaptation, and human-in-the-loop intervention to anchor LLM capabilities in three core pedagogical functions: formative feedback generation, automated assessment, and student knowledge modeling. Its key contribution is a teaching-objective-driven technical alignment framework that tightly couples model outputs with authentic classroom practices. Empirical evaluation demonstrates that, compared to fully automated approaches, this collaborative mechanism significantly improves feedback accuracy, diagnostic validity, and pedagogical adaptability across diverse instructional scenarios. The work establishes reusable design principles and actionable implementation pathways for deploying LLMs in programming education.

πŸ“ Abstract
Novice programmers benefit from timely, personalized support that addresses individual learning gaps, yet the availability of instructors and teaching assistants is inherently limited. Large language models (LLMs) present opportunities to scale such support, though their effectiveness depends on how well technical capabilities are aligned with pedagogical goals. This survey synthesizes recent work on LLM applications in programming education across three focal areas: formative code feedback, assessment, and knowledge modeling. We identify recurring design patterns in how these tools are applied and find that interventions are most effective when educator expertise complements model output through human-in-the-loop oversight, scaffolding, and evaluation. Fully automated approaches often struggle to capture the pedagogical nuances of programming education, whereas human-in-the-loop designs and course-specific adaptation offer promising directions for improvement. Future research should focus on improving transparency, strengthening alignment with pedagogy, and developing systems that flexibly adapt to the needs of varied learning contexts.
Problem

Research questions and friction points this paper is trying to address.

Scaling personalized programming support for novice learners using LLMs
Aligning LLM technical capabilities with educational goals in programming
Balancing automated feedback with human oversight in code education
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-in-the-loop oversight for LLM applications
Course-specific adaptation of automated feedback systems
Scaffolding model outputs with educator expertise
πŸ”Ž Similar Papers
No similar papers found.
Griffin Pitts
North Carolina State University
AI in Education · User Modeling · Computer Science Education · Human-Computer Interaction
Anurata Prabha Hridi
North Carolina State University
Arun-Balajiee Lekshmi-Narayanan
University of Pittsburgh