LLM Chatbots in High School Programming: Exploring Behaviors and Interventions

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
In high-school programming instruction, students in unguided settings frequently use large language models (LLMs) for execution-oriented, solution-seeking queries, a behavior that correlates significantly and negatively with exam performance, does not self-correct, and ultimately reduces engagement. Method: Through iterative design-based research, we developed and empirically validated an LLM-integrated pedagogy centered on a "help-seeking scaffolding" framework, aiming to shift students from passive problem-solving toward tool-mediated active learning. We employed controlled experiments, fine-grained query-trajectory analysis, and longitudinal tracking of academic performance. Contribution/Results: The intervention significantly reduced the proportion of execution-oriented queries and improved the structural efficiency of learning workflows. However, it did not yield gains in summative assessment scores, underscoring the necessity of aligning LLM usage strategies with foundational skill development. This study is the first to empirically uncover a nonlinear relationship between LLM help-seeking strategies and learning outcomes, and it proposes actionable pedagogical principles and evidence-based pathways for AI-augmented programming education.

📝 Abstract
This study uses a Design-Based Research (DBR) cycle to refine the integration of Large Language Models (LLMs) in high school programming education. The initial problem was identified in an Intervention Group where, in an unguided setting, a higher proportion of executive, solution-seeking queries correlated strongly and negatively with exam performance. A contemporaneous Comparison Group demonstrated that without guidance, these unproductive help-seeking patterns do not self-correct, with engagement fluctuating and eventually declining. This insight prompted a mid-course pedagogical intervention in the first group, designed to teach instrumental help-seeking. The subsequent evaluation confirmed the intervention's success, revealing a decrease in executive queries, as well as a shift toward more productive learning workflows. However, this behavioral change did not translate into a statistically significant improvement in exam grades, suggesting that altering tool-use strategies alone may be insufficient to overcome foundational knowledge gaps. The DBR process thus yields a more nuanced principle: the educational value of an LLM depends on a pedagogy that scaffolds help-seeking, but this is only one part of the complex process of learning.
Problem

Research questions and friction points this paper addresses:

Unguided LLM use correlates with poor exam performance
Unproductive help-seeking patterns do not self-correct without guidance
Altering tool-use strategies alone may not improve exam grades
Innovation

Methods, ideas, or system contributions that make the work stand out:

Using Design-Based Research to refine LLM integration
Implementing pedagogical intervention for instrumental help-seeking
Shifting from executive queries to productive learning workflows