Modeling Student Learning with 3.8 Million Program Traces

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Modeling the cognitive processes and skill development of novice programmers from real-world interaction traces (e.g., edits, retries) remains challenging due to the complexity and variability of authentic learning behavior. Method: Leveraging 3.8 million fine-grained programming actions from the Pencil Code platform, the authors present the first large-scale language modeling approach trained directly on raw interaction sequences. The method combines behavioral analysis with probing techniques and introduces a style-preserving mechanism for generating code-editing sequences. Contribution/Results: Compared with approaches that rely only on final code submissions or synthetic trajectories, models trained on real traces predict diverse student behaviors (including goal-directed backtracking and comment frequency) significantly more accurately, and generate personalized, pedagogically grounded correction paths that preserve each student's coding style. This advances both behavioral-modeling fidelity and actionable, adaptive error guidance for novice learners.
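The summary's key move is training language models on raw interaction sequences rather than final programs. A minimal sketch of how such a trace might be flattened into one training sequence, assuming a hypothetical event schema and delimiter tokens (the paper's actual trace format may differ):

```python
# Hypothetical sketch: serializing a student's edit trace into a single
# sequence for language-model training. The event names and delimiter
# tokens below are assumptions, not the paper's actual format.

def serialize_trace(events):
    """Flatten a list of (action, payload) edit events into one string."""
    parts = []
    for action, payload in events:
        parts.append(f"<{action}>{payload}</{action}>")
    return "".join(parts)

trace = [
    ("write", "fd 100"),                # student writes a first line
    ("run", "error: pen not set"),      # runs it, sees an error
    ("edit", "pen red\nfd 100"),        # revises the program
    ("run", "ok"),                      # retries successfully
]

print(serialize_trace(trace))
```

The point of the serialization is that retries, errors, and revisions all become ordinary tokens, so a standard next-token objective learns the dynamics of editing, not just the final program.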

📝 Abstract
As programmers write code, they often edit and retry multiple times, creating rich "interaction traces" that reveal how they approach coding tasks and provide clues about their level of skill development. For novice programmers in particular, these traces reflect the diverse reasoning processes they employ to code, such as exploratory behavior to understand how a programming concept works, re-strategizing in response to bugs, and personalizing stylistic choices. In this work, we explore what can be learned from training language models on such reasoning traces: not just about code, but about coders, and particularly students learning to program. We introduce a dataset of over 3.8 million programming reasoning traces from users of Pencil Code, a free online educational platform used by students to learn simple programming concepts. Compared to models trained only on final programs or synthetically-generated traces, we find that models trained on real traces are stronger at modeling diverse student behavior. Through both behavioral and probing analyses, we also find that many properties of code traces, such as goal backtracking or number of comments, can be predicted from learned representations of the students who write them. Building on this result, we show that we can help students recover from mistakes by steering code generation models to identify a sequence of edits that will result in more correct code while remaining close to the original student's style. Together, our results suggest that many properties of code are properties of individual students and that training on edit traces can lead to models that are more steerable, more predictive of student behavior while programming, and better at generating programs in their final states. Code and data are available at https://github.com/meghabyte/pencilcode-public
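The abstract's probing claim, that properties like comment count are predictable from learned student representations, can be illustrated with a toy linear probe. Everything below is synthetic and stands in for the paper's setup: the random embeddings substitute for activations of the trained trace model, and the target is a fabricated linear property.

```python
# Toy linear-probe sketch (assumption: synthetic embeddings stand in for
# the trace model's learned student representations).
import numpy as np

rng = np.random.default_rng(0)

# Fake "student representations": 200 students, 32-dim embeddings.
X = rng.normal(size=(200, 32))

# Suppose a trace property (e.g., comment count) is noisily linear
# in the representation.
w_true = rng.normal(size=32)
y = X @ w_true + 0.1 * rng.normal(size=200)

# Fit a linear probe with closed-form ridge regression.
lam = 1e-2
w_probe = np.linalg.solve(X.T @ X + lam * np.eye(32), X.T @ y)

# If the property is encoded linearly, the probe recovers it well.
r2 = 1 - np.sum((y - X @ w_probe) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"probe R^2 = {r2:.3f}")
```

High probe accuracy on a held-out property is the standard evidence that a representation encodes that property, which is the logic behind the paper's probing analyses.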
Problem

Research questions and friction points this paper is trying to address.

Modeling student learning from programming interaction traces
Predicting student behavior through code trace analysis
Generating personalized code corrections while preserving student style
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training language models on real programming interaction traces
Predicting student behavior from learned code representations
Steering code generation to recover from student mistakes
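The steering idea in the last bullet, recovering from mistakes while staying close to the student's style, can be sketched as rescoring candidate corrections: prefer candidates that are correct and maximally similar to the original code. The similarity and correctness functions here are toy stand-ins, not the paper's model-based method:

```python
# Hedged sketch of style-preserving correction: among candidate edits,
# pick one that fixes the program while staying closest to the student's
# original text. difflib similarity is a crude proxy for style closeness.
import difflib

def style_similarity(original, candidate):
    """Character-level similarity ratio in [0, 1]."""
    return difflib.SequenceMatcher(None, original, candidate).ratio()

def pick_correction(original, candidates, is_correct):
    """Return the correct candidate most similar to the original code."""
    correct = [c for c in candidates if is_correct(c)]
    if not correct:
        return None
    return max(correct, key=lambda c: style_similarity(original, c))

student_code = "pen red\nfd 100\nrt 90\nfd 10O"   # typo: letter 'O'
candidates = [
    "pen red\nfd 100\nrt 90\nfd 100",   # minimal fix, keeps style
    "speed 10\npen blue\nfd 100",       # runs, but rewrites everything
]
fix = pick_correction(student_code, candidates, lambda c: "10O" not in c)
print(fix)
```

Both candidates pass the toy correctness check, but the minimal edit wins on similarity, which mirrors the paper's goal of correction paths that remain close to the student's own code.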