🤖 AI Summary
This study addresses a key challenge in educational AI: effectively modeling students’ common misconceptions while preserving the large language model’s capacity for correct reasoning. The authors propose two misconception-aware models—a “novice student model” that simulates individual learners and an “expert tutor model” capable of handling multiple misconception types—and introduce MalAlgoLib, a custom library for generating algebraic problems with both correct and erroneous intermediate reasoning steps. Their findings reveal that supervision based solely on final answers fails to capture misconceptions, whereas intermediate-step supervision is essential for accurate misconception modeling. While the student model tends to overgeneralize from a single misconception, impairing problem-solving performance, the tutor model successfully acquires diverse misconceptions without compromising its ability to reason correctly. The work contributes a multi-misconception joint training framework and a training strategy that balances correctness with effective misconception representation.
📝 Abstract
Effective educational AI depends on modeling student misconceptions. Such models enable realistic learner simulation and diagnostic, adaptive tutoring. However, instruction-tuning large language models on student responses containing misconception errors can degrade reasoning abilities, creating a tension between faithful misconception modeling and preserving correct reasoning in other contexts. To support both learner simulation and tutoring, we study two misconception-aware models: the Novice Student Misconception Model, trained to acquire a single misconception for simulating an individual student, and the Expert Tutor Misconception Model, trained on multiple misconceptions to capture the error patterns a tutor encounters across students. To study the misconception acquisition dynamics of both models, we develop MalAlgoLib, a library that generates algebra problems with correct solution traces and misconception-specific erroneous traces. Our experiments across three LLMs reveal that the student and tutor models exhibit fundamentally different misconception acquisition dynamics. For the student model, a single misconception is not learned as a context-specific behavior: models overapply it across problems, degrading correct-solving accuracy unless training includes correct examples to enforce boundaries. In contrast, the tutor model can learn multiple misconceptions jointly without sacrificing correct-solving accuracy. Critically, intermediate reasoning steps are the bottleneck. With final-answer supervision alone, models cannot learn where the error enters the solution, so neither the student model nor the tutor model acquires misconceptions regardless of data size. Together, these results, enabled by MalAlgoLib, provide an interpretable account of misconception acquisition under instruction tuning and guidance for training misconception-aware LLMs while preserving correct reasoning.
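The abstract's central data idea, pairing a correct solution trace with a misconception-specific erroneous trace for the same problem, can be illustrated with a minimal sketch. Everything here is an illustrative assumption, not MalAlgoLib's actual API: the function name, the `sign_error` misconception (failing to flip a term's sign when moving it across the equals sign), and the trace format are all hypothetical.

```python
# Hypothetical sketch of misconception-specific trace generation,
# in the spirit of MalAlgoLib. Names, the "sign_error" misconception,
# and the trace format are illustrative assumptions, not the library's API.

def solve_linear(a, b, c, misconception=None):
    """Solve a*x + b = c step by step, optionally applying a misconception.

    misconception="sign_error": the simulated student "moves" b across
    the equals sign without flipping its sign, a classic algebra error.
    """
    steps = [f"{a}x + {b} = {c}"]
    if misconception == "sign_error":
        steps.append(f"{a}x = {c} + {b}")  # erroneous step: b keeps its sign
        rhs = c + b
    else:
        steps.append(f"{a}x = {c} - {b}")  # correct step: subtract b from both sides
        rhs = c - b
    steps.append(f"{a}x = {rhs}")
    x = rhs / a
    steps.append(f"x = {x}")
    return x, steps

correct_x, correct_trace = solve_linear(2, 3, 7)
wrong_x, wrong_trace = solve_linear(2, 3, 7, misconception="sign_error")
```

The sketch also makes the abstract's supervision point concrete: the two traces first diverge at an intermediate step (`2x = 7 - 3` vs. `2x = 7 + 3`), so supervision on the final answer alone carries no signal about where the error entered the solution.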