🤖 AI Summary
Software engineering education lacks systematic debugging instruction, leaving students to rely on ad hoc, experience-based self-learning. Method: This study introduces an IDE-integrated, interactive debugging tutorial system designed for novices, combining classical fault-localization algorithms with large language models (LLMs) for pedagogical guidance. The system supports intelligent breakpoint recommendation, conversational debugging assistance, and interpretable fault attribution, emphasizing debugging reasoning over mere bug resolution. It employs a modular plugin architecture and a human–computer collaborative interface. Contribution/Results: In a controlled experiment with eight undergraduate students, participants consistently affirmed the system's value in providing structured, scaffolded guidance. Automated breakpoint placement was rated the most effective feature; interactive feedback and explanation quality also received strong positive evaluations.
📝 Abstract
Debugging software, i.e., localizing faults and repairing them, is a core activity in software engineering. Therefore, effective and efficient debugging is one of the essential skills a software engineer must develop. However, debugging techniques are usually taught only in a very limited or indirect way, e.g., during software projects. As a result, most Computer Science (CS) students learn debugging only in an ad-hoc and unstructured manner. In this work, we present our approach, called Simulated Interactive Debugging, which interactively guides students through the debugging process. The guidance aims to empower students to repair their solutions and have a proper "learning" experience. We envision that such guided debugging techniques can be integrated into programming courses early in the CS education curriculum. To perform an initial evaluation, we developed a prototypical implementation using traditional fault localization techniques and large language models. Students can use features like the automated setting of breakpoints or an interactive chatbot. We designed and executed a controlled experiment with this IDE-integrated tooling involving eight undergraduate CS students. Based on the responses, we conclude that the participants liked the systematic guidance provided by the assisted debugger. In particular, they rated the automated setting of breakpoints as the most effective feature, followed by the interactive debugging and chatting, and the explanations of how breakpoints were set. In our future work, we will refine our concept and implementation, add new features, and perform more extensive user studies.
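The abstract mentions "traditional fault localization techniques" without naming one. A common family is spectrum-based fault localization, which scores each statement by how strongly its execution correlates with failing tests; suspicious statements are natural candidates for automated breakpoints. The sketch below uses the well-known Ochiai metric on a tiny, hypothetical coverage matrix (the function name, data, and the claim that the prototype uses this exact metric are assumptions for illustration, not details from the paper):

```python
import math

def ochiai(coverage, results):
    """Rank statements by Ochiai suspiciousness.

    coverage: dict mapping statement -> set of test names that execute it
    results:  dict mapping test name -> True if that test passed
    Ochiai score = failed(s) / sqrt(total_failed * executed(s))
    """
    total_failed = sum(1 for passed in results.values() if not passed)
    scores = {}
    for stmt, tests in coverage.items():
        failed = sum(1 for t in tests if not results[t])
        denom = math.sqrt(total_failed * len(tests))
        scores[stmt] = failed / denom if denom else 0.0
    # Most suspicious statements first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical coverage from a tiny test suite
coverage = {
    "line 3": {"t1", "t2", "t3"},
    "line 5": {"t2", "t3"},
    "line 7": {"t3"},  # only the failing test reaches line 7
}
results = {"t1": True, "t2": True, "t3": False}

ranking = ochiai(coverage, results)
print(ranking[0][0])  # → line 7 (score 1.0): a breakpoint candidate
```

A tutoring system could then explain the top-ranked statement to the student ("this line is executed by the failing test but by no passing test") rather than simply revealing the fix, matching the paper's emphasis on debugging reasoning over bug resolution.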