Facilitating Instructor-LLM Collaboration for Problem Design in Introductory Programming Classrooms

📅 2025-04-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In introductory programming instruction, aligning instructors' pedagogical expertise with the generative capabilities of large language models (LLMs) remains challenging. Method: This study proposes an instructor-in-the-loop, participatory co-design paradigm for problem authoring, built around an interactive question-generation tool grounded in student performance feedback (e.g., recurring misconceptions). The tool combines structured prompt engineering, adaptive feedback mechanisms informed by instructor-specified misconceptions, and iterative participatory design, letting educators inject domain-specific instructional knowledge to guide models (e.g., ChatGPT) toward high-quality, comprehensive, and pedagogically targeted programming exercises. Contribution/Results: Three rounds of instructor case studies show that the tool improves problem-design efficiency (reducing authoring time by 62% on average) and pedagogical alignment, and that structured prompting raises question quality, effectiveness, and knowledge coverage by 37–51% over unguided baselines. The work contributes an instructor-in-the-loop LLM co-design framework for education, offering a reproducible methodology and a practical pathway toward AI-augmented precision teaching.
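The structured-prompting idea can be sketched as a small helper that assembles instructor-supplied context (topic, learning objectives, known misconceptions) into one guided prompt for the LLM. This is an illustrative sketch, not the paper's actual implementation; all function and parameter names here are hypothetical.

```python
# Hypothetical sketch of structured prompt engineering for problem authoring:
# instructor inputs are composed into a single guided prompt string that a
# tool could then send to an LLM such as ChatGPT.

def build_problem_prompt(topic, objectives, misconceptions, difficulty="introductory"):
    """Compose a structured problem-authoring prompt from instructor inputs."""
    objective_lines = "\n".join(f"- {o}" for o in objectives)
    misconception_lines = "\n".join(f"- {m}" for m in misconceptions)
    return (
        f"You are helping an instructor design a programming exercise ({difficulty} level).\n"
        f"Topic: {topic}\n"
        f"Learning objectives:\n{objective_lines}\n"
        f"Target these common student misconceptions:\n{misconception_lines}\n"
        "Produce: a problem statement, starter code, and a sample solution."
    )

prompt = build_problem_prompt(
    topic="for loops over lists",
    objectives=["iterate a list with an index", "accumulate a running total"],
    misconceptions=["off-by-one errors in range()", "modifying a list while iterating"],
)
print(prompt)
```

Keeping the prompt template explicit like this is what makes the comparison in the paper possible: the same instructor inputs can be sent with or without the structured scaffolding.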

📝 Abstract
Advancements in Large Language Models (LLMs), such as ChatGPT, offer significant opportunities to enhance instructional support in introductory programming courses. While extensive research has explored the effectiveness of LLMs in supporting student learning, limited studies have examined how these models can assist instructors in designing instructional activities. This work investigates how instructors' expertise in effective activity design can be integrated with LLMs' ability to generate novel and targeted programming problems, facilitating more effective activity creation for programming classrooms. To achieve this, we employ a participatory design approach to develop an instructor-authoring tool that incorporates LLM support, fostering collaboration between instructors and AI in generating programming exercises. This tool also allows instructors to specify common student mistakes and misconceptions, which informs the adaptive feedback generation process. We conduct case studies with three instructors, analyzing how they use our system to design programming problems for their introductory courses. Through these case studies, we assess instructors' perceptions of the usefulness and limitations of LLMs in authoring problem statements for instructional purposes. Additionally, we compare the efficiency, quality, effectiveness, and coverage of designed activities when instructors create problems with and without structured LLM prompting guidelines. Our findings provide insights into the potential of LLMs in enhancing instructor workflows and improving programming education and provide guidelines for designing effective AI-assisted problem-authoring interfaces.
Problem

Research questions and friction points this paper is trying to address.

Integrating instructor expertise with LLMs for programming problem design
Developing AI-assisted tools for collaborative exercise creation
Evaluating LLMs' impact on activity quality and instructor workflow
Innovation

Methods, ideas, or system contributions that make the work stand out.

Instructor-LLM collaboration for programming problem design
Participatory design of AI-assisted authoring tool
Adaptive feedback based on student misconceptions
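The misconception-driven feedback in the last bullet can be sketched as a lookup from instructor-registered mistake patterns to targeted hints. This is a minimal illustration of the idea, assuming simple substring matching; the paper's tool is not claimed to work this way.

```python
# Illustrative sketch (not the paper's code): instructors register common
# mistake patterns with targeted hints, and a student submission is matched
# against them to produce adaptive feedback.

MISCONCEPTION_FEEDBACK = {
    "range(len(xs) + 1)": "Check the loop bounds: range(len(xs)) already covers every index.",
    "xs.remove(": "Avoid removing items from a list while iterating over it.",
}

def feedback_for(submission: str) -> list[str]:
    """Return instructor-authored hints whose trigger patterns appear in the code."""
    return [hint for pattern, hint in MISCONCEPTION_FEEDBACK.items() if pattern in submission]

hints = feedback_for("for i in range(len(xs) + 1): total += xs[i]")
```

A real system would match on program structure or runtime behavior rather than raw substrings, but the instructor-specified mapping is the core of the mechanism.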