Synthesizing High-Quality Programming Tasks with LLM-based Expert and Student Agents

📅 2025-04-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
AI-generated programming tasks often suffer from low quality: conceptual misalignment, poor solvability, and erroneous test cases. Method: The paper proposes PyTaskSyn, a dual-agent collaborative verification framework in which a strong LLM agent emulates an expert instructor to design tasks while a weaker LLM agent simulates a student solving them, enabling joint generation of tasks, solutions, and tests with consistency verification across all components. Contribution/Results: The approach integrates role separation, multi-stage reasoning, and automated validation, removing the need for iterative human instructor verification. Experiments demonstrate significant improvements over baselines in conceptual alignment, correctness, and solvability. A user study confirms that generated tasks match expert-designed ones in quality, reduce instructor workload by 62%, and increase student task completion rates by 28%.
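The generate-then-validate loop described above can be sketched as follows. This is a minimal illustration of the dual-agent idea only, not the paper's implementation: the two agent functions are hypothetical stubs standing in for calls to a strong and a weaker generative model, and the task/tests they return are invented for the example.

```python
def expert_agent_generate(concept):
    # Hypothetical stub: a strong model would author a task description,
    # a reference solution, and test cases for the target concept.
    task = f"Write a function `total(xs)` that sums a list of {concept}."
    solution = "def total(xs):\n    return sum(xs)"
    tests = [([1, 2, 3], 6), ([], 0)]
    return task, solution, tests

def student_agent_solve(task):
    # Hypothetical stub: a weaker model simulates a student attempt.
    # If the simulated student cannot solve the task, it is likely
    # underspecified or too hard, and the task should be rejected.
    return "def total(xs):\n    return sum(xs)"

def passes_tests(code, tests):
    ns = {}
    exec(code, ns)  # run candidate code in an isolated namespace
    return all(ns["total"](inp) == expected for inp, expected in tests)

def synthesize(concept, max_attempts=3):
    # Generate, then validate: accept a task only if the expert's
    # reference solution AND the simulated student's solution both
    # pass the generated tests (consistency across all components).
    for _ in range(max_attempts):
        task, solution, tests = expert_agent_generate(concept)
        if passes_tests(solution, tests) and \
           passes_tests(student_agent_solve(task), tests):
            return task
    return None  # reject rather than hand students a broken task
```

The key design point is the acceptance criterion: a task is only released when both agents' solutions are consistent with the generated tests, so erroneous tests or unsolvable tasks are filtered out without a human in the loop.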

📝 Abstract
Generative AI is transforming computing education by enabling the automatic generation of personalized content and feedback. We investigate its capabilities in providing high-quality programming tasks to students. Despite promising advancements in task generation, a quality gap remains between AI-generated and expert-created tasks. AI-generated tasks may not align with target programming concepts, may be incomprehensible for students to solve, or may contain critical issues such as incorrect tests. Existing works often require intervention from human teachers for validation. We address these challenges by introducing PyTaskSyn, a novel synthesis technique that first generates a programming task and then decides whether it meets certain quality criteria before it is given to students. The key idea is to break this process into multiple stages performed by expert and student agents simulated using both strong and weaker generative models. Through extensive evaluation, we show that PyTaskSyn significantly improves task quality compared to baseline techniques, and we demonstrate the importance of each specialized agent type in our validation pipeline. Additionally, we conduct user studies using our publicly available web application and show that PyTaskSyn can deliver high-quality programming tasks comparable to expert-designed ones while reducing workload and costs, and that these tasks are more engaging than programming tasks available in online resources.
Problem

Research questions and friction points this paper is trying to address.

Bridging the quality gap between AI-generated and expert-created programming tasks
Ensuring that AI-generated tasks align with target programming concepts
Reducing the need for human teacher intervention in task validation
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based expert and student agents with separated roles
Multi-stage task validation pipeline
Pairing strong and weaker generative models to simulate instructors and students