🤖 AI Summary
Introductory programming instruction often lacks opportunities for students to practice clarifying ambiguous requirements, while large language models' code-generation capabilities may further diminish engagement with traditional exercises. Method: We propose "probeable problems": automatically graded programming tasks with intentionally underspecified requirements that require students to submit test inputs ("probes") to elicit clarification. We explicitly model requirement probing as a learning activity, implement probe interaction and immediate feedback via the CodeRunner framework, and analyze over 40,000 probe submissions using behavioral logging, clustering, and association statistics, supplemented by surveys (N≈1,000) and qualitative interviews. Contribution/Results: Systematic probing strategies significantly reduce erroneous submissions and strongly correlate with code correctness and final course grades. Students report enhanced critical thinking, metacognitive awareness, and authentic engineering experience. The approach enables low-cost, large-scale deployment, shifting programming education from solution-oriented problem-solving toward problem-definition competence.
📝 Abstract
Introductory programming courses often rely on small code-writing exercises that have clearly specified problem statements. This limits opportunities for students to practice clarifying ambiguous requirements, a critical skill in real-world programming. In addition, the emerging capabilities of large language models (LLMs) to produce code from well-defined specifications may harm student engagement with traditional programming exercises. This study explores the use of "Probeable Problems": automatically gradable tasks that have deliberately vague or incomplete specifications. Such problems require students to submit test inputs, or "probes", to clarify requirements before implementation. Through analysis of over 40,000 probes in an introductory course, we identify patterns linking probing behaviors to task success. Systematic strategies, such as thoroughly exploring expected behavior before coding, resulted in fewer incorrect code submissions and correlated with course success. Feedback from nearly 1,000 participants highlighted the challenges and real-world relevance of these tasks, as well as benefits to critical thinking and metacognitive skills. Probeable Problems are easy to set up and deploy at scale, and help students recognize and resolve uncertainties in programming problems.
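To make the probe mechanism concrete, here is a minimal, hypothetical sketch of how a probe loop might work. The task, the `probe` function, and the hidden reference solution are all illustrative assumptions (the paper implements this via CodeRunner, not the code below): the student sees only an underspecified prompt and the outputs their probes produce.

```python
# Hypothetical underspecified task: "Given a non-empty list of
# integers, return its middle value." The spec does not say how
# even-length lists are handled -- students must probe to find out.

def _hidden_reference(nums):
    """Instructor's secret solution (one possible resolution of the
    ambiguity): the lower of the two middle elements after sorting."""
    s = sorted(nums)
    return s[(len(s) - 1) // 2]

def probe(nums):
    """A student submits a test input and sees only the output."""
    return _hidden_reference(nums)

# A systematic student probes edge cases before writing any code:
print(probe([3, 1, 2]))     # odd length -> 2
print(probe([4, 1, 3, 2]))  # even length: which "middle"? -> 2
print(probe([5]))           # single element -> 5
```

The even-length probe is the informative one: it reveals that ties are resolved toward the lower middle element, a requirement the student could not infer from the prompt alone.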