Interleaving Natural Language Prompting with Code Editing for Solving Programming Tasks with Generative AI Models

📅 2025-09-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how learners dynamically coordinate natural language prompting and manual code editing during AI-assisted programming, and how this coordination evolves with task complexity and learner proficiency. Drawing on large-scale empirical data—fine-grained interaction logs and reflective textual responses—the authors employ a mixed-methods approach combining quantitative behavioral analysis with qualitative thematic coding. The findings reveal that: (1) manual editing primarily serves as "last-mile" refinement of AI-generated code, with editing frequency increasing significantly on high-complexity tasks; (2) prompting and editing exhibit proficiency-dependent complementarity—high-performing learners rely more on prompt-driven solution generation, whereas low-performing learners make frequent, fine-grained (often single-line) edits; and (3) prompting and editing jointly regulate cognitive load and distinguish learners' debugging strategies. Together, these results provide an empirical foundation for designing effective AI-augmented programming pedagogy.

📝 Abstract
Computing students increasingly rely on both natural-language prompting and manual code editing to solve programming tasks. Yet we still lack a clear understanding of how these two modes are combined in practice, and how their usage varies with task complexity and student ability. In this paper, we investigate this through a large-scale study in an introductory programming course, collecting 13,305 interactions from 355 students during a three-day laboratory activity. Our analysis shows that students primarily use prompting to generate initial solutions, then often enter short edit-run loops to refine their code after a failed execution. Manual editing becomes more frequent as task complexity increases, but most edits remain concise, with many affecting a single line of code. Higher-performing students tend to succeed using prompting alone, while lower-performing students rely more on edits. Student reflections confirm that prompting helps structure solutions, editing is effective for targeted corrections, and both support learning. These findings highlight the role of manual editing as a deliberate last-mile repair strategy, complementing prompting in AI-assisted programming workflows.
Problem

Research questions and friction points this paper is trying to address.

Analyzing how students combine natural language prompting and code editing
Investigating how task complexity and student ability affect prompting and editing usage
Understanding manual editing as a last-mile repair strategy in AI-assisted programming
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interleaving natural language prompting with code editing
Using short edit-run loops for code refinement
Applying manual editing as last-mile repair strategy