Understanding Prompt Programming Tasks and Questions

📅 2025-07-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Developers face significant challenges in prompt programming due to inadequate tool support for core tasks and recurring questions. Method: A mixed-methods study (interviews with 16 prompt programmers, observations of 8 developers making prompt changes, and a survey of 50 developers) produced the first comprehensive taxonomy of prompt programming, comprising 25 tasks and 51 associated questions, with importance measured for each. The taxonomy was then compared against 48 research and commercial tools. Results: All identified tasks are done manually, and 16 of the 51 questions (roughly 31%), including a majority of the most important ones, remain unanswered by current tools. Contributions: (1) the first taxonomy explicitly mapping prompt programming tasks to the questions programmers ask; (2) empirical evidence of critical gaps in the current tooling ecosystem; and (3) concrete opportunities for designing next-generation prompt programming tools.

📝 Abstract
Prompting foundation models (FMs) like large language models (LLMs) have enabled new AI-powered software features (e.g., text summarization) that previously were only possible by fine-tuning FMs. Now, developers are embedding prompts in software, known as prompt programs. The process of prompt programming requires the developer to make many changes to their prompt. Yet, the questions developers ask when updating their prompts are unknown, despite the answers to these questions affecting how developers plan their changes. With the growing number of research and commercial prompt programming tools, it is unclear whether prompt programmers' needs are being adequately addressed. We address these challenges by developing a taxonomy of 25 tasks prompt programmers do and 51 questions they ask, measuring the importance of each task and question. We interview 16 prompt programmers, observe 8 developers make prompt changes, and survey 50 developers. We then compare the taxonomy with 48 research and commercial tools. We find that prompt programming is not well-supported: all tasks are done manually, and 16 of the 51 questions -- including a majority of the most important ones -- remain unanswered. Based on this, we outline important opportunities for prompt programming tools.
Problem

Research questions and friction points this paper is trying to address.

Identify tasks and questions in prompt programming
Assess support gaps in current prompt programming tools
Outline opportunities for improving prompt programming tools
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed taxonomy of 25 prompt programming tasks
Identified 51 key questions prompt programmers ask
Compared the taxonomy against 48 research and commercial tools