What Should We Engineer in Prompts? Training Humans in Requirement-Driven LLM Use

📅 2024-09-13
🏛️ ACM Transactions on Computer-Human Interaction
📈 Citations: 1
Influential: 0
🤖 AI Summary
Current prompt engineering instruction overemphasizes increasingly automatable prompting strategies (e.g., role-play, chain-of-thought) while neglecting users' ability to articulate clear, customized requirements, resulting in low-quality prompts for complex tasks. Method: This paper introduces Requirement-Oriented Prompt Engineering (ROPE), a paradigm that makes *requirement quality* the core training objective. ROPE is implemented as a human-centered assessment and training suite that combines structured training tasks with LLM-generated real-time feedback to give learners deliberate practice in requirement formulation. Contribution/Results: The authors demonstrate a direct correlation between input requirement quality and LLM output quality. A randomized controlled experiment with 30 novices shows that ROPE improves task performance by 20%, significantly outperforming conventional prompt engineering training (+1%), a gap that automatic prompt optimization alone cannot close. The result is a scalable, pedagogically grounded toolkit for effective prompt authoring.

📝 Abstract
Prompting LLMs for complex tasks (e.g., building a trip advisor chatbot) needs humans to clearly articulate customized requirements (e.g., “start the response with a tl;dr”). However, existing prompt engineering instructions often lack focused training on requirement articulation and instead tend to emphasize increasingly automatable strategies (e.g., tricks like adding role-plays and “think step-by-step”). To address the gap, we introduce Requirement-Oriented Prompt Engineering (ROPE), a paradigm that focuses human attention on generating clear, complete requirements during prompting. We implement ROPE through an assessment and training suite that provides deliberate practice with LLM-generated feedback. In a randomized controlled experiment with 30 novices, ROPE significantly outperforms conventional prompt engineering training (20% vs. 1% gains), a gap that automatic prompt optimization cannot close. Furthermore, we demonstrate a direct correlation between the quality of input requirements and LLM outputs. Our work paves the way to empower more end-users to build complex LLM applications.
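The contrast the abstract draws can be made concrete. Below is a minimal illustrative sketch (not the paper's actual toolkit; all function names are hypothetical) comparing a conventional trick-based prompt with a requirement-oriented prompt for the paper's trip-advisor example:

```python
def trick_based_prompt(task: str) -> str:
    """Conventional prompt engineering: bolt generic tricks onto the task."""
    return f"You are an expert travel agent. {task} Think step by step."


def requirement_oriented_prompt(task: str, requirements: list[str]) -> str:
    """ROPE-style prompting: state clear, complete, customized requirements."""
    reqs = "\n".join(f"- {r}" for r in requirements)
    return f"{task}\nRequirements:\n{reqs}"


# The requirements below are illustrative; the first echoes the
# abstract's example ("start the response with a tl;dr").
prompt = requirement_oriented_prompt(
    "Respond as a trip advisor chatbot.",
    [
        "Start the response with a tl;dr.",
        "Recommend at most three destinations.",
        "Ask one clarifying question if the budget is unspecified.",
    ],
)
print(prompt)
```

The requirement-oriented version spells out verifiable constraints on the output, which is exactly the articulation skill ROPE trains, whereas the trick-based version adds strategies that automatic prompt optimizers can supply on their own.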
Problem

Research questions and friction points this paper is trying to address.

Training humans to articulate clear requirements for LLM prompts
Addressing lack of focus on requirement articulation in prompt engineering
Improving LLM output quality through better input requirements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Requirement-Oriented Prompt Engineering (ROPE)
Provides assessment and training suite with feedback
Focuses on clear, complete requirement articulation