Supporting Stakeholder Requirements Expression with LLM Revisions: An Empirical Evaluation

📅 2026-01-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the challenge that stakeholders often struggle to articulate their true requirements accurately due to limited domain knowledge or cognitive biases, leading to misalignment between stated needs and underlying intentions. To bridge this gap, the authors propose a user-centered approach that leverages large language models (LLMs) to contextually rewrite initial requirements, followed by an iterative human-in-the-loop feedback mechanism for validation and refinement. As the first empirical investigation of its kind, the work demonstrates the efficacy of LLMs as assistive tools in requirements elicitation. In an evaluation involving 130 requirements from 26 participants, LLM-rewritten versions significantly outperformed original statements in intent alignment, readability, logical coherence, and unambiguity, while also uncovering latent requirement details, thereby enhancing both the accuracy and completeness of the requirements gathering process.

📝 Abstract
Stakeholders often struggle to accurately express their requirements due to articulation barriers arising from limited domain knowledge or from cognitive constraints. This can cause misalignment between expressed and intended requirements, complicating elicitation and validation. Traditional elicitation techniques, such as interviews and follow-up sessions, are time-consuming and risk distorting stakeholders' original intent across iterations. Large Language Models (LLMs) can infer user intentions from context, suggesting potential for assisting stakeholders in expressing their needs. This raises the questions of (i) how effectively LLMs can support requirement expression and (ii) whether such support benefits stakeholders with limited domain expertise. We conducted a study with 26 participants who produced 130 requirement statements. Each participant first expressed requirements unaided, then evaluated LLM-generated revisions tailored to their context. Participants rated LLM revisions significantly higher than their original statements across all dimensions: alignment with intent, readability, reasoning, and unambiguity. Qualitative feedback further showed that LLM revisions often surfaced tacit details stakeholders considered important and helped them better understand their own requirements. We present and evaluate a stakeholder-centered approach that leverages LLMs as articulation aids in requirements elicitation and validation. Our results show that LLM-assisted reformulation improves perceived completeness, clarity, and alignment of requirements. By keeping stakeholders in the validation loop, this approach promotes responsible and trustworthy use of AI in Requirements Engineering.
Problem

Research questions and friction points this paper is trying to address.

requirements elicitation
stakeholder articulation
requirement expression
cognitive constraints
domain knowledge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Requirements Engineering
Stakeholder Elicitation
Requirement Articulation
Human-AI Collaboration
Michael Mircea
Leibniz Universität, Welfengarten 1, 30167 Hanover, Germany
Emre Gevrek
Leibniz Universität, Welfengarten 1, 30167 Hanover, Germany
Elisa Schmid
Leibniz Universität, Welfengarten 1, 30167 Hanover, Germany
Kurt Schneider
Leibniz Universität Hannover