🤖 AI Summary
In agile development, requirements analysts struggle to transform complex stakeholder needs into high-quality user stories (US), and semantic quality assessment—particularly regarding clarity and consistency—remains heavily reliant on manual effort. Method: This study conducts the first systematic evaluation of ten large language models (LLMs) on two complementary tasks: automated US generation and semantic quality assessment. US are synthesized from simulated client interviews, and LLM outputs are rigorously compared against human-authored US using both qualitative and quantitative metrics, including coverage, stylistic adherence, lexical diversity, and semantic quality. Contribution/Results: LLMs achieve near-human performance in coverage and stylistic fidelity; although lexical diversity and compliance rates leave room for improvement, LLMs assess semantic quality reliably when given well-defined criteria, which can substantially reduce manual review overhead. The work provides empirical evidence and a methodological framework for integrating LLMs into agile requirements engineering.
📝 Abstract
Requirements elicitation remains one of the most challenging activities in the requirements engineering process due to the difficulty requirements analysts face in understanding and translating complex needs into concrete requirements. In addition, specifying high-quality requirements is crucial, as requirement quality directly impacts the quality of the software to be developed. Although automated tools can assess the syntactic quality of requirements, evaluating semantic metrics (e.g., language clarity, internal consistency) remains a manual and time-consuming activity. This paper explores how LLMs can help automate requirements elicitation within agile frameworks, where requirements are defined as user stories (US). We used 10 state-of-the-art LLMs to investigate their ability to generate US automatically by emulating customer interviews. We evaluated the quality of US generated by LLMs, comparing it with the quality of US generated by humans (domain experts and students). We also explored whether and how LLMs can be used to automatically evaluate the semantic quality of US. Our results indicate that LLMs can generate US similar to those of humans in terms of coverage and stylistic quality, but exhibit lower diversity and creativity. Although LLM-generated US are generally comparable in quality to those created by humans, they tend to meet the acceptance quality criteria less frequently, regardless of model scale. Finally, LLMs can reliably assess the semantic quality of US when provided with clear evaluation criteria, and they have the potential to reduce human effort in large-scale assessments.
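The stylistic quality the abstract refers to is commonly operationalized as conformance to the standard user-story template, "As a &lt;role&gt;, I want &lt;goal&gt;, so that &lt;benefit&gt;". A minimal sketch of such an automated template check is shown below; the regex and function name are illustrative assumptions, not the paper's actual tooling.

```python
import re

# Standard user-story (Connextra) template:
#   "As a <role>, I want <goal>, so that <benefit>."
# The pattern below is a deliberately permissive illustration of a
# syntactic/stylistic check, not the paper's implementation.
US_TEMPLATE = re.compile(
    r"^as an? .+?,\s*i (?:want|need) .+?,?\s*so that .+",
    re.IGNORECASE,
)

def follows_template(user_story: str) -> bool:
    """Return True if the story matches the 'As a ..., I want ..., so that ...' form."""
    return bool(US_TEMPLATE.match(user_story.strip()))

stories = [
    "As a customer, I want to track my order, so that I know when it will arrive.",
    "The system shall log all transactions.",  # classic 'shall' requirement, not a US
]
for story in stories:
    print(follows_template(story))
```

Checks like this cover only the syntactic side; as the abstract notes, semantic criteria such as clarity and internal consistency are exactly where manual effort (or an LLM-based evaluator) is still needed.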