🤖 AI Summary
Problem: Non-functional requirements (NFRs) are frequently missing or difficult to identify early in software development, and deriving them systematically from functional requirements (FRs) remains a challenge.
Method: This paper proposes the first quality-attribute-driven, large language model (LLM)-assisted NFR generation framework. It combines customized prompt engineering with a Deno-based pipeline and aligns the derived requirements with the ISO/IEC 25010:2023 quality model. The framework supports NFR generation across eight state-of-the-art LLMs (e.g., Gemini-1.5-Pro, Llama-3.3-70B); a minimal sketch of the prompting step follows this summary.
Contribution/Results: We conduct the first multi-LLM comparative evaluation, generating 1,593 NFRs from 34 FRs. Expert assessment yields average scores of 4.63/5.0 for NFR validity and 4.59/5.0 for attribute appropriateness, with 80.4% accuracy in quality-attribute classification, demonstrating the feasibility and practicality of automating high-quality NFR derivation in requirements engineering.
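The paper's pipeline code is not reproduced here; the following is a minimal Deno (TypeScript) sketch of the quality-attribute-driven prompting step it describes. The endpoint URL, prompt wording, and helper names are assumptions for illustration, not the authors' implementation.

```ts
// Illustrative sketch only: prompt wording, endpoint, and helper names below
// are assumptions, not the authors' published code.

// The nine product quality characteristics of ISO/IEC 25010:2023.
const QUALITY_ATTRIBUTES = [
  "Functional Suitability", "Performance Efficiency", "Compatibility",
  "Interaction Capability", "Reliability", "Security",
  "Maintainability", "Flexibility", "Safety",
] as const;

type QualityAttribute = (typeof QUALITY_ATTRIBUTES)[number];

interface GeneratedNfr {
  attribute: QualityAttribute;
  text: string;
}

// Build a quality-attribute-driven prompt for a single functional requirement.
function buildPrompt(fr: string): string {
  return [
    "You are a requirements engineer.",
    `Functional requirement: "${fr}"`,
    `1. Select the applicable quality attributes from: ${QUALITY_ATTRIBUTES.join(", ")}.`,
    "2. For each selected attribute, write one verifiable non-functional requirement.",
    'Answer as JSON: [{"attribute": "...", "text": "..."}]',
  ].join("\n");
}

// Query one of the evaluated LLMs through a generic chat-completions endpoint
// (placeholder URL; substitute the provider's actual API).
async function deriveNfrs(fr: string, model: string): Promise<GeneratedNfr[]> {
  const res = await fetch("https://api.example.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${Deno.env.get("LLM_API_KEY")}`,
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: buildPrompt(fr) }],
    }),
  });
  const data = await res.json();
  return JSON.parse(data.choices[0].message.content) as GeneratedNfr[];
}

// Example usage with one of the eight evaluated models.
const nfrs = await deriveNfrs(
  "The system shall allow users to reset their password via email.",
  "llama-3.3-70b",
);
console.log(nfrs);
```

In the framework as described, a step like this would run once per FR and per model, producing the pool of candidate NFRs that the multi-LLM comparison then evaluates.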
📝 Abstract
Neglecting non-functional requirements (NFRs) early in software development can lead to critical challenges. Despite their importance, NFRs are often overlooked or difficult to identify, impacting software quality. To support requirements engineers in eliciting NFRs, we developed a framework that leverages Large Language Models (LLMs) to derive quality-driven NFRs from functional requirements (FRs). Using a custom prompting technique within a Deno-based pipeline, the system identifies relevant quality attributes for each functional requirement and generates corresponding NFRs, aiding systematic integration. A crucial aspect is evaluating the quality and suitability of these generated requirements: can LLMs produce high-quality NFR suggestions? Using 34 functional requirements, selected as a representative subset of 3,964 FRs, the LLMs inferred applicable quality attributes based on the ISO/IEC 25010:2023 standard and generated 1,593 NFRs. A horizontal evaluation covered three dimensions: NFR validity, applicability of quality attributes, and classification precision. Ten industry software quality evaluators, averaging 13 years of experience, assessed a subset for relevance and quality. The evaluation showed strong alignment between LLM-generated NFRs and expert assessments, with median validity and applicability scores of 5.0 (means: 4.63 and 4.59, respectively) on a 1-5 scale. In the classification task, 80.4% of LLM-assigned attributes matched expert choices, with 8.3% near misses and 11.3% mismatches. A comparative analysis of eight LLMs highlighted variations in performance: gemini-1.5-pro exhibited the highest attribute accuracy, while llama-3.3-70B achieved higher validity and applicability scores. These findings provide insight into the feasibility of using LLMs for automated NFR generation and lay the foundation for further exploration of AI-assisted requirements engineering.
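For concreteness, the three-way classification outcome reported above (match, near miss, mismatch) could be tallied as sketched below. The abstract does not define the near-miss rubric, so the attribute-adjacency pairs here are hypothetical stand-ins.

```ts
// Sketch of the classification-agreement tally. The paper's near-miss rubric
// is not given in the abstract; NEAR_MISSES below is a hypothetical stand-in.

type Verdict = "match" | "nearMiss" | "mismatch";

// Hypothetical attribute pairs treated as semantically adjacent ("near misses").
const NEAR_MISSES = new Set([
  "Reliability|Safety",
  "Interaction Capability|Functional Suitability",
]);

function judge(llm: string, expert: string): Verdict {
  if (llm === expert) return "match";
  if (NEAR_MISSES.has(`${llm}|${expert}`) || NEAR_MISSES.has(`${expert}|${llm}`)) {
    return "nearMiss";
  }
  return "mismatch";
}

// Aggregate rates over (LLM-assigned, expert-chosen) attribute pairs.
function agreementRates(pairs: [string, string][]): Record<Verdict, number> {
  const counts: Record<Verdict, number> = { match: 0, nearMiss: 0, mismatch: 0 };
  for (const [llm, expert] of pairs) counts[judge(llm, expert)]++;
  const total = pairs.length || 1;
  return {
    match: counts.match / total,
    nearMiss: counts.nearMiss / total,
    mismatch: counts.mismatch / total,
  };
}

console.log(agreementRates([
  ["Security", "Security"],                 // match
  ["Reliability", "Safety"],                // near miss (per the hypothetical rubric)
  ["Performance Efficiency", "Security"],   // mismatch
]));
```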