Automated Non-Functional Requirements Generation in Software Engineering with Large Language Models: A Comparative Study

📅 2025-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Non-functional requirements (NFRs) are frequently missing or difficult to identify early in software engineering, particularly from functional requirements (FRs). Method: This paper proposes the first quality-attribute-driven, large language model (LLM)-assisted NFR generation framework. It integrates customized prompt engineering with a Deno pipeline and strictly aligns with the ISO/IEC 25010:2023 standard. The framework supports collaborative NFR generation across eight state-of-the-art LLMs (e.g., Gemini-1.5-Pro, Llama-3.3-70B). Contribution/Results: We conduct the first multi-LLM comparative evaluation, generating 1,593 NFRs from 34 FRs. Expert assessment yields average scores of 4.63/5.0 for NFR validity and 4.59/5.0 for attribute appropriateness, with 80.4% accuracy in quality-attribute classification—demonstrating the feasibility and practicality of automating high-quality NFR derivation in requirements engineering.

📝 Abstract
Neglecting non-functional requirements (NFRs) early in software development can lead to critical challenges. Despite their importance, NFRs are often overlooked or difficult to identify, impacting software quality. To support requirements engineers in eliciting NFRs, we developed a framework that leverages Large Language Models (LLMs) to derive quality-driven NFRs from functional requirements (FRs). Using a custom prompting technique within a Deno-based pipeline, the system identifies relevant quality attributes for each functional requirement and generates corresponding NFRs, aiding systematic integration. A crucial aspect is evaluating the quality and suitability of these generated requirements. Can LLMs produce high-quality NFR suggestions? Using 34 functional requirements (selected as a representative subset of 3,964 FRs), the LLMs inferred applicable attributes based on the ISO/IEC 25010:2023 standard, generating 1,593 NFRs. A horizontal evaluation covered three dimensions: NFR validity, applicability of quality attributes, and classification precision. Ten industry software quality evaluators, averaging 13 years of experience, assessed a subset for relevance and quality. The evaluation showed strong alignment between LLM-generated NFRs and expert assessments, with median validity and applicability scores of 5.0 (means: 4.63 and 4.59, respectively) on a 1-5 scale. In the classification task, 80.4% of LLM-assigned attributes matched expert choices, with 8.3% near misses and 11.3% mismatches. A comparative analysis of eight LLMs highlighted variations in performance, with gemini-1.5-pro exhibiting the highest attribute accuracy, while llama-3.3-70B achieved higher validity and applicability scores. These findings provide insights into the feasibility of using LLMs for automated NFR generation and lay the foundation for further exploration of AI-assisted requirements engineering.
Problem

Research questions and friction points this paper is trying to address.

Automated generation of non-functional requirements using Large Language Models.
Evaluation of LLM-generated NFRs for validity, applicability, and classification precision.
Comparative study of eight LLMs for NFR generation performance.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages LLMs for NFR generation from FRs
Uses custom prompting in Deno-based pipeline
Evaluates NFR quality against the ISO/IEC 25010:2023 standard
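The two-step idea the paper describes (first select applicable quality attributes for each FR, then derive an NFR per attribute) can be sketched as a prompt builder in TypeScript, the language of a Deno pipeline. This is an illustrative assumption, not the authors' actual prompt or code; the function name, prompt wording, and JSON response shape are hypothetical. The attribute names follow the 2023 revision of ISO/IEC 25010.

```typescript
// Sketch (not the authors' implementation): builds a quality-attribute-driven
// prompt asking an LLM to pick applicable ISO/IEC 25010:2023 characteristics
// for one functional requirement and derive a measurable NFR for each.
const ISO_25010_ATTRIBUTES: readonly string[] = [
  "Functional Suitability", "Performance Efficiency", "Compatibility",
  "Interaction Capability", "Reliability", "Security",
  "Maintainability", "Flexibility", "Safety",
];

// Hypothetical helper: one prompt per FR, sent to each of the eight LLMs.
function buildNfrPrompt(functionalRequirement: string): string {
  return [
    "You are a requirements engineer.",
    `Functional requirement: "${functionalRequirement}"`,
    "1. From the ISO/IEC 25010:2023 characteristics below, list those applicable to this requirement.",
    "2. For each applicable characteristic, derive one measurable non-functional requirement.",
    "Characteristics: " + ISO_25010_ATTRIBUTES.join(", "),
    'Respond as JSON: [{"attribute": "...", "nfr": "..."}]',
  ].join("\n");
}
```

In a real pipeline the same prompt would be dispatched to each model (e.g. gemini-1.5-pro, llama-3.3-70B) and the JSON responses pooled for expert evaluation.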
Jomar Thomas Almonte
Engineering, University Park, The Pennsylvania State University, Pennsylvania, USA
Santhosh Anitha Boominathan
Engineering, Great Valley, The Pennsylvania State University, Pennsylvania, USA
Nathalia Nascimento
Assistant Professor, Penn State University
Software Engineering · Artificial Intelligence · Internet of Things · LLM agent · E-nose