📝 Abstract
[Context and Motivation] Online user feedback provides valuable information to support requirements engineering (RE). However, analyzing online user feedback is challenging due to its large volume and noise. Large language models (LLMs) show strong potential to automate this process and outperform previous techniques. They can also enable new tasks, such as generating requirements specifications.
[Question-Problem] Despite their potential, the use of LLMs to analyze user feedback for RE remains underexplored. Existing studies offer limited empirical evidence, lack thorough evaluation, and rarely provide replication packages, undermining validity and reproducibility.
[Principal Idea-Results] We evaluate five lightweight open-source LLMs on three RE tasks: user request classification, non-functional requirement (NFR) classification, and requirements specification generation. Classification performance is measured on two real-world user feedback datasets, and specification quality through human evaluation on a 5-point scale. The LLMs achieve moderate classification performance (F1 ≈ 0.47–0.68) and moderate specification quality (mean ≈ 3/5).
[Contributions] We present an early exploration of lightweight LLMs for feedback-driven requirements development. Our contributions are: (i) an empirical evaluation of lightweight LLMs on three RE tasks, (ii) a publicly available replication package, and (iii) insights into the capabilities and limitations of lightweight LLMs for RE.
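As an aside, the F1-based classification evaluation described above can be reproduced with a minimal sketch. The labels below (bug report vs. feature request) and the macro-averaging choice are illustrative assumptions, not the paper's actual data or protocol:

```python
def f1_per_class(y_true, y_pred, label):
    """Compute F1 for one class: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == label and t != label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def macro_f1(y_true, y_pred):
    """Average per-class F1 over all observed labels (macro averaging)."""
    labels = sorted(set(y_true) | set(y_pred))
    return sum(f1_per_class(y_true, y_pred, lbl) for lbl in labels) / len(labels)

# Toy gold labels and model predictions, purely for illustration.
y_true = ["bug", "feature", "bug", "feature", "bug"]
y_pred = ["bug", "bug", "bug", "feature", "feature"]
print(round(macro_f1(y_true, y_pred), 2))  # → 0.58
```

In practice one would likely use `sklearn.metrics.f1_score` with an explicit `average` setting, since whether F1 is macro-, micro-, or weighted-averaged materially affects scores on the imbalanced label distributions typical of user feedback.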