From Online User Feedback to Requirements: Evaluating Large Language Models for Classification and Specification Tasks

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Context: Online user feedback is highly noisy, and the application of large language models (LLMs) in requirements engineering (RE) lacks empirical, reproducible evaluation. Method: This paper presents the first systematic assessment of five lightweight open-source LLMs across three core RE tasks on two real-world user feedback datasets: user request classification, non-functional requirement (NFR) classification, and requirement specification generation. Classification performance is quantified via F1-score; generation quality is evaluated through human scoring on a 5-point scale. Results: F1-scores range from 0.47 to 0.68 across the classification tasks; the average human score for generated specifications is 3.0/5. Contribution: We release the first reproducible experimental package for lightweight LLMs in RE, empirically delineating their capabilities and limitations, identifying concrete optimization pathways, and demonstrating their feasibility and practical utility for RE in resource-constrained environments.
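
The summary above reports F1-scores for the classification tasks. As a point of reference, the sketch below shows how a macro-averaged F1 is typically computed with scikit-learn; the label set and predictions are hypothetical placeholders, not data from the paper's replication package.

```python
# Minimal sketch of the classification evaluation described above.
# Requires scikit-learn. The labels and predictions below are hypothetical
# placeholders, not data from the paper's replication package.
from sklearn.metrics import f1_score

# Hypothetical gold labels and LLM predictions for user request classification.
y_true = ["bug", "feature", "bug", "other", "feature", "bug"]
y_pred = ["bug", "feature", "other", "other", "bug", "bug"]

# Macro-averaged F1 weights every class equally, which matters for
# imbalanced feedback datasets where one class (often "other") dominates.
macro_f1 = f1_score(y_true, y_pred, average="macro")
print(f"Macro F1: {macro_f1:.2f}")
```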

📝 Abstract
[Context and Motivation] Online user feedback provides valuable information to support requirements engineering (RE). However, analyzing online user feedback is challenging due to its large volume and noise. Large language models (LLMs) show strong potential to automate this process and outperform previous techniques. They can also enable new tasks, such as generating requirements specifications. [Question/Problem] Despite their potential, the use of LLMs to analyze user feedback for RE remains underexplored. Existing studies offer limited empirical evidence, lack thorough evaluation, and rarely provide replication packages, undermining validity and reproducibility. [Principal Ideas/Results] We evaluate five lightweight open-source LLMs on three RE tasks: user request classification, non-functional requirement (NFR) classification, and requirements specification generation. Classification performance was measured on two feedback datasets, and specification quality via human evaluation. LLMs achieved moderate-to-high classification accuracy (F1 ≈ 0.47–0.68) and moderately high specification quality (mean ≈ 3/5). [Contributions] We newly explore lightweight LLMs for feedback-driven requirements development. Our contributions are: (i) an empirical evaluation of lightweight LLMs on three RE tasks, (ii) a replication package, and (iii) insights into their capabilities and limitations for RE.
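
The abstract does not detail the prompting setup, but one common way to approach the two classification tasks with an off-the-shelf model is zero-shot classification. The sketch below uses the Hugging Face transformers pipeline; the model choice and the candidate label taxonomy are illustrative assumptions, not the paper's configuration.

```python
# Illustrative zero-shot classification of a user review; one plausible
# setup, not necessarily the paper's models or prompt design.
from transformers import pipeline

# "facebook/bart-large-mnli" is a common default for zero-shot
# classification; it stands in here for whichever lightweight model is used.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

feedback = "The app crashes every time I upload a photo from the gallery."
# Hypothetical label taxonomy for the user request classification task.
labels = ["bug report", "feature request", "usability issue", "other"]

result = classifier(feedback, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 2))  # top category
```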
Problem

Research questions and friction points this paper is trying to address.

Evaluating lightweight LLMs for requirements classification tasks
Assessing LLM performance on user feedback analysis automation
Measuring specification generation quality from online user feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

Releasing a reproducible replication package for lightweight LLMs in RE
Generating requirements specifications using LLMs from feedback (see the sketch after this list)
Assessing LLM performance via human evaluation and datasets
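
For the specification-generation item above, the sketch below turns a piece of feedback into a "The system shall ..." statement using a small instruction-tuned model served via Hugging Face transformers. The model name, prompt template, and output format are illustrative assumptions, not the paper's method.

```python
# Hypothetical feedback-to-specification sketch. The model, prompt template,
# and "The system shall ..." output format are illustrative assumptions.
from transformers import pipeline

# A small instruction-tuned model standing in for the paper's lightweight LLMs.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

feedback = "Please let me export my notes as PDF; right now I have to copy-paste."
prompt = (
    "Rewrite the following app review as a single requirement specification "
    "starting with 'The system shall'.\n"
    f"Review: {feedback}\n"
    "Requirement:"
)

# return_full_text=False drops the echoed prompt from the pipeline output.
out = generator(prompt, max_new_tokens=60, do_sample=False, return_full_text=False)
print(out[0]["generated_text"].strip())
```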
Manjeshwar Aniruddh Mallya
Lero, the Research Ireland Centre for Software, University of Limerick, Ireland
Alessio Ferrari
Lecturer, UCD; Senior Research Scientist, ISTI CNR
Natural Language Processing · Requirements Engineering · Requirements Elicitation · Formal Methods
Mohammad Amin Zadenoori
University of Padova, Italy
Jacek Dąbrowski
Lero, the Research Ireland Centre for Software, University of Limerick, Ireland