🤖 AI Summary
This study investigates whether small language models (SLMs) can effectively replace large language models (LLMs) for classification tasks in requirements engineering, balancing performance, efficiency, and security. We systematically evaluate eight models (three LLMs and five SLMs) across three benchmark datasets (PROMISE, PROMISE Reclass, and SecReq), covering a 300× parameter range, multilingual settings, and multiple metrics (e.g., F1-score, recall). Results show that SLMs achieve comparable performance: their average F1-score is only 2% lower than that of the LLMs (p > 0.05, not statistically significant), and they even outperform LLMs in recall on PROMISE Reclass. Crucially, model performance is driven primarily by dataset characteristics rather than model scale. This work provides preliminary empirical evidence of SLMs' competitiveness in requirements classification, establishing them as viable, lightweight, controllable, and locally deployable alternatives for practical RE tooling.
📝 Abstract
[Context and motivation] Large language models (LLMs) show notable results in natural language processing (NLP) tasks for requirements engineering (RE). However, their use is limited by high computational cost, data-sharing risks, and dependence on external services. In contrast, small language models (SLMs) offer a lightweight, locally deployable alternative. [Question/problem] It remains unclear how well SLMs perform compared to LLMs on RE tasks in terms of accuracy. [Results] Our preliminary study compares eight models, three LLMs and five SLMs, on requirements classification tasks using the PROMISE, PROMISE Reclass, and SecReq datasets. Our results show that although LLMs achieve an average F1 score 2% higher than that of SLMs, the difference is not statistically significant. SLMs nearly match LLM performance across all datasets and even outperform them in recall on the PROMISE Reclass dataset, despite being up to 300 times smaller. We also find that dataset characteristics play a more significant role in performance than model size. [Contribution] Our study provides evidence that SLMs are a valid alternative to LLMs for requirements classification, offering advantages in privacy, cost, and local deployability.