🤖 AI Summary
This study addresses the fragmented state of research on software feature request (FR) analysis and processing through a systematic literature review (SLR) encompassing 131 studies published between 2010 and 2023. Guided by requirements engineering (RE) activities, we propose a novel, unified classification framework that systematically organizes core tasks, including FR classification, specification, verification, and quality assurance. Quantitative and qualitative analyses reveal critical challenges: high data noise, inconsistent annotation practices, and the absence of domain-specific benchmarks for large language models (LLMs) in FR processing. To address these, we curate and release a comprehensive list of publicly available FR datasets and tools. Key contributions include: (1) the first structured classification scheme covering the entire RE lifecycle for FRs; (2) an empirical assessment of LLMs' capabilities and limitations in FR understanding; and (3) a suite of reusable, open-source resources to advance both research and industrial practice in FR engineering.
📝 Abstract
Feature requests, which express users' wishes and demands, are submitted by users to propose new features or enhancements to existing features of software products. Satisfying these demands can benefit a product in terms of both competitiveness and user satisfaction. Interest in feature requests has risen in recent years, and the body of research has been growing steadily. However, the diversity of research topics calls for a collective analysis to identify challenges and opportunities and to promote future advances. In this work, following a defined process and a search protocol, we provide a systematic overview of the research area by searching for and categorizing relevant studies. We select and analyze 131 primary studies using descriptive statistics and qualitative analysis methods. We classify the studies into distinct topics and group them from the perspective of requirements engineering activities. We also survey open tools and datasets available for future research. In addition, we identify several key challenges and opportunities, such as: (1) ensuring the quality of feature requests, (2) improving their specification and validation, and (3) developing high-quality benchmarks for large language model-driven tasks.