USeR: A Web-based User Story eReviewer for Assisted Quality Optimizations

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
In agile development, user stories often exhibit inconsistent quality, and standardized, quantitative evaluation tools are lacking. Method: This paper proposes and implements USeR, a web-based review tool built by collecting 77 candidate quality metrics, refining them to 34 applicable metrics, and deriving interpretable, lightweight algorithms for eight prioritized metrics. USeR combines rule-based (plain code) and machine learning techniques, delivering instant, consistent, and explainable quality feedback via a RESTful API and user interface. Contribution/Results: USeR was empirically evaluated on 100 user stories with four domain experts from two real-world agile projects in the automotive and health sectors, yielding an empirically grounded, deployable approach to user story quality assurance that directly supports Product Owners and other agile stakeholders.

📝 Abstract
User stories are widely applied for conveying requirements within agile software development teams. Multiple user story quality guidelines exist, but authors like Product Owners in industry projects frequently fail to write high-quality user stories. This situation is exacerbated by the lack of tools for assessing user story quality. In this paper, we propose User Story eReviewer (USeR), a web-based tool that allows authors to determine and optimize user story quality. For developing USeR, we collected 77 potential quality metrics through a literature review, practitioner sessions, and research group meetings, and refined these to 34 applicable metrics through expert sessions. Finally, we derived algorithms for eight prioritized metrics using a literature review and research group meetings and implemented them with plain code and machine learning techniques. USeR offers a RESTful API and user interface for instant, consistent, and explainable user feedback, supporting fast and easy quality optimizations. It has been empirically evaluated in an expert study using 100 user stories and four experts from two real-world agile software projects in the automotive and health sectors.
Problem

Research questions and friction points this paper is trying to address.

Lack of tools for assessing user story quality in agile development.
Need for instant, consistent, and explainable feedback on user stories.
Challenges in writing high-quality user stories by Product Owners.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Web-based tool for user story quality assessment
Combines plain code and machine learning techniques
Provides RESTful API and user interface for feedback
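The abstract does not detail the individual metric algorithms, but as an illustration of what a "plain code" (rule-based) quality metric could look like, the sketch below checks whether a story follows the common Connextra template ("As a <role>, I want <goal>, so that <benefit>"). This is a hypothetical example, not USeR's actual implementation:

```python
import re

# Hypothetical rule-based metric: template conformance.
# The Connextra pattern "As a <role>, I want <goal>, so that <benefit>."
# The ", so that" clause is treated as optional here.
TEMPLATE = re.compile(
    r"^as an? .+?, i want .+?(?:, so that .+)?[.]?$",
    re.IGNORECASE,
)

def follows_template(story: str) -> bool:
    """Return True if the story matches the user story template."""
    return bool(TEMPLATE.match(story.strip()))

print(follows_template(
    "As a driver, I want to see my route, so that I can plan ahead."
))  # True
print(follows_template("Show the route on the map."))  # False
```

A real tool would combine several such checks (and, per the paper, machine learning models for metrics that resist simple rules) and expose the results through a REST endpoint so that editors can receive feedback while writing.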