Classification of Quality Characteristics in Online User Feedback using Linguistic Analysis, Crowdsourcing and LLMs

📅 2025-06-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of identifying software quality characteristics (e.g., usability, reliability) in mobile app user feedback when labeled data is scarce, this paper systematically investigates three low-resource classification approaches: linguistic pattern matching, crowdsourced micro-annotation (using a two-stage design), and large language model (LLM) prompting. It presents the first comparative evaluation of these methods in a fine-grained multi-class setting: crowdsourcing achieves an average accuracy of 0.72 in its second phase; the best LLM prompting condition reaches 0.66; and a majority vote over LLM predictions reaches 0.68. Linguistic patterns show highly variable precision (0.38-0.92) and low recall, offering only limited potential. Crowdsourcing and LLMs, by contrast, achieve accurate classification without expert-labeled data and may also support the efficient construction of high-quality training corpora. The core contribution is the empirical validation of a scalable, cost-effective classification paradigm for low-resource software quality analysis.
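The majority-vote ensemble mentioned above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the prompt conditions, labels, and tie-breaking rule are assumptions for the example.

```python
from collections import Counter

def majority_vote(predictions):
    """Return the most frequent label among per-condition LLM predictions.

    `predictions` is a list of labels, one per prompt condition.
    Ties fall back to the label seen first; a real pipeline would
    need an explicit tie-breaking policy.
    """
    return Counter(predictions).most_common(1)[0][0]

# Example: three hypothetical prompt conditions label the same review.
votes = ["usability", "usability", "reliability"]
print(majority_vote(votes))  # usability
```

In the paper's evaluation, aggregating LLM outputs this way (0.68) slightly outperformed the single best prompting condition (0.66).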

📝 Abstract
Software qualities such as usability or reliability are among the strongest determinants of mobile app user satisfaction and constitute a significant portion of online user feedback on software products, making this feedback a valuable source of quality-related information to guide the development process. The abundance of online user feedback warrants the automated identification of quality characteristics, but the heterogeneity of this feedback and the lack of appropriate training corpora limit the applicability of supervised machine learning. We therefore investigate the viability of three approaches that could be effective in low-data settings: language patterns (LPs) based on quality-related keywords, instructions for crowdsourced micro-tasks, and large language model (LLM) prompts. We determined the feasibility of each approach and then compared their accuracy. For the complex multiclass classification of quality characteristics, the LP-based approach achieved varied precision (0.38-0.92) depending on the quality characteristic, and low recall; crowdsourcing achieved the best average accuracy in two consecutive phases (0.63, 0.72), which could be matched by the best-performing LLM condition (0.66) and a prediction based on the LLMs' majority vote (0.68). Our findings show that in this low-data setting, the two approaches that use crowdsourcing or LLMs instead of involving experts achieve accurate classifications, while the LP-based approach has only limited potential. The promise of crowdsourcing and LLMs in this context might even extend to building training corpora.
Problem

Research questions and friction points this paper is trying to address.

Automated identification of quality characteristics in online user feedback
Overcoming heterogeneity and lack of training data for classification
Comparing effectiveness of linguistic patterns, crowdsourcing, and LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Language patterns for quality keyword identification
Crowdsourced micro-tasks for accurate classification
LLM prompts matching crowdsourcing accuracy
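The LP-based approach above can be illustrated with simple keyword matching. The patterns below are invented for illustration and are not the paper's actual language patterns; the sketch also hints at why such patterns yield high precision but low recall (they miss any phrasing they do not anticipate).

```python
import re

# Hypothetical keyword patterns per quality characteristic (assumed,
# not taken from the paper).
PATTERNS = {
    "usability": re.compile(
        r"\b(easy to use|intuitive|confusing|user[- ]friendly)\b", re.IGNORECASE
    ),
    "reliability": re.compile(
        r"\b(crash(es|ed)?|freez(es|ing)|unstable)\b", re.IGNORECASE
    ),
}

def match_qualities(feedback):
    """Return the quality characteristics whose pattern matches the text."""
    return {q for q, pat in PATTERNS.items() if pat.search(feedback)}

print(sorted(match_qualities("The app is intuitive but crashes on startup.")))
# ['reliability', 'usability']
```

A review phrased as "the app stops working constantly" would match nothing here, which mirrors the low recall the paper reports for language patterns.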