🤖 AI Summary
Traditional subjective questionnaires for assessing user experience (UX) are prone to cognitive and response biases, limiting objectivity and ecological validity.
Method: This study proposes an objective, speech-based UX quantification framework. We extract acoustic features (e.g., RMS, zero-crossing rate), prosodic features (e.g., jitter, shimmer), and socio-linguistic features (e.g., vocal activity, engagement) to construct a multidimensional speech feature space. Statistical hypothesis testing and behavioral modeling are then applied to automatically discriminate between positive and neutral UX states.
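The feature-space construction described above can be illustrated with a minimal sketch. The helpers below are hypothetical names (the study's actual pipeline is not shown here): per-frame RMS energy and zero-crossing rate for the acoustic dimension, plus simple relative-perturbation formulas for jitter (period variability) and shimmer (amplitude variability), assuming pitch periods and peak amplitudes have already been extracted.

```python
import numpy as np

def frame_features(signal, frame_len=1024, hop=512):
    """Per-frame RMS energy and zero-crossing rate (illustrative sketch)."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))          # root-mean-square energy
        # fraction of adjacent sample pairs whose sign changes
        zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)
        feats.append((rms, zcr))
    return np.array(feats)

def jitter(periods):
    """Mean absolute difference of consecutive pitch periods, normalized."""
    p = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(p))) / np.mean(p)

def shimmer(amplitudes):
    """Mean absolute difference of consecutive peak amplitudes, normalized."""
    a = np.asarray(amplitudes, dtype=float)
    return np.mean(np.abs(np.diff(a))) / np.mean(a)
```

For a unit-amplitude sine wave, the per-frame RMS is close to 1/√2 ≈ 0.707, and a perfectly periodic voice (constant periods and amplitudes) yields zero jitter and shimmer; deviations from these baselines are what the multidimensional feature space captures.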
Contribution/Results: To our knowledge, this is the first systematic integration of multi-source speech features into UX evaluation—eliminating reliance on subjective Likert scales and establishing a non-intrusive, real-time, bias-robust paradigm. Experimental results demonstrate statistically significant separation of UX states (p < 0.01); RMS, shimmer, and vocal activity emerge as key discriminative indicators. We release a fully reproducible, open-source toolchain, advancing objective, data-driven UX assessment in human–computer interaction research.
📝 Abstract
User satisfaction plays a crucial role in user experience (UX) evaluation. Traditionally, UX is measured with subjective instruments such as questionnaires, but these evaluations are prone to subjective bias. In this paper, we explore acoustic and prosodic features of speech to differentiate between positive and neutral UX during interactive sessions. By analyzing speech features such as root-mean-square (RMS) energy, zero-crossing rate (ZCR), jitter, and shimmer, we identified significant differences between the positive and neutral user groups. In addition, social speech features such as activity and engagement also showed notable variation between these groups. Our findings underscore the potential of speech analysis as an objective and reliable tool for UX measurement, contributing to more robust and bias-resistant evaluation methodologies. This work offers a novel approach to integrating speech features into UX evaluation and opens avenues for further research in human–computer interaction (HCI).
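The group comparison behind the reported significance can be sketched with a two-sample permutation test on the difference of means, a distribution-free stand-in (the paper's exact statistical procedure is not specified in this summary): shuffle the positive/neutral labels many times and count how often the shuffled mean difference is at least as extreme as the observed one.

```python
import random
import statistics

def permutation_test(group_a, group_b, n_perm=10000, seed=0):
    """Two-sample permutation test on the difference of means.

    Returns a two-sided p-value: the fraction of label shufflings whose
    absolute mean difference is at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            hits += 1
    return hits / n_perm
```

Applied to a per-user feature such as mean RMS or shimmer, a p-value below the chosen threshold (the summary reports p < 0.01) indicates that the positive and neutral groups are unlikely to differ by chance.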