🤖 AI Summary
Reliably relating objective performance metrics to Quality of Experience (QoE) in Mission-Critical Voice (MCV) communication systems remains challenging under realistic public safety scenarios: laboratory environments struggle to reproduce real-world conditions, and human-subject studies do not scale.
Method: We build a high-fidelity emulation framework that integrates the NIST PSCR testbed with Amazon Mechanical Turk, and deploy ASR-based robots to approximate human transcribers in speech transcription tasks. A quantifiable QoE metric is proposed, based on the Levenshtein distance between reference and transcribed utterances.
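As a concrete illustration, a minimal sketch of such a metric might normalize the character-level Levenshtein distance by utterance length, so that 1.0 corresponds to a perfect transcript. The function names, the example strings, and the normalization choice below are our own assumptions, not the paper's exact definition.

```python
def levenshtein(ref: str, hyp: str) -> int:
    """Character-level edit distance via the standard dynamic program."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution / match
        prev = curr
    return prev[-1]


def qoe_proxy(reference: str, transcript: str) -> float:
    """Hypothetical normalization: 1.0 = perfect transcript, 0.0 = nothing recovered."""
    if not reference and not transcript:
        return 1.0
    dist = levenshtein(reference, transcript)
    return 1.0 - dist / max(len(reference), len(transcript))


# Illustrative utterances only.
print(qoe_proxy("send backup to exit four", "send back up to exit for"))
```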
Contribution/Results: Large-scale experiments combining human subjects and ASR show that human transcription generally outperforms the evaluated ASR configurations on accuracy-related MCV tasks, and that the codec is the system parameter with the strongest influence on both end-user QoE and ASR performance. The framework provides a reproducible, scalable, and empirically grounded methodology for QoE modeling in MCV systems.
📝 Abstract
Mission-critical voice (MCV) communication systems have been a critical tool for the public safety community for over eight decades. Public safety users expect MCV systems to operate reliably and consistently, particularly in challenging conditions. Because of these expectations, the Public Safety Communications Research (PSCR) Division of the National Institute of Standards and Technology (NIST) has been interested in correlating impairments in MCV communication systems with public safety users' quality of experience (QoE). Previous research has studied MCV voice quality and intelligibility in controlled environments. However, such research has been limited by the challenges inherent in emulating real-world environmental conditions. Additionally, there remains the open question of which metric best reflects QoE.
This paper describes our efforts to develop the methodology and tools for human-subject experiments with MCV. We illustrate their use in human-subject experiments in emulated real-world environments. The tools include a testbed for emulating real-world MCV systems and an automated speech recognition (ASR) robot approximating human subjects in transcription tasks. We evaluate QoE through a Levenshtein distance-based metric, arguing that it is a suitable proxy for comprehension and QoE. We conduct human-subject studies with Amazon MTurk volunteers to understand the influence of selected system parameters and impairments on human-subject performance and end-user QoE. We also compare the performance of several ASR system configurations with human-subject performance. We find that humans generally perform better than ASR in accuracy-related MCV tasks and that the codec significantly influences both end-user QoE and ASR performance.
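Since the abstract does not state whether the distance is taken over characters or word tokens, a word-level variant (closer to word error rate) could be used to compare human and ASR transcripts against a common reference. The transcripts, function names, and scoring convention below are illustrative assumptions only, not the paper's protocol.

```python
from functools import lru_cache

def word_distance(ref: str, hyp: str) -> int:
    """Levenshtein distance over word tokens (insert/delete/substitute)."""
    r, h = ref.split(), hyp.split()

    @lru_cache(maxsize=None)
    def d(i: int, j: int) -> int:
        if i == 0:
            return j
        if j == 0:
            return i
        cost = 0 if r[i - 1] == h[j - 1] else 1
        return min(d(i - 1, j) + 1, d(i, j - 1) + 1, d(i - 1, j - 1) + cost)

    return d(len(r), len(h))

# Invented reference and transcripts for illustration only.
reference = "engine seven respond to the north entrance"
human_transcript = "engine seven respond to the north entrance"
asr_transcript = "engine eleven respond to north entrance"

for label, hyp in [("human", human_transcript), ("ASR", asr_transcript)]:
    dist = word_distance(reference, hyp)
    score = 1.0 - dist / max(len(reference.split()), len(hyp.split()))
    print(f"{label}: word-level QoE proxy = {score:.3f}")
```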