🤖 AI Summary
Current automatic speech recognition (ASR) systems underperform in real-world voice assistant scenarios, largely because existing evaluations do not systematically cover realistic factors such as environmental noise, demographic variability, and linguistic diversity, and therefore cannot anticipate performance degradation or safety risks. This work proposes WildASR, the first factor-disentangled, multilingual benchmark for diagnosing ASR robustness in the wild. It enables independent assessment along three dimensions: environmental degradation, demographic shift, and language diversity, and is accompanied by an analysis toolkit to inform deployment decisions. Evaluations of seven state-of-the-art ASR systems reveal severe and uneven performance drops under realistic conditions, robustness that does not transfer across languages or conditions, and a tendency for corrupted inputs to induce hallucinated outputs that pose safety hazards. The study underscores the necessity of factor-isolated evaluation for improving the reliability of production ASR systems.
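The summary does not describe the toolkit's API, so as a minimal sketch of what factor-isolated evaluation means in practice, the snippet below aggregates per-utterance word error rate (WER) by one metadata factor at a time while the other factors stay fixed. The `Utterance` schema and the `transcribe` callable are hypothetical placeholders, not the paper's interfaces; WER is computed with the standard `jiwer` library.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable

import jiwer  # standard WER library: pip install jiwer


@dataclass
class Utterance:
    # Hypothetical metadata schema: one tag per robustness axis,
    # so each factor can be varied while the others are held fixed.
    audio_path: str
    reference: str      # ground-truth transcript
    environment: str    # e.g. "clean", "street", "babble"
    demographic: str    # e.g. a speaker age or accent group
    language: str       # e.g. "en", "es", "zh", "hi"


def wer_by_factor(
    utterances: list[Utterance],
    transcribe: Callable[[str], str],  # hypothetical ASR wrapper: audio path -> hypothesis text
    factor: str,                       # "environment", "demographic", or "language"
) -> dict[str, float]:
    """Aggregate WER per value of a single factor, isolating that axis."""
    refs: dict[str, list[str]] = defaultdict(list)
    hyps: dict[str, list[str]] = defaultdict(list)
    for utt in utterances:
        key = getattr(utt, factor)
        refs[key].append(utt.reference)
        hyps[key].append(transcribe(utt.audio_path))
    # jiwer.wer accepts lists of strings and pools errors over each group.
    return {key: jiwer.wer(refs[key], hyps[key]) for key in refs}
```

Comparing, say, `wer_by_factor(..., factor="environment")` on one language's split against the same split in another language surfaces exactly the non-transferable robustness the evaluations report.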
📝 Abstract
Automatic speech recognition (ASR) systems have achieved near-human accuracy on curated benchmarks, yet they still fail in real-world voice agents under conditions that current evaluations do not systematically cover. Without diagnostic tools that isolate specific failure factors, practitioners cannot anticipate which conditions, in which languages, will cause what degree of degradation. We introduce WildASR, a multilingual (four-language) diagnostic benchmark sourced entirely from real human speech that factorizes ASR robustness along three axes: environmental degradation, demographic shift, and linguistic diversity. Evaluating seven widely used ASR systems, we find severe and uneven performance degradation and show that robustness does not transfer across languages or conditions. Critically, models often hallucinate plausible but unspoken content when inputs are partial or degraded, creating concrete safety risks for downstream agent behavior. Beyond the benchmark itself, we present three analysis tools that practitioners can use to guide deployment decisions. Our results demonstrate that targeted, factor-isolated evaluation is essential for understanding and improving ASR reliability in production systems.
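The abstract does not say how WildASR quantifies hallucination, so the following is only one hedged illustration of a common proxy: insertion-dominated errors, where the hypothesis contains words with no counterpart in the reference. The threshold below is a hypothetical cutoff, not a value from the paper, and the sketch assumes `jiwer >= 3.0` for `process_words`.

```python
import jiwer  # assumes jiwer >= 3.0, which provides process_words


def hallucination_flags(
    references: list[str],
    hypotheses: list[str],
    insertion_rate_threshold: float = 0.5,  # hypothetical cutoff; tune per domain
) -> list[bool]:
    """Flag utterances whose errors are dominated by inserted words.

    A crude hallucination proxy: substitutions often reflect mishearings,
    whereas a burst of insertions on degraded audio usually means the model
    produced fluent content that was never spoken.
    """
    flags = []
    for ref, hyp in zip(references, hypotheses):
        out = jiwer.process_words(ref, hyp)
        n_ref_words = len(ref.split())
        insertion_rate = out.insertions / max(n_ref_words, 1)
        flags.append(insertion_rate > insertion_rate_threshold)
    return flags
```

Running such a check separately on clean and degraded conditions turns the abstract's qualitative safety claim into a measurable per-condition rate.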