🤖 AI Summary
This study addresses the robustness of automatic speech recognition (ASR) systems to spontaneous speech errors. We use SFUSED, an English spontaneous speech error corpus with multi-level linguistic annotations (word- and syllable-level error localization, context sensitivity, and correction patterns) covering 5,300 documented errors, and employ its structured annotation schema for ASR diagnostics in a systematic evaluation of WhisperX. Methodologically, we combine degraded-word detection, error-type classification, and correction-behavior modeling to enable fine-grained error attribution. The results show that SFUSED effectively exposes ASR vulnerabilities in realistic conversational contexts: WhisperX is robust to repetitions and fillers but has significant limitations in handling phonological illusions and context-dependent corrections. This work establishes a reproducible benchmark framework and diagnostic paradigm for ASR robustness assessment.
📝 Abstract
The Simon Fraser University Speech Error Database (SFUSED) is a public data collection developed for linguistic and psycholinguistic research. Here we demonstrate how its design and annotations can be used to test and evaluate speech recognition models. The database comprises systematically annotated speech errors from spontaneous English speech, with each error tagged for both the intended and the actual production. The annotation schema incorporates multiple classificatory dimensions relevant to model assessment, including linguistic hierarchical level, contextual sensitivity, degraded words, word corrections, and both word-level and syllable-level error positioning. To assess the value of these classificatory variables, we evaluated the transcription accuracy of WhisperX on 5,300 documented word and phonological errors. This analysis demonstrates the database's effectiveness as a diagnostic tool for ASR system performance.