🤖 AI Summary
Language identification (LID) models suffer significant performance degradation on accented speech, primarily because they implicitly encode accent features and over-rely on short-term phonotactic cues rather than linguistically meaningful sequential structure. This work presents a systematic analysis of LID's intrinsic accent sensitivity and proposes two key innovations: (1) an attribution analysis grounded in phoneme-segment permutation invariance that quantitatively localizes accent-induced interference; and (2) a chunked input strategy coupled with a lightweight sequential modeling module that explicitly decouples accent representations from language representations, without requiring monolingual ASR supervision. Experiments demonstrate that this approach substantially reduces accent–language confusion across diverse accented test sets, yielding large gains in LID accuracy while preserving strong performance on standard benchmarks.
📝 Abstract
Prior research indicates that LID model performance declines significantly on accented speech; however, the specific causes, extent, and characterization of these errors remain under-explored. (i) We identify a common failure mode whereby LID systems often misclassify L2-accented speech as the speaker's native (L1) language or a related language. (ii) We present evidence that state-of-the-art models are invariant to permutations of short spans of speech, implying that they classify on the basis of short phonotactic features indicative of accent rather than language. This analysis suggests a simple way to improve robustness to accents through input chunking. (iii) We present an approach that integrates sequence-level information into the model without relying on monolingual ASR systems; this reduces accent–language confusion and significantly improves performance on accented speech while maintaining comparable results on standard LID benchmarks.
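The permutation-invariance probe described in (ii) can be illustrated with a minimal sketch: split a waveform into fixed-length chunks, shuffle their order, and compare the LID model's predictions before and after. If the prediction is unchanged, the model is likely relying on short-term phonotactic cues rather than longer-range sequential structure. The chunk length, sample rate, and the `model` callable below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def permute_chunks(waveform, chunk_ms=500, sr=16000, rng=None):
    """Split a 1-D waveform into fixed-length chunks and shuffle their order.

    Assumes a mono numpy waveform; chunk_ms and sr are illustrative defaults.
    The trailing remainder shorter than one chunk is left in place.
    """
    rng = np.random.default_rng(rng)
    chunk = int(sr * chunk_ms / 1000)          # samples per chunk
    n = len(waveform) // chunk                  # number of full chunks
    head, tail = waveform[: n * chunk], waveform[n * chunk :]
    chunks = head.reshape(n, chunk)
    order = rng.permutation(n)                  # random chunk ordering
    return np.concatenate([chunks[order].reshape(-1), tail])

# Hypothetical usage with some LID model returning a language label:
#   assert model(x) == model(permute_chunks(x))   # evidence of phonotactic shortcut
```

A permutation-invariant prediction here is precisely the failure mode the abstract describes: the model ignores the sequential ordering that distinguishes language-level structure from accent-level cues.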