🤖 AI Summary
This work addresses the challenges of speech perception modeling in low-resource, multilingual, and acoustically complex environments by drawing inspiration from human auditory mechanisms. It proposes an adaptive inference framework that integrates acoustic-phonetic modeling with language-knowledge-guided adaptation. The approach achieves state-of-the-art performance with only 100 hours of training data and enables zero-shot transfer to 95 unseen languages. Notably, the model attains the lowest phoneme error rates on five English benchmarks, demonstrating substantially improved cross-lingual generalization.
📝 Abstract
We propose HuPER, a human-inspired framework that models phonetic perception as adaptive inference over acoustic-phonetic evidence and linguistic knowledge. With only 100 hours of training data, HuPER achieves state-of-the-art phonetic error rates on five English benchmarks and strong zero-shot transfer to 95 unseen languages. HuPER is also the first framework to enable adaptive, multi-path phonetic perception under diverse acoustic conditions. All training data, models, and code are open-sourced; code and demo are available at https://github.com/HuPER29/HuPER.
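To make the core idea concrete, here is a minimal, hypothetical sketch (not HuPER's actual implementation) of inference that fuses acoustic evidence with a linguistic prior: a log-linear combination where the prior's weight can adapt to acoustic conditions, e.g. up-weighting language knowledge when the audio is noisy. All names and numbers below are illustrative assumptions.

```python
import math

def perceive(acoustic_logp, prior_logp, weight=0.5):
    """Pick the phoneme maximizing acoustic log-probability plus a
    weighted linguistic-knowledge prior. `weight` is the adaptation
    knob: higher values lean more on language knowledge (illustrative,
    not HuPER's actual mechanism)."""
    return max(
        acoustic_logp,
        key=lambda ph: acoustic_logp[ph] + weight * prior_logp.get(ph, -math.inf),
    )

# Noisy audio leaves /b/ vs /p/ acoustically ambiguous,
# but the phonotactic context strongly favors /b/.
acoustic = {"b": math.log(0.48), "p": math.log(0.52)}
prior = {"b": math.log(0.9), "p": math.log(0.1)}

print(perceive(acoustic, prior, weight=0.0))  # acoustics alone -> "p"
print(perceive(acoustic, prior, weight=1.0))  # with linguistic prior -> "b"
```

The adaptive weight is what lets the same model behave differently across clean and degraded conditions: in clean speech the acoustic term dominates, while in noise the linguistic prior resolves ambiguity.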