🤖 AI Summary
This study addresses language identification in child-directed bilingual speech featuring Mandarin–English code-switching and severe class imbalance. We propose an enhanced Zipformer-based approach: (1) an intra-layer adaptive embedding extraction mechanism that strengthens feature modeling for the low-resource language (particularly English in child speech contexts), and (2) a Zipformer–Transformer hybrid encoder jointly optimized with multi-level feature selection and a discriminative back-end classifier for improved robustness. On a real-world children's bilingual speech dataset, the model achieves a balanced accuracy (BAC) of 81.89%, outperforming the strongest baseline by 15.47 percentage points. The results demonstrate substantial gains on non-stationary, low-resource child speech and establish a transferable framework for challenging bilingual language identification scenarios.
📝 Abstract
Code-switching and language identification in child-directed scenarios present significant challenges, particularly in bilingual environments. This paper addresses the problem by using a Zipformer to model utterances that mix two imbalanced languages, Mandarin and English. We show that the internal layers of the Zipformer effectively encode language characteristics that can be leveraged for language identification. We present a methodology for selecting the inner layers from which embeddings are extracted, and we compare several back-end classifiers; our analysis shows that the Zipformer embeddings are robust across these back-ends. The approach handles imbalanced data effectively, achieving a Balanced Accuracy (BAC) of 81.89%, a 15.47-percentage-point improvement over the language identification baseline. These findings highlight the potential of transformer-encoder architectures in real-world scenarios.
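Because the task is two-class (Mandarin/English) with severe imbalance, the paper reports balanced accuracy rather than plain accuracy. A minimal sketch of the metric shows why: a majority-class predictor can score high plain accuracy but only 0.5 BAC. The `balanced_accuracy` helper and the 9:1 label split below are illustrative, not taken from the paper's dataset.

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Balanced accuracy: the unweighted mean of per-class recall."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    return sum(correct[c] / total[c] for c in total) / len(total)

# Illustrative 9:1 Mandarin/English imbalance.
y_true = ["zh"] * 9 + ["en"]

# Always predicting the majority class gets 0.9 plain accuracy,
# but per-class recall is 1.0 on "zh" and 0.0 on "en", so BAC is 0.5.
print(balanced_accuracy(y_true, ["zh"] * 10))  # 0.5

# A predictor that is correct on both classes reaches BAC 1.0.
print(balanced_accuracy(y_true, list(y_true)))  # 1.0
```

Under this metric, the reported 81.89% BAC reflects genuine recall on both languages, not just accuracy on the dominant Mandarin class.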