Leveraging Zipformer Model for Effective Language Identification in Code-Switched Child-Directed Speech

📅 2025-08-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses language identification in child-directed bilingual speech featuring Mandarin–English code-switching and severe class imbalance. We propose an enhanced Zipformer-based approach: (1) an intra-layer adaptive embedding extraction mechanism to strengthen feature modeling for low-resource languages—particularly English in child speech contexts; and (2) a Zipformer–Transformer hybrid encoder jointly optimized with multi-level feature selection and a discriminative backend classifier to improve robustness. Evaluated on a real-world children’s bilingual speech dataset, our model achieves a balanced accuracy (BAC) of 81.89%, outperforming the strongest baseline by 15.47 percentage points and surpassing prior methods. The results demonstrate significant gains in handling non-stationary, low-resource child speech, establishing a transferable technical framework for challenging bilingual language identification scenarios.

Technology Category

Application Category

📝 Abstract
Code-switching and language identification in child-directed scenarios present significant challenges, particularly in bilingual environments. This paper addresses this challenge by using Zipformer to handle the nuances of speech, which contains two imbalanced languages, Mandarin and English, in an utterance. This work demonstrates that the internal layers of the Zipformer effectively encode the language characteristics, which can be leveraged in language identification. We present the selection methodology of the inner layers to extract the embeddings and make a comparison with different back-ends. Our analysis shows that Zipformer is robust across these backends. Our approach effectively handles imbalanced data, achieving a Balanced Accuracy (BAC) of 81.89%, a 15.47% improvement over the language identification baseline. These findings highlight the potential of the transformer encoder architecture model in real scenarios.
Problem

Research questions and friction points this paper is trying to address.

Identify languages in code-switched child-directed speech
Handle imbalanced Mandarin-English speech data effectively
Improve language identification accuracy using Zipformer layers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Zipformer for language identification
Extracts embeddings from inner layers
Handles imbalanced Mandarin-English data
🔎 Similar Papers
No similar papers found.
Lavanya Shankar
Lavanya Shankar
Student at Johns Hopkins University
L
Leibny Paola Garcia Perera
Johns Hopkins University, Baltimore, USA