🤖 AI Summary
This study addresses the lack of high-quality open-source speech corpora that support Mandarin–English bilingual phonetic comparison and its relation to language attitudes. To this end, we present the MELI bilingual corpus, comprising 29.8 hours of read and spontaneous interview speech (14.7 hours of Mandarin and 15.1 hours of English) from 51 bilingual speakers, recorded in high-fidelity stereo at 44.1 kHz/16-bit. The corpus includes word- and phoneme-level aligned transcriptions, anonymized audio, rich metadata, and code-switching annotations. Notably, it is the first resource to systematically integrate matched Mandarin–English speech samples, acoustic features, and language attitude survey data, enabling research on cross-linguistic and inter-speaker variation as well as on the relationship between language attitudes and phonetic variability. All data are released under a CC BY-NC 4.0 license.
📝 Abstract
We introduce the Mandarin-English Language Interview (MELI) Corpus, an open-source resource of 29.8 hours of speech from 51 Mandarin-English bilingual speakers. MELI combines matched sessions in Mandarin and English across two speaking styles: read sentences and spontaneous interviews about language varieties, standardness, and learning experiences. Audio was recorded at 44.1 kHz (16-bit, stereo). Interviews were fully transcribed, force-aligned at the word and phone levels, and anonymized. Descriptively, the Mandarin component totals ~14.7 hours (mean session duration 17.3 minutes) and the English component ~15.1 hours (mean session duration 17.8 minutes). We report token/type statistics for each language and document code-switching patterns (frequent in Mandarin sessions; more limited in English sessions). The corpus design supports within- and cross-speaker as well as within- and cross-language acoustic comparison, and it links acoustics to speakers' stated language attitudes, enabling both quantitative and qualitative analyses. The MELI Corpus will be released with transcriptions, alignments, metadata, scans of labelled maps, and documentation under a CC BY-NC 4.0 license.
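The token/type statistics mentioned above can be reproduced directly from the released transcriptions. The sketch below shows one way to do so, assuming a hypothetical plain-text layout (one whitespace-tokenized utterance per line under `meli/transcripts/mandarin/` and `meli/transcripts/english/`); the directory names and file format are illustrative assumptions, not part of the corpus specification.

```python
# Minimal sketch: per-language token/type counts from plain-text transcriptions.
# Assumed (hypothetical) layout: meli/transcripts/{mandarin,english}/*.txt,
# one utterance per line, tokens separated by whitespace.
from collections import Counter
from pathlib import Path


def token_type_stats(transcript_dir: Path) -> tuple[int, int]:
    """Return (token count, type count) over all .txt transcripts in a directory."""
    counts = Counter()
    for path in sorted(transcript_dir.glob("*.txt")):
        for line in path.read_text(encoding="utf-8").splitlines():
            counts.update(line.split())
    return sum(counts.values()), len(counts)


if __name__ == "__main__":
    for lang in ("mandarin", "english"):
        tokens, types = token_type_stats(Path("meli/transcripts") / lang)
        ttr = types / max(tokens, 1)  # type-token ratio
        print(f"{lang}: {tokens} tokens, {types} types, TTR={ttr:.3f}")
```

The same per-file loop could be extended to separate read from spontaneous sessions, provided the release encodes speaking style in file names or metadata.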