🤖 AI Summary
This study investigates the semantic–tonal coordination mechanism in the pitch realization of natural spoken Taiwan Mandarin. Covering all 20 tone combinations in disyllabic words, it employs corpus-based phonetic analysis and generalized additive mixed models (GAMMs) to test systematically, for the first time, whether lexical meaning predicts fundamental frequency (f0) contours. Innovatively, contextualized word embeddings derived from GPT-2 are used to represent semantics; these embeddings significantly outperform traditional tone-pattern baselines in explaining f0 variation and emerge as the strongest predictor, with the largest effect size. The results show that semantics is not a background variable in phonological encoding but an active, real-time contributor to pitch shaping. This challenges the classical phonological assumption of a strict separation between phonemes and meaning, and provides empirical support for “phoneme–meaning coupling” in speech production.
📝 Abstract
A growing body of literature has demonstrated that semantics can co-determine fine phonetic detail. However, the complex interplay between phonetic realization and semantics remains understudied, particularly for pitch realization. The current study investigates the tonal realization of Mandarin disyllabic words, covering all 20 possible combinations of two tones, in a corpus of spontaneous Taiwan Mandarin speech. We made use of generalized additive mixed models (GAMMs) to model f0 contours as a function of a series of predictors, including gender, tonal context, tone pattern, speech rate, word position, bigram probability, speaker, and word. In the GAMM analysis, word and sense emerged as crucial predictors of f0 contours, with effect sizes that exceed those of tone pattern. For each word token in our dataset, we then obtained a contextualized embedding by applying the GPT-2 large language model to the context of that token in the corpus. We show that the pitch contours of word tokens can be predicted to a considerable extent from these contextualized embeddings, which approximate token-specific meanings in contexts of use. The results of our corpus study show that meaning in context and phonetic realization are far more entangled than standard linguistic theory predicts.