🤖 AI Summary
This study investigates whether the pitch contour of Mandarin disyllabic words is independently influenced by lexical semantics, beyond established phonetic–phonological factors such as tone category, speech rate, coarticulation, segmental structure, and predictability.
Method: Analyzing spontaneous Mandarin conversational data from Taiwan, we employ generalized additive models (GAMs) integrated with context-sensitive word embeddings and fine-grained tone-contour modeling to quantify the predictive power of lexical meaning on pitch realization.
Contribution/Results: Semantic information is strongly predictive of tonal realization: token-specific pitch contours identify word type with 50% accuracy on held-out data, and context-sensitive embeddings predict tone-contour shape with 40% accuracy, both an order of magnitude above chance and exceeding the combined explanatory power of the traditional phonetic–phonological predictors. This provides the first empirical evidence that semantics is a robust predictor of tonal realization in Mandarin, leading the authors to propose the “semantics–tone functional association” hypothesis and thereby extending the cognitive dimension of Mandarin tone theory.
📝 Abstract
The pitch contours of Mandarin two-character words are generally understood as being shaped by the underlying tones of the constituent single-character words, in interaction with articulatory constraints imposed by factors such as speech rate, coarticulation with adjacent tones, segmental make-up, and predictability. This study shows that tonal realization is also partially determined by words' meanings. We first show, on the basis of a corpus of Taiwan Mandarin spontaneous conversations, using a generalized additive regression model, and focusing on the rise-fall tone pattern, that after controlling for effects of speaker and context, word type is a stronger predictor of tonal realization than all the previously established word-form related predictors combined. Importantly, the addition of information about meaning in context improves prediction accuracy even further. We then proceed to show, using computational modeling with context-specific word embeddings, that token-specific pitch contours predict word type with 50% accuracy on held-out data, and that context-sensitive, token-specific embeddings can predict the shape of pitch contours with 40% accuracy. These accuracies, which are an order of magnitude above chance level, suggest that the relation between words' pitch contours and their meanings is sufficiently strong to be potentially functional for language users. The theoretical implications of these empirical findings are discussed.
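The reverse direction, classifying word type from token-specific pitch contours on held-out data, can be sketched as follows. Everything here is invented (word types, contour templates, feature dimensions); only the evaluation logic, a train/test split with accuracy compared against chance, mirrors the setup described in the abstract:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n_types, tokens_per_type, n_points = 20, 30, 10  # hypothetical sizes

# Each word type gets a characteristic contour template; individual
# tokens are noisy realizations of their type's template.
templates = rng.normal(0, 1, (n_types, n_points))
X = np.vstack([templates[k] + rng.normal(0, 0.5, (tokens_per_type, n_points))
               for k in range(n_types)])
y = np.repeat(np.arange(n_types), tokens_per_type)

# Held-out evaluation: accuracy well above chance indicates that
# contours carry word-type information.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"held-out accuracy: {acc:.2f} (chance = {1 / n_types:.2f})")
```

With 20 word types, chance is 5%, so the paper's reported 50% word-type accuracy from real contours would, in this framing, be an order of magnitude above the baseline.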