🤖 AI Summary
How does the brain represent speaker-invariant pitch contours in Mandarin tone perception, given large inter-speaker differences in absolute pitch yet stable lexical tone identification? Method: We collected high-density EEG from participants listening to monosyllabic tonal stimuli produced by multiple speakers, and proposed a Channel-Enhanced Spatiotemporal Vision Transformer (CE-ViViT) for end-to-end regression decoding of continuous pitch contours directly from raw EEG. Crucially, we applied speaker-wise pitch normalization to isolate relative pitch cues. Contribution/Results: We provide the first evidence in Mandarin that normalized relative pitch is encoded more robustly in the brain than absolute pitch: speaker-normalized contours were decoded with substantially lower error than raw contours, at performance comparable to state-of-the-art EEG-based regression methods. These findings provide new neurophysiological evidence for the neural basis of perceptual invariance in speech.
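The summary names speaker-wise pitch normalization but not the exact scheme. Below is a minimal sketch of one common choice, z-scoring log F0 within each speaker; the function name and array layout are illustrative assumptions, not the paper's code.

```python
import numpy as np

def speaker_normalize_f0(f0, speaker_ids):
    """Z-score log-F0 within each speaker to isolate relative pitch cues.

    f0          : (n_trials, n_frames) array of F0 values in Hz; unvoiced
                  frames are assumed to have been interpolated beforehand.
    speaker_ids : (n_trials,) array of speaker labels.
    Returns an array of the same shape in speaker-relative units.
    """
    log_f0 = np.log(f0)                      # pitch is perceived on a log scale
    normed = np.empty_like(log_f0)
    for spk in np.unique(speaker_ids):
        mask = speaker_ids == spk
        mu = log_f0[mask].mean()             # speaker's mean log pitch (register)
        sigma = log_f0[mask].std()           # speaker's pitch-range spread
        normed[mask] = (log_f0[mask] - mu) / sigma
    return normed
```

Normalizing in the log domain and per speaker removes both the speaker's register (mean) and range (spread), leaving only the relative contour shape that carries the tonal contrast.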
📝 Abstract
The same speech content produced by different speakers exhibits large differences in pitch contour, yet listeners' semantic perception is unaffected. This invariance may arise because the brain's representation of pitch contours is independent of individual speakers' pitch ranges. In this work, we recorded the electroencephalogram (EEG) while participants listened to Mandarin monosyllables varying in tone, phoneme, and speaker, and we propose the CE-ViViT model to decode raw or speaker-normalized pitch contours directly from the EEG. Experimental results show that the proposed model decodes pitch contours with modest error, achieving performance comparable to state-of-the-art EEG regression methods. Moreover, speaker-normalized pitch contours were decoded more accurately than raw contours, supporting neural encoding of relative pitch.
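The abstract reports decoding error without naming a metric. A minimal evaluation sketch follows, assuming decoded and reference contours stored as (n_trials, n_frames) arrays and using two metrics common in EEG contour regression, RMSE and Pearson correlation; the paper's actual metric may differ.

```python
import numpy as np
from scipy.stats import pearsonr

def contour_metrics(decoded, target):
    """Per-trial RMSE and Pearson r between decoded and reference contours."""
    rmse = np.sqrt(np.mean((decoded - target) ** 2, axis=1))  # frame-wise error
    r = np.array([pearsonr(d, t)[0] for d, t in zip(decoded, target)])
    return rmse, r

# Comparing conditions: lower RMSE (or higher r) for speaker-normalized
# contours than for raw contours would support the relative-pitch claim.
```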