AI Summary
This paper addresses the grapheme-phoneme and grapheme-prosody inconsistency problem in speech phoneme and prosody annotation. We propose an end-to-end grapheme-consistency modeling framework that jointly integrates implicit grapheme modeling (via a BERT-based prompt encoder) and explicit grapheme constraints (implemented through grapheme-consistency pruning) to construct speech-annotation-text triple parallel data. To our knowledge, this is the first approach to achieve fully automatic speech annotation with strict grapheme consistency *without* manual alignment. We validate its effectiveness on downstream tasks including text-to-speech (TTS) and accent estimation: the generated parallel data significantly improves accent estimation accuracy. Our work establishes a reliable weakly supervised annotation paradigm for speech representation learning, offering both methodological novelty (unified implicit/explicit grapheme modeling) and practical utility in low-resource annotation scenarios.
Abstract
We propose a model that obtains phonemic and prosodic labels of speech consistent with the corresponding graphemes. Unlike previous methods that simply fine-tune a pre-trained ASR model on the labels, the proposed model conditions label generation on the corresponding graphemes in two ways: 1) it adds implicit grapheme conditioning through a prompt encoder that uses pre-trained BERT features, and 2) it explicitly prunes label hypotheses inconsistent with the graphemes during inference. These methods yield parallel data of speech, labels, and graphemes that is applicable to various downstream tasks such as text-to-speech and accent estimation from text. Experiments showed that the proposed method significantly improves the consistency between graphemes and the predicted labels. Further, experiments on an accent estimation task confirmed that the parallel data created by the proposed method effectively improves estimation accuracy.
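The explicit pruning step above can be sketched as a filter applied to beam-search hypotheses during decoding. The following is a minimal illustration, not the paper's implementation: it assumes each hypothesis is a token list in which annotation tokens (hypothetical phoneme/prosody marks prefixed with `@`) are interleaved with grapheme tokens, and it keeps only hypotheses whose grapheme subsequence is a prefix of the reference text.

```python
# Sketch of explicit grapheme-consistency pruning during beam search.
# Assumptions (illustrative, not from the paper): annotation tokens start
# with '@'; all other tokens are graphemes; a partial hypothesis is valid
# only if its graphemes form a prefix of the reference grapheme sequence.

def graphemes_of(hyp):
    """Extract the grapheme subsequence of a hypothesis."""
    return [t for t in hyp if not t.startswith("@")]

def prune_inconsistent(hyps, reference):
    """Drop hypotheses whose graphemes deviate from the reference text."""
    kept = []
    for hyp in hyps:
        g = graphemes_of(hyp)
        if g == list(reference[: len(g)]):  # prefix match against reference
            kept.append(hyp)
    return kept

beam = [
    ["k", "@HL", "a", "t"],  # graphemes k,a,t -> inconsistent with "cat"
    ["c", "@HL", "a", "t"],  # graphemes c,a,t -> consistent
    ["c", "a", "@LH", "b"],  # graphemes c,a,b -> inconsistent
]
print(prune_inconsistent(beam, "cat"))  # only the second hypothesis survives
```

In a real decoder this filter would run at every expansion step, so inconsistent hypotheses are discarded before they consume beam slots; the implicit BERT-based conditioning complements it by steering the model toward consistent hypotheses in the first place.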