Multi-Accent Mandarin Dry-Vocal Singing Dataset: Benchmark for Singing Accent Recognition

📅 2025-12-07
🤖 AI Summary
Existing singing datasets suffer from audio quality degradation due to vocal-instrumental source separation and lack systematic regional accent annotations, severely hindering research on singing accents. To address this, we introduce the first multi-accent a cappella singing dataset covering nine major Chinese dialect regions, comprising 4,206 native Mandarin speakers and 670 hours of high-fidelity dry vocal recordings—including structured vowel pronunciation exercises—fully annotated with fine-grained geographical accent labels. The dataset uniquely integrates phoneme-level accent annotation with comprehensive singing tasks spanning pop karaoke-style singing and sustained vowel production across the full vocal range. Its validity is confirmed through rigorous vocal source separation evaluation and benchmarking with deep learning models. Experiments uncover region-specific acoustic effects of dialects on vowel production in singing, establishing a new standard for singing accent modeling and advancing interdisciplinary research at the intersection of speech science and music technology.

📝 Abstract
Singing accent research is underexplored compared to speech accent studies, primarily due to the scarcity of suitable datasets. Existing singing datasets often suffer from detail loss, frequently resulting from the vocal-instrumental separation process, and they often lack regional accent annotations. To address this, we introduce the Multi-Accent Mandarin Dry-Vocal Singing Dataset (MADVSD). MADVSD comprises over 670 hours of dry vocal recordings from 4,206 native Mandarin speakers across nine distinct Chinese regions. Each participant recorded three popular songs in their native accent, along with phonetic exercises covering all Mandarin vowels across a full octave range. We validated MADVSD through benchmark experiments in singing accent recognition, demonstrating its utility for evaluating state-of-the-art speech models in singing contexts. Furthermore, we explored dialectal influences on singing accent and analyzed the role of vowels in accentual variations, leveraging MADVSD's unique phonetic exercises.
Problem

Research questions and friction points this paper is trying to address.

Addresses the scarcity of datasets for singing accent research
Provides a dataset with regional accent annotations for Mandarin singing
Enables evaluation of speech models in singing accent recognition contexts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Created large-scale multi-accent Mandarin dry vocal singing dataset
Included phonetic exercises covering vowels and octave range
Benchmarked singing accent recognition using speech models
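A nine-way regional accent recognition benchmark of this kind is typically scored per region so that regions with fewer singers weigh equally. A minimal sketch of such scoring, assuming nothing about the paper's actual evaluation code (the region placeholder names and the helper function are illustrative, not from the dataset):

```python
import numpy as np

# Illustrative placeholders for the nine dialect-region labels (MADVSD's
# actual region names are not reproduced here).
REGIONS = [f"region_{i}" for i in range(1, 10)]

def macro_accuracy(y_true, y_pred, labels):
    """Mean of per-region accuracies, so every region contributes equally
    regardless of how many recordings it has."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    per_region = []
    for lab in labels:
        mask = y_true == lab
        if mask.any():
            per_region.append(float(np.mean(y_pred[mask] == lab)))
    return float(np.mean(per_region))

# Toy usage: two regions, one misclassified recording.
score = macro_accuracy(["region_1", "region_1", "region_2"],
                       ["region_1", "region_2", "region_2"],
                       REGIONS)
```

Macro averaging is a common choice for accent benchmarks with imbalanced regional coverage; plain (micro) accuracy would let the largest region dominate the score.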
Zihao Wang
Zhejiang University, Hangzhou, China; Carnegie Mellon University, Pittsburgh, United States
Ruibin Yuan
HKUST
Artificial Intelligence, Music Generation, Music Information Retrieval, Computer Music
Ziqi Geng
Hengjia Li
Zhejiang University
Image Generation, Video Generation
Xingwei Qu
Xinyi Li
Songye Chen
Haoying Fu
Mei KTV, Beijing, China
Roger B. Dannenberg
Professor of Computer Science, Carnegie Mellon University
Computer Music
Kejun Zhang
Zhejiang University, Hangzhou, China; Innovation Center of Yangtze River Delta, Zhejiang University, Hangzhou, China