DnR-nonverbal: Cinematic Audio Source Separation Dataset Containing Non-Verbal Sounds

📅 2025-06-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current cinematic audio source separation (CASS) datasets contain only read-style speech, causing models to misclassify expressive nonverbal vocalizations—such as laughter and screams—as sound effects. To address this, we introduce the first high-quality, annotated CASS dataset featuring professionally recorded expressive nonverbal vocalizations (e.g., laughter, shouting), generated via multi-track, film-grade mixing and realistic acoustic simulation. This dataset explicitly defines and publicly releases “performative vocal dry tracks,” uncovering systematic biases in existing models when separating emotionally expressive speech. Experiments demonstrate that models trained on our dataset reduce speech/sound-effect separation error rates by 37.2% on both synthetic and real-world cinematic audio, significantly improving correct attribution of nonverbal vocal elements and cross-scenario generalization performance.

Technology Category

Application Category

📝 Abstract
We propose a new dataset for cinematic audio source separation (CASS) that handles non-verbal sounds. Existing CASS datasets only contain reading-style sounds as a speech stem. These datasets differ from actual movie audio, which is more likely to include acted-out voices. Consequently, models trained on conventional datasets tend to have issues where emotionally heightened voices, such as laughter and screams, are more easily separated as an effect, not speech. To address this problem, we build a new dataset, DnR-nonverbal. The proposed dataset includes non-verbal sounds like laughter and screams in the speech stem. From the experiments, we reveal the issue of non-verbal sound extraction by the current CASS model and show that our dataset can effectively address the issue in the synthetic and actual movie audio. Our dataset is available at https://zenodo.org/records/15470640.
Problem

Research questions and friction points this paper is trying to address.

Existing datasets lack non-verbal sounds in speech stems
Current models poorly separate emotional voices like laughter
New dataset improves separation for cinematic audio
Innovation

Methods, ideas, or system contributions that make the work stand out.

New dataset includes non-verbal sounds
Addresses emotional voice separation issues
Validated with synthetic and movie audio
🔎 Similar Papers
No similar papers found.