LibriVAD: A Scalable Open Dataset with Deep Learning Benchmarks for Voice Activity Detection

📅 2025-12-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Robust voice activity detection (VAD) in unknown noisy environments faces two bottlenecks: the scarcity of large-scale, controllable, open-source benchmark datasets and the poor out-of-distribution (OOD) generalization of existing models. To address this, we introduce LibriVAD, the first large-scale, systematically controlled VAD benchmark built upon LibriSpeech. It is released at multiple scales from 15 GB to 1.5 TB, integrates both real and synthetic noise, and uniquely enables systematic control over signal-to-noise ratio (SNR), silence-to-speech ratio (SSR), and noise diversity. Methodologically, we pioneer the adaptation of Vision Transformers (ViTs) to VAD, incorporating MFCC and raw waveform features with noise-mixing data augmentation. Experiments demonstrate that ViT+MFCC consistently outperforms state-of-the-art models across seen, unseen, and OOD scenarios, including the real-world VOiCES dataset. All data, code, and models are publicly released.
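The noise-mixing augmentation with controlled SNR described above can be sketched as follows. This is a minimal illustration of mixing a noise signal into speech at a target SNR, not the authors' actual pipeline; the function name and interface are assumptions.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech-to-noise power ratio of the
    mixture equals `snr_db` (in dB), then add it to `speech`.
    Illustrative sketch only; not the LibriVAD generation code."""
    speech = np.asarray(speech, dtype=np.float64)
    # Truncate (or assume pre-tiled) noise to the speech length.
    noise = np.asarray(noise, dtype=np.float64)[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Solve 10 * log10(p_speech / (g^2 * p_noise)) = snr_db for gain g.
    g = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + g * noise
```

Sweeping `snr_db` over a grid (e.g. -5 to 20 dB) is one way such a dataset can expose models to systematically varied noise levels.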

📝 Abstract
Robust Voice Activity Detection (VAD) remains a challenging task, especially under noisy, diverse, and unseen acoustic conditions. Beyond algorithmic development, a key limitation in advancing VAD research is the lack of large-scale, systematically controlled, and publicly available datasets. To address this, we introduce LibriVAD - a scalable open-source dataset derived from LibriSpeech and augmented with diverse real-world and synthetic noise sources. LibriVAD enables systematic control over speech-to-noise ratio, silence-to-speech ratio (SSR), and noise diversity, and is released in three sizes (15 GB, 150 GB, and 1.5 TB) with two variants (LibriVAD-NonConcat and LibriVAD-Concat) to support different experimental setups. We benchmark multiple feature-model combinations, including waveform, Mel-Frequency Cepstral Coefficients (MFCC), and Gammatone filter bank cepstral coefficients, and introduce the Vision Transformer (ViT) architecture for VAD. Our experiments show that ViT with MFCC features consistently outperforms established VAD models such as boosted deep neural network and convolutional long short-term memory deep neural network across seen, unseen, and out-of-distribution (OOD) conditions, including evaluation on the real-world VOiCES dataset. We further analyze the impact of dataset size and SSR on model generalization, experimentally showing that scaling up dataset size and balancing SSR noticeably and consistently enhance VAD performance under OOD conditions. All datasets, trained models, and code are publicly released to foster reproducibility and accelerate progress in VAD research.
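The silence-to-speech ratio (SSR) the abstract controls for can be computed directly from frame-level VAD labels. A minimal sketch, assuming binary labels with 1 = speech and 0 = silence; the helper name is illustrative, not an API from the paper.

```python
import numpy as np

def silence_to_speech_ratio(labels):
    """Silence-to-speech ratio (SSR) over frame-level VAD labels
    (1 = speech, 0 = silence). Values near 1.0 indicate a balanced
    split; large values indicate silence-dominated data."""
    labels = np.asarray(labels)
    n_speech = int(labels.sum())
    n_silence = int(labels.size - n_speech)
    return n_silence / n_speech
```

Under this definition, "balancing SSR" means generating or selecting utterances so that the ratio stays close to 1.0 across the dataset.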
Problem

Research questions and friction points this paper is trying to address.

Lack of large-scale, controlled datasets for voice activity detection research
Need for robust VAD performance under noisy, diverse acoustic conditions
Need for systematic evaluation of model generalization across unseen scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Created scalable open dataset LibriVAD with controlled noise
Introduced Vision Transformer architecture for voice activity detection
Showed that scaling dataset size and balancing SSR improve out-of-distribution performance