WESR: Scaling and Evaluating Word-level Event-Speech Recognition

📅 2026-01-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work targets the precise localization of non-verbal vocal events, such as laughter and crying, in speech: a task hindered by limited event categories, coarse temporal resolution, and the absence of standardized evaluation protocols in existing approaches. To overcome these limitations, the study introduces the first fine-grained 21-class taxonomy distinguishing discrete from continuous vocal events, constructs WESR-Bench, a rigorously expert-annotated benchmark, and proposes a position-aware evaluation protocol that decouples automatic speech recognition (ASR) errors from event detection performance. Leveraging over 1,700 hours of multi-event speech data, the authors train a dedicated event-speech recognition model that surpasses both open-source audio language models and commercial APIs in event localization accuracy while maintaining high-quality ASR performance, thereby establishing a benchmark and technical foundation for real-world auditory modeling.

📝 Abstract
Speech conveys not only linguistic information but also rich non-verbal vocal events such as laughing and crying. While semantic transcription is well-studied, the precise localization of non-verbal events remains a critical yet under-explored challenge. Current methods suffer from insufficient task definitions with limited category coverage and ambiguous temporal granularity. They also lack standardized evaluation frameworks, hindering the development of downstream applications. To bridge this gap, we first develop a refined taxonomy of 21 vocal events, with a new categorization into discrete (standalone) versus continuous (mixed with speech) types. Based on the refined taxonomy, we introduce WESR-Bench, an expert-annotated evaluation set (900+ utterances) with a novel position-aware protocol that disentangles ASR errors from event detection, enabling precise localization measurement for both discrete and continuous events. We also build a strong baseline by constructing a 1,700+ hour corpus, and train specialized models, surpassing both open-source audio-language models and commercial APIs while preserving ASR quality. We anticipate that WESR will serve as a foundational resource for future research in modeling rich, real-world auditory scenes.
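The abstract's "position-aware protocol" can be pictured with a small sketch. The details below are assumptions, not the paper's actual specification: the inline `<event>` tag format, the word-index positioning, and the tolerance-based greedy matching are all illustrative choices. The key idea it demonstrates is that event positions are counted over ordinary words only, so word-level ASR substitutions do not penalize event localization, disentangling the two error sources.

```python
import re

# Assumed inline tag convention, e.g. "that is <laugh> so funny".
EVENT = re.compile(r"<(\w+)>")

def events_with_positions(transcript: str):
    """Return (event_type, word_index) pairs. The index counts only
    ordinary words, so ASR substitutions on the words themselves
    do not shift an event's nominal slot."""
    events, word_idx = [], 0
    for token in transcript.split():
        m = EVENT.fullmatch(token)
        if m:
            events.append((m.group(1), word_idx))
        else:
            word_idx += 1
    return events

def position_aware_f1(ref: str, hyp: str, tol: int = 1):
    """Greedily match same-type events whose word positions differ by
    at most `tol`, one-to-one; return (precision, recall, F1)."""
    ref_ev = events_with_positions(ref)
    hyp_ev = events_with_positions(hyp)
    used, tp = set(), 0
    for etype, pos in ref_ev:
        for j, (htype, hpos) in enumerate(hyp_ev):
            if j not in used and htype == etype and abs(hpos - pos) <= tol:
                used.add(j)
                tp += 1
                break
    prec = tp / len(hyp_ev) if hyp_ev else 0.0
    rec = tp / len(ref_ev) if ref_ev else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```

Under this sketch, a hypothesis with a word substitution but a near-correct event slot still scores full marks, while one that drifts the event beyond the tolerance does not:

```python
position_aware_f1("that is <laugh> so funny", "that was so <laugh> funny")
# full credit: word error, but event within tolerance
position_aware_f1("that is <laugh> so funny", "that is so funny <laugh>")
# zero credit: event displaced beyond tolerance
```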
Problem

Research questions and friction points this paper is trying to address.

event-speech recognition
non-verbal vocal events
temporal localization
evaluation framework
speech transcription
Innovation

Methods, ideas, or system contributions that make the work stand out.

event-speech recognition
vocal event taxonomy
position-aware evaluation
WESR-Bench
discrete vs. continuous events
Authors

Chenchen Yang, Fudan University
Kexin Huang, Fudan University (LLM Alignment, NLP)
Liwei Fan, Fudan University
Qian Tu, Fudan University
Botian Jiang, Fudan University
Dong Zhang, Fudan University (Natural Language Processing, Speech Processing)
Linqi Yin, Fudan University
Shimin Li, Fudan University (Large Language Model, Speech Language Model)
Zhaoye Fei, Fudan University (Natural Language Processing)
Qinyuan Cheng, Fudan University
Xipeng Qiu, Fudan University