Evaluating Speech-to-Text Systems with PennSound

📅 2025-04-08
🤖 AI Summary
This study addresses the underexplored, high-difficulty ASR task of poetry speech recognition. We introduce the first multi-system benchmark for poetry reading, built on nearly 10 hours of authentic recordings from PennSound that span diverse acoustic conditions and stylistically rich, prosodic, and colloquial speech. We systematically evaluate eight mainstream commercial and open-source ASR systems, including AWS, Azure, and Whisper. Our contributions are threefold: (1) pioneering the use of poetry corpora for cross-system ASR evaluation; (2) revealing for the first time that Whisper's hallucination propensity is strongly tied to its decoding parameters; and (3) proposing a multi-dimensional evaluation framework integrating Word Error Rate (WER) and Diarization Error Rate (DER). Results show that Rev.ai achieves the best overall performance; Whisper is the top open-source model, provided hallucinations are mitigated via targeted tuning; and AWS excels in speaker diarization. Notably, performance gaps among systems are narrow, underscoring the importance of task-specific adaptation.
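The paper's central metric, Word Error Rate, is the word-level Levenshtein distance between a reference transcript and a system transcript, normalized by reference length. As a minimal sketch (the function name and tokenization are illustrative, not taken from the paper), WER can be computed like this:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for word-level Levenshtein distance.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("a rose is a rose", "a nose is a rose")` yields 0.2: one substitution over five reference words. Real evaluation pipelines also normalize case and punctuation before scoring, which this sketch omits.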

📝 Abstract
A random sample of nearly 10 hours of speech from PennSound, the world's largest online collection of poetry readings and discussions, was used as a benchmark to evaluate several commercial and open-source speech-to-text systems. PennSound's wide variation in recording conditions and speech styles makes it representative of many other untranscribed audio collections. Reference transcripts were created by trained annotators, and system transcripts were produced by AWS, Azure, Google, IBM, NeMo, Rev.ai, Whisper, and Whisper.cpp. Based on word error rate, Rev.ai was the top performer, and Whisper was the top open-source performer (as long as hallucinations were avoided). AWS had the best diarization error rate among the three systems that support diarization. However, WER and DER differences were slim, and various tradeoffs may motivate choosing different systems for different end users. We also examine the issue of hallucinations in Whisper. Users of Whisper should be aware of its runtime options, and of whether the speed-versus-accuracy tradeoff they imply is acceptable.
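The abstract's caution about Whisper runtime options refers to the decoding parameters of the open-source `openai-whisper` package. As a hedged illustration (the specific values below are common defaults and conservative choices, not settings reported by the paper), the options most often adjusted to curb hallucination look like this:

```python
# Decoding options accepted by openai-whisper's model.transcribe().
# The values shown are illustrative, not the paper's tuned settings.
decode_options = {
    "temperature": 0.0,                   # greedy decoding; no sampling fallback
    "condition_on_previous_text": False,  # stop hallucinated text from propagating
                                          # across 30-second windows
    "no_speech_threshold": 0.6,           # skip segments likely to be silence/noise
    "compression_ratio_threshold": 2.4,   # reject highly repetitive (looping) output
}

# Typical use (requires the openai-whisper package and a downloaded model):
# import whisper
# model = whisper.load_model("base")
# result = model.transcribe("reading.mp3", **decode_options)
```

Larger models and beam search tend to reduce errors at the cost of speed, which is the tradeoff the abstract asks users to weigh.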
Problem

Research questions and friction points this paper is trying to address.

Evaluating speech-to-text systems using diverse poetry recordings
Comparing performance of commercial and open-source transcription tools
Analyzing Whisper's hallucination issues and accuracy tradeoffs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Used PennSound for speech-to-text evaluation
Compared multiple commercial and open-source systems
Analyzed Whisper's hallucinations and tradeoffs