Recording for Eyes, Not Echoing to Ears: Contextualized Spoken-to-Written Conversion of ASR Transcripts

📅 2024-08-19
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
ASR transcripts suffer from recognition errors and spoken-language phenomena such as disfluencies, ungrammatical sentences, and incomplete sentences, which make them hard to read. This paper introduces the document-level Contextualized Spoken-to-Written conversion (CoS2W) task, which jointly corrects ASR and grammar errors and transfers informal spoken text into a formal written style while preserving content, exploiting document context and auxiliary information. The contributions are: (1) a definition of the CoS2W task, which naturally matches the in-context learning abilities of large language models (LLMs); (2) SWAB, a document-level benchmark dataset for comparing LLMs on CoS2W; (3) a study of how processing granularity affects CoS2W performance, together with methods that help LLMs exploit contexts and auxiliary information; and (4) evidence that LLM evaluators correlate strongly with human rankings of faithfulness and formality, validating their reliability for this task. Experiments on SWAB show that LLMs are particularly effective at improving grammaticality and formality.

📝 Abstract
Automatic Speech Recognition (ASR) transcripts exhibit recognition errors and various spoken language phenomena such as disfluencies, ungrammatical sentences, and incomplete sentences, hence suffering from poor readability. To improve readability, we propose a Contextualized Spoken-to-Written conversion (CoS2W) task to address ASR and grammar errors and also transfer the informal text into the formal style with content preserved, utilizing contexts and auxiliary information. This task naturally matches the in-context learning capabilities of Large Language Models (LLMs). To facilitate comprehensive comparisons of various LLMs, we construct a document-level Spoken-to-Written conversion of ASR Transcripts Benchmark (SWAB) dataset. Using SWAB, we study the impact of different granularity levels on the CoS2W performance, and propose methods to exploit contexts and auxiliary information to enhance the outputs. Experimental results reveal that LLMs have the potential to excel in the CoS2W task, particularly in grammaticality and formality, and that our methods enable LLMs to make effective use of contexts and auxiliary information. We further investigate the effectiveness of using LLMs as evaluators and find that LLM evaluators show strong correlations with human evaluations on rankings of faithfulness and formality, which validates the reliability of LLM evaluators for the CoS2W task.
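The abstract frames CoS2W as an in-context learning task: the LLM receives an ASR segment together with surrounding document context and auxiliary information, and is asked to jointly fix errors and formalize the style. A minimal sketch of what such a prompt might look like is below; the function name, field layout, and instruction wording are hypothetical illustrations, not the paper's actual prompts.

```python
def build_cos2w_prompt(segment: str, context: str = "", auxiliary: str = "") -> str:
    """Assemble an illustrative document-level CoS2W prompt for an LLM.

    The task asks the model to jointly correct ASR recognition errors,
    repair grammar, remove disfluencies, and convert informal spoken
    style into formal written style, while preserving the content and
    exploiting surrounding document context.
    """
    parts = [
        "Rewrite the following ASR transcript segment into formal written "
        "language. Correct recognition and grammar errors, remove "
        "disfluencies, and preserve the original meaning.",
    ]
    if context:
        parts.append(f"Document context:\n{context}")
    if auxiliary:
        parts.append(f"Auxiliary information:\n{auxiliary}")
    parts.append(f"Transcript segment:\n{segment}")
    return "\n\n".join(parts)


# Example: the context sentence gives the model the correct spelling "SWAB",
# which the segment-level transcript alone could not disambiguate.
prompt = build_cos2w_prompt(
    segment="so um we we basically tested the the model on swab",
    context="The talk presents experiments on the SWAB benchmark.",
)
```

The point of the context field is exactly the paper's motivation for the *document-level* formulation: names and terms garbled by the ASR system in one segment can often be recovered from elsewhere in the document.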
Problem

Research questions and friction points this paper is trying to address.

Automatic Speech Recognition Error Correction
Colloquial to Formal Language Conversion
Contextual Information Utilization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contextualized Spoken-to-Written Conversion (CoS2W)
SWAB Benchmark Dataset
Large Language Models (LLMs) Evaluation
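The LLM-as-evaluator finding rests on rank correlation between LLM scores and human scores. A self-contained sketch of that check, using Spearman's rank correlation on a small made-up example (the scores below are illustrative, not the paper's data):

```python
def spearman_rho(xs: list[float], ys: list[float]) -> float:
    """Spearman rank correlation between two score lists (assumes no ties)."""
    n = len(xs)

    def ranks(vals: list[float]) -> list[int]:
        # Rank 1 = smallest value; invert the sort order to get per-item ranks.
        order = sorted(range(n), key=lambda i: vals[i])
        r = [0] * n
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))


# Hypothetical faithfulness scores for five system outputs: the LLM's
# absolute scores differ from the human ones, but the *rankings* agree,
# so the rank correlation is perfect.
human_scores = [4.5, 3.0, 2.0, 4.0, 1.0]
llm_scores = [4.2, 3.1, 1.8, 3.9, 1.2]
rho = spearman_rho(human_scores, llm_scores)  # → 1.0
```

Rank correlation is the natural metric here because the paper compares *rankings* of faithfulness and formality, not absolute scores, so an LLM evaluator with a consistent bias can still be a reliable judge.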