Speech Retrieval-Augmented Generation without Automatic Speech Recognition

πŸ“… 2024-12-21
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address retrieval inaccuracy and generation degradation caused by ASR errors in spoken open-domain question answering, this paper proposes the first end-to-end ASR-free speech RAG framework. Methodologically: (1) a fine-tuned speech encoder directly models raw audio, enabling speech–text retrieval via cross-modal embedding alignment while reusing a frozen text retriever; (2) for the first time, an unmodified speech language model (SLM) is employed for RAG-conditioned answer generation, thereby avoiding ASR error propagation. Experiments show that speech retrieval performance matches or exceeds text-based baselines; under high word error rate (WER) conditions, answer generation quality significantly outperforms ASR-cascaded approaches; and robustness and effectiveness are validated across multiple spoken QA benchmarks. This work establishes a transcription-free paradigm for speech understanding, eliminating reliance on intermediate ASR outputs while preserving semantic fidelity throughout the RAG pipeline.

Technology Category

Application Category

πŸ“ Abstract
One common approach for question answering over speech data is to first transcribe speech using automatic speech recognition (ASR) and then employ text-based retrieval-augmented generation (RAG) on the transcriptions. While this cascaded pipeline has proven effective in many practical settings, ASR errors can propagate to the retrieval and generation steps. To overcome this limitation, we introduce SpeechRAG, a novel framework designed for open-question answering over spoken data. Our proposed approach fine-tunes a pre-trained speech encoder into a speech adapter fed into a frozen large language model (LLM)--based retrieval model. By aligning the embedding spaces of text and speech, our speech retriever directly retrieves audio passages from text-based queries, leveraging the retrieval capacity of the frozen text retriever. Our retrieval experiments on spoken question answering datasets show that direct speech retrieval does not degrade over the text-based baseline, and outperforms the cascaded systems using ASR. For generation, we use a speech language model (SLM) as a generator, conditioned on audio passages rather than transcripts. Without fine-tuning of the SLM, this approach outperforms cascaded text-based models when there is high WER in the transcripts.
Problem

Research questions and friction points this paper is trying to address.

Automatic Speech Recognition Errors
Voice Search Accuracy
Answer Generation Quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

SpeechRAG
DirectAudioApproach
SpeechLanguageModel
πŸ”Ž Similar Papers
No similar papers found.
D
Do June Min
University of Michigan
K
Karel Mundnich
AWS AI Labs
A
Andy Lapastora
AWS AI Labs
Erfan Soltanmohammadi
Erfan Soltanmohammadi
AWS AI Labs
Srikanth Ronanki
Srikanth Ronanki
Amazon
Speech RecognitionNatural language processingArtificial intelligence
K
Kyu Han
AWS AI Labs