Feedback-driven Retrieval-augmented Audio Generation with Large Audio Language Models

📅 2025-11-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the frequent omission or distortion of specific sound events in text-to-audio (TTA) generation, this paper proposes a feedback-driven retrieval-augmented generation (RAG) framework. Our method introduces, for the first time, a large audio-language model (LALM) as the feedback analysis module within RAG: it automatically detects semantic gaps in generated audio and retrieves semantically matched audio segments from an external audio database for dynamic supplementation and fusion. Crucially, the approach requires no fine-tuning or retraining of the underlying TTA model, thereby significantly enhancing generalizability and adaptability. Experiments across multiple state-of-the-art TTA models demonstrate consistent improvements in modeling critical sound events. Our framework achieves superior performance over existing RAG baselines across key metrics—including sound completeness, fidelity, and text-audio alignment—without architectural modifications to the base generators.

📝 Abstract
We propose a general feedback-driven retrieval-augmented generation (RAG) approach that leverages Large Audio Language Models (LALMs) to address the missing or imperfect synthesis of specific sound events in text-to-audio (TTA) generation. Unlike previous RAG-based TTA methods that typically train specialized models from scratch, we utilize LALMs to analyze audio generation outputs, retrieve concepts that pre-trained models struggle to generate from an external database, and incorporate the retrieved information into the generation process. Experimental results show that our method not only enhances the ability of LALMs to identify missing sound events but also delivers improvements across different models, outperforming existing RAG-specialized approaches.
Problem

Research questions and friction points this paper is trying to address.

Improves missing sound event synthesis in text-to-audio generation
Addresses imperfect audio generation using retrieval-augmented approach
Enhances Large Audio Language Models' ability to identify missing sounds
Innovation

Methods, ideas, or system contributions that make the work stand out.

Feedback-driven retrieval-augmented generation with audio models
Leveraging LALMs to analyze outputs and retrieve concepts
Incorporating retrieved information to enhance generation process
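The feedback loop described above (generate, analyze with an LALM, retrieve, fuse, repeat) can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: `tta_generate`, `lalm_find_missing`, `retrieve`, and `fuse` are stand-in functions, and audio is represented abstractly as a set of rendered event labels.

```python
# Hypothetical sketch of the feedback-driven RAG pipeline.
# All names and the audio representation are illustrative stand-ins.

def tta_generate(prompt, events):
    """Stand-in TTA model: simulate a generator that drops rare events
    (here: everything after the first two requested events)."""
    return {"prompt": prompt, "events": set(events[:2])}

def lalm_find_missing(audio, requested_events):
    """Stand-in LALM feedback module: report requested events that are
    absent from the generated audio."""
    return [e for e in requested_events if e not in audio["events"]]

# Toy external audio database keyed by event label (assumed, not the paper's).
RETRIEVAL_DB = {
    "dog barking": "clip_dog_042.wav",
    "glass shattering": "clip_glass_007.wav",
}

def retrieve(event):
    """Retrieve a semantically matched clip for a missing event, if any."""
    return RETRIEVAL_DB.get(event)

def fuse(audio, event, clip):
    """Stand-in fusion step: mark the retrieved event as mixed in."""
    audio["events"].add(event)
    return audio

def feedback_rag_tta(prompt, events, max_rounds=2):
    """Generate, then iteratively repair missing events via retrieval.
    The base TTA model is never fine-tuned, matching the paper's claim."""
    audio = tta_generate(prompt, events)
    for _ in range(max_rounds):
        missing = lalm_find_missing(audio, events)
        if not missing:
            break
        for event in missing:
            clip = retrieve(event)
            if clip is not None:
                audio = fuse(audio, event, clip)
    return audio

result = feedback_rag_tta("rain, thunder, dog barking",
                          ["rain", "thunder", "dog barking"])
```

After one feedback round, `result["events"]` covers all three requested events; the only change to the pipeline is post-hoc supplementation, with no retraining of the generator.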
Junqi Zhao
Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, UK
Chenxing Li
Tencent AI Lab, Beijing, China
Jinzheng Zhao
Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, UK
Rilin Chen
Tencent AI Lab, Beijing, China
Dong Yu
Tencent AI Lab, Seattle, USA
Mark D. Plumbley
Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, UK
Wenwu Wang
Professor, University of Surrey, UK
signal processing, machine learning, machine listening, audio/speech/audio-visual, multimodal fusion