🤖 AI Summary
To address the frequent omission or distortion of specific sound events in text-to-audio (TTA) generation, this paper proposes a feedback-driven retrieval-augmented generation (RAG) framework. Our method introduces, for the first time, a large audio-language model (LALM) as the feedback analysis module within RAG: it automatically detects semantic gaps in generated audio and retrieves semantically matched audio segments from an external audio database for dynamic supplementation and fusion. Crucially, the approach requires no fine-tuning or retraining of the underlying TTA model, thereby significantly enhancing generalizability and adaptability. Experiments across multiple state-of-the-art TTA models demonstrate consistent improvements in modeling critical sound events. Our framework achieves superior performance over existing RAG baselines across key metrics—including sound completeness, fidelity, and text-audio alignment—without architectural modifications to the base generators.
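The pipeline described above — generate, let an LALM flag missing sound events, retrieve matching segments, then fuse them in — can be sketched as a simple loop. Everything below is illustrative: the function names, the set-of-events representation of audio, and the stubbed model behaviors are stand-ins, not the paper's actual implementation.

```python
# Minimal sketch of a feedback-driven RAG loop for TTA generation.
# Audio is abstracted as a set of sound-event labels; real systems
# would operate on waveforms and use a trained TTA model and LALM.

def base_tta_generate(prompt_events):
    """Stub TTA model: renders the prompt imperfectly, dropping one event."""
    rendered = set(prompt_events)
    rendered.discard("dog barking")  # simulate a missing sound event
    return rendered

def lalm_feedback(prompt_events, generated_events):
    """Stub LALM analysis: report events requested but absent from the audio."""
    return set(prompt_events) - set(generated_events)

def retrieve_segments(missing_events, audio_db):
    """Look up semantically matched segments in an external audio database."""
    return {e: audio_db[e] for e in missing_events if e in audio_db}

def fuse(generated_events, retrieved):
    """Fuse retrieved segments into the generated audio (here: set union)."""
    return set(generated_events) | set(retrieved)

def feedback_rag_tta(prompt_events, audio_db):
    generated = base_tta_generate(prompt_events)      # initial generation
    missing = lalm_feedback(prompt_events, generated) # semantic gap detection
    retrieved = retrieve_segments(missing, audio_db)  # external retrieval
    return fuse(generated, retrieved)                 # dynamic supplementation

prompt = {"rain falling", "dog barking"}
db = {"dog barking": "seg_0042.wav"}
result = feedback_rag_tta(prompt, db)
# result now covers both requested sound events
```

Note that the base generator (`base_tta_generate`) is never modified or retrained; all correction happens post hoc through the feedback, retrieval, and fusion steps, which is what makes the approach model-agnostic.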
📝 Abstract
We propose a general feedback-driven retrieval-augmented generation (RAG) approach that leverages Large Audio Language Models (LALMs) to address the missing or imperfect synthesis of specific sound events in text-to-audio (TTA) generation. Unlike previous RAG-based TTA methods that typically train specialized models from scratch, we use LALMs to analyze the generated audio, retrieve from an external database the concepts that pre-trained models struggle to generate, and incorporate the retrieved information into the generation process. Experimental results show that our method not only enhances the ability of LALMs to identify missing sound events but also delivers improvements across different models, outperforming existing RAG-specialized approaches.