Resonate: Reinforcing Text-to-Audio Generation via Online Feedback from Large Audio Language Models

📅 2026-03-12
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses a limitation of existing text-to-audio (TTA) generation approaches, which predominantly rely on offline optimization with coarse-grained rewards and lack effective online reinforcement learning mechanisms. To overcome this, the study introduces Group Relative Policy Optimization (GRPO), an online reinforcement learning algorithm, into TTA for the first time. Using fine-grained perceptual alignment signals from a large audio language model (LALM) as rewards, the method performs policy-gradient updates to fine-tune a Flow Matching audio generation model. Evaluated on the TTA-Bench benchmark, the resulting 470-million-parameter Resonate model achieves significant improvements in both audio quality and semantic consistency, establishing a new state of the art on this benchmark.

📝 Abstract
Reinforcement Learning (RL) has become an effective paradigm for enhancing Large Language Models (LLMs) and visual generative models. However, its application in text-to-audio (TTA) generation remains largely under-explored. Prior work typically employs offline methods like Direct Preference Optimization (DPO) and leverages Contrastive Language-Audio Pretraining (CLAP) models as reward functions. In this study, we investigate the integration of online Group Relative Policy Optimization (GRPO) into TTA generation. We adapt the algorithm for Flow Matching-based audio models and demonstrate that online RL significantly outperforms its offline counterparts. Furthermore, we incorporate rewards derived from Large Audio Language Models (LALMs), which can provide fine-grained scoring signals that are better aligned with human perception. With only 470M parameters, our final model, Resonate, establishes a new SOTA on TTA-Bench in terms of both audio quality and semantic alignment.
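To make the GRPO component concrete, the sketch below shows the group-relative advantage computation that distinguishes GRPO from critic-based policy gradients: several rollouts are sampled per prompt, each is scored by a reward model (here, conceptually, an LALM scoring generated audio), and each sample's advantage is its reward's z-score within the group. The function name and the example reward values are illustrative, not taken from the paper.

```python
def group_relative_advantages(rewards, eps=1e-8):
    """Group-relative advantages as used in GRPO.

    `rewards` holds one scalar score per rollout generated for the same
    prompt (in this paper's setting, an LALM's perceptual score for each
    generated audio clip). Normalizing within the group removes the need
    for a separate learned value/critic network.
    """
    g = len(rewards)
    mean = sum(rewards) / g
    var = sum((r - mean) ** 2 for r in rewards) / g
    std = var ** 0.5
    # z-score each reward within its group; eps guards against std == 0
    return [(r - mean) / (std + eps) for r in rewards]

# Hypothetical LALM scores for 4 rollouts of one text prompt.
advs = group_relative_advantages([0.9, 0.4, 0.6, 0.5])
```

The advantages sum to (approximately) zero within each group, so samples scoring above the group mean are reinforced and the rest are suppressed; the policy-gradient update for the Flow Matching model then weights each rollout's log-likelihood term by its advantage.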
Problem

Research questions and friction points this paper is trying to address.

text-to-audio generation
reinforcement learning
audio quality
semantic alignment
online feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

online reinforcement learning
text-to-audio generation
Large Audio Language Models
Flow Matching
Group Relative Policy Optimization