🤖 AI Summary
This work addresses the suboptimal generation quality of Speech-Aware Large Language Models (SALLMs) on open-format speech understanding tasks, such as Spoken Question Answering and Automatic Speech Translation. The authors propose a reinforcement learning training method that applies Group Relative Policy Optimization (GRPO) to SALLMs with BLEU as the reward signal. Unlike conventional supervised fine-tuning (SFT), which maximizes token-level likelihood, this approach directly optimizes a sequence-level quality measure; because BLEU is non-differentiable, reinforcement learning makes it usable as a training signal where standard gradient-based objectives cannot. Experiments demonstrate consistent improvements over SFT baselines across multiple open-format speech understanding benchmarks, supporting GRPO's effectiveness for speech-language joint modeling. The authors also explore incorporating off-policy samples within GRPO as a direction for further gains.
📝 Abstract
In this paper, we introduce a Group Relative Policy Optimization (GRPO)-based method for training Speech-Aware Large Language Models (SALLMs) on open-format speech understanding tasks, such as Spoken Question Answering and Automatic Speech Translation. SALLMs have proven highly effective for speech understanding tasks. GRPO has recently gained traction for its efficiency in training LLMs, and prior work has explored its application to SALLMs, primarily in multiple-choice tasks. Building on this, we focus on open-format tasks that better reflect the generative abilities of the models. Our approach leverages GRPO with BLEU as the reward signal to optimize SALLMs, and we demonstrate empirically that it surpasses standard SFT across several key metrics. Finally, we explore the potential of incorporating off-policy samples within GRPO for these tasks, highlighting avenues for further improvement and research.
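To make the training signal concrete, the sketch below shows the core GRPO idea with a BLEU reward: for each prompt, a group of sampled completions is scored against the reference, and each completion's advantage is its reward relative to the group (mean-centered and scaled by the group standard deviation). This is an illustrative simplification, not the paper's implementation; the minimal smoothed sentence-BLEU here stands in for a proper metric such as sacrebleu, and the group size and normalization constant are assumptions.

```python
import math
from collections import Counter

def sentence_bleu(hyp: str, ref: str, max_n: int = 4) -> float:
    """Minimal smoothed sentence-level BLEU (stand-in for a real metric)."""
    hyp_toks, ref_toks = hyp.split(), ref.split()
    if not hyp_toks:
        return 0.0
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hyp_toks[i:i + n]) for i in range(len(hyp_toks) - n + 1))
        ref_ngrams = Counter(tuple(ref_toks[i:i + n]) for i in range(len(ref_toks) - n + 1))
        overlap = sum((hyp_ngrams & ref_ngrams).values())
        total = max(sum(hyp_ngrams.values()), 1)
        # add-one smoothing so a missing n-gram order does not zero the score
        log_prec += math.log((overlap + 1) / (total + 1))
    # brevity penalty discourages overly short hypotheses
    bp = min(1.0, math.exp(1 - len(ref_toks) / len(hyp_toks)))
    return bp * math.exp(log_prec / max_n)

def grpo_advantages(completions: list[str], reference: str) -> list[float]:
    """Group-relative advantages: BLEU reward minus group mean, scaled by group std."""
    rewards = [sentence_bleu(c, reference) for c in completions]
    mean = sum(rewards) / len(rewards)
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / len(rewards))
    return [(r - mean) / (std + 1e-8) for r in rewards]

# A group of sampled completions for one spoken question; the best match
# to the reference receives a positive advantage, worse ones negative.
group = ["the cat sat on the mat", "a dog", "the cat sat on a mat"]
advantages = grpo_advantages(group, "the cat sat on the mat")
```

In actual GRPO training, these per-completion advantages would weight the policy-gradient update on the SALLM's token log-probabilities, so no gradient of BLEU itself is ever needed.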