Advancing Speech Understanding in Speech-Aware Language Models with GRPO

📅 2025-09-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the suboptimal generation quality of Speech-Aware Large Language Models (SALLMs) on open-ended speech understanding tasks, such as spoken question answering and automatic speech translation. The authors propose an end-to-end reinforcement learning framework that integrates Group Relative Policy Optimization (GRPO) with BLEU scoring. Unlike conventional supervised fine-tuning (SFT), the method uses BLEU as the reward signal (BLEU is non-differentiable, so it is optimized via GRPO's policy-gradient updates rather than direct backpropagation) and directly optimizes the speech-to-text generation process; the authors additionally explore incorporating off-policy samples into GRPO. Experiments demonstrate substantial improvements over SFT baselines across multiple open-format speech understanding benchmarks, supporting GRPO's effectiveness in speech-language joint modeling and establishing a promising direction for generative speech understanding in SALLMs.

📝 Abstract
In this paper, we introduce a Group Relative Policy Optimization (GRPO)-based method for training Speech-Aware Large Language Models (SALLMs) on open-format speech understanding tasks, such as Spoken Question Answering and Automatic Speech Translation. SALLMs have proven highly effective for speech understanding tasks. GRPO has recently gained traction for its efficiency in training LLMs, and prior work has explored its application to SALLMs, primarily in multiple-choice tasks. Building on this, we focus on open-format tasks that better reflect the generative abilities of the models. Our approach leverages GRPO with BLEU as the reward signal to optimize SALLMs, and we demonstrate empirically that it surpasses standard SFT across several key metrics. Finally, we explore the potential of incorporating off-policy samples within GRPO for these tasks, highlighting avenues for further improvement and research.
Problem

Research questions and friction points this paper is trying to address.

Training speech-aware language models on open-format understanding tasks
Improving generative abilities for spoken question answering and speech translation
Optimizing SALLMs with reinforcement learning when the quality metric (BLEU) is non-differentiable
Innovation

Methods, ideas, or system contributions that make the work stand out.

A GRPO-based training recipe for speech-aware language models on open-format tasks
BLEU used as the reward signal to optimize generation quality directly
An exploration of incorporating off-policy samples into GRPO training
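The core training signal described above can be sketched as follows: for each prompt, sample a group of completions, score each against the reference with sentence-level BLEU, and normalize the rewards within the group to obtain GRPO advantages. This is a minimal illustrative sketch, not the paper's implementation; the simplified bigram BLEU stands in for a proper scorer such as sacreBLEU, and all function names here are hypothetical.

```python
import math
from collections import Counter

def ngram_precision(hyp, ref, n):
    """Clipped n-gram precision between two token lists."""
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
    return overlap / max(sum(hyp_ngrams.values()), 1)

def sentence_bleu(hyp, ref, max_n=2):
    """Simplified sentence-level BLEU (up to bigrams) with brevity penalty.
    A stand-in for a library scorer such as sacreBLEU."""
    precisions = [ngram_precision(hyp, ref, n) for n in range(1, max_n + 1)]
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(log_avg)

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO advantage: each completion's reward normalized by the
    mean and standard deviation of its sampled group."""
    mean = sum(rewards) / len(rewards)
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / len(rewards))
    return [(r - mean) / (std + eps) for r in rewards]

# One prompt, a group of G sampled completions, one reference translation.
ref = "the cat sat on the mat".split()
group = [
    "the cat sat on the mat".split(),
    "the cat is on a mat".split(),
    "a dog ran away".split(),
]
rewards = [sentence_bleu(h, ref) for h in group]
advs = group_relative_advantages(rewards)
```

In full GRPO training, these per-completion advantages would weight the token-level policy-gradient loss, so that completions scoring above their group's mean BLEU are reinforced and those below are suppressed, without needing the reward itself to be differentiable.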