DualSpeechLM: Towards Unified Speech Understanding and Generation via Dual Speech Token Modeling with Large Language Models

📅 2025-08-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Unified speech understanding and generation faces two major challenges: a significant modality disparity between speech and text, and conflicting task objectives. This paper proposes DualSpeechLM, a unified framework addressing these issues. Its core contributions are threefold: (1) an Understanding-driven Speech Tokenizer (USTokenizer) that bridges the speech–text modality gap; (2) a dual speech token modeling mechanism that separately handles acoustic-level generative information and semantic-level understanding information; and (3) a semantic supervision loss coupled with a Chain-of-Condition training strategy to stabilize training and co-optimize both capabilities. DualSpeechLM jointly models multiple tasks, including ASR, TTS, and speech translation, within a single end-to-end architecture. Experiments demonstrate substantial improvements in both understanding and generation performance, with strong cross-task complementarity. Notably, it is the first approach to achieve simultaneous enhancement of both capabilities under a unified end-to-end framework.

📝 Abstract
Extending the speech understanding or generation abilities of pre-trained Large Language Models (LLMs) by introducing various effective speech tokens has attracted great attention in the speech community. However, building a unified speech understanding and generation model still faces the following challenges: (1) due to the huge modality gap between speech tokens and text tokens, extending text LLMs to unified speech LLMs relies on large-scale paired data for fine-tuning, and (2) generation and understanding tasks prefer information at different levels, e.g., generation benefits from detailed acoustic features, while understanding favors high-level semantics. This divergence makes performance optimization difficult in one unified model. To address these challenges, this paper presents two key insights in speech tokenization and speech language modeling. First, we propose an Understanding-driven Speech Tokenizer (USTokenizer), which extracts the high-level semantic information essential for accomplishing understanding tasks with text LLMs. In this way, USToken enjoys better modality commonality with text, which reduces the difficulty of modality alignment when adapting text LLMs into speech LLMs. Second, we present DualSpeechLM, a dual-token modeling framework that concurrently models USToken as input and acoustic tokens as output within a unified, end-to-end framework, seamlessly integrating speech understanding and generation capabilities. Furthermore, we propose a novel semantic supervision loss and a Chain-of-Condition (CoC) strategy to stabilize model training and enhance speech generation performance. Experimental results demonstrate that our approach effectively fosters a complementary relationship between understanding and generation tasks, highlighting the promise of mutually enhancing both tasks in one unified model.
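The dual-token idea in the abstract can be illustrated with a minimal sketch: a shared backbone consumes semantic USTokens as input and produces logits over both an acoustic-token vocabulary (for generation) and a text vocabulary (for understanding). All dimensions, vocabulary sizes, and the linear "backbone" below are illustrative assumptions, not values or components from the paper.

```python
import numpy as np

# Toy sketch of dual-token modeling (assumed sizes, not the paper's).
rng = np.random.default_rng(0)

D = 64        # hidden size (assumed)
N_US = 1024   # USToken vocabulary size (assumed)
N_AC = 4096   # acoustic-token vocabulary size (assumed)
N_TXT = 8000  # text vocabulary size (assumed)

us_embed = rng.normal(size=(N_US, D)) * 0.02       # USToken embedding table
backbone = rng.normal(size=(D, D)) * 0.02          # stand-in for the LLM backbone
head_acoustic = rng.normal(size=(D, N_AC)) * 0.02  # generation head
head_text = rng.normal(size=(D, N_TXT)) * 0.02     # understanding head

def forward(us_tokens):
    """Map a USToken sequence to per-step logits for both output streams."""
    h = us_embed[us_tokens] @ backbone              # (T, D) hidden states
    return h @ head_acoustic, h @ head_text         # (T, N_AC), (T, N_TXT)

us_seq = rng.integers(0, N_US, size=10)             # a 10-step USToken input
ac_logits, txt_logits = forward(us_seq)
print(ac_logits.shape, txt_logits.shape)            # (10, 4096) (10, 8000)
```

The point of the sketch is only the routing: one input stream (semantic tokens) feeds one shared model, whose hidden states are decoded by two task-specific heads, so understanding and generation share parameters end to end.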
Problem

Research questions and friction points this paper is trying to address.

Bridging modality gap between speech and text tokens
Balancing acoustic and semantic needs in one model
Unifying speech understanding and generation capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Understanding-driven Speech Tokenizer for semantics
Dual-token modeling for unified tasks
Semantic supervision loss stabilizes training
Yuanyuan Wang
The Chinese University of Hong Kong
Dongchao Yang
The Chinese University of Hong Kong
TTS, TTA, Audio Codec, Multi-modal Audio Foundation Models
Yiwen Shao
Johns Hopkins University
speech recognition, machine learning, deep learning, Natural Language Processing
Hangting Chen
Tencent Hunyuan
signal processing, speech separation, DCASE
Jiankun Zhao
The Chinese University of Hong Kong
Zhiyong Wu
The Chinese University of Hong Kong, Tsinghua University
Helen Meng
The Chinese University of Hong Kong
Xixin Wu
The Chinese University of Hong Kong