LLaSE-G1: Incentivizing Generalization Capability for LLaMA-based Speech Enhancement

📅 2025-03-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing language-model-based speech enhancement (SE) methods focus on semantic modeling while neglecting acoustic information, which leads to acoustic inconsistency in the enhanced speech and limited generalization across SE tasks. To address this, the paper proposes LLaSE-G1, a LLaMA-based language model for SE. Its key contributions: (1) it takes continuous WavLM representations as input and predicts X-Codec2 speech tokens as output, maximizing acoustic preservation; (2) it uses dual-channel inputs and outputs to unify multiple SE tasks without task-specific IDs; and (3) it outperforms prior task-specific discriminative and generative SE models, showing scaling effects at test time and emerging capabilities on unseen SE tasks. Code and pretrained models are publicly released.

📝 Abstract
Recent advancements in language models (LMs) have demonstrated strong capabilities in semantic understanding and contextual modeling, which have flourished in generative speech enhancement (SE). However, many LM-based SE approaches primarily focus on semantic information, often neglecting the critical role of acoustic information, which leads to acoustic inconsistency after enhancement and limited generalization across diverse SE tasks. In this paper, we introduce LLaSE-G1, a LLaMA-based language model that incentivizes generalization capabilities for speech enhancement. LLaSE-G1 offers the following key contributions: First, to mitigate acoustic inconsistency, LLaSE-G1 employs continuous representations from WavLM as input and predicts speech tokens from X-Codec2, maximizing acoustic preservation. Second, to promote generalization capability, LLaSE-G1 introduces dual-channel inputs and outputs, unifying multiple SE tasks without requiring task-specific IDs. Third, LLaSE-G1 outperforms prior task-specific discriminative and generative SE models, demonstrating scaling effects at test time and emerging capabilities for unseen SE tasks. Additionally, we release our code and models to support further research in this area.
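The pipeline the abstract describes (WavLM-style continuous features in, X-Codec2-style speech tokens out, dual-channel I/O unifying tasks without task IDs) can be sketched structurally as follows. This is a minimal illustrative sketch, not the paper's implementation: the feature dimension, codebook size, and the stub standing in for the LLaMA backbone are all assumptions.

```python
# Structural sketch of the LLaSE-G1 pipeline from the abstract.
# FEATURE_DIM and CODEBOOK_SIZE are illustrative assumptions; the
# "backbone" is a stub, not the actual LLaMA model.

import random

FEATURE_DIM = 8        # assumed feature size (real WavLM uses 768+)
CODEBOOK_SIZE = 16     # assumed codebook size (illustrative only)

def extract_features(waveform):
    """Stand-in for WavLM: one continuous feature vector per frame."""
    return [[random.random() for _ in range(FEATURE_DIM)] for _ in waveform]

def llama_backbone(channel_a, channel_b):
    """Stub for the LLaMA LM: maps dual-channel continuous features to
    dual-channel speech-token IDs, one token per frame per channel."""
    predict = lambda feats: [hash(tuple(f)) % CODEBOOK_SIZE for f in feats]
    return predict(channel_a), predict(channel_b)

def enhance(mix_a, mix_b=None):
    """Dual-channel interface: a single-channel task like denoising can
    feed one signal on both channels, while separation feeds two, so no
    task-specific ID is required to select the behavior."""
    if mix_b is None:
        mix_b = mix_a
    feats_a = extract_features(mix_a)
    feats_b = extract_features(mix_b)
    return llama_backbone(feats_a, feats_b)

tokens_a, tokens_b = enhance([0.1, -0.2, 0.3])   # 3 "frames" of audio
print(len(tokens_a), len(tokens_b))              # one token per frame
```

The point of the sketch is the interface: both channels always exist, so the same model signature covers denoising, dereverberation, and separation; in the real system the predicted token streams would be decoded back to waveforms by the X-Codec2 decoder.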
Problem

Research questions and friction points this paper is trying to address.

Addresses acoustic inconsistency in LM-based speech enhancement.
Enhances generalization across diverse speech enhancement tasks.
Unifies multiple SE tasks without task-specific IDs.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses continuous WavLM representations and X-Codec2 speech tokens for acoustic preservation.
Introduces dual-channel inputs and outputs for cross-task generalization.
Outperforms prior task-specific discriminative and generative SE models.
👥 Authors
Boyi Kang, The Hong Kong University of Science and Technology (Multimodal Intelligence, Audio Processing)
Xinfa Zhu, Northwestern Polytechnical University (speech generation)
Zihan Zhang, Northwestern Polytechnical University
Zhen Ye, The Hong Kong University of Science and Technology
Mingshuai Liu, Northwestern Polytechnical University
Ziqian Wang, Northwestern Polytechnical University
Yike Zhu, Northwestern Polytechnical University
Guobin Ma, Northwestern Polytechnical University
Jun Chen, Huawei Technologies Co., Ltd.
Longshuai Xiao, Huawei Technologies Co., Ltd.
Chao Weng, Anuttacon (Audio LLMs, Multimodal LLMs)
Wei Xue, The Hong Kong University of Science and Technology
Lei Xie, Northwestern Polytechnical University