Covo-Audio Technical Report

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes Covo-Audio, a 7-billion-parameter end-to-end language-audio foundation model that unifies the processing of continuous audio inputs and outputs for multitask speech interaction and understanding. The authors present it as the first 7B-scale model to achieve joint speech-text modeling, featuring an intelligence-speaker decoupled architecture that balances high-performance dialogue capabilities with cost-effective voice customization. Through extensive pretraining and post-training, Covo-Audio attains state-of-the-art performance among models of comparable size across multiple speech-related benchmarks. Its variant Covo-Audio-Chat demonstrates strong conversational abilities, while Covo-Audio-Chat-FD significantly improves robustness in full-duplex interactive scenarios.

📝 Abstract
In this work, we present Covo-Audio, a 7B-parameter end-to-end large audio-language model (LALM) that directly processes continuous audio inputs and generates audio outputs within a single unified architecture. Through large-scale curated pretraining and targeted post-training, Covo-Audio achieves state-of-the-art or competitive performance among models of comparable scale across a broad spectrum of tasks, including speech-text modeling, spoken dialogue, speech understanding, audio understanding, and full-duplex voice interaction. Extensive evaluations demonstrate that the pretrained foundation model exhibits strong speech-text comprehension and semantic reasoning capabilities on multiple benchmarks, outperforming representative open-source models of comparable scale. Furthermore, Covo-Audio-Chat, the dialogue-oriented variant, demonstrates strong spoken conversational abilities, including understanding, contextual reasoning, instruction following, and generating contextually appropriate and empathetic responses, validating its applicability to real-world conversational assistant scenarios. Covo-Audio-Chat-FD, the evolved full-duplex model, achieves substantially superior performance in both spoken dialogue and full-duplex interaction behaviors, demonstrating its robustness in practical settings. To mitigate the high cost of deploying end-to-end LALMs for natural conversational systems, we propose an intelligence-speaker decoupling strategy that separates dialogue intelligence from voice rendering, enabling flexible voice customization with minimal text-to-speech (TTS) data while preserving dialogue performance. Overall, our results highlight the strong potential of 7B-scale models to integrate sophisticated audio intelligence with high-level semantic reasoning, and suggest a scalable path toward more capable and versatile LALMs.
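The intelligence-speaker decoupling described above can be pictured as a two-stage pipeline: a dialogue core emits speaker-agnostic semantic tokens, and a separate, cheaply retrainable speaker module renders them into audio. The sketch below is purely illustrative, assuming this two-stage structure; all class and method names are invented and do not come from the report.

```python
# Minimal sketch of intelligence-speaker decoupling, assuming a two-stage
# pipeline as described in the abstract. All names here are hypothetical.
from dataclasses import dataclass
from typing import List


class DialogueCore:
    """Stand-in for the 7B LALM backbone: produces speaker-agnostic tokens."""

    def respond(self, user_utterance: str) -> List[int]:
        # Toy stand-in: map each character to a "semantic token" id.
        return [ord(c) % 100 for c in user_utterance]


@dataclass
class SpeakerRenderer:
    """Stand-in for the voice-rendering head, customized with minimal TTS data."""

    voice_name: str

    def render(self, semantic_tokens: List[int]) -> str:
        # Toy stand-in for waveform synthesis.
        return f"<audio voice={self.voice_name} frames={len(semantic_tokens)}>"


core = DialogueCore()
tokens = core.respond("hello")

# Swapping the speaker module changes only the rendered voice, not the
# dialogue content, so voice customization never touches the dialogue core:
print(SpeakerRenderer("alice").render(tokens))  # <audio voice=alice frames=5>
print(SpeakerRenderer("bob").render(tokens))    # <audio voice=bob frames=5>
```

The design point this illustrates is that only `SpeakerRenderer` would need retraining for a new voice, which is why the report argues the approach needs only minimal TTS data.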
Problem

Research questions and friction points this paper is trying to address.

end-to-end LALM
audio understanding
spoken dialogue
full-duplex interaction
voice interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

end-to-end LALM
full-duplex voice interaction
intelligence-speaker decoupling
audio-language modeling
spoken dialogue system
Authors
Wenfu Wang
Chenxing Li
Liqiang Zhang
Yiyang Zhao (Ingdan Labs)
Yuxiang Zou
Hanzhao Li (Audio, Speech and Language Processing Group (ASLP@NPU), School of Computer Science, Northwestern Polytechnical University)
Mingyu Cui (The Chinese University of Hong Kong)
Hao Zhang
Kun Wei (School of Computer Science, Northwestern Polytechnical University)
Le Xu
Zikang Huang
Jiajun Xu
Jiliang Hu
Xiang He
Zeyu Xie
Jiawen Kang
Youjun Chen
Meng Yu
Dong Yu
Rilin Chen
Linlin Di
Shulin Feng
Na Hu
Yang Liu
Bang Wang