🤖 AI Summary
This work proposes Covo-Audio, a 7-billion-parameter end-to-end large audio-language model designed to unify the processing of continuous audio inputs and outputs for multitask speech interaction and understanding. The model achieves joint speech-text modeling at 7B scale and features an intelligence-speaker decoupling strategy that balances high-performance dialogue capabilities with cost-effective voice customization. Through extensive pretraining and post-training, Covo-Audio attains state-of-the-art performance among models of comparable size across multiple speech-related benchmarks. Its dialogue-oriented variant, Covo-Audio-Chat, demonstrates strong conversational abilities, while Covo-Audio-Chat-FD substantially improves robustness in full-duplex interactive scenarios.
📝 Abstract
In this work, we present Covo-Audio, a 7B-parameter end-to-end large audio-language model (LALM) that directly processes continuous audio inputs and generates audio outputs within a single unified architecture. Through large-scale curated pretraining and targeted post-training, Covo-Audio achieves state-of-the-art or competitive performance among models of comparable scale across a broad spectrum of tasks, including speech-text modeling, spoken dialogue, speech understanding, audio understanding, and full-duplex voice interaction. Extensive evaluations demonstrate that the pretrained foundation model exhibits strong speech-text comprehension and semantic reasoning on multiple benchmarks, outperforming representative open-source models of comparable scale. Covo-Audio-Chat, the dialogue-oriented variant, further exhibits strong spoken conversational abilities, including understanding, contextual reasoning, instruction following, and the generation of contextually appropriate and empathetic responses, validating its applicability to real-world conversational assistant scenarios. Covo-Audio-Chat-FD, the full-duplex variant, delivers substantially stronger performance in both spoken dialogue and full-duplex interaction behaviors, demonstrating its robustness in practical settings. To mitigate the high cost of deploying end-to-end LALMs in natural conversational systems, we propose an intelligence-speaker decoupling strategy that separates dialogue intelligence from voice rendering, enabling flexible voice customization with minimal text-to-speech (TTS) data while preserving dialogue performance. Overall, our results highlight the strong potential of 7B-scale models to integrate sophisticated audio intelligence with high-level semantic reasoning, and they suggest a scalable path toward more capable and versatile LALMs.
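For readers who want a concrete picture of the intelligence-speaker decoupling described above, the following minimal PyTorch sketch illustrates one plausible arrangement: a dialogue backbone emits voice-agnostic semantic tokens, and a small, swappable renderer turns those tokens into acoustic features for a given speaker. All module names, dimensions, and interfaces here are illustrative assumptions, not Covo-Audio's actual implementation.

```python
# Minimal sketch of intelligence-speaker decoupling (hypothetical design,
# not Covo-Audio's actual architecture).
import torch
import torch.nn as nn


class AudioEncoder(nn.Module):
    """Maps framed audio features to continuous embeddings (illustrative)."""
    def __init__(self, frame_dim=320, hidden=512):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(frame_dim, hidden), nn.GELU(), nn.Linear(hidden, hidden)
        )

    def forward(self, frames):              # frames: (batch, time, frame_dim)
        return self.proj(frames)            # (batch, time, hidden)


class DialogueBackbone(nn.Module):
    """The 'intelligence' half: a transformer that consumes continuous audio
    embeddings and emits voice-agnostic semantic token logits."""
    def __init__(self, hidden=512, layers=4, vocab=4096):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8,
                                           batch_first=True)
        self.body = nn.TransformerEncoder(layer, num_layers=layers)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x):                   # x: (batch, time, hidden)
        return self.head(self.body(x))      # (batch, time, vocab)


class VoiceRenderer(nn.Module):
    """The 'speaker' half: renders semantic tokens into mel frames conditioned
    on a speaker embedding. Swapping or fine-tuning this small module on a
    little TTS data customizes the voice without touching the backbone."""
    def __init__(self, vocab=4096, hidden=512, spk_dim=192, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.spk_proj = nn.Linear(spk_dim, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.to_mel = nn.Linear(hidden, n_mels)

    def forward(self, tokens, spk):         # tokens: (B, T); spk: (B, spk_dim)
        x = self.embed(tokens) + self.spk_proj(spk).unsqueeze(1)
        out, _ = self.decoder(x)
        return self.to_mel(out)             # (batch, time, n_mels)


# Dialogue intelligence is computed once; voice renderers are interchangeable.
encoder, backbone = AudioEncoder(), DialogueBackbone()
voice_a, voice_b = VoiceRenderer(), VoiceRenderer()    # two customized voices
frames = torch.randn(1, 100, 320)                      # dummy audio features
tokens = backbone(encoder(frames)).argmax(dim=-1)      # (1, 100) semantic tokens
spk = torch.randn(1, 192)                              # dummy speaker embedding
mel_a, mel_b = voice_a(tokens, spk), voice_b(tokens, spk)
```

Under this reading, only the renderer depends on the target voice, so customizing a new speaker amounts to training that small module on a modest amount of TTS data while the dialogue backbone, and hence dialogue performance, stays fixed.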