LLaMA-Omni: Seamless Speech Interaction with Large Language Models

📅 2024-09-10
🏛️ arXiv.org
📈 Citations: 18
Influential: 4
🤖 AI Summary
To address the high latency and reliance on automatic speech recognition (ASR) transcription that limit spoken interaction with open-source large language models (LLMs), this paper proposes an end-to-end, low-latency speech dialogue system. Methodologically, we design a unified architecture integrating a pre-trained speech encoder (Whisper), a learnable speech adapter, Llama-3.1-8B-Instruct, and a streaming Transformer-based speech decoder, trained via speech-to-speech supervised fine-tuning. We further release InstructS2S-200K, a large-scale speech instruction dataset. Our key contributions are: (1) the first end-to-end speech response system built on the open-source Llama-3.1, achieving ultra-low response latency of 226 ms; (2) superior content accuracy and prosodic naturalness compared to existing speech-language models; and (3) strong practicality and reproducibility, with training completing on just 4 GPUs in under 3 days.
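The pipeline described above (speech encoder → adapter → LLM → streaming speech decoder, with no intermediate ASR transcript) can be sketched as follows. This is a minimal illustrative data-flow sketch, not the authors' implementation: all class names, frame sizes, and the adapter stride are hypothetical stand-ins for Whisper, the learnable adapter, Llama-3.1-8B-Instruct, and the streaming speech-unit decoder.

```python
# Hedged sketch of the LLaMA-Omni data flow. Every component here is a toy
# stand-in; the real system uses a frozen Whisper encoder, a trained adapter,
# Llama-3.1-8B-Instruct, and a streaming Transformer speech decoder.
from dataclasses import dataclass
from typing import Iterator, List, Tuple


@dataclass
class SpeechEncoder:
    """Stand-in for a frozen Whisper encoder: raw audio -> frame features."""
    frame: int = 320  # hypothetical 20 ms frames at 16 kHz

    def encode(self, audio: List[float]) -> List[List[float]]:
        return [[sum(audio[i:i + self.frame])]
                for i in range(0, len(audio), self.frame)]


@dataclass
class SpeechAdapter:
    """Learnable adapter: downsamples frames into LLM-compatible embeddings."""
    stride: int = 5

    def project(self, feats: List[List[float]]) -> List[List[float]]:
        return feats[::self.stride]


@dataclass
class LLM:
    """Stand-in for Llama-3.1-8B-Instruct emitting text tokens one by one."""
    def generate(self, embeds: List[List[float]]) -> Iterator[str]:
        yield from ["Hello", ",", " world", "!"]  # canned response for the demo


@dataclass
class StreamingSpeechDecoder:
    """Emits discrete speech units as text tokens arrive (no ASR round-trip)."""
    def vocode(self, token: str) -> str:
        return f"<unit:{token.strip()}>"


def respond(audio: List[float]) -> Tuple[str, List[str]]:
    """Generate text and speech units simultaneously from a speech instruction."""
    enc, ada, llm, dec = SpeechEncoder(), SpeechAdapter(), LLM(), StreamingSpeechDecoder()
    embeds = ada.project(enc.encode(audio))
    text, units = [], []
    for tok in llm.generate(embeds):
        text.append(tok)            # text stream
        units.append(dec.vocode(tok))  # speech-unit stream, emitted in step
    return "".join(text), units
```

The point of the sketch is the control flow: speech units are produced inside the token-generation loop rather than after it, which is what lets the real system begin speaking before the full text response is complete.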

📝 Abstract
Models like GPT-4o enable real-time interaction with large language models (LLMs) through speech, significantly enhancing user experience compared to traditional text-based interaction. However, there is still a lack of exploration on how to build speech interaction models based on open-source LLMs. To address this, we propose LLaMA-Omni, a novel model architecture designed for low-latency and high-quality speech interaction with LLMs. LLaMA-Omni integrates a pretrained speech encoder, a speech adaptor, an LLM, and a streaming speech decoder. It eliminates the need for speech transcription, and can simultaneously generate text and speech responses directly from speech instructions with extremely low latency. We build our model based on the latest Llama-3.1-8B-Instruct model. To align the model with speech interaction scenarios, we construct a dataset named InstructS2S-200K, which includes 200K speech instructions and corresponding speech responses. Experimental results show that compared to previous speech-language models, LLaMA-Omni provides better responses in both content and style, with a response latency as low as 226ms. Additionally, training LLaMA-Omni takes less than 3 days on just 4 GPUs, paving the way for the efficient development of speech-language models in the future.
Problem

Research questions and friction points this paper is trying to address.

Develop open-source LLM-based speech interaction models.
Achieve low-latency, high-quality speech-text response generation.
Create efficient training methods for speech-language models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates a speech encoder and a streaming speech decoder around the LLM.
Eliminates the need for intermediate speech transcription (ASR).
Enables low-latency, simultaneous text and speech generation.
Qingkai Fang
Institute of Computing Technology, Chinese Academy of Sciences
Large Language Models, Speech Language Models, Multimodal LLMs, Speech Translation
Shoutao Guo
Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS); University of Chinese Academy of Sciences, Beijing, China
Yan Zhou
Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS); University of Chinese Academy of Sciences, Beijing, China
Zhengrui Ma
Institute of Computing Technology, Chinese Academy of Sciences
Language Modeling
Shaolei Zhang
Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS)
Natural Language Processing, Large Language Model, Multimodal LLMs, Simultaneous Translation
Yang Feng
Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS); Key Laboratory of AI Safety, Chinese Academy of Sciences; University of Chinese Academy of Sciences, Beijing, China